Slide 1

Slide 1 text

Modern Data Pipelines using Kafka Streaming and Kubernetes
Ben Mabey, VP of Engineering
Scott Nielsen, Director of Data Engineering
Utah Data Engineering Meetup, October 2018

Slide 2

Slide 2 text

Decoding Biology to Radically Improve Lives

Slide 3

Slide 3 text

© 2017 Recursion Pharmaceuticals
1000s of untreated genetic diseases (photo of our wall?)

Slide 4

Slide 4 text

Why is this needed?

Slide 5

Slide 5 text

Chart: Moore's Law. Transistor Area (% of 1970 values), log scale, 1971–2015.

Slide 6

Slide 6 text

Chart: Moore's Law vs. Eroom's Law. Transistor Area (% of 1970 values), log scale, 1971–2015.

Slide 7

Slide 7 text

Chart: Moore's Law, Transistor Area (% of 1970 values), 1971–2015, vs. Eroom's Law, R&D Spend per Drug (% of 2007 values), 1971–2010. Both on log scales.

Slide 8

Slide 8 text

How?

Slide 9

Slide 9 text

RecursionPharma.com

Slide 10

Slide 10 text

RecursionPharma.com hoechst (DNA)

Slide 11

Slide 11 text

RecursionPharma.com concanavalin A (ER)

Slide 12

Slide 12 text

RecursionPharma.com mitotracker (mitochondria)

Slide 13

Slide 13 text

RecursionPharma.com WGA (golgi apparatus, cell membrane)

Slide 14

Slide 14 text

RecursionPharma.com SYTO 14 (RNA, nucleoli)

Slide 15

Slide 15 text

RecursionPharma.com phalloidin (actin fibers)

Slide 16

Slide 16 text

RecursionPharma.com combined

Slide 17

Slide 17 text

RecursionPharma.com

Slide 18

Slide 18 text

RecursionPharma.com

Slide 19

Slide 19 text

RecursionPharma.com

Slide 20

Slide 20 text

RecursionPharma.com Over 2 million per week 25 cents each

Slide 21

Slide 21 text

Images are rich.

Slide 22

Slide 22 text

Images are rich. fast.

Slide 23

Slide 23 text

Images are rich. fast. cheap.

Slide 24

Slide 24 text

Images are rich. fast. cheap. Fix drug discovery?

Slide 25

Slide 25 text

Healthy child Child with rare genetic disease (Cornelia de Lange Syndrome)

Slide 26

Slide 26 text

Healthy child Healthy cells Child with rare genetic disease (Cornelia de Lange Syndrome) Genetic disease model cells (Cornelia de Lange Syndrome)

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

Healthy Disease

Slide 29

Slide 29 text

Healthy Disease Disease + Drug?

Slide 30

Slide 30 text

Experiment A Experiment B Experiment C Experiment D

Slide 31

Slide 31 text

Plate diagram (86mm, 2mm): 308 wells/plate

Slide 32

Slide 32 text

Plate diagram (86mm, 2mm): 308 wells/plate, 4 sites/well

Slide 33

Slide 33 text

Plate diagram (86mm, 2mm): 308 wells/plate, 4 sites/well, 6 channels (images)/site = 7,392 images per plate

Slide 34

Slide 34 text

Plate diagram (86mm, 2mm): 308 wells/plate, 4 sites/well, 6 channels (images)/site = 7,392 images per plate, ~69GB per plate
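The per-plate numbers above can be sanity-checked with a little arithmetic (this assumes the ~69GB figure covers just the 7,392 raw channel images, which the slide does not state explicitly):

```python
# Back-of-the-envelope check of the per-plate numbers quoted above.
# Assumption: the ~69GB figure is just the 7,392 raw channel images.
wells_per_plate = 308
sites_per_well = 4
channels_per_site = 6

images_per_plate = wells_per_plate * sites_per_well * channels_per_site
print(images_per_plate)  # 7392, matching the slide

plate_bytes = 69 * 10**9
mb_per_image = plate_bytes / images_per_plate / 10**6
print(round(mb_per_image, 1))  # roughly 9.3 MB per raw microscope image
```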

Slide 35

Slide 35 text

Images / channel level

Slide 36

Slide 36 text

Images / channel level image level metrics

Slide 37

Slide 37 text

Images / channel level site (all channels/images) thumbnails image level metrics

Slide 38

Slide 38 text

Images / channel level site (all channels/images) thumbnails site level features image level metrics

Slide 39

Slide 39 text

Images / channel level site (all channels/images) thumbnails site level features image level metrics

Slide 40

Slide 40 text

Images / channel level site (all channels/images) thumbnails site level features image level metrics site metrics

Slide 41

Slide 41 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features image level metrics site metrics

Slide 42

Slide 42 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features image level metrics site metrics metrics

Slide 43

Slide 43 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features image level metrics site metrics metrics plate level features metrics

Slide 44

Slide 44 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics

Slide 45

Slide 45 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc

Slide 46

Slide 46 text

Traditional, low throughput, biology

Slide 47

Slide 47 text

© 2017 Recursion Pharmaceuticals High-throughput experiments Robots photo

Slide 48

Slide 48 text

No content

Slide 49

Slide 49 text

100 6.9TB

Slide 50

Slide 50 text

100 6.9TB 300 20TB

Slide 51

Slide 51 text

100 6.9TB 300 20TB 700 48TB 1,300 90TB 1,700 118TB 1,900 132 TB

Slide 52

Slide 52 text

100 6.9TB 300 20TB 700 48TB 1,300 90TB 1,700 118TB 1,900 132 TB 3,600 250 TB
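Assuming the first number in each pair counts plates (which is consistent with the ~69GB-per-plate figure from the earlier slide), the growth numbers above all imply roughly the same per-plate size:

```python
# Storage growth from the slides: plates -> terabytes. Assumption: the
# first column counts plates, matching the ~69GB/plate figure given earlier.
growth_tb = {100: 6.9, 300: 20, 700: 48, 1300: 90,
             1700: 118, 1900: 132, 3600: 250}

for plates, tb in growth_tb.items():
    gb_per_plate = tb * 1000 / plates
    print(f"{plates:>5} plates: {tb:>5} TB ({gb_per_plate:.0f} GB/plate)")
```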

Slide 53

Slide 53 text

Systems early 2017

Slide 54

Slide 54 text

Systems early 2017 •Microservices written in Python and Go. Some were AWS Lambdas, while others were containerized and ran on Kubernetes.

Slide 55

Slide 55 text

Systems early 2017 •Microservices written in Python and Go. Some were AWS Lambdas, while others were containerized and ran on Kubernetes. •The main job queue ran on Google Pub/Sub with an autoscaling feature. We were experimenting with Kubernetes Jobs for other use cases.

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

Systems early 2017 •Microservices written in Python and Go. Some were AWS Lambdas, while others were containerized and ran on Kubernetes. •The main job queue ran on Google Pub/Sub with an autoscaling feature. We were experimenting with Kubernetes Jobs for other use cases.

Slide 59

Slide 59 text

Systems early 2017 •Experiments were processed in batch once an experiment was complete. •Microservices written in Python and Go. Some were AWS Lambdas, while others were containerized and ran on Kubernetes. •The main job queue ran on Google Pub/Sub with an autoscaling feature. We were experimenting with Kubernetes Jobs for other use cases.

Slide 60

Slide 60 text

Experiment A Experiment B Experiment C Experiment D Plates are not imaged in order

Slide 61

Slide 61 text

The lab wanted real-time feedback…

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

No content

Slide 64

Slide 64 text

No content

Slide 65

Slide 65 text

Why not ?

Slide 66

Slide 66 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode)

Slide 67

Slide 67 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode) images = get_images_rdd_for_experiment('foo') image_metrics = images.map(compute_image_metrics)

Slide 68

Slide 68 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode) images = get_images_rdd_for_experiment('foo') image_metrics = images.map(compute_image_metrics) sites = images.groupBy(lambda i: (i['plate'], i['site'])) site_features = sites.map(lambda i: extract_features(i['data']))

Slide 69

Slide 69 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode) images = get_images_rdd_for_experiment('foo') image_metrics = images.map(compute_image_metrics) sites = images.groupBy(lambda i: (i['plate'], i['site'])) site_features = sites.map(lambda i: extract_features(i['data'])) site_metrics = site_features.map(compute_site_metrics)

Slide 70

Slide 70 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode) images = get_images_rdd_for_experiment('foo') image_metrics = images.map(compute_image_metrics) sites = images.groupBy(lambda i: (i['plate'], i['site'])) site_features = sites.map(lambda i: extract_features(i['data'])) site_metrics = site_features.map(compute_site_metrics) well_site_features = site_features.groupBy(lambda s: s['well']) well_features = well_site_features.map(aggregate_site_to_well)

Slide 71

Slide 71 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode) images = get_images_rdd_for_experiment('foo') image_metrics = images.map(compute_image_metrics) sites = images.groupBy(lambda i: (i['plate'], i['site'])) site_features = sites.map(lambda i: extract_features(i['data'])) site_metrics = site_features.map(compute_site_metrics) well_site_features = site_features.groupBy(lambda s: s['well']) well_features = well_site_features.map(aggregate_site_to_well) plate_features = well_site_features.groupBy(lambda w: w['plate']) plate_metrics = plate_features.map(calc_plate_features)

Slide 72

Slide 72 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode)
images = get_images_rdd_for_experiment('foo')
image_metrics = images.map(compute_image_metrics)
sites = images.groupBy(lambda i: (i['plate'], i['site']))
site_features = sites.map(lambda i: extract_features(i['data']))
site_metrics = site_features.map(compute_site_metrics)
well_site_features = site_features.groupBy(lambda s: s['well'])
well_features = well_site_features.map(aggregate_site_to_well)
plate_features = well_site_features.groupBy(lambda w: w['plate'])
plate_metrics = plate_features.map(calc_plate_features)
experiment_features = plate_features.groupBy(lambda p: p['experiment'])
experiment_metrics = (experiment_features
    .map(lambda e: e['experiment']).map(calc_exp_metrics))

Slide 73

Slide 73 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc (pseudocode)
images = get_images_rdd_for_experiment('foo')
image_metrics = images.map(compute_image_metrics)
sites = images.groupBy(lambda i: (i['plate'], i['site']))
site_features = sites.map(lambda i: extract_features(i['data']))
site_metrics = site_features.map(compute_site_metrics)
well_site_features = site_features.groupBy(lambda s: s['well'])
well_features = well_site_features.map(aggregate_site_to_well)
plate_features = well_site_features.groupBy(lambda w: w['plate'])
plate_metrics = plate_features.map(calc_plate_features)
experiment_features = plate_features.groupBy(lambda p: p['experiment'])
experiment_metrics = (experiment_features
    .map(lambda e: e['experiment']).map(calc_exp_metrics))
reports = experiment_features.map(lambda e: e['experiment']).map(run_report)

Slide 74

Slide 74 text

(pseudocode)
images = get_images_rdd_for_experiment('foo')
image_metrics = images.map(compute_image_metrics)
sites = images.groupBy(lambda i: (i['plate'], i['site']))
site_features = sites.map(lambda i: extract_features(i['data']))
site_metrics = site_features.map(compute_site_metrics)
well_site_features = site_features.groupBy(lambda s: s['well'])
well_features = well_site_features.map(aggregate_site_to_well)
plate_features = well_site_features.groupBy(lambda w: w['plate'])
plate_metrics = plate_features.map(calc_plate_features)
experiment_features = plate_features.groupBy(lambda p: p['experiment'])
experiment_metrics = (experiment_features
    .map(lambda e: e['experiment']).map(calc_exp_metrics))
reports = experiment_features.map(lambda e: e['experiment']).map(run_report)

Slide 75

Slide 75 text

(pseudocode)
images = get_images_rdd_for_experiment('foo')
image_metrics = images.map(compute_image_metrics)
sites = images.groupBy(lambda i: (i['plate'], i['site']))
site_features = sites.map(lambda i: extract_features(i['data']))
site_metrics = site_features.map(compute_site_metrics)
well_site_features = site_features.groupBy(lambda s: s['well'])
well_features = well_site_features.map(aggregate_site_to_well)
plate_features = well_site_features.groupBy(lambda w: w['plate'])
plate_metrics = plate_features.map(calc_plate_features)
experiment_features = plate_features.groupBy(lambda p: p['experiment'])
experiment_metrics = (experiment_features
    .map(lambda e: e['experiment']).map(calc_exp_metrics))
reports = experiment_features.map(lambda e: e['experiment']).map(run_report)
?

Slide 76

Slide 76 text

Why not ?

Slide 77

Slide 77 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally.

Slide 78

Slide 78 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally. •We didn’t want to rewrite any of the microservices.

Slide 79

Slide 79 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally. •We didn’t want to rewrite any of the microservices. •Some of our “map” operations are dependency heavy and have high variation in memory usage which requires fine tuning of workers for that particular function/task.

Slide 80

Slide 80 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally. •We didn’t want to rewrite any of the microservices. •Some of our “map” operations are dependency heavy and have high variation in memory usage which requires fine tuning of workers for that particular function/task. •Cloud providers didn’t have container support. No Kubernetes support then either. (now in beta)

Slide 81

Slide 81 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally. •We didn’t want to rewrite any of the microservices. •Some of our “map” operations are dependency heavy and have high variation in memory usage which requires fine tuning of workers for that particular function/task. •Cloud providers didn’t have container support. No Kubernetes support then either. (now in beta)

Slide 82

Slide 82 text

Why not ? •Spark Streaming in 2017 with the mini batch model would not allow us to express the workflow naturally. •We didn’t want to rewrite any of the microservices. •Some of our “map” operations are dependency heavy and have high variation in memory usage which requires fine tuning of workers for that particular function/task. •Cloud providers didn’t have container support. No Kubernetes support then either. (now in beta)

Slide 83

Slide 83 text

What about ?

Slide 84

Slide 84 text

What about ? •Probably the closest to what we wanted/needed. But…

Slide 85

Slide 85 text

What about ? •Probably the closest to what we wanted/needed. But… •The migration path was still unclear with all of our microservices.

Slide 86

Slide 86 text

What about ? •Probably the closest to what we wanted/needed. But… •The migration path was still unclear with all of our microservices. •Lots of operational complexity around running a Storm cluster. No Kubernetes support.

Slide 87

Slide 87 text

What about ? •Probably the closest to what we wanted/needed. But… •The migration path was still unclear with all of our microservices. •Lots of operational complexity around running a Storm cluster. No Kubernetes support. •Popularity seemed to be fading.

Slide 88

Slide 88 text

What about ? •Probably the closest to what we wanted/needed. But… •The migration path was still unclear with all of our microservices. •Lots of operational complexity around running a Storm cluster. No Kubernetes support. •Popularity seemed to be fading. •The real reason… it was 2017. Better cluster and streaming primitives existed.

Slide 89

Slide 89 text

What about ? •Probably the closest to what we wanted/needed. But… •The migration path was still unclear with all of our microservices. •Lots of operational complexity around running a Storm cluster. No Kubernetes support. •Popularity seemed to be fading. •The real reason… it was 2017. Better cluster and streaming primitives existed.

Slide 90

Slide 90 text

With all of these stream processors you still needed to provide a stream (queue)…

Slide 91

Slide 91 text

No content

Slide 92

Slide 92 text

No content

Slide 93

Slide 93 text

Kafka Streams was just released…

Slide 94

Slide 94 text

ANATOMY OF A KAFKA TOPIC
A partitioned and replicated structured commit log (diagram: three partitions, each replicated three times)

Slide 95

Slide 95 text

No content

Slide 96

Slide 96 text

CONSUMER GROUPS Parallelism is only limited by the number of partitions
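The consumer-group point above can be sketched as a partition assignment (a hypothetical round-robin `assign` helper for illustration, not Kafka's actual assignor):

```python
def assign(partitions, consumers):
    """Round-robin sketch of consumer-group partition assignment: each
    partition goes to exactly one consumer in the group, so adding more
    consumers than partitions leaves the extras idle."""
    assignment = {c: [] for c in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

# 3 partitions, 4 consumers: parallelism is capped at 3, consumer D sits idle.
print(assign([0, 1, 2], ["A", "B", "C", "D"]))
```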

Slide 97

Slide 97 text

KAFKA STREAMS Obligatory Word Count Example

Slide 98

Slide 98 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();

Slide 99

Slide 99 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();
KStream<String, String> textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));

Slide 100

Slide 100 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();
KStream<String, String> textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));
KTable<String, Long> wordCounts = textLines
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

Slide 101

Slide 101 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();
KStream<String, String> textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));
KTable<String, Long> wordCounts = textLines
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
    .groupBy((key, value) -> value)

Slide 102

Slide 102 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();
KStream<String, String> textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));
KTable<String, Long> wordCounts = textLines
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
    .groupBy((key, value) -> value)
    .count();

Slide 103

Slide 103 text

KAFKA STREAMS: Obligatory Word Count Example
final Serde<String> stringSerde = Serdes.String();
final Serde<Long> longSerde = Serdes.Long();
KStream<String, String> textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));
KTable<String, Long> wordCounts = textLines
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
    .groupBy((key, value) -> value)
    .count();
wordCounts.toStream().to("streams-wordcount-output",
    Produced.with(Serdes.String(), Serdes.Long()));
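For readers who don't speak Java, the same word-count topology can be sketched as a pure Python function (logic only; real Kafka Streams maintains the counts incrementally in a state store as records arrive):

```python
import re
from collections import Counter

def word_counts(lines):
    # flatMapValues: lowercase each line and split on non-word characters.
    # groupBy + count: tally occurrences of each word.
    words = (w for line in lines
               for w in re.split(r"\W+", line.lower()) if w)
    return Counter(words)

print(word_counts(["Hello Kafka Streams", "hello again"]))
```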

Slide 104

Slide 104 text

dagger workflow library written on top of Kafka Streams that orchestrates microservices

Slide 105

Slide 105 text

dagger workflow library written on top of Kafka Streams that orchestrates microservices Dagger, ya know, because it is all about the workflows represented as directed acyclic graphs, i.e. DAGs.

Slide 106

Slide 106 text

dagger workflow library written on top of Kafka Streams that orchestrates microservices

Slide 107

Slide 107 text

How big is it?

Slide 108

Slide 108 text

How big is it?
Core logic: ~2700 LOC

Slide 109

Slide 109 text

How big is it?
Core logic: ~2700 LOC
All of our DAGs, including schema, task, and workflow definitions: ~1700 LOC

Slide 110

Slide 110 text

How big is it?
Core logic: ~2700 LOC
All of our DAGs, including schema, task, and workflow definitions: ~1700 LOC

Slide 111

Slide 111 text

No content

Slide 112

Slide 112 text

DAGGER CONCEPTUAL OVERVIEW

Slide 113

Slide 113 text

DAGGER CONCEPTUAL OVERVIEW Schemas - What does my data look like

Slide 114

Slide 114 text

DAGGER CONCEPTUAL OVERVIEW Schemas - What does my data look like Topics - Where is my data coming from / going to

Slide 115

Slide 115 text

DAGGER CONCEPTUAL OVERVIEW Schemas - What does my data look like Topics - Where is my data coming from / going to External Tasks - Triggers actions outside of the streams application

Slide 116

Slide 116 text

DAGGER CONCEPTUAL OVERVIEW Schemas - What does my data look like Topics - Where is my data coming from / going to External Tasks - Triggers actions outside of the streams application DAGs - Combines schemas, topics, and tasks into a complete workflow
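The four concepts above compose roughly like this (a toy Python sketch with hypothetical names; the real dagger is a Clojure library on Kafka Streams):

```python
class DaggerSystem:
    """Toy registry mirroring dagger's schema/topic/task/DAG concepts."""
    def __init__(self):
        self.schemas, self.topics, self.tasks, self.dags = {}, {}, {}, {}

    def register_schema(self, name, fields):
        self.schemas[name] = fields          # what the data looks like

    def register_topic(self, name, key_schema, value_schema):
        # where data comes from / goes to; values must have a known schema
        assert value_schema in self.schemas, f"unknown schema: {value_schema}"
        self.topics[name] = (key_schema, value_schema)

    def register_task(self, name, input_schema, output_schema):
        # external work triggered outside the streams application
        self.tasks[name] = (input_schema, output_schema)

    def register_dag(self, name, graph):
        # ties schemas, topics, and tasks together into a workflow
        self.dags[name] = graph

system = DaggerSystem()
system.register_schema("channel_level", ["experiment_id", "well", "site"])
system.register_topic("images_channel", "string", "channel_level")
```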

Slide 117

Slide 117 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc

Slide 118

Slide 118 text

images_channel

Slide 119

Slide 119 text

images_channel Kafka topic, images_channel, a message for each image

Slide 120

Slide 120 text

images_channel (d/register-schema! system (d/record "channel_level" ["experiment_id" "string"] ["cell_type" "string"] ["plate_number" "int"] ["plate_barcode" "string"] ["well" "string"] ["site" "int"] ["channel" "int"] ["location" "string"])) Kafka topic, images_channel, a message for each image

Slide 121

Slide 121 text

images_channel (d/register-schema! system (d/record "channel_level" ["experiment_id" "string"] ["cell_type" "string"] ["plate_number" "int"] ["plate_barcode" "string"] ["well" "string"] ["site" "int"] ["channel" "int"] ["location" "string"])) Everything is serialized to Avro Kafka topic, images_channel, a message for each image

Slide 122

Slide 122 text

images_channel (d/register-schema! system (d/record "channel_level" ["experiment_id" "string"] ["cell_type" "string"] ["plate_number" "int"] ["plate_barcode" "string"] ["well" "string"] ["site" "int"] ["channel" "int"] ["location" "string"])) Everything is serialized to Avro In the future will use the Confluent Schema Registry Kafka topic, images_channel, a message for each image

Slide 123

Slide 123 text

images_channel Kafka topic, images_channel, a message for each image (d/register-topic! system {::d/name "images_channel" ::d/key-schema :string ::d/value-schema "channel_level"})

Slide 124

Slide 124 text

images_channel Kafka topic, images_channel, a message for each image (d/register-topic! system {::d/name "images_channel" ::d/key-schema :string ::d/value-schema "channel_level"}) Specifies the schema to use for the key and value of a Kafka topic

Slide 125

Slide 125 text

images_channel Kafka topic, images_channel, a message for each image (d/register-topic! system {::d/name "images_channel" ::d/key-schema :string ::d/value-schema "channel_level"}) Specifies the schema to use for the key and value of a Kafka topic In the future will also use the Confluent Schema Registry

Slide 126

Slide 126 text

images_channel image level metrics
(d/register-task! system metrics-registry
  {::d/name "image-level-metrics"
   ::d/doc "Extracts descriptive pixel stats from images"
   ::d/input {::d/schema "channel_level"}
   ::d/output {::d/name "image_stats_results"
               ::d/schema "image_stats_results"}})

Slide 127

Slide 127 text

images_channel image level metrics
(d/register-task! system metrics-registry
  {::d/name "image-level-metrics"
   ::d/doc "Extracts descriptive pixel stats from images"
   ::d/input {::d/schema "channel_level"}
   ::d/output {::d/name "image_stats_results"
               ::d/schema "image_stats_results"}})
Creates a task input topic to be consumed by an external service
(Diagram: Dagger DAG (Kafka Streams app) publishes to a task input topic in Kafka, consumed by a Dagger Task (external service))

Slide 128

Slide 128 text

images_channel image level metrics
(d/register-task! system metrics-registry
  {::d/name "image-level-metrics"
   ::d/doc "Extracts descriptive pixel stats from images"
   ::d/input {::d/schema "channel_level"}
   ::d/output {::d/name "image_stats_results"
               ::d/schema "image_stats_results"}})
Creates a task input topic to be consumed by an external service
Optionally creates a task output topic where the external service will publish results
(Diagram: Dagger DAG (Kafka Streams app) and Dagger Task (external service) exchange messages via task input/output topics in Kafka)

Slide 129

Slide 129 text

EXTERNAL TASKS
(d/register-http-task! system metrics-registry
  {::d/name "image-thumbnails"
   ::d/doc "Create composite thumbnail images for the given well."
   ::d/input {::d/schema "well_level"}
   ::d/output {::d/schema "ack"}
   ::d/request-fn (fn [cb well]
                    {:method :post
                     :url "https://lambda.amazonaws.com/prod/thumbnails"
                     :headers {"X-Amz-Invocation-Type" "Event"}
                     :body (json/generate-string well)})
   ::d/max-inflight 400
   ::d/retries 5
   ::d/response-fn (fn [req]
                     (and (= (:status req) 200)
                          (not (#{"Handled" "Unhandled"}
                                (get-in req [:headers :x-amx-function-error])))))})

Slide 130

Slide 130 text

EXTERNAL TASKS
An HTTP layer on top of tasks
(d/register-http-task! system metrics-registry
  {::d/name "image-thumbnails"
   ::d/doc "Create composite thumbnail images for the given well."
   ::d/input {::d/schema "well_level"}
   ::d/output {::d/schema "ack"}
   ::d/request-fn (fn [cb well]
                    {:method :post
                     :url "https://lambda.amazonaws.com/prod/thumbnails"
                     :headers {"X-Amz-Invocation-Type" "Event"}
                     :body (json/generate-string well)})
   ::d/max-inflight 400
   ::d/retries 5
   ::d/response-fn (fn [req]
                     (and (= (:status req) 200)
                          (not (#{"Handled" "Unhandled"}
                                (get-in req [:headers :x-amx-function-error])))))})

Slide 131

Slide 131 text

EXTERNAL TASKS
An HTTP layer on top of tasks
Starts an in-process consumer which consumes from the task input topic and sends HTTP requests to an external service
(d/register-http-task! system metrics-registry
  {::d/name "image-thumbnails"
   ::d/doc "Create composite thumbnail images for the given well."
   ::d/input {::d/schema "well_level"}
   ::d/output {::d/schema "ack"}
   ::d/request-fn (fn [cb well]
                    {:method :post
                     :url "https://lambda.amazonaws.com/prod/thumbnails"
                     :headers {"X-Amz-Invocation-Type" "Event"}
                     :body (json/generate-string well)})
   ::d/max-inflight 400
   ::d/retries 5
   ::d/response-fn (fn [req]
                     (and (= (:status req) 200)
                          (not (#{"Handled" "Unhandled"}
                                (get-in req [:headers :x-amx-function-error])))))})

Slide 132

Slide 132 text

EXTERNAL TASKS
An HTTP layer on top of tasks
Starts an in-process consumer which consumes from the task input topic and sends HTTP requests to an external service
Uses green threads to control the maximum number of in-flight requests to the service
(d/register-http-task! system metrics-registry
  {::d/name "image-thumbnails"
   ::d/doc "Create composite thumbnail images for the given well."
   ::d/input {::d/schema "well_level"}
   ::d/output {::d/schema "ack"}
   ::d/request-fn (fn [cb well]
                    {:method :post
                     :url "https://lambda.amazonaws.com/prod/thumbnails"
                     :headers {"X-Amz-Invocation-Type" "Event"}
                     :body (json/generate-string well)})
   ::d/max-inflight 400
   ::d/retries 5
   ::d/response-fn (fn [req]
                     (and (= (:status req) 200)
                          (not (#{"Handled" "Unhandled"}
                                (get-in req [:headers :x-amx-function-error])))))})

Slide 133

Slide 133 text

EXTERNAL TASKS
An HTTP layer on top of tasks
Starts an in-process consumer which consumes from the task input topic and sends HTTP requests to an external service
Uses green threads to control the maximum number of in-flight requests to the service
Automatically backs off and retries on failure
(d/register-http-task! system metrics-registry
  {::d/name "image-thumbnails"
   ::d/doc "Create composite thumbnail images for the given well."
   ::d/input {::d/schema "well_level"}
   ::d/output {::d/schema "ack"}
   ::d/request-fn (fn [cb well]
                    {:method :post
                     :url "https://lambda.amazonaws.com/prod/thumbnails"
                     :headers {"X-Amz-Invocation-Type" "Event"}
                     :body (json/generate-string well)})
   ::d/max-inflight 400
   ::d/retries 5
   ::d/response-fn (fn [req]
                     (and (= (:status req) 200)
                          (not (#{"Handled" "Unhandled"}
                                (get-in req [:headers :x-amx-function-error])))))})
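The retry-with-backoff behaviour described above can be sketched like this (a hypothetical helper in the spirit of `::d/retries`; dagger's actual implementation differs):

```python
def call_with_retries(send_request, max_retries=5, base_delay=0.5):
    """Retry an external call with exponential backoff. Returns
    (attempt, delays) on success; the delays list records how long we
    would have slept between attempts (no actual sleeping here)."""
    delays = []
    for attempt in range(max_retries + 1):
        if send_request(attempt):
            return attempt, delays
        delays.append(base_delay * 2 ** attempt)  # back off: 0.5s, 1s, 2s, ...
    raise RuntimeError("task failed after %d retries" % max_retries)

# A flaky service that succeeds on the third attempt:
attempt, delays = call_with_retries(lambda n: n == 2)
print(attempt, delays)  # 2 [0.5, 1.0]
```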

Slide 134

Slide 134 text

86mm 2mm well level features Images / channel level site (all channels/images) thumbnails site level features experiment features image level metrics site metrics metrics plate level features metrics metrics, models, reports, etc

Slide 135

Slide 135 text

site level features images_channel topic experiment_metadata topic cellprofiler_features topic site_images stream

Slide 136

Slide 136 text

images_channel topic

Slide 137

Slide 137 text

images_channel topic
(d/register-dag! system
  {::d/name "standard-cellprofiler"
   ::d/graph {:images-channel (topic-stream "images_channel")

Slide 138

Slide 138 text

images_channel topic, experiment_metadata topic
(d/register-dag! system
  {::d/name "standard-cellprofiler"
   ::d/graph {:images-channel (topic-stream "images_channel")
              :experiment-metadata (topic-table "experiment_metadata" "exp-store")

Slide 139

Slide 139 text

site_images stream, images_channel topic, experiment_metadata topic
(d/register-dag! system
  {::d/name "standard-cellprofiler"
   ::d/graph {:images-channel (topic-stream "images_channel")
              :experiment-metadata (topic-table "experiment_metadata" "exp-store")
              :images-site (stream-operation
                             {:channel-level :images-channel
                              :experiment-metadata :experiment-metadata}
                             (agg/site-level agg/preserve-key "images-site-agg")
                             :long "site_level")

Slide 140

Slide 140 text

site level features, site_images stream, images_channel topic, experiment_metadata topic
(d/register-dag! system
  {::d/name "standard-cellprofiler"
   ::d/graph {:images-channel (topic-stream "images_channel")
              :experiment-metadata (topic-table "experiment_metadata" "exp-store")
              :images-site (stream-operation
                             {:channel-level :images-channel
                              :experiment-metadata :experiment-metadata}
                             (agg/site-level agg/preserve-key "images-site-agg")
                             :long "site_level")
              :features-site (external-task :images-site "cellprofiler"
                               {:input-mapper (partial standard-cp-instruction config)
                                :output-mapper unpack-cp-response})

Slide 141

Slide 141 text

site level features, site_images stream, images_channel topic, experiment_metadata topic, cellprofiler_features topic
(d/register-dag! system
  {::d/name "standard-cellprofiler"
   ::d/graph {:images-channel (topic-stream "images_channel")
              :experiment-metadata (topic-table "experiment_metadata" "exp-store")
              :images-site (stream-operation
                             {:channel-level :images-channel
                              :experiment-metadata :experiment-metadata}
                             (agg/site-level agg/preserve-key "images-site-agg")
                             :long "site_level")
              :features-site (external-task :images-site "cellprofiler"
                               {:input-mapper (partial standard-cp-instruction config)
                                :output-mapper unpack-cp-response})
              :features-output (publish :features-site "cellprofiler_features")}})
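The registered graph is a DAG, so its execution order can be recovered with a topological sort. A Python sketch over the standard-cellprofiler nodes above:

```python
# The standard-cellprofiler DAG above, expressed as node -> dependencies.
graph = {
    "images-channel": [],
    "experiment-metadata": [],
    "images-site": ["images-channel", "experiment-metadata"],
    "features-site": ["images-site"],
    "features-output": ["features-site"],
}

def topo_order(graph):
    """Depth-first topological sort: a node's dependencies always come
    before the node itself in the returned order."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph[node]:
            visit(dep)
        order.append(node)
    for node in graph:
        visit(node)
    return order

print(topo_order(graph))
```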

Slide 142

Slide 142 text

[diagram: images_channel and experiment_metadata topics → site_images stream → site-level features → cellprofiler_features topic]

Slide 143

Slide 143 text

[diagram: 86mm plate → 2mm wells → images per channel → site (all channels/images) → thumbnails, image-level metrics, site-level features and metrics → well-level features and metrics → plate-level features and metrics → experiment features → metrics, models, reports, etc.]

Slide 144

Slide 144 text

[diagram: 86mm plate → 2mm wells → images per channel → site (all channels/images) → thumbnails, image-level metrics, site-level features and metrics → well-level features and metrics → plate-level features and metrics → experiment features → metrics, models, reports, etc.]

Slide 145

Slide 145 text

No content

Slide 146

Slide 146 text

The majority of our work consists of external tasks on a job queue…

Slide 147

Slide 147 text

Systems early 2017 •Experiments were processed in batch once an experiment was complete. •Microservices were written in Python and Go; some were AWS Lambdas, while others were containerized and running on Kubernetes. •The main job queue ran on Google Cloud Pub/Sub with an autoscaling feature. We were experimenting with Kubernetes Jobs for other use cases.

Slide 148

Slide 148 text

Job queue desiderata

Slide 149

Slide 149 text

Job queue desiderata •Language agnostic

Slide 150

Slide 150 text

Job queue desiderata •Language agnostic •Container support, ideally on top of Kubernetes

Slide 151

Slide 151 text

Job queue desiderata •Language agnostic •Container support, ideally on top of Kubernetes •Autoscaling

Slide 152

Slide 152 text

Job queue desiderata •Language agnostic •Container support, ideally on top of Kubernetes •Autoscaling •Sane retry and backoff semantics to handle common failure modes

Slide 153

Slide 153 text

We looked and looked but couldn’t find one…

Slide 154

Slide 154 text

So, we built one. We call it taskstore.

Slide 155

Slide 155 text

So, we built one. We call it taskstore.

Slide 156

Slide 156 text

So, we built one. We call it taskstore. server ~2300 LOC

Slide 157

Slide 157 text

So, we built one. We call it taskstore. server ~2300 LOC worker ~800 LOC

Slide 158

Slide 158 text

KUBERNETES: AN OS FOR THE CLUSTER [diagram: a Kubernetes Master (API, Scheduler, Controller Manager) orchestrating many Nodes, each running multiple Pods]

Slide 159

Slide 159 text

PODS • The base schedulable unit of compute and memory

Slide 160

Slide 160 text

CONTROLLER RESOURCES Manage pods with higher level semantics

Slide 161

Slide 161 text

CONTROLLER RESOURCES Manage pods with higher level semantics Replication Controller - runs N copies of a pod across the cluster

Slide 162

Slide 162 text

CONTROLLER RESOURCES Manage pods with higher level semantics Replication Controller - runs N copies of a pod across the cluster Deployment - uses multiple replication controllers to provide rolling deployments

Slide 163

Slide 163 text

CONTROLLER RESOURCES Manage pods with higher level semantics Replication Controller - runs N copies of a pod across the cluster Deployment - uses multiple replication controllers to provide rolling deployments DaemonSet - runs one copy of a pod on each node in the cluster

Slide 164

Slide 164 text

CONTROLLER RESOURCES Manage pods with higher level semantics Replication Controller - runs N copies of a pod across the cluster Deployment - uses multiple replication controllers to provide rolling deployments DaemonSet - runs one copy of a pod on each node in the cluster Job - runs M copies of a pod until it has completed N times
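The Job controller above maps naturally onto batch work like ours. As an illustrative sketch (the image name and command are placeholders, not the speakers' actual containers), a manifest that runs a pod to completion N times with at most M pods in parallel looks like:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-task
spec:
  completions: 4        # N: total successful completions required
  parallelism: 2        # M: pods running at the same time
  backoffLimit: 3       # cap retries of failed pods
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: gcr.io/example/worker:latest   # placeholder image
        command: ["python", "my-program.py", "url-to-data"]
```

Here `completions` is the N and `parallelism` the M from the slide.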

Slide 165

Slide 165 text

KUBERNETES: AN OS FOR THE CLUSTER [diagram: a Kubernetes Master (API, Scheduler, Controller Manager) orchestrating many Nodes, each running multiple Pods]

Slide 166

Slide 166 text

KUBERNETES: AN OS FOR THE CLUSTER [diagram: the same Master/Nodes/Pods cluster, plus an Autoscaler managing two node pools: n1-standard-4 (min: 2, max: 100) and n1-standard-64 (min: 0, max: 300)]

Slide 167

Slide 167 text

Server Client

Slide 168

Slide 168 text

Server Client Group A Group X POST /groups A Group is an ordered queue of tasks to be executed.

Slide 169

Slide 169 text

Server Client Group A Group X POST /groups A Group is an ordered queue of tasks to be executed. Max time before a task is presumed hanging and execution is halted

Slide 170

Slide 170 text

Server Client Group A Group X POST /groups A Group is an ordered queue of tasks to be executed. Autoscaling settings dictate how many workers per task should be spun up. Max time before a task is presumed hanging and execution is halted

Slide 171

Slide 171 text

Server Client Group A Group X POST /groups A Group is an ordered queue of tasks to be executed. Autoscaling settings dictate how many workers per task should be spun up. Max time before a task is presumed hanging and execution is halted Retry settings handle common failure modes. More on this later.
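A hedged sketch of what a `POST /groups` body might look like: the slide names the concepts (ordered queue, max task time, autoscaling, retry settings) but not the exact field names, so every key below is illustrative rather than the real taskstore schema:

```python
# Illustrative POST /groups payload. Field names are assumptions;
# only the concepts (timeout, autoscaling, retries) come from the slide.
group = {
    "name": "Group A",
    "task-timeout-ms": 600000,  # max time before a task is presumed hanging
    "autoscaling": {"workers-per-task": 1, "max-workers": 100},
    "retry": {"max-attempts": 3, "backoff-ms": 5000},
}
```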

Slide 172

Slide 172 text

Server Client Group A Group X

Slide 173

Slide 173 text

Server Client Group A Group X POST /tasks { "cmd": ["my-program.py", "url-to-data", "settings"], "group": "Group A", "labels": {"my-label": "is-good", "this-label": "is-helpful"} }
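Submitting a task is a single JSON POST. A minimal sketch of building that body in Python, using the field names from the slide (`build_task` is a hypothetical helper; the server URL and any auth are deployment-specific and omitted):

```python
import json

def build_task(cmd, group, labels=None):
    """Build the JSON body for POST /tasks, matching the slide's fields."""
    return {
        "cmd": cmd,
        "group": group,
        "labels": labels or {},
    }

task = build_task(
    ["my-program.py", "url-to-data", "settings"],
    "Group A",
    {"my-label": "is-good", "this-label": "is-helpful"},
)
print(json.dumps(task))
# then e.g.: requests.post(f"{SERVER}/tasks", json=task)
```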

Slide 174

Slide 174 text

Server Group A Group X Request new workers

Slide 175

Slide 175 text

Server Group A Group X Request new workers Worker A Worker X

Slide 176

Slide 176 text

Server Group A Group X Worker A Worker X POST /tasks/claim { "groups": ["Group A"], "client-id": "client-123", "duration": 30000 } Request A Worker claims a task to work on for a period of time.

Slide 177

Slide 177 text

Server Group A Group X Worker A Worker X POST /tasks/claim { "groups": ["Group A"], "client-id": "client-123", "duration": 30000 } Request Response { "cmd": ["my-program.py", "url-to-data", "settings"], "group": "Group A", "labels": {"my-label": "is-good", "this-label": "is-helpful"}, "version": 1, "id": "5292d800-cdda-11e8-87d7-9d45611de99b", "status": "available" } A Worker claims a task to work on for a period of time.

Slide 178

Slide 178 text

Server Group A Group X Worker A Worker X POST /tasks/claim Request A Worker claims a task to work on for a period of time. POST /tasks/extend-claim It must extend the lease of the task or else it will become available for another worker to claim it. { "client-id": "client-123", "duration": 30000, "id": "5292d800-cdda-11e8-87d7-9d45611de99b", "version": 1 }

Slide 179

Slide 179 text

Server Group A Group X Worker A Worker X POST /tasks/claim Request Response { … "version": 2, "id": "5292d800-cdda-11e8-87d7-9d45611de99b", } A Worker claims a task to work on for a period of time. POST /tasks/extend-claim It must extend the lease of the task or else it will become available for another worker to claim it. { "client-id": "client-123", "duration": 30000, "id": "5292d800-cdda-11e8-87d7-9d45611de99b", "version": 1 }

Slide 180

Slide 180 text

Server Group A Group X Worker A Worker X POST /tasks/success or POST /tasks/failure A Worker reports back when a task is finished executing. { "client-id": "client-123", "elapsed-time": 300232000, "id": "5292d800-cdda-11e8-87d7-9d45611de99b", "version": 43 }
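The whole claim / extend-claim / success round trip can be sketched with an in-memory stand-in for the server. This is a hypothetical simplification: the real taskstore is an HTTP service, and this sketch ignores failure reporting, ordering, and autoscaling. It does show the lease-plus-version scheme the slides describe: each mutation bumps `version`, and a lapsed lease makes the task claimable again.

```python
import time
import uuid

class FakeTaskstore:
    """In-memory sketch of the taskstore claim protocol (not the real server)."""

    def __init__(self):
        self.tasks = []

    def add(self, cmd, group):
        self.tasks.append({"id": str(uuid.uuid4()), "cmd": cmd,
                           "group": group, "version": 1,
                           "status": "available", "lease_expires": 0.0})

    def claim(self, groups, client_id, duration_ms):
        # A task is claimable if unclaimed, or if its lease has lapsed.
        now = time.time()
        for t in self.tasks:
            claimable = (t["status"] == "available" or
                         (t["status"] == "claimed" and t["lease_expires"] < now))
            if t["group"] in groups and claimable:
                t.update(status="claimed", claimant=client_id,
                         lease_expires=now + duration_ms / 1000.0,
                         version=t["version"] + 1)
                return t
        return None

    def extend_claim(self, task_id, client_id, duration_ms, version):
        # Only the current claimant, with the current version, may extend.
        for t in self.tasks:
            if (t["id"] == task_id and t.get("claimant") == client_id
                    and t["version"] == version):
                t["lease_expires"] = time.time() + duration_ms / 1000.0
                t["version"] += 1
                return t
        return None

    def success(self, task_id, client_id, version):
        for t in self.tasks:
            if (t["id"] == task_id and t.get("claimant") == client_id
                    and t["version"] == version):
                t["status"] = "succeeded"
                return t
        return None

store = FakeTaskstore()
store.add(["my-program.py", "url-to-data", "settings"], "Group A")
task = store.claim(["Group A"], "client-123", 30000)
task = store.extend_claim(task["id"], "client-123", 30000, task["version"])
done = store.success(task["id"], "client-123", task["version"])
print(done["status"])  # → succeeded
```

A worker that stopped extending its claim would simply let `lease_expires` lapse, and the task would be handed to the next claimant.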

Slide 181

Slide 181 text

TASK LIFECYCLE

Slide 182

Slide 182 text

No content

Slide 183

Slide 183 text

No content

Slide 184

Slide 184 text

Lessons learned…

Slide 185

Slide 185 text

No content

Slide 186

Slide 186 text

•The public cloud tide is rising

Slide 187

Slide 187 text

•The public cloud tide is rising •Crushing storage costs

Slide 188

Slide 188 text

•The public cloud tide is rising •Crushing storage costs •Faster, better, and cheaper cloud databases (e.g. BigQuery)

Slide 189

Slide 189 text

•The public cloud tide is rising •Crushing storage costs •Faster, better, and cheaper cloud databases (e.g. BigQuery) •Python and R data science running on containers and Kubernetes

Slide 190

Slide 190 text

•The public cloud tide is rising •Crushing storage costs •Faster, better, and cheaper cloud databases (e.g. BigQuery) •Python and R data science running on containers and Kubernetes As recently as this week, the big Hadoop vendors’ advice has been “translate Python/R code into Scala/Java,” which sounds like King Hadoop commanding the Python/R machine learning tide to go back out again. Containers and Kubernetes work just as well with Python and R as they do with Java and Scala, and provide a far more flexible and powerful framework for distributed computation. And it’s where software development teams are heading anyway – they’re not looking to distribute new microservice applications on top of Hadoop/Spark. Too complicated and limiting.

Slide 191

Slide 191 text

Come help us decode biology!