Slide 1


Slide 2


Slide 3

Who am I?
Developer Advocate / Solutions Architect
@gamussa in internetz
Hey you, yes, you, go follow me on Twitter

Slide 4

Kafka & Confluent

Slide 5

We are hiring! https://www.confluent.io/careers/

Slide 6

A company is built on DATA FLOWS, but all we have is DATA STORES

Slide 7


Slide 8


Slide 9


Slide 10

Origins in Stream Processing
(diagram of the stack:)
Kafka: high-throughput messaging
Kafka Streams / KSQL: continuous computation, API-based clustering
Serving layer (Cassandra, KV storage, cache, etc.)

Slide 11

Streaming Platform:
1. Pub / Sub
2. Store
3. Process

Slide 12

Kafka is a Streaming Platform
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 13

What exactly is Stream Processing?
(diagram: authorization_attempts → possible_fraud)

Slide 14

What exactly is Stream Processing?

CREATE STREAM possible_fraud AS
    SELECT card_number, count(*)
    FROM authorization_attempts
    WINDOW TUMBLING (SIZE 5 MINUTE)
    GROUP BY card_number
    HAVING count(*) > 3;

(authorization_attempts → possible_fraud)


Slide 20

Streaming is the toolset for dealing with events as they move!

Slide 21

What is a Streaming Platform?
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 22

Kafka’s Distributed Log
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 23

The log is a type of durable messaging system
Similar to a traditional messaging system (ActiveMQ, RabbitMQ, etc.), but with:
(a) far better scalability
(b) built-in fault tolerance / HA
(c) storage
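
To make "durable messaging" concrete, here is a minimal sketch of appending events to the log with the plain Java producer client; the topic name, key, and localhost broker address are illustrative assumptions.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LogAppendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Every send is an append to the end of a partition's log;
            // the broker persists it durably before acknowledging.
            producer.send(new ProducerRecord<>("authorization_attempts", "card-42", "{\"amount\": 99.95}"));
        }
    }
}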

Slide 24

The log is a simple idea
Messages are added at the end of the log (old → new)

Slide 25

Consumers have a position all of their own
(diagram: Sally, George, and Fred each scan the log from their own position, old → new)

Slide 26

Only Sequential Access
Read to offset & scan (old → new)
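
A minimal sketch of that access pattern with the Java consumer client: each consumer group tracks its own offset, and poll() only ever reads forward in offset order. The group id "sally" and the topic name are illustrative assumptions.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SequentialScanExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "sally");                   // each group keeps its own position
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("authorization_attempts"));
            while (true) {
                // poll() returns the next batch in offset order: a sequential scan, no random access
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
                }
            }
        }
    }
}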

Slide 27

Scaling Out

Slide 28

Shard data to get scalability
Messages are sent to different partitions; partitions live on different machines
(diagram: Producers 1-3 writing into a cluster of machines)
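
A sketch of how sharding surfaces in the producer API, assuming the default partitioner: records with the same key always hash to the same partition, so related events stay together and in order. The topic and keys are illustrative assumptions.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ShardingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The default partitioner hashes the record key, so every event for the
            // same card lands on the same partition (and thus the same machine).
            for (String card : new String[] {"card-1", "card-2", "card-3"}) {
                producer.send(new ProducerRecord<>("authorization_attempts", card, "auth-attempt"));
            }
        }
    }
}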

Slide 29

Replicate to get fault tolerance
(diagram: the leader on Machine A replicates each msg to Machine B)

Slide 30

Partition Leadership and Replication
(diagram: Topic1's four partitions spread across Brokers 1-4; each partition has one leader and two follower replicas on different brokers)
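
As a sketch of how that layout is declared, here is topic creation with the Java AdminClient; the partition count and replication factor mirror the diagram, and the topic name and broker address are assumptions.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed cluster address
        try (AdminClient admin = AdminClient.create(props)) {
            // 4 partitions for parallelism, 3 replicas per partition for fault tolerance;
            // Kafka elects one leader per partition and spreads replicas across brokers.
            NewTopic topic = new NewTopic("Topic1", 4, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}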

Slide 31

Replication provides resiliency
A ‘replica’ takes over on machine failure

Slide 32

Partition Leadership and Replication: node failure
(diagram: the same layout; when a broker fails, a follower replica takes over as leader for its partitions)

Slide 33

Linearly Scalable Architecture
Single topic:
- many producer machines
- many consumer machines
- many broker machines
No bottleneck!!

Slide 34

Worldwide, localized views
(diagram: NY, London, and Tokyo clusters connected by Replicator)

Slide 35

The Connect API
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 36

Ingest / egest into any data source: Kafka Connect

Slide 37

Ingest / egest data from / to data sources:
Amazon S3, Elasticsearch, HDFS, JDBC, Couchbase, Cassandra, Oracle, SAP, Vertica, Blockchain, JMX, Kinesis, MongoDB, MQTT, NATS, Postgres, RabbitMQ, Redis, Twitter, Bintray, DynamoDB, FTP, GitHub, BigQuery, Google Pub/Sub, RethinkDB, Salesforce, Solr, Splunk
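
As a sketch of what one of these connectors looks like in practice, here is a minimal JDBC source configuration in the properties format accepted by Kafka Connect's standalone mode; the connection URL, column name, and topic prefix are illustrative assumptions.

name=jdbc-orders-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
# assumed database; any JDBC URL works here
connection.url=jdbc:postgresql://localhost:5432/shop
# stream newly inserted rows by watching a growing id column
mode=incrementing
incrementing.column.name=id
# each table becomes a topic named postgres-<table>
topic.prefix=postgres-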

Slide 38

Kafka Streams and KSQL
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 39

Engine for Continuous Computation

SELECT card_number, count(*)
    FROM authorization_attempts
    WINDOW TUMBLING (SIZE 5 MINUTE)
    GROUP BY card_number
    HAVING count(*) > 3;

Slide 40

But it’s just an API

public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("caterpillars")
           .map(StreamsApp::coolTransformation)
           .to("butterflies");
    new KafkaStreams(builder.build(), props()).start();
}
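
The slide leaves props() undefined; here is a minimal sketch of what it could return, assuming only the two settings every Kafka Streams app must supply plus an assumed local broker address and app id.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsApp {
    static Properties props() {
        Properties props = new Properties();
        // The two mandatory Streams settings: an application id (names the app's
        // consumer group and state stores) and the brokers to bootstrap from.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "butterfly-app"); // assumed name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        return props;
    }
}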

Slide 41

Join Streams and Tables
(diagram: a Kafka topic feeds a Stream, a compacted topic feeds a Table; Kafka Streams / KSQL joins them)

Slide 42

Windows / Retention: Handle Late Events
In an asynchronous world, will the payment come first, or the order?
(diagram: Orders and Payments topics in Kafka, joined by key with a 5-minute buffer, feeding an Emailer)

Slide 43

Windows / Retention: Handle Late Events
Join by key (diagram: Orders and Payments topics in Kafka, 5-minute buffer, Emailer):

KStream orders = builder.stream("Orders");
KStream payments = builder.stream("Payments");

orders.join(payments, KeyValue::new, JoinWindows.of(1 * MIN))
      .peek((key, pair) -> emailer.sendMail(pair));

Slide 44

A KTable is just a stream with infinite retention
(diagram: Orders, Payments, and Customers topics in Kafka, joined and delivered to the Emailer)

Slide 45

A KTable is a stream with infinite retention
Materialize a table in two lines of code!

KStream orders = builder.stream("Orders");
KStream payments = builder.stream("Payments");
KTable customers = builder.table("Customers");

orders.join(payments, EmailTuple::new, JoinWindows.of(1 * MIN))
      .join(customers, (tuple, cust) -> tuple.setCust(cust))
      .peek((key, tuple) -> emailer.sendMail(tuple));

Slide 46

Kafka is a complete Streaming Platform
(diagram: the Log, Connectors, Producer / Consumer, Streaming Engine)

Slide 47

Find your local Meetup group: https://cnfl.io/kafka-meetups
Join us in Slack: http://cnfl.io/slack
Grab stream processing books: https://cnfl.io/book-bundle

Slide 48

www.kafka-summit.org
Promo code: Gamov20

Slide 49

https://www.confluent.io/download/

Slide 50

One more thing…

Slide 51


Slide 52


Slide 53

A Major New Paradigm

Slide 54

Thanks!
@gamussa
[email protected]
We are hiring! https://www.confluent.io/careers/