Apache Kafka
JVM Meetup #5 - Apache Kafka at Blibli.com
Eko Kurniawan Khannedy
August 30, 2017
Transcript
APACHE KAFKA EKO KURNIAWAN KHANNEDY
APACHE KAFKA EKO KURNIAWAN KHANNEDY ▸ Principal Software Development Engineer at Blibli.com ▸ Part of the R&D Team at Blibli.com ▸ [email protected]
APACHE KAFKA AGENDA ▸ Kafka Intro ▸ Kafka Internals ▸ Installing Kafka ▸ Kafka Producer ▸ Kafka Consumer ▸ Kafka in Blibli.com ▸ Demo ▸ Conclusion
KAFKA INTRO APACHE KAFKA
APACHE KAFKA BEFORE PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK
PAYMENT … ERP FINANCE …
APACHE KAFKA PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK PAYMENT
… ERP FINANCE … MESSAGING SYSTEM / MESSAGE BROKER
APACHE KAFKA WHAT IS KAFKA ▸ Apache Kafka is a publish/subscribe messaging system, or more recently a “distributed streaming platform”. ▸ Open-source project under the Apache Software Foundation.
APACHE KAFKA KAFKA HISTORY ▸ Kafka was born to solve the data pipeline problem at LinkedIn. ▸ The development team at LinkedIn was led by Jay Kreps, now CEO of Confluent. ▸ Kafka was released as an open-source project on GitHub in late 2010, and joined the Apache Software Foundation in 2011.
KAFKA INTERNALS APACHE KAFKA
APACHE KAFKA BROKER TOPIC A PARTITION 0 TOPIC A PARTITION
1 KAFKA BROKER
APACHE KAFKA CLUSTER TOPIC A PARTITION 0 TOPIC A PARTITION
1 (LEADER) KAFKA BROKER 1 TOPIC A PARTITION 0 TOPIC A PARTITION 1 (LEADER) KAFKA BROKER 2
APACHE KAFKA TOPICS ▸ Messages in Kafka are categorized into Topics. ▸ The closest analogy for a topic is a database table, or a folder in a filesystem.
APACHE KAFKA PARTITIONS
APACHE KAFKA REPLICATION FACTOR TOPIC A PARTITION 0 TOPIC A
PARTITION 1 KAFKA BROKER 1 TOPIC A PARTITION 0 KAFKA BROKER 2 TOPIC A PARTITION 1 KAFKA BROKER 3 TOPIC A PARTITION 0 TOPIC A PARTITION 1 KAFKA BROKER 4
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA CONSUMER GROUP (2)
APACHE KAFKA RETENTION POLICY ▸ A key feature of Apache Kafka is retention, the durable storage of messages for some period of time. ▸ We can set the retention policy per topic, by time or by size.
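Per-topic retention is driven by the topic-level configs retention.ms (time-based) and retention.bytes (size-based, per partition). As an illustration only, not taken from the deck, here is a minimal sketch that sets both using the Java AdminClient introduced in Kafka 0.11; the broker address and topic name are assumptions:
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // target an existing topic; "topic_name" is a placeholder
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "topic_name");
            Config retention = new Config(Arrays.asList(
                    new ConfigEntry("retention.ms", "604800000"),       // keep messages ~7 days
                    new ConfigEntry("retention.bytes", "1073741824"))); // or ~1 GiB per partition
            admin.alterConfigs(Collections.singletonMap(topic, retention)).all().get();
        }
    }
}
Whichever limit is reached first triggers deletion; setting either value to -1 disables that limit.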
APACHE KAFKA MIRROR MAKER
INSTALLING KAFKA APACHE KAFKA
APACHE KAFKA JAVA ▸ Kafka runs on Java 8.
APACHE KAFKA ZOOKEEPER KAFKA BROKER PRODUCER CONSUMER ZOOKEEPER Metadata
APACHE KAFKA KAFKA BROKER
# Minimum Broker Configuration
broker.id=0                       # must be unique in the cluster
zookeeper.connect=localhost:2181
log.dirs=data/kafka-logs
APACHE KAFKA CREATE / UPDATE TOPIC
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic_name --partitions 2 --replication-factor 2
KAFKA PRODUCER APACHE KAFKA
APACHE KAFKA PRODUCER RECORD PRODUCER RECORD TOPIC PARTITION KEY VALUE
APACHE KAFKA SERIALIZER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
APACHE KAFKA PARTITIONER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
PARTITIONER Send to Broker
APACHE KAFKA KAFKA PRODUCER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
APACHE KAFKA SEND MESSAGE
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
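As written, send() is fire-and-forget: delivery errors only surface if the returned Future is checked. A minimal sketch, reusing the producer and record from the snippets above, of an asynchronous send with a callback (Java 8 lambda):
// check the delivery result without blocking
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace();                 // the send failed
    } else {
        System.out.printf("written to %s-%d at offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});
producer.flush();  // push any buffered records before shutting down
producer.close();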
KAFKA CONSUMER APACHE KAFKA
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA PARTITION REBALANCE
APACHE KAFKA CONSUMER RECORD & DESERIALIZER CONSUMER RECORD TOPIC PARTITION
KEY VALUE DESERIALIZER From Broker
APACHE KAFKA KAFKA CONSUMER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "GroupName");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
APACHE KAFKA GET MESSAGES
consumer.subscribe(Collections.singletonList("topicName"));
Long timeout = 1000L;
ConsumerRecords<String, String> records = consumer.poll(timeout);
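In practice poll() is called in a loop rather than once. A minimal sketch continuing the consumer above (ConsumerRecord comes from org.apache.kafka.clients.consumer; shutdown and offset-commit handling are left at the defaults):
// basic poll loop over the subscribed topic
while (true) {
    ConsumerRecords<String, String> batch = consumer.poll(timeout);
    for (ConsumerRecord<String, String> record : batch) {
        System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                record.partition(), record.offset(), record.key(), record.value());
    }
}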
KAFKA IN BLIBLI APACHE KAFKA
APACHE KAFKA API GATEWAY EVENT API GATEWAY MEMBER API GATEWAY
COMMON API GATEWAY … KAFKA ANALYTICS … …
APACHE KAFKA CURRENT PRODUCT (CODENAME X) X MEMBER X CART
X AUTH X WISHLIST API GATEWAY X YYYY X XXX X ORDER X PRODUCT
APACHE KAFKA NEW PRODUCT (CODENAME VERONICA) VERONICA MEMBER VERONICA CORE
VERONICA MERCHANT KAFKA VERONICA NOTIFICATION API GATEWAY
DEMO
CONCLUSION APACHE KAFKA
APACHE KAFKA WHY KAFKA? ▸ Multiple Consumers ▸ Flexible Scalability ▸ Flexible Durability ▸ High Performance ▸ Multi-Datacenter
WE ARE HIRING!
[email protected]
APACHE KAFKA
APACHE KAFKA REFERENCES ▸ http://kafka.apache.org/ ▸ https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines ▸ https://engineering.linkedin.com/kafka/running-kafka-scale