Apache Kafka
JVM Meetup #5 - Apache Kafka at Blibli.com
Eko Kurniawan Khannedy
August 30, 2017
Transcript
APACHE KAFKA EKO KURNIAWAN KHANNEDY
APACHE KAFKA EKO KURNIAWAN KHANNEDY ▸ Principal Software Development Engineer at Blibli.com ▸ Part of R&D Team at Blibli.com ▸ [email protected]
APACHE KAFKA AGENDA ▸ Kafka Intro ▸ Kafka Internals ▸
Installing Kafka ▸ Kafka Producer ▸ Kafka Consumer ▸ Kafka in blibli.com ▸ Demo ▸ Conclusion
KAFKA INTRO APACHE KAFKA
APACHE KAFKA BEFORE PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK
PAYMENT … ERP FINANCE …
APACHE KAFKA PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK PAYMENT
… ERP FINANCE … MESSAGING SYSTEM / MESSAGE BROKER
APACHE KAFKA WHAT IS KAFKA ▸ Apache Kafka is a publish/subscribe messaging system, or more recently a "distributed streaming platform". ▸ Open source project under the Apache Software Foundation.
APACHE KAFKA KAFKA HISTORY ▸ Kafka was born to solve the data pipeline problem at LinkedIn. ▸ The development team at LinkedIn was led by Jay Kreps, now CEO of Confluent. ▸ Kafka was released as an open source project on GitHub in late 2010, and joined the Apache Software Foundation in 2011.
KAFKA INTERNALS APACHE KAFKA
APACHE KAFKA BROKER TOPIC A PARTITION 0 TOPIC A PARTITION
1 KAFKA BROKER
APACHE KAFKA CLUSTER TOPIC A PARTITION 0 TOPIC A PARTITION
1 (LEADER) KAFKA BROKER 1 TOPIC A PARTITION 0 TOPIC A PARTITION 1 (LEADER) KAFKA BROKER 2
APACHE KAFKA TOPICS ▸ Messages in Kafka are categorized into topics. ▸ The closest analogy for a topic is a database table, or a folder in a filesystem.
APACHE KAFKA PARTITIONS
APACHE KAFKA REPLICATION FACTOR TOPIC A PARTITION 0 TOPIC A
PARTITION 1 KAFKA BROKER 1 TOPIC A PARTITION 0 KAFKA BROKER 2 TOPIC A PARTITION 1 KAFKA BROKER 3 TOPIC A PARTITION 0 TOPIC A PARTITION 1 KAFKA BROKER 4
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA CONSUMER GROUP (2)
APACHE KAFKA RETENTION POLICY ▸ A key feature of Apache Kafka is retention: the durable storage of messages for some period of time. ▸ We can set the retention policy per topic, by time or by size.
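For illustration (not from the slides): per-topic retention is controlled by the retention.ms and retention.bytes topic configs. A minimal sketch, assuming Kafka 0.11+ and the Java AdminClient (classes from org.apache.kafka.clients.admin and org.apache.kafka.common.config), with an illustrative topic name and exception handling omitted:

    Properties adminProps = new Properties();
    adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
    AdminClient admin = AdminClient.create(adminProps);

    // Keep messages for 7 days OR until a partition reaches 1 GB, whichever limit is hit first
    Map<String, String> retention = new HashMap<>();
    retention.put(TopicConfig.RETENTION_MS_CONFIG, "604800000");      // 7 days in milliseconds
    retention.put(TopicConfig.RETENTION_BYTES_CONFIG, "1073741824");  // 1 GB per partition

    NewTopic topic = new NewTopic("topic_name", 1, (short) 1).configs(retention);
    admin.createTopics(Collections.singleton(topic)).all().get();     // blocks until created
    admin.close();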
APACHE KAFKA MIRROR MAKER
INSTALLING KAFKA APACHE KAFKA
APACHE KAFKA JAVA ▸ Kafka runs on Java 8.
APACHE KAFKA ZOOKEEPER KAFKA BROKER PRODUCER CONSUMER ZOOKEEPER Metadata
APACHE KAFKA KAFKA BROKER
# Minimum broker configuration
# broker.id must be unique within the cluster
broker.id=0
zookeeper.connect=localhost:2181
log.dirs=data/kafka-logs
APACHE KAFKA CREATE / UPDATE TOPIC
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
kafka-topics.sh --alter --zookeeper localhost:2181 --topic topic_name --partitions 2
(--alter can only add partitions; changing the replication factor of an existing topic requires a partition reassignment instead.)
KAFKA PRODUCER APACHE KAFKA
APACHE KAFKA PRODUCER RECORD PRODUCER RECORD TOPIC PARTITION KEY VALUE
APACHE KAFKA SERIALIZER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
APACHE KAFKA PARTITIONER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
PARTITIONER Send to Broker
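As a sketch only (not from the deck), a custom partitioner implements the org.apache.kafka.clients.producer.Partitioner interface; the "VIP" routing rule and class name below are made up for illustration:

    public class VipPartitioner implements Partitioner {

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            // Made-up routing rule: every "VIP" key lands on partition 0,
            // all other (non-null) keys are hashed over the remaining partitions.
            if ("VIP".equals(key) || numPartitions == 1) {
                return 0;
            }
            return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
        }

        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public void close() { }
    }

It is plugged in with props.put("partitioner.class", VipPartitioner.class.getName()); without it, the default partitioner hashes the record key to pick a partition.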
APACHE KAFKA KAFKA PRODUCER Properties props = new Properties(); props.put("bootstrap.servers",
"broker1:9092,broker2:9092"); props.put("key.serializer", “org.apache.kafka.common.serialization.StringSerializer"); props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); producer = new KafkaProducer<String, String>(kafkaProps);
APACHE KAFKA SEND MESSAGE
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
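producer.send() is asynchronous and buffers the record; a minimal sketch (not from the slides) of passing a callback so failed sends are noticed, assuming topicName, key and value are defined as above:

    producer.send(record, (metadata, exception) -> {
        if (exception != null) {
            // The send failed after retries; log or re-queue the record
            exception.printStackTrace();
        } else {
            System.out.printf("Sent to %s-%d at offset %d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    });

    // Flush and close before shutting down so buffered records are not lost
    producer.flush();
    producer.close();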
KAFKA CONSUMER APACHE KAFKA
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA PARTITION REBALANCE
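A minimal sketch (not on the slides) of reacting to a rebalance with a ConsumerRebalanceListener, using the KafkaConsumer configured on the following slides; committing in onPartitionsRevoked reduces reprocessing after the new assignment:

    consumer.subscribe(Collections.singletonList("topicName"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Called before partitions are taken away: commit processed offsets here
            consumer.commitSync();
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Called after the new assignment: reset any per-partition state if needed
            System.out.println("Assigned: " + partitions);
        }
    });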
APACHE KAFKA CONSUMER RECORD & DESERIALIZER CONSUMER RECORD TOPIC PARTITION
KEY VALUE DESERIALIZER From Broker
APACHE KAFKA KAFKA CONSUMER Properties props = new Properties(); props.put("bootstrap.servers",
"broker1:9092,broker2:9092"); props.put("group.id", "GroupName"); props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); consumer = new KafkaConsumer<String, String>(props);
APACHE KAFKA GET MESSAGES
consumer.subscribe(Collections.singletonList("topicName"));
long timeout = 1000L;
ConsumerRecords<String, String> records = consumer.poll(timeout);
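A single poll() only returns one batch; a minimal sketch (not from the slides) of the usual poll loop, iterating the records and closing the consumer on shutdown:

    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> record : records) {
                // Each record carries its topic, partition, offset, key and value
                System.out.printf("%s-%d offset=%d key=%s value=%s%n",
                        record.topic(), record.partition(), record.offset(),
                        record.key(), record.value());
            }
        }
    } finally {
        consumer.close();
    }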
KAFKA IN BLIBLI APACHE KAFKA
APACHE KAFKA API GATEWAY EVENT API GATEWAY MEMBER API GATEWAY
COMMON API GATEWAY … KAFKA ANALYTICS … …
APACHE KAFKA CURRENT PRODUCT (CODENAME X) X MEMBER X CART
X AUTH X WISHLIST API GATEWAY X YYYY X XXX X ORDER X PRODUCT
APACHE KAFKA NEW PRODUCT (CODENAME VERONICA) VERONICA MEMBER VERONICA CORE
VERONICA MERCHANT KAFKA VERONICA NOTIFICATION API GATEWAY
DEMO
CONCLUSION APACHE KAFKA
APACHE KAFKA WHY KAFKA? ▸ Multiple Consumers ▸ Flexible Scalability ▸ Flexible Durability ▸ High Performance ▸ Multi-Datacenter
WE ARE HIRING!
[email protected]
APACHE KAFKA
APACHE KAFKA REFERENCES ▸ http://kafka.apache.org/ ▸ https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines ▸ https://engineering.linkedin.com/kafka/running-kafka-scale