Apache Kafka
JVM Meetup #5 - Apache Kafka at Blibli.com
Eko Kurniawan Khannedy
August 30, 2017
Transcript
APACHE KAFKA EKO KURNIAWAN KHANNEDY
APACHE KAFKA EKO KURNIAWAN KHANNEDY ▸ Principal Software Development Engineer at Blibli.com ▸ Part of the R&D Team at Blibli.com ▸ [email protected]
APACHE KAFKA AGENDA ▸ Kafka Intro ▸ Kafka Internals ▸
Installing Kafka ▸ Kafka Producer ▸ Kafka Consumer ▸ Kafka in blibli.com ▸ Demo ▸ Conclusion
KAFKA INTRO APACHE KAFKA
APACHE KAFKA BEFORE PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK
PAYMENT … ERP FINANCE …
APACHE KAFKA PUBLISH / SUBSCRIBE MESSAGING MEMBER ORDER RISK PAYMENT
… ERP FINANCE … MESSAGING SYSTEM / MESSAGE BROKER
APACHE KAFKA WHAT IS KAFKA ▸ Apache Kafka is a publish/subscribe messaging system, or more recently a “distributed streaming platform” ▸ An open-source project under the Apache Software Foundation.
APACHE KAFKA KAFKA HISTORY ▸ Kafka was born to solve the data pipeline problem at LinkedIn. ▸ The development team at LinkedIn was led by Jay Kreps, now CEO of Confluent. ▸ Kafka was released as an open-source project on GitHub in late 2010, and joined the Apache Software Foundation in 2011.
KAFKA INTERNALS APACHE KAFKA
APACHE KAFKA BROKER TOPIC A PARTITION 0 TOPIC A PARTITION
1 KAFKA BROKER
APACHE KAFKA CLUSTER TOPIC A PARTITION 0 TOPIC A PARTITION
1 (LEADER) KAFKA BROKER 1 TOPIC A PARTITION 0 TOPIC A PARTITION 1 (LEADER) KAFKA BROKER 2
APACHE KAFKA TOPICS ▸ Messages in Kafka are categorized into topics. ▸ The closest analogy for a topic is a database table, or a folder in a filesystem.
APACHE KAFKA PARTITIONS
APACHE KAFKA REPLICATION FACTOR TOPIC A PARTITION 0 TOPIC A
PARTITION 1 KAFKA BROKER 1 TOPIC A PARTITION 0 KAFKA BROKER 2 TOPIC A PARTITION 1 KAFKA BROKER 3 TOPIC A PARTITION 0 TOPIC A PARTITION 1 KAFKA BROKER 4
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA CONSUMER GROUP (2)
APACHE KAFKA RETENTION POLICY ▸ A key feature of Apache Kafka is retention, the durable storage of messages for some period of time. ▸ We can set the retention policy per topic, by time or by size (sketched below).
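Not part of the deck, but to make the bullet concrete: per-topic retention is controlled by the retention.ms and retention.bytes topic configs. A minimal sketch using the Java AdminClient (available since Kafka 0.11); the broker address, topic name, and limits are placeholder assumptions.

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "topic_name");
            // Keep messages for 7 days or until roughly 1 GB per partition, whichever is hit first.
            Config retention = new Config(Arrays.asList(
                    new ConfigEntry("retention.ms", "604800000"),
                    new ConfigEntry("retention.bytes", "1073741824")));
            admin.alterConfigs(Collections.singletonMap(topic, retention)).all().get();
        }
    }
}

Caveat: this (non-incremental) alterConfigs call replaces the topic's whole override set, so any other per-topic overrides need to be re-supplied in the same call; kafka-configs.sh offers the same per-topic overrides from the command line.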
APACHE KAFKA MIRROR MAKER
INSTALLING KAFKA APACHE KAFKA
APACHE KAFKA JAVA ▸ Kafka runs on Java 8.
APACHE KAFKA ZOOKEEPER KAFKA BROKER PRODUCER CONSUMER ZOOKEEPER Metadata
APACHE KAFKA KAFKA BROKER
# Minimum broker configuration (server.properties)
# broker.id must be unique in the cluster
broker.id=0
zookeeper.connect=localhost:2181
log.dirs=data/kafka-logs
APACHE KAFKA CREATE / UPDATE TOPIC
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
kafka-topics.sh --alter --zookeeper localhost:2181 --topic topic_name --partitions 2
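Note that kafka-topics.sh --alter can only grow the partition count; changing the replication factor of an existing topic goes through kafka-reassign-partitions.sh instead. Topics can also be created programmatically; a hedged sketch (not from the deck) with the Java AdminClient, where the broker address and topic name are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Same effect as the --create command above: 1 partition, replication factor 1.
            NewTopic topic = new NewTopic("topic_name", 1, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}

Unlike the CLI shown above, the AdminClient talks to the brokers directly rather than to ZooKeeper.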
KAFKA PRODUCER APACHE KAFKA
APACHE KAFKA PRODUCER RECORD PRODUCER RECORD TOPIC PARTITION KEY VALUE
APACHE KAFKA SERIALIZER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
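The deck later uses the built-in StringSerializer; purely as an illustration of the diagram above (not from the deck), a custom Serializer for a hypothetical Order value could look like this, with the encoding format being an arbitrary assumption:

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical value type; any POJO would do.
class Order {
    String id;
    long amount;
    Order(String id, long amount) { this.id = id; this.amount = amount; }
}

// Sketch of a custom serializer: turn an Order into bytes before it is sent to the broker.
class OrderSerializer implements Serializer<Order> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { /* nothing to configure */ }

    @Override
    public byte[] serialize(String topic, Order order) {
        if (order == null) return null;
        // Toy encoding: "id,amount" as UTF-8; a real serializer would use JSON, Avro, etc.
        return (order.id + "," + order.amount).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public void close() { /* nothing to release */ }
}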
APACHE KAFKA PARTITIONER PRODUCER RECORD TOPIC PARTITION KEY VALUE SERIALIZER
PARTITIONER Send to Broker
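The deck does not show partitioner code; by default the producer hashes the record key to choose a partition. A hedged sketch of a custom Partitioner, where the class name and routing rule are illustrative only:

import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Sketch: pick a partition from a stable hash of the key bytes
// (falling back to the value bytes when the key is null).
class SimplePartitioner implements Partitioner {
    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        byte[] bytes = (keyBytes != null) ? keyBytes : valueBytes;
        // Mask the sign bit so the modulo is always a valid partition index.
        return (Arrays.hashCode(bytes) & 0x7fffffff) % numPartitions;
    }

    @Override
    public void close() { }
}

It would be enabled with props.put("partitioner.class", SimplePartitioner.class.getName()) in the producer configuration shown on the next slide.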
APACHE KAFKA KAFKA PRODUCER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
APACHE KAFKA SEND MESSAGE
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
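send(record) above is fire-and-forget. A common variant, not shown in the deck but sketched here as a continuation of the same producer and record, passes a callback so delivery failures are not silently dropped:

// Sketch: asynchronous send with a delivery callback
// (Callback is a functional interface, so a Java 8 lambda works).
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // delivery failed after retries
    } else {
        System.out.printf("sent to %s-%d at offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});
// producer.send(record).get() would instead block until the broker acknowledges the record.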
KAFKA CONSUMER APACHE KAFKA
APACHE KAFKA CONSUMER GROUP
APACHE KAFKA PARTITION REBALANCE
APACHE KAFKA CONSUMER RECORD & DESERIALIZER CONSUMER RECORD TOPIC PARTITION
KEY VALUE DESERIALIZER From Broker
APACHE KAFKA KAFKA CONSUMER
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "GroupName");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
APACHE KAFKA GET MESSAGES
consumer.subscribe(Collections.singletonList("topicName"));
long timeout = 1000L;
ConsumerRecords<String, String> records = consumer.poll(timeout);
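A single poll only returns what is available within the timeout, so in practice the consumer runs a loop. A sketch (not from the deck) continuing from the consumer configured above; with the default enable.auto.commit=true, offsets are committed in the background:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Sketch: the usual poll loop.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000L);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("topic=%s partition=%d offset=%d key=%s value=%s%n",
                record.topic(), record.partition(), record.offset(),
                record.key(), record.value());
    }
}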
KAFKA IN BLIBLI APACHE KAFKA
APACHE KAFKA API GATEWAY EVENT API GATEWAY MEMBER API GATEWAY
COMMON API GATEWAY … KAFKA ANALYTICS … …
APACHE KAFKA CURRENT PRODUCT (CODENAME X) X MEMBER X CART
X AUTH X WISHLIST API GATEWAY X YYYY X XXX X ORDER X PRODUCT
APACHE KAFKA NEW PRODUCT (CODENAME VERONICA) VERONICA MEMBER VERONICA CORE
VERONICA MERCHANT KAFKA VERONICA NOTIFICATION API GATEWAY
DEMO
CONCLUSION APACHE KAFKA
APACHE KAFKA WHY KAFKA? ▸ Multiple Consumers ▸ Flexible Scalability ▸ Flexible Durability ▸ High Performance ▸ Multi-Datacenter
WE ARE HIRING!
[email protected]
APACHE KAFKA
APACHE KAFKA REFERENCES ▸ http://kafka.apache.org/ ▸ https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines ▸ https://engineering.linkedin.com/kafka/running-kafka-scale