Kafka jako strumień i co się z tym wiąże ("Kafka as a stream, and what comes with it")
Wojciech Marusarz
October 24, 2019
Transcript
Kafka jako strumień i co się z tym wiąże
Wojciech Marusarz // nexocode
Programmers...
Modifying HTTP requests: { "totalTime": "1855", "result": "16" }
Encoding values: { "totalTime": "1855", "result": "16" } → { "totalTime": "gXe", "result": "bM" }
Obfuscation
var _0xaed1 = ["\x6C\x65\x6E\x67\x74\x68","\x65\x6E\x75\x6D\x65\x72\x61\x62\x6C\x65","\x 63\x6F\x6E\x66\x69\x67\x75\x72\x61\x62\x6C\x65","\x76\x61\x6C\x75\x65","\ x77\x72\x69\x74\x61\x62\x6C\x65","\x6B\x65\x79","\x64\x65\x66\x69\x6E\x65 \x50\x72\x6F\x70\x65\x72\x74\x79", ...
Stubborn programmers...
Sending the game model: { "totalTime": "gXe", "result": "bM", "levels": [ { "rectangle": "dpPTVXT3XI7p" }, { "rectangle": "4ajhYLh23FdQ" } ] }
Validating the game model on the server: { "totalTime": "750", "result": "2", "levels": [ { "rectangle": "[10,10,80,80]" }, { "rectangle": "[0,0,100,100]" } ] }
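The server-side validation above can be sketched as follows. This is a minimal illustration, not the talk's actual implementation: the `Rectangle` type, the 100x100 board bounds, and the containment rule are assumptions inferred from the "[x,y,width,height]" rectangles in the JSON.

```java
import java.util.List;

// Hypothetical sketch of server-side game-model validation: every level's
// rectangle must lie inside an assumed 100x100 board.
public class GameModelValidator {

    // Minimal stand-in for the "[x,y,width,height]" rectangles in the JSON.
    public record Rectangle(int x, int y, int width, int height) {
        boolean fitsInsideBoard() {
            return x >= 0 && y >= 0
                && width > 0 && height > 0
                && x + width <= 100
                && y + height <= 100;
        }
    }

    // A game model is accepted only if it has levels and every rectangle fits.
    public static boolean isValid(List<Rectangle> levels) {
        return !levels.isEmpty()
            && levels.stream().allMatch(Rectangle::fitsInsideBoard);
    }
}
```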
Brace yourself, the game is coming...
{ "totalTime": "12478", "result": "2", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!! { "totalTime": "12478", "result": "3", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!! { "totalTime": "12478", "result": "4", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Challenge accepted…
•Sending blocks one by one
•Storing blocks
•Building the game model
•Validation & abuse detection
•Saving to the database
Request/Response (HTTP) with progress reporting
Storing blocks
Processing the game model
Storing results
Does such a solution make sense?
Why Apache Kafka?
A data bus with a persistence layer, offering high availability (99.999%) and fast operation
12 000 /s
500 /s
Technical details
#1 Problem - Architecture
[Diagrams: a Producer writes to a Topic ("blocks") on a Broker and a Consumer reads from it. The topic is split into partitions (Partition_0, Partition_1, Partition_2) spread across Brokers A, B, and C, with partition replicas for fault tolerance. Zookeeper tracks broker addresses (192.168.0.1, 192.168.0.2, 192.168.0.3), and clients look up the topic-partition-address mapping, e.g. 192.168.0.1 - blocks-partition_0, 192.168.0.2 - blocks-partition_2, 192.168.0.3 - blocks-partition_1.]
[Diagrams: records within a single partition keep their order, but ordering across partitions is not guaranteed; records sharing the same key (KEY-1, KEY-2) always land in the same partition.]
[Diagram: partition chosen by key modulo 2; records 1, 3, 5 go to Partition_0 and records 2, 4 go to Partition_1.]
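The key-modulo routing in the diagram can be sketched like this. Note this is a simplified illustration of the idea only: Kafka's real default partitioner hashes the serialized key bytes with murmur2, not `String.hashCode`.

```java
// Simplified sketch of key-based partition selection: the same key always
// maps to the same partition index.
public class SimplePartitioner {
    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so negative hash codes still yield a valid index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

Because the mapping is deterministic, all blocks carrying the same game GUID end up in the same partition, which is what makes per-game ordering possible.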
[Diagrams: a single Consumer reads all partitions; with a Consumer Group of two consumers, the partitions are split between them, so each partition is read by exactly one consumer in the group.]
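The consumer-group behaviour in the diagrams can be sketched as a partition assignment where each partition is owned by exactly one consumer. The round-robin split below is an illustrative simplification; Kafka's actual assignors (range, round-robin, sticky) are more involved.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of how a consumer group splits partitions: each partition goes to
// exactly one consumer; extra consumers beyond the partition count sit idle.
public class GroupAssignment {
    public static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String consumer : consumers) {
            assignment.put(consumer, new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            // Round-robin: partition p goes to consumer p mod groupSize.
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }
}
```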
#1 Problem - Architecture
Why Kafka Streams?
[Diagram: apps act as Producers, Consumers, and Stream Processors around a Kafka Cluster.]
•Stream processing
•Request/Response
•Batch processing
#2 Problem - Stream Processing
Single event-processing: Filter, Map
Single event-processing keeps no state.
Processing with local state: Aggregate maintains a local state store.
After a restart, the application does not re-read each topic-partition from the beginning; it resumes from its committed offset (e.g. blocks-Partition_0, offset 3).
[Diagram: Stream → Aggregate → computed state.]
Duality of Streams and Tables
Stream (as a record stream): a sequence of (key, count) records, e.g.
("[email protected]", 1)
("[email protected]", 1)
("[email protected]", 1)

Table (as the materialized state of the stream): each key holds its latest aggregate, initially:
[email protected] → 1
[email protected] → 1
[email protected] → 1

When another ("[email protected]", 1) record arrives on the stream, only that key's row in the table is updated:
[email protected] → 2
[email protected] → 1
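The duality above can be sketched without Kafka at all: materializing a stream into a table means keeping, per key, the latest aggregated value, and replaying the stream reproduces the table. The class and method names below are illustrative, not from the talk.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of stream/table duality: replaying a stream of (key, count) records
// and summing per key yields the table; each new record updates one row.
public class StreamTableDuality {
    private final Map<String, Integer> table = new HashMap<>();

    // Apply one stream record to the materialized table.
    public void accept(String key, int value) {
        table.merge(key, value, Integer::sum); // update the row for this key
    }

    public int countFor(String key) {
        return table.getOrDefault(key, 0);
    }
}
```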
Source code
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers());
    // e.g. "localhost:9021,localhost:9022,localhost:9023"
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
return props;
Storing blocks
KafkaProducer<String, BlockModel> blocksProducer = new KafkaProducer<>(props);
ProducerRecord<String, BlockModel> record =
    new ProducerRecord<>("blocks", block.getGameGuid(), block);
blocksProducer.send(record);
Topology: processing blocks
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder
    .stream("blocks", Consumed.with(Serdes.String(), CustomSerdes.BlockModel()))
    .groupByKey()
    .aggregate(
        GameAggregate::new,
        (gameGuid, newBlock, gameAggregate) -> {
            gameAggregate.add(newBlock);
            return gameAggregate;
        },
        Materialized.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .filter((gameGuid, gameAggregate) -> gameAggregate.isComplete())
    .toStream()
    .to("games", Produced.with(Serdes.String(), CustomSerdes.GameAggregate()));
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
Topology: processing the game
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder
    .stream("games", Consumed.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .peek(gameAggregateHandler::handle);
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
#2 Problem - Stream Processing
A reactive system
Kafka Streams
#3 Problem - too much data
Kafka Streams
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers());
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
props.setProperty(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "10");
return props;
[Diagram: HTTP requests feed topic "blocks"; the streams application aggregates blocks into topic "games", which downstream processors consume.]
Tests
•Blocking API - request thread
•Non-blocking API - thread pool
•Kafka Streams
x 200 x 80
•Blocking API - request thread
•Blocking API - request thread
•Non-blocking API - thread pool
•Blocking API - request thread
•Non-blocking API - thread pool
•Kafka Streams
nexoblocks https://bit.ly/2oN3L2q
Documentation (Reference Guide, Developers Guide): Kafka, Kafka Streams, Reactor Kafka, Project Reactor, any tool... +
nexoblocks https://bit.ly/2oN3L2q
Thank you! Wojciech Marusarz // nexocode