Kafka jako strumień i co się z tym wiąże (Kafka as a Stream and What Comes With It)
Wojciech Marusarz
October 24, 2019
Transcript
Kafka as a Stream and What Comes With It
Wojciech Marusarz // nexocode
Programmers...
Modifying HTTP requests:
{ "totalTime": "1855", "result": "16" }

Encoding values:
{ "totalTime": "1855", "result": "16" } → { "totalTime": "gXe", "result": "bM" }

Obfuscation:
var _0xaed1 = ["\x6C\x65\x6E\x67\x74\x68","\x65\x6E\x75\x6D\x65\x72\x61\x62\x6C\x65","\x63\x6F\x6E\x66\x69\x67\x75\x72\x61\x62\x6C\x65","\x76\x61\x6C\x75\x65","\x77\x72\x69\x74\x61\x62\x6C\x65","\x6B\x65\x79","\x64\x65\x66\x69\x6E\x65\x50\x72\x6F\x70\x65\x72\x74\x79", ...
Stubborn programmers...
Sending the game model:
{ "totalTime": "gXe", "result": "bM", "levels": [ { "rectangle": "dpPTVXT3XI7p" }, { "rectangle": "4ajhYLh23FdQ" } ] }

Validating the game model on the server:
{ "totalTime": "750", "result": "2", "levels": [ { "rectangle": "[10,10,80,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Brace yourself, the game is coming...
{ "totalTime": "12478", "result": "2", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }

Debug mode - ON !!!
{ "totalTime": "12478", "result": "3", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }

Debug mode - ON !!!
{ "totalTime": "12478", "result": "4", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Challenge accepted…
•Sending blocks one at a time
•Storing blocks
•Building the game model
•Validation & abuse detection
•Saving to the database
[Diagram: Request/Response over HTTP, with progress updates]
Storing blocks
Processing the game model
Storing results
Does a solution like this make sense?
Why Apache Kafka?
A data bus with a persistence layer, characterized by high availability (99.999%) and high speed.
12 000 /s
500 /s
Technical details
#1 Problem - Architecture
[Diagram: Producer → topic "blocks" → Consumer, served by Broker A]
[Diagram: the topic split into Partition_0, Partition_1, Partition_2; each partition is an ordered, append-only log of records (A, B, C, ...)]
[Diagram: partitions replicated across Broker A, Broker B, Broker C; after a broker failure, replicas on the surviving brokers take over]
[Diagram: Zookeeper tracks broker addresses: Broker A 192.168.0.1, Broker B 192.168.0.2, Broker C 192.168.0.3]
[Diagram: the client resolves TOPIC-PARTITION-ADDRESS mappings, e.g. 192.168.0.1 - blocks-partition_0, 192.168.0.2 - blocks-partition_2, 192.168.0.3 - blocks-partition_1]
[Diagram: records spread round-robin across Partition_0 and Partition_1; global ordering across partitions is lost (1, 3, 5 vs 2, 4)]
[Diagram: records with the same key (KEY-1, KEY-2) always land in the same partition, so per-key ordering is preserved]
[Diagram: the partition is chosen from the key, e.g. hash(key) modulo 2]
[Diagram: a single consumer reading both partitions]
[Diagram: a consumer group with two consumers, each assigned one partition]
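The key-to-partition mapping from the diagrams can be sketched as below. This is a simplification: Kafka's default partitioner uses murmur2 hashing of the serialized key rather than String.hashCode, but the property it provides is the same: the same key always maps to the same partition.

```java
public class KeyPartitioner {

    // Same key -> same partition, so per-key ordering is preserved.
    static int partitionFor(String key, int numPartitions) {
        // Math.floorMod keeps the result non-negative even when hashCode() is negative.
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```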
#1 Problem - Architecture
Why Kafka Streams?
App App App Consumers Producers Stream Processors Kafka Cluster
•Stream processing
•Request/Response
•Batch processing
#2 Problem - Stream Processing
Single event-processing: Filter, Map. Does not keep any state.
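Stateless single-event processing means each record is handled in isolation: a filter or a map needs nothing beyond the record itself. The sketch below illustrates the idea with plain java.util.stream rather than the Kafka Streams API, so it is only an analogy; the block values are made up.

```java
import java.util.List;
import java.util.stream.Collectors;

public class StatelessExample {

    static List<String> process(List<String> blocks) {
        return blocks.stream()
                .filter(b -> !b.isBlank())   // filter: drop empty records
                .map(String::toUpperCase)    // map: transform each record independently
                .collect(Collectors.toList());
    }
}
```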
Processing with local state: Aggregate

After a restart, the local state can be rebuilt by replaying the partition from a stored offset, e.g. Offset: blocks-Partition_0-3 (topic-partition-from_beginning).
Stream → Aggregate → Computed state
Duality of Streams and Tables
Stream (as record stream):
("[email protected]", 1)
("[email protected]", 1)
("[email protected]", 1)

Table (as aggregated state; each incoming record updates the count for its key):
[email protected] → 1
[email protected] → 1
[email protected] → 1

When another record arrives for a key already in the table, its count is updated:
[email protected] → 2

...and every such table update is emitted again as a record, e.g. ("[email protected]", 2): the table is itself a stream of changes.
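The duality above can be sketched without Kafka at all: a table is the latest aggregated state of a stream, and every update to the table is itself a new record (the changelog). A minimal sketch, assuming (key, 1) records as on the slides; the user keys here are hypothetical stand-ins for the redacted addresses.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StreamTableDuality {

    final Map<String, Integer> table = new LinkedHashMap<>(); // current state per key
    final List<String> changelog = new ArrayList<>();         // the table as a stream again

    // Each incoming (key, count) record updates the table and
    // emits the new value as a changelog record.
    void accept(String key, int count) {
        int updated = table.merge(key, count, Integer::sum);
        changelog.add("(" + key + "," + updated + ")");
    }
}
```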
Source code
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers()); // e.g. "localhost:9021,localhost:9022,localhost:9023"
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
return props;
Storing blocks
KafkaProducer<String, BlockModel> blocksProducer = new KafkaProducer<>(props);
ProducerRecord<String, BlockModel> record =
        new ProducerRecord<>("blocks", block.getGameGuid(), block);
blocksProducer.send(record);
Topology: processing blocks
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("blocks", Consumed.with(Serdes.String(), GSerdes.BlockModel()))
    .groupByKey()
    .aggregate(GameAggregate::new,
        (gameGuid, newBlock, gameAggregate) -> {
            gameAggregate.add(newBlock);
            return gameAggregate;
        },
        Materialized.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .filter((gameGuid, gameAggregate) -> gameAggregate.isComplete())
    .toStream()
    .to("games", Produced.with(Serdes.String(), CustomSerdes.GameAggregate()));
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
Topology: processing the game
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("games", Consumed.with(Serdes.String(), GSerdes.GameAggregate()))
    .peek(gameAggregateHandler::handle);
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
#2 Problem - Stream Processing
Reactive system
Kafka Streams
#3 Problem - too much data
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers());
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
props.setProperty(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "10");
return props;
[Diagram: http → topic: "blocks" → topic: "games" → http]
Tests
Tested variants:
•Blocking API - request thread
•Non-blocking API - thread pool
•Kafka Streams
x 200 x 80
[Charts: load-test results for each variant]
nexoblocks https://bit.ly/2oN3L2q

Documentation (Reference Guide, Developers Guide): Kafka, Reactor Kafka, Kafka Streams, Project Reactor + Any Tool...

Thank you! Wojciech Marusarz // nexocode