Kafka jako strumień i co się z tym wiąże (Kafka as a stream and what comes with it)
Wojciech Marusarz
October 24, 2019
Programming
Transcript
Kafka jako strumień i co się z tym wiąże (Kafka as a stream and what comes with it) — Wojciech Marusarz // nexocode
Programmers...
Modifying HTTP requests: { "totalTime": "1855", "result": "16" }
Encoding values: { "totalTime": "1855", "result": "16" } → { "totalTime": "gXe", "result": "bM" }
Obfuscation
var _0xaed1 = ["\x6C\x65\x6E\x67\x74\x68","\x65\x6E\x75\x6D\x65\x72\x61\x62\x6C\x65","\x63\x6F\x6E\x66\x69\x67\x75\x72\x61\x62\x6C\x65","\x76\x61\x6C\x75\x65","\x77\x72\x69\x74\x61\x62\x6C\x65","\x6B\x65\x79","\x64\x65\x66\x69\x6E\x65\x50\x72\x6F\x70\x65\x72\x74\x79", ...
Stubborn programmers...
Sending the game model: { "totalTime": "gXe", "result": "bM", "levels": [ { "rectangle": "dpPTVXT3XI7p" }, { "rectangle": "4ajhYLh23FdQ" } ] }
Validating the game model on the server: { "totalTime": "750", "result": "2", "levels": [ { "rectangle": "[10,10,80,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Brace yourself, the game is coming...
{ "totalTime": "12478", "result": "2", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!! { "totalTime": "12478", "result": "3", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!! { "totalTime": "12478", "result": "4", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Challenge accepted…
•Sending blocks one at a time
•Storing blocks
•Building the game model
•Validation & abuse detection
•Saving to the database
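The steps above can be sketched end to end in plain Java: blocks arrive one at a time, are stored per game, and once the expected count is reached the model is built and validated. This is a hypothetical sketch, not the talk's actual implementation; all names (gameGuid, EXPECTED_BLOCKS, the validation rule) are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GamePipelineSketch {
    // Hypothetical: the number of blocks that makes a game model complete.
    static final int EXPECTED_BLOCKS = 2;
    // Step 2: blocks stored per game, keyed by gameGuid.
    static final Map<String, List<String>> store = new HashMap<>();

    // Steps 1, 3, 4: receive one block; once complete, build and validate
    // the model. Returns the validation result, or null while incomplete.
    static Boolean receiveBlock(String gameGuid, String rectangle) {
        List<String> blocks = store.computeIfAbsent(gameGuid, g -> new ArrayList<>());
        blocks.add(rectangle);
        if (blocks.size() < EXPECTED_BLOCKS) return null;
        return validate(blocks);
    }

    // Placeholder validation rule: the final level must cover the board.
    static boolean validate(List<String> blocks) {
        return blocks.get(blocks.size() - 1).equals("[0,0,100,100]");
    }

    public static void main(String[] args) {
        System.out.println(receiveBlock("game-1", "[10,10,80,80]"));
        System.out.println(receiveBlock("game-1", "[0,0,100,100]"));
    }
}
```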
Request/Response: HTTP → Progress
Storing blocks
Processing the game model
Storing results
Does such a solution make sense?
Why Apache Kafka?
A data bus with a persistence layer, characterized by high availability (99.999%) and fast operation
12 000 /s
500 /s
Technical details
#1 Problem - Architecture
[Diagrams: a producer and a consumer connected through a topic, here "blocks"; the topic is split into partitions (Partition_0, Partition_1, Partition_2) spread and replicated across Broker A, Broker B, and Broker C; Zookeeper tracks the brokers' addresses (192.168.0.1, 192.168.0.2, 192.168.0.3); clients learn the topic-partition-address mapping, e.g. 192.168.0.1 - blocks-partition_0, 192.168.0.2 - blocks-partition_2, 192.168.0.3 - blocks-partition_1]
[Diagrams: records distributed across Partition_0 and Partition_1; without a key, records are spread across partitions and global ordering is lost; with a key, the partition is chosen as the key modulo the partition count (here modulo 2), so all records with the same key (KEY-1, KEY-2) land in the same partition, in order; a single consumer reads all partitions, while in a consumer group each consumer is assigned its own partition]
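The "key modulo partition count" rule from the diagrams can be sketched in plain Java. Note this is an illustration of the guarantee, not Kafka's actual default partitioner (which hashes the serialized key with murmur2); the invariant is the same: identical keys always map to the same partition.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyPartitioning {
    // Pick a partition from the key: hash modulo the partition count.
    // Same key -> same partition, so per-key ordering is preserved.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String key : List.of("KEY-1", "KEY-2", "KEY-1", "KEY-1", "KEY-2")) {
            partitions.computeIfAbsent(partitionFor(key, 2), p -> new ArrayList<>())
                      .add(key);
        }
        // Every occurrence of KEY-1 ends up in one partition, KEY-2 in one partition.
        System.out.println(partitions);
    }
}
```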
#1 Problem - Architecture
Why Kafka Streams?
[Diagram: apps acting as Producers, Consumers, and Stream Processors around a Kafka Cluster]
•Stream processing
•Request/Response
•Batch processing
#2 Problem - Stream Processing
Single event-processing (Filter, Map) — stores no state
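Stateless single-event processing means each record is filtered or mapped on its own, with nothing carried between records (in Kafka Streams this would be `KStream.filter` and `KStream.mapValues`). A plain-Java sketch over hypothetical block payloads:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StatelessProcessing {
    // Each record is handled independently: a predicate (filter) and a
    // per-record transformation (map), no shared state between records.
    static List<String> process(List<String> blocks) {
        return blocks.stream()
                .filter(b -> !b.isBlank()) // drop empty payloads
                .map(String::trim)         // normalize each record on its own
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(process(List.of("  [0,0,100,100] ", "", "[10,10,80,80]")));
    }
}
```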
Processing with local state: Local-state + Aggregate
After a restart, the local state is rebuilt by replaying the topic partition from the beginning (offset: blocks-Partition_0-3)
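The restore-on-restart idea can be sketched by hand: replay a partition's records from offset 0 up to the last processed offset to rebuild the local aggregate. (Kafka Streams automates this via changelog topics; this is only an illustration of the mechanism, with hypothetical per-key counting as the aggregate.)

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateRestore {
    // Rebuild local state by replaying records [0, upToOffset) from a
    // partition; the aggregate here is a simple per-key count.
    static Map<String, Integer> replay(List<String> partitionRecords, long upToOffset) {
        Map<String, Integer> localState = new HashMap<>();
        for (long offset = 0; offset < upToOffset && offset < partitionRecords.size(); offset++) {
            localState.merge(partitionRecords.get((int) offset), 1, Integer::sum);
        }
        return localState;
    }

    public static void main(String[] args) {
        // Replaying blocks-Partition_0 up to offset 3 restores the counts
        // accumulated before the restart.
        System.out.println(replay(List.of("A", "B", "A", "C", "D"), 3));
    }
}
```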
Stream → Aggregate → computed state
Duality of Streams and Tables
[Slides: a stream, viewed as a record stream, is a sequence of (key, 1) records arriving one by one; after each record the table of per-key counts is updated, so a key seen for the second time has its row updated from 1 to 2, while new keys get a new row with count 1. The keys on the original slides were e-mail addresses, redacted as [email protected] in this transcript.]
Source code
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers()); // e.g. "localhost:9021,localhost:9022,localhost:9023"
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
return props;
Storing blocks
KafkaProducer<String, BlockModel> blocksProducer = new KafkaProducer<>(props);
ProducerRecord<String, BlockModel> record = new ProducerRecord<>("blocks", block.getGameGuid(), block);
blocksProducer.send(record);
Topology: processing blocks
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("blocks", Consumed.with(Serdes.String(), CustomSerdes.BlockModel()))
    .groupByKey()
    .aggregate(GameAggregate::new,
        (gameGuid, newBlock, gameAggregate) -> {
            gameAggregate.add(newBlock);
            return gameAggregate;
        },
        Materialized.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .filter((gameGuid, gameAggregate) -> gameAggregate.isComplete())
    .toStream()
    .to("games", Produced.with(Serdes.String(), CustomSerdes.GameAggregate()));
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
Topology: processing the game
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("games", Consumed.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .peek(gameAggregateHandler::handle);
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
#2 Problem - Stream Processing
Reactive system
Kafka Streams
#3 Problem - Too much data
Kafka Streams
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers());
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
props.setProperty(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "10");
return props;
[Diagram: http → topic: "blocks" → topic: "games"]
Tests
•Blocking API - request thread
•Non-blocking API - thread pool
•Kafka Streams
x 200 x 80
[Result charts, one per variant: Blocking API - request thread; Non-blocking API - thread pool; Kafka Streams]
nexoblocks https://bit.ly/2oN3L2q
Documentation: Kafka, Reactor Kafka, Kafka Streams, Project Reactor (Documentation / Reference Guide / Developers Guide) + any tool...
nexoblocks https://bit.ly/2oN3L2q — Thank you! Wojciech Marusarz // nexocode