Kafka jako strumień i co się z tym wiąże (Kafka as a stream, and what comes with it)
Search
Wojciech Marusarz
October 24, 2019
Programming
Transcript
Kafka jako strumień i co się z tym wiąże (Kafka as a stream, and what comes with it) - Wojciech Marusarz // nexocode
Programmers...
Modifying HTTP requests:
{ "totalTime": "1855", "result": "16" }
Encoding values: { "totalTime": "1855", "result": "16" } becomes { "totalTime": "gXe", "result": "bM" }
Obfuscation:
var _0xaed1 = ["\x6C\x65\x6E\x67\x74\x68","\x65\x6E\x75\x6D\x65\x72\x61\x62\x6C\x65","\x63\x6F\x6E\x66\x69\x67\x75\x72\x61\x62\x6C\x65","\x76\x61\x6C\x75\x65","\x77\x72\x69\x74\x61\x62\x6C\x65","\x6B\x65\x79","\x64\x65\x66\x69\x6E\x65\x50\x72\x6F\x70\x65\x72\x74\x79", ...
Stubborn programmers...
Sending the game model:
{ "totalTime": "gXe", "result": "bM", "levels": [ { "rectangle": "dpPTVXT3XI7p" }, { "rectangle": "4ajhYLh23FdQ" } ] }
Validating the game model on the server:
{ "totalTime": "750", "result": "2", "levels": [ { "rectangle": "[10,10,80,80]" }, { "rectangle": "[0,0,100,100]" } ] }
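The server-side validation above can be sketched in plain Java. This is a hypothetical reconstruction (the deck shows no validation code, and the class and method names are made up): recompute the result from the submitted levels and reject models whose rectangles do not fit the assumed 100x100 board.

```java
import java.util.List;

// Hypothetical sketch of server-side game-model validation; not from the deck.
public class GameModelValidator {

    // A level's rectangle is [x, y, width, height] on an assumed 100x100 board.
    public static boolean rectangleInBounds(int[] r) {
        return r.length == 4
                && r[0] >= 0 && r[1] >= 0
                && r[2] > 0 && r[3] > 0
                && r[0] + r[2] <= 100
                && r[1] + r[3] <= 100;
    }

    // The reported result must match the number of submitted levels,
    // and every rectangle must fit on the board.
    public static boolean isValid(int reportedResult, List<int[]> levels) {
        if (reportedResult != levels.size()) {
            return false;
        }
        return levels.stream().allMatch(GameModelValidator::rectangleInBounds);
    }

    public static void main(String[] args) {
        List<int[]> levels = List.of(new int[]{10, 10, 80, 80}, new int[]{0, 0, 100, 100});
        System.out.println(isValid(2, levels)); // honest model: true
        System.out.println(isValid(3, levels)); // inflated result: false
    }
}
```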
Brace yourself, the game is coming...
{ "totalTime": "12478", "result": "2", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!!
{ "totalTime": "12478", "result": "3", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Debug mode - ON !!!
{ "totalTime": "12478", "result": "4", "levels": [ { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,20,90,80]" }, { "rectangle": "[0,0,100,100]" } ] }
Challenge accepted…
•Sending blocks one at a time
•Storing blocks
•Building the game model
•Validation & abuse detection
•Saving to the database
Request/Response HTTP
Request/Response Progress
Storing blocks
Processing the game model
Storing results
Does such a solution make sense?
Why Apache Kafka?
A data bus with a persistence layer, offering high availability (99.999%) and high throughput.
12 000 /s
500 /s
Technical details
#1 Problem - Architecture
[Diagram slides: a Producer writes to a Topic ("blocks") on Broker A, and a Consumer reads from it. The topic is split into partitions (Partition_0, Partition_1, Partition_2) that are spread and replicated across Brokers A, B, and C. Zookeeper tracks the broker addresses (192.168.0.1, 192.168.0.2, 192.168.0.3); the client learns the topic-partition-to-broker mapping, e.g. 192.168.0.1 - blocks-partition_0, 192.168.0.2 - blocks-partition_2, 192.168.0.3 - blocks-partition_1.]
[Diagram slides: records without a key are spread across partitions, so global ordering is lost; records sharing a key (key modulo the partition count) always land on the same partition, preserving per-key order. A single Consumer reads all partitions; in a Consumer Group, partitions are divided among the consumers, so each partition is read by exactly one group member.]
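The "key modulo" partition assignment from the slides can be sketched as follows. This is a simplified illustration: Kafka's default partitioner actually hashes the serialized key with murmur2, not String.hashCode().

```java
// Simplified sketch of key-based partition assignment ("key modulo" from the slides).
// Kafka's default partitioner uses a murmur2 hash of the serialized key; plain
// String.hashCode() stands in for it here.
public class KeyPartitioner {

    public static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is never negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // All records with the same key land on the same partition,
        // which is what keeps one game's blocks in order.
        System.out.println(partitionFor("KEY-1", 2) == partitionFor("KEY-1", 2)); // true
    }
}
```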
#1 Problem - Architektura
Why Kafka Streams?
[Diagram: a Kafka Cluster surrounded by apps acting as Producers, Consumers, and Stream Processors.]
•Stream processing
•Request/Response
•Batch processing
#2 Problem - Stream Processing
Single event-processing: Filter, Map. Does not store state.
Processing with local state: Aggregate. After a restart, the local state is rebuilt by replaying the topic partition from the beginning, up to the stored offset (e.g. blocks-Partition_0-3).
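A minimal sketch of the restart behavior described above, assuming a simple additive aggregation: only the records after the committed offset are replayed on top of the checkpointed state. All names here are illustrative, not from the deck.

```java
import java.util.List;

// Sketch of resuming local state after a restart: the consumer stored its last
// committed offset ("blocks-Partition_0-3" in the slide) and replays only the
// records it has not yet folded into its local state.
public class OffsetResume {

    // Re-apply an additive aggregation from the committed offset to the log's end.
    public static int resume(List<Integer> partitionLog, int committedOffset, int checkpointedState) {
        int state = checkpointedState;
        for (int offset = committedOffset; offset < partitionLog.size(); offset++) {
            state += partitionLog.get(offset);
        }
        return state;
    }

    public static void main(String[] args) {
        // Records at offsets 0..2 were already aggregated into the checkpoint
        // (1 + 2 + 3 = 6); after the restart, only offsets 3 and 4 are replayed.
        System.out.println(resume(List.of(1, 2, 3, 4, 5), 3, 6)); // 6 + 4 + 5 = 15
    }
}
```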
A stream, folded by Aggregate, yields a computed state.
Duality of Streams and Tables
[Slides: a record stream of ("[email protected]", 1) events, one per user action. Appending a record to the stream upserts the table: the first record for a key inserts it with count 1; a later record for the same key updates its count to 2. The table holds the latest value per key, and the sequence of its updates is itself a stream.]
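The duality can be illustrated in plain Java: replaying the record stream into a map materializes the table, and each upsert is one record of the table's changelog stream. A sketch, not Kafka API code; the keys are placeholders.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java illustration of the stream/table duality: a table is the latest
// aggregated value per key, obtained by replaying the stream of records.
public class StreamTableDuality {

    public static Map<String, Integer> materialize(String[][] stream) {
        Map<String, Integer> table = new LinkedHashMap<>();
        for (String[] record : stream) {
            String key = record[0];
            int delta = Integer.parseInt(record[1]);
            // Upsert: the first record for a key inserts it; later records
            // for the same key update its count.
            table.merge(key, delta, Integer::sum);
        }
        return table;
    }

    public static void main(String[] args) {
        String[][] stream = {{"user-a", "1"}, {"user-b", "1"}, {"user-a", "1"}};
        // "user-a" appears twice, so its count in the table ends up at 2.
        System.out.println(materialize(stream));
    }
}
```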
Source code
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers()); // e.g. "localhost:9021,localhost:9022,localhost:9023"
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
return props;
Storing blocks
KafkaProducer<String, BlockModel> blocksProducer = new KafkaProducer<>(props);
ProducerRecord<String, BlockModel> record = new ProducerRecord<>("blocks", block.getGameGuid(), block);
blocksProducer.send(record);
Topology: processing blocks
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("blocks", Consumed.with(Serdes.String(), CustomSerdes.BlockModel()))
    .groupByKey()
    .aggregate(GameAggregate::new,
        (gameGuid, newBlock, gameAggregate) -> {
            gameAggregate.add(newBlock);
            return gameAggregate;
        },
        Materialized.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .filter((gameGuid, gameAggregate) -> gameAggregate.isComplete())
    .toStream()
    .to("games", Produced.with(Serdes.String(), CustomSerdes.GameAggregate()));
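What the topology above does per record can be paraphrased in plain Java. GameAggregate's real fields are not shown in the deck, so this minimal version just counts blocks per gameGuid and treats a game as complete after a fixed number of blocks (an assumption).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java paraphrase of the blocks topology: group records by gameGuid,
// fold each into a GameAggregate (the "local state"), and emit the aggregate
// downstream once it is complete.
public class BlockAggregation {

    static class GameAggregate {
        final List<String> blocks = new ArrayList<>();
        final int expectedBlocks;

        GameAggregate(int expectedBlocks) { this.expectedBlocks = expectedBlocks; }

        void add(String block) { blocks.add(block); }

        boolean isComplete() { return blocks.size() == expectedBlocks; }
    }

    // Each record is {gameGuid, block}; returns the gameGuids whose aggregates
    // became complete, in arrival order.
    public static List<String> process(List<String[]> records, int expectedBlocks) {
        Map<String, GameAggregate> store = new HashMap<>();
        List<String> completed = new ArrayList<>();
        for (String[] record : records) {
            String gameGuid = record[0];
            GameAggregate agg = store.computeIfAbsent(gameGuid, g -> new GameAggregate(expectedBlocks));
            agg.add(record[1]);
            if (agg.isComplete()) {
                completed.add(gameGuid); // would be written to the "games" topic
            }
        }
        return completed;
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
                new String[]{"game-1", "block-a"},
                new String[]{"game-2", "block-a"},
                new String[]{"game-1", "block-b"});
        System.out.println(process(records, 2)); // [game-1]
    }
}
```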
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
Topology: processing the game
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("games", Consumed.with(Serdes.String(), CustomSerdes.GameAggregate()))
    .peek(gameAggregateHandler::handle);
Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
kafkaStreams.start();
#2 Problem - Stream Processing
A reactive system
Kafka Streams
#3 Problem - Excess data
Properties props = new Properties();
props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, env.getKafkaBrokers());
props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, env.getAppId());
props.setProperty(StreamsConfig.CLIENT_ID_CONFIG, env.getClientId());
props.setProperty(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "10");
return props;
[Diagram slides: HTTP ingress feeds topic "blocks"; the streams topology writes to topic "games", which downstream processors consume.]
Tests
•Blocking API - one thread per request
•Non-blocking API - a thread pool
•Kafka Streams
x 200 x 80
[Result charts follow for each variant: blocking API (one thread per request), non-blocking API (thread pool), Kafka Streams.]
nexoblocks https://bit.ly/2oN3L2q
Documentation (Reference Guide, Developers Guide): Kafka, Reactor Kafka, Kafka Streams, Project Reactor + any tool...
nexoblocks https://bit.ly/2oN3L2q - Thank you! Wojciech Marusarz // nexocode