Measuring API performance using Druid
Druid with autoscaling, the monitoring metrics we use to build trust with our clients, and our wishlist for Druid.
Ananth Packkildurai
November 28, 2017
Transcript
Measuring Slack API performance using Druid
Ananth Packkildurai, November 28, 2017
About Slack
Public launch: 2014. 800+ employees across 7 countries worldwide, headquartered in San Francisco. Customers span a diverse set of industries, including software/technology, retail, media, telecom, and professional services.
An unprecedented adoption rate
Agenda
1. A bit of history
2. Druid infrastructure & use cases
3. Challenges
A bit of history
March 2016: 5 data engineers, 350+ Slack employees, 2M active users
October 2017: 10 data engineers, 800+ Slack employees, 6M active users
Data usage: 1 in 3 access the data warehouse per week; 500+ tables; 400k events per second
It is all about Slogs
Well, not exactly
Slog
Slog
Druid infrastructure & use cases
What can go wrong?
We want more...
Performance & Experimentation
• The Engineering & CE teams should be able to detect performance bottlenecks proactively.
• Engineers should be able to see their experiment performance in near real-time (see the query sketch below).
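As one illustration of the kind of near-real-time question this enables, here is a minimal sketch of a Druid native topN query posted to the broker over HTTP, asking for the slowest API methods in a window. The datasource name api_slog, the latency_ms metric, the api_method dimension, and the broker address are assumptions for illustration; they are not taken from the deck.

```python
import requests

# Hypothetical broker endpoint -- Druid brokers accept native JSON queries on /druid/v2/.
BROKER_URL = "http://druid-broker:8082/druid/v2/"

# Hypothetical datasource and field names, for illustration only.
query = {
    "queryType": "topN",
    "dataSource": "api_slog",            # API timing events ("slogs")
    "intervals": ["2017-11-28T00:00/2017-11-28T01:00"],
    "granularity": "all",
    "dimension": "api_method",           # e.g. which API method was called
    "metric": "avg_latency_ms",          # sort by the post-aggregated average
    "threshold": 10,
    "aggregations": [
        {"type": "count", "name": "calls"},
        {"type": "doubleSum", "name": "latency_sum", "fieldName": "latency_ms"},
    ],
    "postAggregations": [
        {
            "type": "arithmetic",
            "name": "avg_latency_ms",
            "fn": "/",
            "fields": [
                {"type": "fieldAccess", "fieldName": "latency_sum"},
                {"type": "fieldAccess", "fieldName": "calls"},
            ],
        }
    ],
}

# The broker returns a JSON array; each element holds the topN rows for its interval.
response = requests.post(BROKER_URL, json=query, timeout=30)
response.raise_for_status()
for bucket in response.json():
    for row in bucket["result"]:
        print(row["api_method"], round(row["avg_latency_ms"], 2))
```

Swapping the dimension for an experiment identifier would give the same near-real-time view per experiment, which is the second bullet's use case.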
Near real-time pipeline
Why an analytics Kafka?
• Keeps the load on the DW Kafka predictable.
• Easier to upgrade and verify newer Kafka versions.
• A smaller Kafka cluster is relatively more straightforward to operate (see the relay sketch below).
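A minimal sketch of how events might be relayed from the larger DW Kafka into the dedicated analytics cluster that feeds Druid, assuming confluent-kafka-python and invented broker addresses and topic names; the deck does not say whether the relay is MirrorMaker, an in-house forwarder, or something else.

```python
from confluent_kafka import Consumer, Producer

# Hypothetical cluster addresses and topic name -- not specified in the deck.
DW_KAFKA = "dw-kafka:9092"                 # large, general-purpose data-warehouse cluster
ANALYTICS_KAFKA = "analytics-kafka:9092"   # smaller cluster dedicated to the Druid pipeline
SLOG_TOPIC = "api_slog"

consumer = Consumer({
    "bootstrap.servers": DW_KAFKA,
    "group.id": "analytics-relay",
    "auto.offset.reset": "latest",
})
producer = Producer({"bootstrap.servers": ANALYTICS_KAFKA})

consumer.subscribe([SLOG_TOPIC])
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # Forward the raw record unchanged; Druid's Kafka indexing service
        # then consumes it from the analytics cluster.
        producer.produce(SLOG_TOPIC, value=msg.value(), key=msg.key())
        producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```

Keeping the relay a plain byte-for-byte copy fits the rationale above: the analytics cluster sees only the traffic Druid needs, so its load stays predictable and the cluster stays small.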
Druid Architecture
Druid Architecture
• MiddleManagers autoscale based on the number of running tasks (see the sizing sketch below).
• Historical nodes autoscale based on segment size.
• Fault-tolerant deployment for the Overlord & Coordinator.
• Brokers autoscale and are load-balanced by an ELB.
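The two autoscaling rules above can be pictured as simple capacity arithmetic. The sketch below uses hypothetical per-node targets (tasks per MiddleManager, segment bytes per Historical); the real thresholds, and the plumbing that reads task counts from the Overlord and segment sizes from the Coordinator, are not part of the deck.

```python
import math

# Hypothetical capacity targets -- illustrative only, not Slack's actual numbers.
TASKS_PER_MIDDLE_MANAGER = 4                 # ingestion task slots per MiddleManager
SEGMENT_BYTES_PER_HISTORICAL = 500 * 2**30   # segment storage budget per Historical node


def desired_middle_managers(running_tasks: int, minimum: int = 2) -> int:
    """Scale MiddleManagers on the number of running ingestion tasks."""
    return max(minimum, math.ceil(running_tasks / TASKS_PER_MIDDLE_MANAGER))


def desired_historicals(total_segment_bytes: int, minimum: int = 2) -> int:
    """Scale Historical nodes on the total size of loaded segments."""
    return max(minimum, math.ceil(total_segment_bytes / SEGMENT_BYTES_PER_HISTORICAL))


# Example: 13 running tasks and ~2.2 TiB of segments.
print(desired_middle_managers(13))            # -> 4
print(desired_historicals(int(2.2 * 2**40)))  # -> 5
```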
Challenges
Cascading failures
Forward Index fields
SQL
Bridge the gap between batch and realtime tables.
Thank You!