AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Talk at the Cloud Native Computing Foundation meetup @dcard.tw
Eric Khun
October 26, 2020
Transcript
AWS SQS queues & Kubernetes Autoscaling Pitfalls Stories
Cloud Native Computing Foundation meetup @dcard.tw (@eric_khun)
Make it work, make it right, make it fast. - Kent Beck (Agile Manifesto / Extreme Programming)
Buffer
Buffer • 80 employees, 12 time zones, all remote
Quick intro
Main pipelines flow
What it can look like... (see the Golang talk @Maicoin):
How do we send posts to social media?
A bit of history...
- 2010 -> 2012: Joel (founder/CEO): 1 cronjob on a Linode server, $20/mo, 512 MB of RAM
- 2012 -> 2017: Sunil (ex-CTO): crons running on AWS Elastic Beanstalk / supervisord
- 2017 -> now: Kubernetes / CronJob controller
AWS Elastic Beanstalk vs. Kubernetes:
At what scale? ~ 3 million SQS messages per hour
Different patterns for many queues
Are our workers (consumers of the SQS queues) efficient?
Empty messages? Workers try to pull messages from SQS but receive "nothing" to process.
Number of empty messages per queue
Sum of empty messages on all queues
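The transcript doesn't show how these charts were built; one plausible way is to sum SQS's NumberOfEmptyReceives CloudWatch metric per queue. A minimal sketch with aws-sdk-go (the region and queue name are hypothetical):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	cw := cloudwatch.New(sess)

	// Sum of empty receives for one queue over the last 24h, in hourly buckets.
	out, err := cw.GetMetricStatistics(&cloudwatch.GetMetricStatisticsInput{
		Namespace:  aws.String("AWS/SQS"),
		MetricName: aws.String("NumberOfEmptyReceives"),
		Dimensions: []*cloudwatch.Dimension{
			{Name: aws.String("QueueName"), Value: aws.String("posts-to-publish")}, // hypothetical queue
		},
		StartTime:  aws.Time(time.Now().Add(-24 * time.Hour)),
		EndTime:    aws.Time(time.Now()),
		Period:     aws.Int64(3600),
		Statistics: []*string{aws.String("Sum")},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, dp := range out.Datapoints {
		fmt.Printf("%s: %.0f empty receives\n", dp.Timestamp, *dp.Sum)
	}
}
```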
1,000,000 API calls to AWS cost $0.40. We were making 7.2B calls/month for "empty messages". That's ~$25k/year.
AWS SQS Doc
Or in the AWS console
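What the docs and the console setting point to is long polling: have ReceiveMessage wait for messages (up to 20 seconds) instead of returning an empty response immediately. A minimal sketch with aws-sdk-go, assuming a placeholder queue URL:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	client := sqs.New(sess)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/posts-to-publish" // placeholder

	out, err := client.ReceiveMessage(&sqs.ReceiveMessageInput{
		QueueUrl:            aws.String(queueURL),
		MaxNumberOfMessages: aws.Int64(10),
		// Long polling: wait up to 20s for messages instead of returning
		// an empty response right away. This is what eliminates the billed
		// "empty receives": one call every 20s instead of a tight loop.
		WaitTimeSeconds: aws.Int64(20),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("received %d messages\n", len(out.Messages))
}
```

The same behavior can also be set once on the queue itself through its ReceiveMessageWaitTimeSeconds attribute, so every consumer long-polls by default.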
Results?
(charts: empty messages, AWS costs)
From $120/day down to $50/day > ~$2,000 saved per month > ~$25,000 per year (it's USD, not TWD)
Paid for querying "nothing" (for the past 8 years)
Benefits:
- Saving money
- Less CPU usage (fewer empty requests)
- Less throttling (misleading)
- Fewer containers > better resource allocation: memory/CPU requests (see the sketch below)
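For context, "memory/CPU requests" refers to the resource hints Kubernetes uses to schedule pods; a minimal sketch of explicit requests/limits on a worker container (the numbers are illustrative, not the deck's actual values):

```yaml
# fragment of a Deployment's container spec
resources:
  requests:
    cpu: 100m        # scheduler reserves this much per worker
    memory: 128Mi
  limits:
    memory: 256Mi    # container is OOM-killed above this
```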
Why did that happen?
Default options
Never questioning what's working decently, or the way it's always been done
What could have helped?
- Infra as code (explicit options / standardization; see the sketch below)
- SLI/SLOs (keep re-evaluating what's important)
- AWS architecture reviews (tagging / recommendations from AWS solutions architects)
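The deck doesn't name an IaC tool; purely as an illustration, here is how the long-polling option could be made explicit and reviewable with Pulumi's Go SDK (the queue name is hypothetical):

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/sqs"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Declaring the queue in code makes non-default options explicit
		// and reviewable, instead of hiding behind console defaults.
		_, err := sqs.NewQueue(ctx, "posts-to-publish", &sqs.QueueArgs{
			ReceiveWaitTimeSeconds: pulumi.Int(20), // long polling, not the default 0
		})
		return err
	})
}
```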
Make it work, make it right, make it fast
Do you remember?
Need to run analytics on Twitter/FB/IG/LKD… on millions of posts, faster
(chart: time consumed by workers)
What’s the problem?
Resources allocated and not doing anything most of the time
Developers trying to find compromises on the number of workers
How to solve it?
Autoscaling! (with Keda.sh) Supported by IBM / Red Hat / Microsoft
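A minimal ScaledObject sketch using KEDA's aws-sqs-queue scaler; the Deployment name, queue URL, and thresholds below are assumptions, not the deck's actual config:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-scaler
spec:
  scaleTargetRef:
    name: sqs-worker            # hypothetical worker Deployment
  minReplicaCount: 1
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/posts-to-publish
        queueLength: "100"      # target backlog per replica
        awsRegion: "us-east-1"
```

KEDA polls the queue length itself and drives a Horizontal Pod Autoscaler, scaling workers up as the backlog grows and back down toward minReplicaCount when the queue drains.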
Results
But notice anything?
Before autoscaling
After autoscaling
What’s happening?
Downscaling
Why?
Pod deletion lifecycle
What went wrong:
- Workers didn't handle the SIGTERM sent by k8s
- They kept processing messages
- Messages were halfway processed when the worker was killed
- Messages were sent back to the queue again
- Fewer workers because of downscaling
Solution:
- When receiving SIGTERM, stop processing new messages
- Set a grace period long enough to process the current message
if (SIGTERM) { // finish current processing and stop receiving new messages }
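As a concrete sketch (not the deck's actual worker code), here is that logic in Go: stop pulling new messages when SIGTERM arrives, and let the in-flight one finish:

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// ctx is cancelled when Kubernetes sends SIGTERM during downscaling.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			// Stop receiving new messages; the in-flight message already
			// finished because we only check between messages.
			log.Println("SIGTERM received, shutting down cleanly")
			return
		default:
			processOneMessage() // hypothetical: receive + handle + delete one SQS message
		}
	}
}

func processOneMessage() {
	time.Sleep(100 * time.Millisecond) // placeholder for real work
}
```

The pod's terminationGracePeriodSeconds (30s by default) must exceed the worst-case time to finish the current message, otherwise the kubelet follows up with SIGKILL.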
And it can also help with SQS empty messages
Make it work, make it right, make it fast
Thanks!
Questions? monitory.io taiwangoldcard.com travelhustlers.co ✈