Agile performance testing
Agile performance testing with Amazon AWS, tmux and siege
Andreas Bjärlestam
March 30, 2016
Transcript
Understanding your system
Andreas Bjärlestam
2016-03-30
[ASCII-art title slide: "agile performance testing"]
Do you know off the top of your head:
• How many req/s your system can handle?
• What response time it has?
• How it scales?
• What the bottlenecks are?
• How stable it is over time?
Many still do big bang performance testing
Stop seeing performance tests as verification. They should be an integrated part of your development cycle. Continuously analyze your system.
You should do it all the time!
I’m lazy, so performance testing must be quick and simple
Isolation
Your test client should do nothing but load testing: no interference from other work
Amazon AWS + tmux = win!
You can leave it on and come back to check
every now and then
Combine with monitoring: New Relic, Kibana, Graphite
> sudo yum install siege
> siege -c10 -t30s -d1 -i -f urls.txt
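For reference, the flags used here: -c10 simulates 10 concurrent users, -t30s runs the test for 30 seconds, -d1 adds a random delay of up to 1 second between each user's requests, -i (internet mode) picks URLs at random from the file, and -f urls.txt points siege at your URL list.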
siege
• Quick and simple
• Instant visual feedback
• Good summary of the most important metrics
• Good enough for most scenarios
= You can work in quick iterations
Create a Traffic Model
Your best guess of how the system will be used
When replacing a system:
Get access logs from the old system.
Analyze which parts of the system generate the most load.
Plot them over time to get a feeling for peaks and average load.
If you replay access logs against your system with siege, you can gain a lot of insight and find problems like unhandled URLs, unnecessary redirects, etc.
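One way to turn an access log into siege input, as a sketch only: it assumes a combined-format log at /var/log/nginx/access.log and that the system under test answers on http://example.com, so adjust the path, host and filtering to your setup.
> awk '$6 == "\"GET" { print "http://example.com" $7 }' /var/log/nginx/access.log \
    | sort | uniq -c | sort -rn | head -1000 | awk '{ print $2 }' > urls.txt
This writes the 1000 most requested GET paths to urls.txt, which already gives you a rough traffic model to replay.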
Build a urls.txt file
Based on the traffic model, fill a file with URLs that represent your expected traffic, one URL per line.
Think about the proportions of different types of URLs.
Build a urls.txt file
Expecting broad traffic -> many URLs
Expecting narrow traffic -> not so many
Build a urls.txt file
http://example.com
http://example.com/user/stina
http://example.com/user/olof
http://example.com/user/sven
http://example.com/user/siv
http://example.com/user/ellen
http://example.com/country/sweden
http://example.com/country/norway
Build a urls.txt file
curl, cut, jq, grep etc. are your friends
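If the data lives behind an API rather than in logs, the same idea works with curl and jq. A hypothetical sketch: the /api/users endpoint and the username field are made up, so substitute whatever your system exposes.
> curl -s http://example.com/api/users \
    | jq -r '.[].username' \
    | sed 's|^|http://example.com/user/|' >> urls.txt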
Build a urls.txt file
You can send POST as well:
http://example.com/user/stina POST age=23
http://example.com/user/stina POST a=1&b=2
http://example.com/user/stina POST <./stina.txt
Finding the system limits
10 req/s OK
100 req/s OK
500 req/s Slooow
300 req/s OK
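A quick way to bracket the limit is to step up siege's concurrency and watch the transaction rate and response time in each summary. A rough sketch, where the -c steps are only illustrative, so pick values that suit your system:
> for c in 10 25 50 100 200; do
>   echo "=== $c concurrent users ==="
>   siege -c$c -t60s -d1 -i -f urls.txt
> done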
Scale up with more CPU or processors and try again
Does it scale linearly?
This is a good time to think about the bottlenecks of your system:
I/O
CPU
Sync / Async
Adjust and try again
Keep an eye on system metrics:
Response time
CPU load
Memory usage
Error rates
etc.
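For the machine-level numbers, something as simple as vmstat over SSH goes a long way (app-server is a placeholder for the host under test):
> ssh app-server vmstat 5
This prints CPU, memory and swap counters every 5 seconds while siege is running; pair it with New Relic, Kibana or Graphite for the application-level view.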
Stability testing
Start a Linux machine on AWS
Set up a session with tmux
Start siege
Leave it running
> siege -c10 -t24h -i -d1 -f urls.txt
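If you prefer, the whole run can be started detached in a single line so you can log out right away (the session name loadtest is arbitrary):
> tmux new -d -s loadtest 'siege -c10 -t24h -i -d1 -f urls.txt'
Attach with tmux attach -t loadtest whenever you want to peek at the output.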
Put your system under continuous load from your first commit
Keep an eye on system metrics every now and then
Good to know: siege counts error responses and will stop
if it encounters more than 1024 errors
TLDR
Stop doing performance tests like it's 1999.
Put your system under load from day 1.
Run tests interactively, be creative.
Gain understanding.
AWS + tmux + siege is awesome!
tmux cheat sheet
> tmux new -s loadtest
  starts a new session named loadtest
ctrl-b + d
  detaches from (exits) the session
> tmux list-sessions
  lists all current sessions
> tmux attach -t loadtest
  attaches to the session named loadtest
If siege does not fit your use case, you could try:
- Gatling
- Locust
- wrk
They are bigger (more complex) hammers.