Billing the Cloud
Pierre-Yves Ritschard
May 12, 2017
Updated Billing the Cloud slides for We are Developers 2017 in Vienna
Transcript
@pyr Billing the cloud Real world stream processing
@pyr Three-line bio • CTO & co-founder at Exoscale •
Open Source Developer • Monitoring & Distributed Systems Enthusiast
@pyr Billing the cloud Real world stream processing
@pyr • Billing resources • Scaling methodologies • Our approach
@pyr
@pyr provider "exoscale" { api_key = "${var.exoscale_api_key}" secret_key = "${var.exoscale_secret_key}"
} resource "exoscale_instance" "web" { template = "ubuntu 17.04" disk_size = "50g" template = "ubuntu 17.04" profile = "medium" ssh_key = "production" }
@pyr Infrastructure isn’t free! (sorry)
@pyr Business Model • Provide cloud infrastructure • (???) •
Profit!
@pyr 10000 mile high view
Quantities
Quantities • 10 megabytes have been sent from 159.100.251.251 over the last minute
Resources
Resources • Account WAD started instance foo with profile large
today at 12:00 • Account WAD stopped instance foo today at 12:15
A bit closer to reality

{:type     :usage
 :entity   :vm
 :action   :create
 :time     #inst "2016-12-12T15:48:32.000-00:00"
 :template "ubuntu-16.04"
 :source   :cloudstack
 :account  "geneva-jug"
 :uuid     "7a070a3d-66ff-4658-ab08-fe3cecd7c70f"
 :version  1
 :offering "medium"}
A bit closer to reality

message IPMeasure {
  /* Versioning */
  required uint32 header = 1;
  required uint32 saddr  = 2;
  required uint64 bytes  = 3;
  /* Validity */
  required uint64 start  = 4;
  required uint64 end    = 5;
}
@pyr Theory
@pyr Quantities are simple
@pyr Resources are harder
@pyr This is per account
@pyr Solving for all events
resources = {}
metering = []

def usage_metering():
    for event in fetch_all_events():
        uuid = event.uuid()
        time = event.time()
        if event.action() == 'start':
            resources[uuid] = time
        else:
            timespan = duration(resources[uuid], time)
            usage = Usage(uuid, timespan)
            metering.append(usage)
    return metering
@pyr In Practice
@pyr • This is a never-ending process • Minute-precision billing
• Applied every hour
@pyr • Avoid overbilling at all costs • Avoid underbilling (we need to eat!)
@pyr • Keep a small operational footprint
@pyr A naive approach
30 * * * * usage-metering >/dev/null 2>&1
@pyr Advantages
@pyr • Low operational overhead • Simple functional boundaries •
Easy to test
@pyr Drawbacks
@pyr • High pressure on SQL server • Hard to
avoid overlapping jobs • Overlaps result in longer metering intervals
You are in a room full of overlapping cron jobs.
You can hear the screams of a dying MySQL server. An Oracle vendor is here. To the West, a door is marked “Map/Reduce”. To the East, a door is marked “Stream Processing”.
> Talk to Oracle
You’ve been eaten by a grue.
> Go West
@pyr
@pyr • Conceptually simple • Spreads easily • Data locality
aware processing
@pyr • ETL • High latency • High operational overhead
> Go East
@pyr
@pyr • Continuous computation on an unbounded stream • Each
record processed as it arrives • Very low latency
@pyr • Conceptually harder • Where do we store intermediate
results? • How does data flow between computation steps?
@pyr Deciding factors
@pyr Our shopping list • Operational simplicity • Integration through
our whole stack • Room to grow
@pyr Operational simplicity • Experience matters • Spark and Storm are intimidating • HBase & Hive discarded
@pyr Integration • HDFS & Kafka require simple integration •
Spark goes hand in hand with Cassandra
@pyr Room to grow • A ton of logs •
A ton of metrics
@pyr Small confession • Previously knew Kafka
@pyr
@pyr • Publish & Subscribe • Processing • Store
@pyr Publish & Subscribe • Records are produced on topics
• Topics have a predefined number of partitions • Records have a key which determines their partition
@pyr • Consumers get assigned a set of partitions •
Consumers store their last consumed offset • Brokers own partitions, handle replication
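A minimal sketch of that flow with the kafka-python client (broker address, topic name, and record shape are illustrative assumptions, not Exoscale's actual setup). Keying records by account means every event for an account lands on the same partition and is consumed in order:

import json
from kafka import KafkaProducer, KafkaConsumer

# Produce a usage record keyed by account; the key decides the partition,
# so all events for "geneva-jug" stay on one partition, in order.
producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         key_serializer=str.encode,
                         value_serializer=lambda v: json.dumps(v).encode())
producer.send("usage", key="geneva-jug",
              value={"type": "usage", "entity": "vm", "action": "create",
                     "uuid": "7a070a3d-66ff-4658-ab08-fe3cecd7c70f",
                     "offering": "medium"})
producer.flush()

# A consumer in group "metering" is assigned a subset of the partitions and
# commits its last consumed offset back to the brokers.
consumer = KafkaConsumer("usage",
                         bootstrap_servers="localhost:9092",
                         group_id="metering",
                         enable_auto_commit=False,
                         key_deserializer=lambda k: k.decode(),
                         value_deserializer=lambda v: json.loads(v.decode()))
for record in consumer:
    print(record.key, record.value)  # metering logic would go here
    consumer.commit()                # store the last consumed offset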
@pyr • Stable consumer topology • Memory disaggregation • Can
rely on in-memory storage • Age expiry and log compaction
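Age expiry and log compaction are per-topic settings; a sketch with kafka-python's admin client (topic name, partition count, and retention are made-up values):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(name="vm-state",
             num_partitions=12,
             replication_factor=3,
             topic_configs={
                 # compaction keeps only the latest record per key,
                 # delete ages out segments older than retention.ms
                 "cleanup.policy": "compact,delete",
                 "retention.ms": str(30 * 24 * 3600 * 1000),
             })
])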
@pyr
@pyr Billing at Exoscale
@pyr Problem solved?
@pyr • Process crashes • Undelivered message? • Avoiding overbilling
@pyr Reconciliation • Snapshot of full inventory • Converges stored
resource state if necessary • Handles failed deliveries as well
@pyr Avoiding overbilling • Reconciler acts as logical clock •
When supplying usage, attach a unique transaction ID • Reject multiple transaction attempts on a single ID
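A minimal, in-memory sketch of that rejection step (the real reconciler and its storage are not shown; names and values are illustrative):

applied = set()   # transaction IDs already billed
ledger = {}       # account -> total usage billed so far

def apply_usage(txn_id, account, usage):
    """Record usage at most once; a duplicate transaction ID is rejected."""
    if txn_id in applied:
        return False                  # retry or redelivery: already billed
    applied.add(txn_id)
    ledger[account] = ledger.get(account, 0) + usage
    return True

apply_usage("geneva-jug/2016-12-12T15:00", "geneva-jug", 42)   # billed
apply_usage("geneva-jug/2016-12-12T15:00", "geneva-jug", 42)   # rejected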
@pyr Parting words
@pyr Looking back • Things stay simple (roughly 600 LoC)
• Room to grow • Stable and resilient • DNS, Logs, Metrics, Event Sourcing
@pyr What about batch? • Streaming doesn’t work for everything
• Sometimes throughput matters more than latency • Building models in batch, applying with stream processing
@pyr Thanks! Questions?