
Billing the Cloud

Updated Billing the Cloud slides for WeAreDevelopers 2017 in Vienna

Pierre-Yves Ritschard

May 12, 2017

Transcript

  1. @pyr Three-line bio • CTO & co-founder at Exoscale • Open Source Developer • Monitoring & Distributed Systems Enthusiast
  2. @pyr
     provider "exoscale" {
       api_key    = "${var.exoscale_api_key}"
       secret_key = "${var.exoscale_secret_key}"
     }

     resource "exoscale_instance" "web" {
       template  = "ubuntu 17.04"
       disk_size = "50g"
       profile   = "medium"
       ssh_key   = "production"
     }
  3. Resources • Account WAD started instance foo with profile large today at 12:00 • Account WAD stopped instance foo today at 12:15
  4. A bit closer to reality
     {:type     :usage
      :entity   :vm
      :action   :create
      :time     #inst "2016-12-12T15:48:32.000-00:00"
      :template "ubuntu-16.04"
      :source   :cloudstack
      :account  "geneva-jug"
      :uuid     "7a070a3d-66ff-4658-ab08-fe3cecd7c70f"
      :version  1
      :offering "medium"}
  5. A bit closer to reality
     message IPMeasure {
       /* Versioning */
       required uint32 header = 1;
       required uint32 saddr  = 2;
       required uint64 bytes  = 3;
       /* Validity */
       required uint64 start  = 4;
       required uint64 end    = 5;
     }
  6. resources = {}
     metering = []

     def usage_metering():
         # Pair each stop event with its matching start to compute usage
         for event in fetch_all_events():
             uuid = event.uuid()
             time = event.time()
             if event.action() == 'start':
                 resources[uuid] = time
             else:
                 timespan = duration(resources.pop(uuid), time)
                 metering.append(Usage(uuid, timespan))
         return metering
  7. @pyr • High pressure on SQL server • Hard to avoid overlapping jobs • Overlaps result in longer metering intervals
  8. You are in a room full of overlapping cron jobs. You can hear the screams of a dying MySQL server. An Oracle vendor is here. To the West, a door is marked “Map/Reduce”. To the East, a door is marked “Stream Processing”.
  9. @pyr • Continuous computation on an unbounded stream • Each record processed as it arrives • Very low latency
  10. @pyr • Conceptually harder • Where do we store intermediate results? • How does data flow between computation steps?
  11. @pyr Publish & Subscribe • Records are produced on topics • Topics have a predefined number of partitions • Records have a key which determines their partition
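The key-to-partition mapping described above is conventionally a stable hash of the record key modulo the partition count, so all records for one key land on the same partition and stay ordered. A minimal sketch (the function name and partition count are illustrative, not any broker's actual API):

```python
import zlib

NUM_PARTITIONS = 8  # illustrative: a topic's partition count is fixed up front

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Use a stable hash: Python's built-in hash() is salted per process,
    # so it cannot be used for routing that must survive restarts.
    return zlib.crc32(key.encode("utf-8")) % num_partitions
```

Because the mapping is deterministic, every usage record for a given account can be keyed by account ID and will be processed in order by a single consumer.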
  12. @pyr • Consumers get assigned a set of partitions • Consumers store their last consumed offset • Brokers own partitions, handle replication
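The offset bookkeeping above is what lets a consumer resume after a restart without reprocessing old records. A minimal sketch of the idea, assuming a partition is just an append-only list (the `Consumer` class and `poll` method are illustrative, not a real client API):

```python
class Consumer:
    """Tracks the last consumed offset per assigned partition."""

    def __init__(self, partitions):
        # One stored offset per partition this consumer is assigned.
        self.offsets = {p: 0 for p in partitions}

    def poll(self, partition, log):
        # Deliver only records past the stored offset, then advance it.
        start = self.offsets[partition]
        records = log[start:]
        self.offsets[partition] = len(log)
        return records
```

A restarted consumer that reloads `offsets` from durable storage picks up exactly where it left off, which is what keeps metering jobs from overlapping.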
  13. @pyr • Stable consumer topology • Memory disaggregation • Can rely on in-memory storage • Age expiry and log compaction
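Log compaction, mentioned above, bounds the size of a topic that holds state by keeping only the latest record per key. A minimal sketch of the effect, assuming the log is a list of `(key, value)` pairs:

```python
def compact(log):
    # Later records for the same key overwrite earlier ones,
    # so replaying the compacted log rebuilds the latest state.
    latest = {}
    for key, value in log:
        latest[key] = value
    return list(latest.items())
```

Replaying a compacted log into an in-memory map yields the same final state as replaying the full log, which is why consumers can rebuild state quickly after a restart.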
  14. @pyr Reconciliation • Snapshot of full inventory • Converges stored resource state if necessary • Handles failed deliveries as well
  15. @pyr Avoiding overbilling • Reconciler acts as logical clock • When supplying usage, attach a unique transaction ID • Reject multiple transaction attempts on a single ID
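The transaction-ID scheme above makes billing idempotent: a retry carries the same ID, so only the first attempt is charged. A minimal sketch (class and method names are illustrative):

```python
class Biller:
    """Accepts each usage transaction ID at most once."""

    def __init__(self):
        self.seen = set()
        self.ledger = []

    def charge(self, tx_id, usage):
        if tx_id in self.seen:
            return False          # duplicate attempt: rejected, no double bill
        self.seen.add(tx_id)
        self.ledger.append(usage)
        return True
```

With this in place, the reconciler can safely resend usage after a failed delivery without risking a double charge.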
  17. @pyr Looking back • Things stay simple (roughly 600 LoC) • Room to grow • Stable and resilient • DNS, Logs, Metrics, Event Sourcing
  18. @pyr What about batch? • Streaming doesn’t work for everything • Sometimes throughput matters more than latency • Building models in batch, applying with stream processing