
Monolith to Microservices: A small team's journey

Dale Humby
December 09, 2018


At Nomanini we built our first payments platform on Google Cloud when it was still only App Engine. As our product grew, we built an ever-more-complex monolith. As our start-up pivoted, we needed more flexibility and wanted to experiment with newer technologies. With only three developers, we made the bold decision to migrate our platform from an App Engine monolith to microservices running on Docker and Kubernetes. In this frank discussion I talk about the reasons why we made this move, the pitfalls we experienced, the benefits of the move, and why we love Kubernetes on Google Cloud.

Video: https://youtu.be/1sLg689YZtg


Transcript

1. [Chart: Android phone prices, 2011 to 2018: $250+, $200, $150+, under $40.] The cost of Android phones has radically decreased. We could not scale hardware.
2. Monolith
   Pro
   • Simplicity
   • Consistency
   • Ease of cross-boundary refactoring
   Con
   • With trunk-based dev: 6 devs blocking each other on merges
   • Single language
   • Package and deploy the entire project, even for a small change
   • Scale by deploying the entire project (heavyweight VMs)
3. Microservices
   Pro
   • Scale out independently
   • Experimentation
   • Lower cognitive load per service
   • Fast, independent release cycles
   • Separate teams
   Con
   • Comms: typically TCP/IP (HTTP, GraphQL, RPC) instead of IPC
   • Debugging:
     ◦ Async
     ◦ Complex architecture, messaging
     ◦ Tracing across services
     ◦ Visibility into status
   • Dependencies: large refactors, backward-compatibility overhead, scheduling across teams
   • Infrastructure: duplication of monitoring and CI/CD pipelines
4. CI/CD Maturity Model: from manual everything, through Continuous Integration, to Continuous Delivery, to Continuous Deployment.
   • Continuous Integration: automated tests, build artifacts
   • Continuous Delivery: always deployable, readiness feedback, push-button deploy, DevOps culture
5. CI/CD Maturity Model (continued)
   • Continuous Deployment: auto deploy to prod; monitoring and alerting; canary releases (see the sketch below)
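To make the "canary releases" rung concrete: the gate before auto-deploying to prod can be as simple as comparing the canary's error rate to the current baseline. A minimal sketch in Python; the function name, metrics, and tolerance are illustrative assumptions, not from the talk:

    def should_promote(canary_errors: int, canary_requests: int,
                       baseline_errors: int, baseline_requests: int,
                       tolerance: float = 0.01) -> bool:
        """Gate a canary release: promote only if the canary's error rate
        is no worse than the baseline's, within a small tolerance."""
        canary_rate = canary_errors / max(canary_requests, 1)
        baseline_rate = baseline_errors / max(baseline_requests, 1)
        return canary_rate <= baseline_rate + tolerance

    # Example: 1.2% canary errors vs. 1.0% baseline passes with a 1% tolerance.
    assert should_promote(12, 1000, 10, 1000)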
6. How big should microservices be?
   • 2-pizza team
   • Can hold it in your head
   • Bounded contexts
   • Can rewrite it in 2 weeks
   • 12 people, 1 service
   • 6 people, 6 services
   • Does 1 thing well
   • As small as possible, and no smaller
   • Independently replaceable, deployable, upgradable
   • Heterogeneous tech
7. Too big
   • Painful deployment
   • Hard to change
   • "Why does this depend on something I've never heard of?"
   • Large teams
   • 2-3 integration technologies (RPC, REST, database)
8. Too big: how to fix? (See the sketch below.)
   • Find the seams
   • Coupling: split on dependencies
   • Functional decomposition, data partitioning
   • Split, and add an API service
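To make "split, add an API service" concrete, here is a minimal Python sketch (the module, env var, and endpoint names are hypothetical, not Nomanini's code): the monolith calls the seam through a small facade, so the implementation can move behind an HTTP API without touching every call site.

    import os
    import requests

    def _charge_in_process(account_id: str, cents: int) -> dict:
        # Stand-in for the monolith's original billing code at the seam.
        return {"account_id": account_id, "cents": cents, "status": "charged"}

    class BillingFacade:
        """Routes billing calls in-process or to the newly extracted service."""

        def __init__(self) -> None:
            # If BILLING_URL is unset, fall back to the legacy in-process path.
            self.base_url = os.environ.get("BILLING_URL")

        def charge(self, account_id: str, cents: int) -> dict:
            if self.base_url:
                # New path: the seam is now a network API.
                resp = requests.post(f"{self.base_url}/charges",
                                     json={"account_id": account_id, "cents": cents},
                                     timeout=2.0)
                resp.raise_for_status()
                return resp.json()
            # Old path: still inside the monolith.
            return _charge_in_process(account_id, cents)

Once all call sites go through the facade, flipping the environment variable moves traffic to the extracted service, and the legacy path can be deleted.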
9. Too small
   • Integration issues
   • Inconvenient coupling, especially in deployments
   • Exposes representations instead of behaviour
10. Too small: how to fix? (See the aggregator sketch below.)
   • Redraw domain boundaries
   • Combine services
   • Hide CRUD; add business-oriented events
   • Put a service aggregator/orchestrator in front
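A minimal sketch of the aggregator idea, with hypothetical service URLs and endpoints: callers see one business operation ("activate merchant") instead of the raw CRUD of the two too-small services behind it.

    import requests

    ACCOUNTS_URL = "http://accounts:8080"   # hypothetical CRUD service
    DEVICES_URL = "http://devices:8080"     # hypothetical CRUD service

    def activate_merchant(merchant_id: str, device_id: str) -> dict:
        """One business-oriented call that orchestrates the underlying CRUD APIs."""
        account = requests.get(f"{ACCOUNTS_URL}/accounts/{merchant_id}", timeout=2.0)
        account.raise_for_status()

        resp = requests.put(f"{DEVICES_URL}/devices/{device_id}",
                            json={"owner": merchant_id, "state": "active"},
                            timeout=2.0)
        resp.raise_for_status()

        # Publish a business event rather than exposing table updates.
        return {"event": "merchant_activated",
                "merchant": merchant_id,
                "device": device_id}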
11. Just right. But first...
    • Mature DevOps
    • No siloed teams
    Business domain
    • High cohesion
    • Low coupling
    • Single responsibility
12. Just right (continued): organisational culture (mature DevOps, no siloed teams) matters as much as the business domain (high cohesion, low coupling, single responsibility).
  13. "Organizations which design systems... are constrained to produce designs which

    are copies of the communication structures of these organizations." - Melvin Conway
14. Containers at Google: developed as the only practical way to manage Google-scale compute. Everything at Google runs in a container; Google launches over 2 billion containers per week.
15. [Diagram: a Kubernetes cluster with microservice pods (µS 1-4) scheduled across Nodes 1-3.]
16. [Diagram: Kubernetes in a single zone; the same pods spread across Nodes 1-3.]
17. [Diagram: Kubernetes in high availability; microservice pods replicated across Nodes 1-3.]
18. [Diagram: a node fails and its pods drop out of the cluster.]
19. [Diagram: Kubernetes reschedules the lost pods onto the remaining nodes.]
20. [Diagram: all microservices are running again on the surviving nodes.]
21. Kubernetes + preemptible VMs = 70% cost saving. [Diagram: the same pods packed onto cheaper preemptible nodes.] See the graceful-shutdown sketch below.
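The catch with preemptible VMs is that nodes can be reclaimed with only about 30 seconds' notice, so every service must handle SIGTERM cleanly. A minimal, stdlib-only Python sketch of the shutdown pattern (the work loop is a placeholder):

    import signal
    import sys
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        # Kubernetes sends SIGTERM when a pod is evicted, e.g. on node preemption.
        global shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)

    while not shutting_down:
        # Do one small unit of work, short enough to finish in the grace period.
        time.sleep(1)

    # Flush buffers, ack or re-queue in-flight messages, close connections.
    sys.exit(0)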
22. Best practices
    • Auto-scaling
    • Async, eventual consistency
    • Timeouts and circuit breakers to fail fast (see the sketch below)
    • Bulkheads
    • Avoid cascading failures
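A minimal sketch of two of these patterns together: a hard timeout so calls fail fast, and a circuit breaker that stops calling a struggling dependency so failures don't cascade. The thresholds and the use of the requests library are illustrative; this is not a production implementation.

    import time
    import requests

    class CircuitBreaker:
        def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, url: str) -> requests.Response:
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: allow one trial request
            try:
                resp = requests.get(url, timeout=1.0)  # hard timeout: fail fast
                resp.raise_for_status()
            except requests.RequestException:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            return resp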
23. [Chart: learning curve of team knowledge over time, climbing through CI, CD, microservices, and visibility.] Kubernetes: control plane. Istio: microservice mesh. Always ready to release; automation, testing.
24. Lessons learnt: Monolith
    • Increasingly difficult as team and product complexity scale
    • Not easy to change to another tech stack (DBs, languages, libraries)
25. Lessons learnt: Microservices
    • Went too small: CRUD services… now consolidating
    • Really needed a strong CI/CD culture, and even then it was challenging
    • Refactoring while live: changing the engine while the aeroplane is flying
    • Distributed systems are challenging
    • The Kubernetes ecosystem is good, but the learning curves are big and you need a larger team
26. Lessons learnt: Not removing complexity – just moving it.