
Engineering a Continuous Delivery Pipeline

How do you make the process of getting code into production seamless for developers, without causing difficulties for operations? Developers would rather avoid gatekeeping in order to keep feedback loops short; ops teams would rather know exactly what is going into production, and when, so they can avoid surprises and prepare support training. In this talk you will learn about the process of creating a deployment pipeline for Kubernetes using GitLab CI, what checks you can use to catch human error early, and ways of validating that changes won't cause the running system to tip over. This talk will improve your ability to design deployment processes that avoid developer pain while building operational trust.

Given at DevOps Days Edinburgh 2017


Charlotte Godley

October 24, 2017


  1. Engineering a Continuous Delivery Pipeline Charlotte Godley

  2. @charwarz Aim & Objectives
     Aim: Describe how we engineered our continuous delivery pipeline into multiple Kubernetes environments @ Ocado Technology
     Objectives:
     • Set the problem in context: some specifics of the Ocado use case
     • Define what a bad deployment process looks like
     • Define what a better deployment process looks like
     • Discuss what caused us pain, and how we addressed it
     • Talk about where we’re going next
  3. @charwarz Hello!

  4. @charwarz Problem context

  5. @charwarz Current (bad) Delivery Processes at Ocado

  6. @charwarz Getting a big picture is hard.

  7. @charwarz Incomplete or Non-existent Documentation

  8. @charwarz One size doesn’t fit all

  9. @charwarz No Post-deployment Visibility Aka, did that work?!

  10. The big picture

  11. @charwarz GitOps
     Git is the single source of truth for operations. Every merge to master starts a deployment pipeline, so what’s currently running in prod should be the HEAD of the master branch. “Operations by Pull Request” from Weaveworks gives a good summary.
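
     This pattern can be sketched as a minimal `.gitlab-ci.yml` (a hypothetical illustration, not the actual Ocado pipeline; the job names and manifest path are assumptions):

     ```yaml
     # Hypothetical GitOps-style pipeline: validate every branch,
     # deploy only when a merge lands on master.
     stages:
       - validate
       - deploy

     validate-manifests:
       stage: validate
       script:
         # a dry-run catches malformed manifests before anything touches the cluster
         - kubectl apply --dry-run -f manifests/

     deploy-production:
       stage: deploy
       script:
         - kubectl apply -f manifests/
       only:
         - master   # every merge to master starts a deployment
     ```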
  14. Documentation

  15. @charwarz Write down all the things
     Documentation our team writes is...
     - In one, obvious place
     - Peer reviewed
     - Concise, but with lots of examples
     - Updated regularly
     - A reference guide for best practice, but avoids duplicating upstream documentation
  16. @charwarz What A Good Manifests Repo Looks Like
     |-- .gitlab-ci.yml
     |--
     |-- manifests
     |   |-- 00-Namespace.yaml
     |       ...other useful examples...
     |-- hack
     |   |
  17. @charwarz What A Good Deployment Repo Looks Like
     |-- .gitlab-ci.yml
     |--
     |-- teams
     |   |-- 00-kubernetes
     |   |   |-- manifests
     |   |   |   |-- secret.yaml.enc
     |   |   |-- bundles
  18. Avoiding rules

  19. @charwarz Ground rules...
     - Every resource has a namespace
     - Every project namespace name is <appId>[-optional-suffix]
     - Every project namespace has the label appId: <appId>
     - The above two rules match each other: the appId label must equal the appId prefix of the namespace name
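
     A namespace manifest that follows these rules might look like this (the appId `myapp` and the suffix are made-up examples):

     ```yaml
     # Hypothetical Namespace manifest obeying the ground rules:
     # the name is <appId>[-optional-suffix] and the appId label matches it.
     apiVersion: v1
     kind: Namespace
     metadata:
       name: myapp-staging       # <appId>-<optional-suffix>
       labels:
         appId: myapp            # must match the appId prefix of the name
     ```

     Rules like these are mechanically checkable, which is what makes them useful: a CI job can reject any manifest whose label and name disagree before it ever reaches a cluster.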
  20. Did that work?!

  21. @charwarz Monitoring
     - Automated Slack notifications when pipelines start and stop
     - Links in our documentation to our own dashboards
     - End-to-end tests to highlight where apps could be affected by Kubernetes issues
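
     A pipeline-start notification can be wired up with a GitLab CI job posting to a Slack incoming webhook. This is a sketch under assumptions: `SLACK_WEBHOOK_URL` is presumed to be a secret CI variable, and the `notify` stage name is invented for illustration:

     ```yaml
     # Hypothetical Slack notification job; SLACK_WEBHOOK_URL is assumed
     # to be a secret CI variable holding a Slack incoming-webhook URL.
     notify-start:
       stage: notify
       script:
         - >
           curl -X POST -H 'Content-type: application/json'
           --data "{\"text\": \"Pipeline started for $CI_PROJECT_NAME ($CI_COMMIT_REF_NAME)\"}"
           "$SLACK_WEBHOOK_URL"
     ```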
  22. @charwarz Access
     - Read-only access
     - Developer access
     - Admin

  23. Pain points

  24. @charwarz Workflow

  25. @charwarz Workflow

  26. @charwarz Workflow

  27. @charwarz Workflow

  28. @charwarz Workflow fix: reference updater bot

  29. @charwarz Validation

  30. @charwarz Tidying up
     - What’s currently on the cluster is the master branch
     - Add = new merge request
     - Rollback = revert
     - Delete = ??
  31. @charwarz Testing the pipeline

  32. What’s next?

  33. @charwarz Summary
     “If users aren't finding success on their own, it's not their fault. It's our fault. We didn't make it easy enough for them to fall into the pit of success.” (Jeff Atwood)
  34. @charwarz Thanks for listening! (We’re hiring…)