
From Commit To Production And Beyond - The Continuous Delivery Pipeline

You’ve probably heard about continuous delivery, and you’ve probably heard of DevOps, but how are the two related? Throughout this talk you will learn what continuous delivery is and why your organization should strive to achieve it. You will then embark on a continuous delivery journey that will highlight the level of DevOps maturity an organization should be at to safely deliver to production on a regular basis and keep it running for the long term.

This talk will give you some ideas of what a continuous delivery pipeline looks like and a workflow the dev, QA and ops groups may want to follow. Particular attention will be paid to the application's life after deployment and to ways of managing the complexity of an ever more distributed system.

Arthur Maltson

March 27, 2018

Transcript

  1. First, let's talk about what Continuous Delivery isn't. It's not taking your existing process and pipeline and just shipping faster.
  2. "Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way." (Jez Humble, continuousdelivery.com)
  3. And Jez knows what he's talking about; he co-authored THE book on Continuous Delivery. You should definitely read it.
  4. CONTINUOUS DELIVERY CONTINUUM: I wanted to introduce something I call the Continuous Delivery Continuum. Some companies fall on one end: they ship once or twice a year, do manual QA and deployment, and follow a traditional waterfall process. On the other end are the DevOps unicorns, who ship dozens or hundreds of times a day. In reality, most companies land somewhere in the middle. Whether you're just starting out with Continuous Integration, further ahead and shipping every few weeks, or somewhere in between, you're somewhere on the Continuum. What I want to convince you of today is that your company should strive to move closer to the DevOps unicorns.
  10. So why would you want to move closer to the DevOps unicorns and do Continuous Delivery? Show of hands: who enjoys the weekend-long release marathons? No one?
  11. "In software, when something is painful, the way to reduce the pain is to do it more frequently, not less." (Jez Humble, Continuous Delivery book) As counterintuitive as it may seem, it has been shown over and over that deploying to production more frequently leads to more stability, not less.
  12. The more time and money we spend on a change set, the larger that change set becomes, and so the larger the problem space becomes when something goes wrong. We should strive to make smaller changes to reduce the problem space when issues do arise.
  13. "Cycle time: the time it takes from deciding to make a change, whether a bugfix or a feature, to having it available to users." (Jez Humble, Continuous Delivery book) This is a metric every company should start tracking, and hopefully start to notice going down as they move further to the right on the Continuous Delivery Continuum.
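
     To make the metric concrete, here is a minimal sketch (mine, not from the deck) of how cycle time could be computed per change, assuming you record a "decided" timestamp (e.g. when the work is accepted) and a "released" timestamp (when the change reaches users); the record fields and values below are purely illustrative.

        from datetime import datetime
        from statistics import median

        # Illustrative records: when a change was decided on and when it reached users.
        changes = [
            {"id": "PAY-101", "decided": "2018-03-01T10:00", "released": "2018-03-03T16:30"},
            {"id": "PAY-102", "decided": "2018-03-02T09:15", "released": "2018-03-02T18:00"},
            {"id": "PAY-103", "decided": "2018-03-05T11:00", "released": "2018-03-09T14:45"},
        ]

        def cycle_time_hours(change: dict) -> float:
            """Hours from deciding to make the change to having it available to users."""
            decided = datetime.fromisoformat(change["decided"])
            released = datetime.fromisoformat(change["released"])
            return (released - decided).total_seconds() / 3600

        times = [cycle_time_hours(c) for c in changes]
        print(f"median cycle time: {median(times):.1f}h, worst: {max(times):.1f}h")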
  14. "Software only becomes valuable when you ship it to customers. Before then it's just a costly accumulation of hard work and assumptions." (Darragh Curran, blog.intercom.io/shipping-is-your-companys-heartbeat) This comes from an excellent blog post by Darragh Curran that argues shipping is the heartbeat of your company.
  19. NEW BRANCH (TDD, stylechecks, static analysis, unit tests) → CI SERVER (stylechecks, static analysis, unit tests, acceptance tests, contract tests) → CODE REVIEW → MERGE BRANCH → CI SERVER (stylechecks, static analysis, unit tests, acceptance tests, contract tests, builds binary) → ISOLATED (deploy, start up, smoke tests) → DEVELOPMENT (deploy, start up, smoke tests, monitor logs, monitor load) → STAGING (deploy, start up, smoke tests, monitor logs, monitor load) → PRODUCTION (deploy dark, start up, smoke tests, monitor logs, monitor load, monitor metrics) → PRODUCTION LIVE (monitor logs, monitor load, monitor metrics, monitor error rates). We start out with just this first part… and a bit more… and, well, yeah, the pipeline is fairly large. But we're technical people; we like to break a problem down into manageable chunks.
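
     As a rough sketch (mine, not from the deck), the whole pipeline can be thought of as an ordered list of stages where any failing step stops the promotion of that change; the stage and step names below simply mirror the diagram, and run_step is a placeholder for the real tools.

        # Minimal pipeline skeleton: each stage is a list of named checks that must
        # all pass before the change is promoted to the next stage.
        PIPELINE = [
            ("branch CI", ["stylechecks", "static analysis", "unit tests",
                           "acceptance tests", "contract tests"]),
            ("master CI", ["stylechecks", "static analysis", "unit tests",
                           "acceptance tests", "contract tests", "build binary"]),
            ("isolated", ["deploy", "start up", "smoke tests"]),
            ("development", ["deploy", "start up", "smoke tests", "monitor logs", "monitor load"]),
            ("staging", ["deploy", "start up", "smoke tests", "monitor logs", "monitor load"]),
            ("production (dark)", ["deploy", "start up", "smoke tests",
                                   "monitor logs", "monitor load", "monitor metrics"]),
            ("production (live)", ["monitor logs", "monitor load",
                                   "monitor metrics", "monitor error rates"]),
        ]

        def run_step(stage: str, step: str) -> bool:
            """Placeholder: call the real tool (linter, test runner, deploy script) here."""
            print(f"[{stage}] {step}")
            return True

        def promote(change: str) -> bool:
            for stage, steps in PIPELINE:
                if not all(run_step(stage, step) for step in steps):
                    print(f"{change} stopped at stage: {stage}")
                    return False
            print(f"{change} reached production")
            return True

        promote("payment-service 1.42.0")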
  34. NEW BRANCH (test-driven development, stylechecks, static analysis, unit tests) → CI SERVER (stylechecks, static analysis, unit tests, acceptance tests, contract tests). Let's start with our developer: she creates a new branch and follows standard good development practices locally on her workstation. She tries to keep the branch to only a couple of days of work, to avoid integration pain later on. Working with QA, she writes acceptance tests. These are executed by the CI server. You might ask why she doesn't run them locally on her workstation. If your acceptance tests run fast enough, by all means, but these tend to be slow and require parallelization across dozens or hundreds of machines to be fast. Then we run contract tests, which we'll talk about later.
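
     For illustration only (the tool choices are my assumption, not the speaker's), the local portion of that loop (stylechecks, static analysis and unit tests) can be scripted so the developer runs exactly what the CI server will run on the branch; flake8, mypy and pytest here are stand-ins for whatever your project actually uses.

        import subprocess
        import sys

        # Local pre-push checks, mirroring what the CI server runs on the branch.
        # The concrete tools are illustrative; substitute your project's own.
        CHECKS = [
            ("stylechecks",     ["flake8", "src", "tests"]),
            ("static analysis", ["mypy", "src"]),
            ("unit tests",      ["pytest", "tests/unit", "-q"]),
        ]

        def main() -> int:
            for name, cmd in CHECKS:
                print(f"== {name}: {' '.join(cmd)}")
                if subprocess.run(cmd).returncode != 0:
                    print(f"{name} failed; fix before pushing the branch.")
                    return 1
            print("all local checks passed")
            return 0

        if __name__ == "__main__":
            sys.exit(main())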
  48. CODE REVIEW → MERGE BRANCH → CI SERVER (stylechecks, static analysis, unit tests, acceptance tests, contract tests, builds binary). Once all her tests on the branch are passing on the CI server, she creates a code review. Once the code review is done, we merge the branch and the CI server reruns the same tests, now on the integrated master/trunk. When we're on master we have an important new step: we build the binary. This brave artifact will go on a quest to production, and it'll be subject to many challenges along the way.
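
     One common way to make that single artifact traceable on its quest (a sketch of my own, not something prescribed by the talk) is to stamp it with the version and commit it was built from and record its checksum, so every later environment deploys exactly the bytes that passed the tests; the file names and version are hypothetical.

        import hashlib
        import json
        import pathlib
        import subprocess

        def fingerprint_artifact(binary: str, version: str) -> dict:
            """Record what was built so the same artifact is promoted through every environment."""
            data = pathlib.Path(binary).read_bytes()
            commit = subprocess.run(["git", "rev-parse", "HEAD"],
                                    capture_output=True, text=True, check=True).stdout.strip()
            manifest = {
                "artifact": binary,
                "version": version,
                "commit": commit,
                "sha256": hashlib.sha256(data).hexdigest(),
            }
            pathlib.Path(binary + ".manifest.json").write_text(json.dumps(manifest, indent=2))
            return manifest

        # Example (hypothetical paths): fingerprint_artifact("build/payment-service.jar", "1.42.0")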
  60. ISOLATED (deploy, start up, smoke tests) → DEVELOPMENT (deploy, start up, smoke tests, monitor logs, monitor load). With the binary in hand, we try deploying it to an isolated environment. This is where Development and Operations work together to maintain consistent deployment scripts that are used for every environment. If you're using a Platform as a Service, like Cloud Foundry, this is trivial to do. If you have an artisanal, hand-crafted platform, that's fine too; just make it easy to deploy to an isolated environment that's then torn down. We then check that startup is working. You'll be surprised how often a change breaks the startup of the application. Again, you want to catch these things early, not 2 or 3 weeks after the fact and then have to dig through 2-3 weeks of change sets. With an isolated deployment working, we deploy to development. At this point we introduce monitoring of the logs and the load on the system. Again, this all goes back to catching issues early and reducing the problem space when issues arise.
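
     A start-up check and smoke test can be as small as polling a health endpoint after the deploy and exercising one or two critical requests. The sketch below is illustrative, and the /health, /api/v1/ping and /api/v1/version paths are assumptions, not the speaker's.

        import time
        import urllib.error
        import urllib.request

        def wait_for_startup(base_url: str, timeout_s: int = 120) -> bool:
            """Poll the health endpoint until the freshly deployed instance answers."""
            deadline = time.time() + timeout_s
            while time.time() < deadline:
                try:
                    with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
                        if resp.status == 200:
                            return True
                except (urllib.error.URLError, OSError):
                    pass
                time.sleep(2)
            return False

        def smoke_test(base_url: str) -> bool:
            """A couple of cheap end-to-end requests that prove the deploy is sane."""
            for path in ("/api/v1/ping", "/api/v1/version"):
                try:
                    with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                        if resp.status != 200:
                            return False
                except (urllib.error.URLError, OSError):
                    return False
            return True

        # Example (hypothetical host): wait_for_startup("http://isolated.internal:8080")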
  77. STAGING (deploy, start up, smoke tests, monitor logs, monitor load) → PRODUCTION DARK (deploy, start up, smoke tests, monitor logs, monitor load, monitor metrics). Once we get to the Staging/UAT/Pre-Prod/etc. environment, we're usually in the kingdom of the QA team. This is the environment where they usually perform their exploratory testing, so they work closely with Development and Operations. With the Staging deployment successful, we start a production blue/green or dark deployment, which is usually the purview of the Operations team. You may also have noticed a pattern: we perform the same tests and analysis in every environment. This goes back to reducing the problem space when issues arise; if you know you've performed the same checks in each environment, then when something fails you know the issue is with that specific environment. At the production level we also start monitoring business metrics.
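
     The dark (blue/green) step keeps the new version in production but out of user traffic until it proves itself. Here is a minimal sketch of that cutover, assuming a router or load balancer whose active colour you can flip; every function other than the flow itself is a placeholder for your own deploy and routing tooling.

        # Blue/green ("dark") deployment sketch: deploy to the idle colour, verify it,
        # then flip traffic. Placeholder functions stand in for real infrastructure calls.

        def deploy(colour: str, artifact: str) -> None:
            print(f"deploying {artifact} to the {colour} stack")   # e.g. call your deploy script

        def healthy(colour: str) -> bool:
            print(f"running start-up and smoke checks against {colour}")
            return True                                            # reuse wait_for_startup/smoke_test

        def switch_traffic(to_colour: str) -> None:
            print(f"pointing the router at {to_colour}")           # placeholder for your LB/ingress API

        def blue_green_release(artifact: str, live: str = "blue") -> str:
            dark = "green" if live == "blue" else "blue"
            deploy(dark, artifact)
            if not healthy(dark):
                raise RuntimeError(f"{dark} failed verification; {live} stays live")
            switch_traffic(dark)          # the dark colour becomes live
            return dark                   # the old colour is kept around for fast rollback

        # Example: new_live = blue_green_release("payment-service 1.42.0")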
  90. PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR

    RATES EXPLORATORY TESTING PERFORMANCE TESTING " % $ " ! With a dark deployment successful, we move on to making that production version live. We also want to monitor error rates at this point to quickly roll back if we need once the site goes live. You may also ask, well QA also does additional work like exploratory testing and performance testing. Security teams will do static analysis and penetration test. These can happen either outside the pipeline, or as your pipeline matures, they can start getting integrated, eg. performance tests and security static analysis.
  91. PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR

    RATES EXPLORATORY TESTING PERFORMANCE TESTING SECURITY - STATIC ANALYSIS " % $ " ! With a dark deployment successful, we move on to making that production version live. We also want to monitor error rates at this point to quickly roll back if we need once the site goes live. You may also ask, well QA also does additional work like exploratory testing and performance testing. Security teams will do static analysis and penetration test. These can happen either outside the pipeline, or as your pipeline matures, they can start getting integrated, eg. performance tests and security static analysis.
  92. PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR

    RATES EXPLORATORY TESTING PERFORMANCE TESTING SECURITY - STATIC ANALYSIS SECURITY - PENETRATION " % $ " ! With a dark deployment successful, we move on to making that production version live. We also want to monitor error rates at this point to quickly roll back if we need once the site goes live. You may also ask, well QA also does additional work like exploratory testing and performance testing. Security teams will do static analysis and penetration test. These can happen either outside the pipeline, or as your pipeline matures, they can start getting integrated, eg. performance tests and security static analysis.
  93. NEW BRANCH TDD STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS CI

    SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS RUN CONTRACT TESTS CODE REVIEW MERGE BRANCH CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE RUN CONTRACT BUILDS BINARY ISOLATED DEPLOY START UP SMOKE TESTS DEVELOPMENT DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD STAGING * DEPLOY START UP SMOKE TESTS PRODUCTION START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD MONITOR LOGS MONITOR LOAD MONITOR METRICS PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR RATES And that's a whirlwind tour of the pipeline. It's a lot to take in…
  94. But remember, this is a journey. It’s going to take

    years to build this up. For us it's been a 6+ year journey and it's still going. You have to build it piece by piece, usually starting with the CI side first, then automating the deployment and then filling in the middle.
  95. WHAT IS CONTINUOUS DELIVERY? THE PIPELINE PIPELINE IN ACTION With

    the pipeline in mind, let’s take a look at it in action.
  96. NEW BRANCH TDD STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS CI

    SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS RUN CONTRACT TESTS CODE REVIEW MERGE BRANCH CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE RUN CONTRACT BUILDS BINARY ISOLATED DEPLOY START UP SMOKE TESTS DEVELOPMENT DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD STAGING * DEPLOY START UP SMOKE TESTS PRODUCTION START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD MONITOR LOGS MONITOR LOAD MONITOR METRICS PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR RATES We'll use an example from Darragh's blog post. Imagine a customer calls you up and says they're trying to sign up but can't seem to get through the name verification. They think it might be the hyphen in their name. The devs realize this is just a regular expression update in the name validation service. Let's take a look at what doing this through the pipeline looks like.
  97-113. NEW BRANCH TEST DRIVEN DEVELOPMENT STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS She first creates a branch to track this change, writing the test and making it pass. With the change working locally, she pushes the branch to the CI server. The CI server verifies the changes on a platform that's closer to production, say a Linux server rather than a Mac desktop. Then CI runs the acceptance tests and finds they fail.
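As an illustration of the kind of change and test our developer might write here, the sketch below updates a name-validation regex to accept hyphens and adds a unit test for it. NameValidator, its pattern and the test names are hypothetical, not the actual service code from the talk.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.regex.Pattern;
import org.junit.jupiter.api.Test;

// Hypothetical name validator, standing in for the real name validation service.
final class NameValidator {
    // The old pattern only allowed letters and spaces; adding '-' and an apostrophe
    // lets hyphenated names like "Anna-Marie" and "O'Neil" through.
    private static final Pattern VALID_NAME = Pattern.compile("[A-Za-z' -]+");

    static boolean isValid(String name) {
        return name != null && !name.isBlank() && VALID_NAME.matcher(name).matches();
    }
}

class NameValidatorTest {
    @Test
    void acceptsHyphenatedNames() {
        assertTrue(NameValidator.isValid("Anna-Marie Smith"));
    }

    @Test
    void stillRejectsEmptyOrBlankNames() {
        assertFalse(NameValidator.isValid(""));
        assertFalse(NameValidator.isValid("   "));
    }
}
```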
  114. FAST FEEDBACK This highlights the importance of having fast feedback.

    People have short attention spans; if a build takes more than a few minutes, the devs will be off to Twitter. This is why you want to parallelize your tests as much as possible. For example, Facebook spins up one machine per acceptance test, i.e. 10K machines, so the feedback is only as slow as the slowest test.
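A toy illustration of why that parallelization helps: when test shards run concurrently, wall-clock feedback is bounded by the slowest shard rather than the sum of all of them. The shard names and timings below are made up.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Three fake "test shards" run concurrently; total feedback time is roughly the
// slowest shard (~900 ms), not the ~1600 ms it would take to run them in sequence.
public class ParallelFeedbackDemo {

    static void runShard(String name, long millis) {
        try {
            Thread.sleep(millis); // stand-in for actually running a shard of tests
            System.out.println(name + " finished in ~" + millis + " ms");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        List<CompletableFuture<Void>> shards = List.of(
                CompletableFuture.runAsync(() -> runShard("unit tests", 400)),
                CompletableFuture.runAsync(() -> runShard("acceptance tests", 900)),
                CompletableFuture.runAsync(() -> runShard("contract tests", 300)));
        shards.forEach(CompletableFuture::join);
        System.out.println("Feedback after " + (System.currentTimeMillis() - start) + " ms");
    }
}
```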
  115-118. RUN ACCEPTANCE TESTS RUN CONTRACT TESTS With the acceptance tests now passing, we find the contract tests fail.
  119. CONSUMER DRIVEN CONTRACTS PACTO PACT PACT-JVM Consumer driven contracts are

    a fairly new concept. The idea is your consuming services write tests describing which endpoints they call, which requests they send and which responses they expect. Then your service runs those tests as part of its own pipeline. http://martinfowler.com/articles/consumerDrivenContracts.html
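To make the idea concrete, here is a hand-rolled sketch of a provider-side check against consumer expectations. It deliberately avoids the real Pact/Pacto APIs; the endpoint, interaction data and handler are all hypothetical, and in a real setup the expectations would be loaded from a contract file produced by the consumer's build rather than hard-coded.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// Stand-in for a consumer-driven contract: the consumer publishes the interactions
// it relies on, and the provider replays them in its own pipeline.
class NameServiceContractTest {

    // Endpoint, request body and expected response status, as declared by the consumer.
    record Interaction(String endpoint, String requestBody, int expectedStatus) {}

    private static final List<Interaction> CONSUMER_EXPECTATIONS = List.of(
            new Interaction("/names/validate", "Anna-Marie O'Neil", 200),
            new Interaction("/names/validate", "", 400));

    @Test
    void providerStillHonoursConsumerExpectations() {
        for (Interaction interaction : CONSUMER_EXPECTATIONS) {
            int actual = validateNameEndpoint(interaction.requestBody());
            assertEquals(interaction.expectedStatus(), actual,
                    "contract broken for " + interaction.endpoint());
        }
    }

    // Stand-in for the provider's handler; a fuller setup would call it over HTTP.
    private static int validateNameEndpoint(String name) {
        return name != null && !name.isBlank() ? 200 : 400;
    }
}
```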
  120-122. Our developer makes the required changes and gets the contract tests passing.
  123-134. CODE REVIEW MERGE BRANCH CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS RUN CONTRACT TESTS BUILDS BINARY With the CI branch passing, our developer creates a code review. Code reviews are an amazing way to share knowledge, spread awareness of changes happening to a project and distribute code ownership so no one person "owns" specific code. Oh, and it helps find bugs. With the branch merged, we rerun the tests against the integrated master and build our binary. This artifact will start its journey to production. If it fails any of the challenges, we throw it away. The artifact doesn't have any feelings…
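A toy sketch of that "series of challenges" idea: stages run in order, and the candidate artifact is discarded at the first failure. The stage names mirror the slide; the implementations are placeholders, not real build steps.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Fail-fast promotion: the artifact only moves right while every stage passes.
public class PromotionPipeline {

    public static void main(String[] args) {
        // LinkedHashMap keeps the stages in pipeline order.
        Map<String, Supplier<Boolean>> stages = new LinkedHashMap<>();
        stages.put("stylechecks",      () -> true);
        stages.put("static analysis",  () -> true);
        stages.put("unit tests",       () -> true);
        stages.put("acceptance tests", () -> true);
        stages.put("contract tests",   () -> false); // pretend this one fails

        for (Map.Entry<String, Supplier<Boolean>> stage : stages.entrySet()) {
            if (!stage.getValue().get()) {
                System.out.println("Stage failed: " + stage.getKey() + " -> discard artifact");
                return; // the artifact has no feelings; build a new one from the next commit
            }
            System.out.println("Stage passed: " + stage.getKey());
        }
        System.out.println("Artifact promoted towards production");
    }
}
```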
  135-137. ISOLATED DEPLOY START UP Then we try to do an isolated deployment… and it fails.
  138. Remember, this pipeline is about building confidence in the artifact

    as we move further to the right. We need to safely deploy to production. If we don’t test startup regularly, we could have weeks of changes to go through when the startup fails.
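Here is a minimal sketch of the kind of post-deploy smoke test a pipeline stage could run against a freshly started instance. The base URL, the /health endpoint and the system property are assumptions, not anything from the talk.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

// Post-deploy smoke test: hit the health endpoint of the newly started instance
// and fail fast if it did not come up cleanly.
class SmokeTest {

    private static final String BASE_URL =
            System.getProperty("smoke.baseUrl", "http://localhost:8080");

    @Test
    void serviceRespondsOnHealthEndpoint() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/health"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode(), "service did not start cleanly");
    }
}
```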
  139-151. ISOLATED DEPLOY START UP SMOKE TESTS DEVELOPMENT DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD Our developer fixes the startup issue and pushes through her changes. We throw away the previous artifact and create a new one. This new artifact goes through the isolated deployment, then moves on to the development deployment.
  152. With a successful development deploy, we move on to the

    staging/UAT/etc. environments. With that done, we go to a production dark or blue/green deployment. You might say, "wait a second Arthur, I thought we were talking about Continuous Delivery, not Deployment." That leads to an important point….
  153-162. STAGING * DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD PRODUCTION DEPLOY DARK
  163. DEPLOY VS RELEASE Decouple deployment from releases. Deployment is about

    shipping code to production, not necessarily releasing that code to the customer.
  164. This is very important. If you’re deploying code, even if

    only to a dark environment, on a regular basis, you're constantly exercising your pipeline. This will find breaks in the pipeline quickly. For example, your deployment scripts might accidentally be using a person's username/password for prod deployments, and that person leaves. Now your deploys are broken. If you deploy to prod only every few weeks, you'd only catch this issue at that time instead of as soon as the person left.
  165. FEATURE FLAGS IF STATEMENT CONFIG FILE ROLLOUT TOGGLZ So if

    you deploy regularly to production, how do you prevent a release from happening? This is where feature toggles/flags come in. You can start small with an if statement and get as sophisticated as tools like Rollout or Togglz.
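Here is a minimal sketch of the "start small with an if statement" end of that spectrum. The ENABLED_FEATURES environment variable and the flag name are assumptions; tools like Rollout or Togglz replace this with per-user targeting, percentage rollouts and admin consoles.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplest possible feature flag source: a comma-separated environment variable.
public final class FeatureFlags {

    private static final Set<String> ENABLED = new HashSet<>(
            Arrays.asList(System.getenv().getOrDefault("ENABLED_FEATURES", "").split(",")));

    public static boolean isEnabled(String feature) {
        return ENABLED.contains(feature);
    }

    public static void main(String[] args) {
        // Deployed code path: the new behaviour ships dark until the flag is flipped.
        if (FeatureFlags.isEnabled("hyphenated-name-validation")) {
            System.out.println("Using the new name validation rules");
        } else {
            System.out.println("Using the existing name validation rules");
        }
    }
}
```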
  166-174. STAGING * DEPLOY START UP SMOKE TESTS PRODUCTION DEPLOY DARK START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD MONITOR METRICS We do this dark deployment and find our logs, load and metrics are looking good.
  175. STOP? GO? At this point we’ve exercised our pipeline all

    the way to production. Depending on your industry, you might not be able to deploy to production. Say you ship software for offline medical systems. But we’ve gone through the full pipeline and that’s the goal. In our example we’re a SaaS company, so we can Go to production.
  176-177. We start moving to production.
  178. As our code goes to production, we find our logs,

    load and error rates look good. However, our business metrics are showing revenue plummeting. Alarm bells go off. What's going on?
  179-185. PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR RATES
  186. MONITORING ❤ This is why you need good monitoring, especially of key business metrics like revenue.
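As a concrete (and heavily simplified) example, the sketch below emits a revenue counter in the plain StatsD UDP line format ("name:value|type"). The host, port and metric name are assumptions; in practice a metrics library or an APM agent such as DataDog or New Relic would handle this for you.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Bare-bones business-metric emitter: fire-and-forget UDP to a local StatsD agent.
public final class RevenueMetrics {

    private static final String STATSD_HOST = "localhost";
    private static final int STATSD_PORT = 8125;

    public static void recordSale(long amountCents) {
        String payload = "checkout.revenue:" + amountCents + "|c";
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(bytes, bytes.length,
                    InetAddress.getByName(STATSD_HOST), STATSD_PORT));
        } catch (Exception e) {
            // Metrics must never take the checkout path down; log and move on.
            System.err.println("Failed to emit revenue metric: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        recordSale(4999); // dashboards and alerts watch this series for sudden drops
    }
}
```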
  187. FAST ROLLBACK With a Continuous Delivery pipeline, you need to

    make it easy to roll back changes. We quickly roll back the change that was causing revenue to drop and start investigating. We find that a CSS change was made that was hiding the buy button, and none of the tests caught it.
  188-192. PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR RATES With the CSS change reverted, we create a new binary; it makes its way through the pipeline and gets deployed. We let our user know they can sign up now, the hyphen issue is fixed. The developers, operators and QA go on vacation.
  193. NEW BRANCH TDD STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS CI

    SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS RUN CONTRACT TESTS CODE REVIEW MERGE BRANCH CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE RUN CONTRACT BUILDS BINARY ISOLATED DEPLOY START UP SMOKE TESTS DEVELOPMENT DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD STAGING * DEPLOY START UP SMOKE TESTS PRODUCTION START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD MONITOR LOGS MONITOR LOAD MONITOR METRICS PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR And that’s a quick tour of having a change propagate through the pipeline.
  194. NEW BRANCH TDD STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS CI

    SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE TESTS RUN CONTRACT TESTS CODE REVIEW MERGE BRANCH CI SERVER STYLECHECKS STATIC ANALYSIS RUN UNIT TESTS RUN ACCEPTANCE RUN CONTRACT BUILDS BINARY ISOLATED DEPLOY START UP SMOKE TESTS DEVELOPMENT DEPLOY START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD STAGING * DEPLOY START UP SMOKE TESTS PRODUCTION START UP SMOKE TESTS MONITOR LOGS MONITOR LOAD MONITOR LOGS MONITOR LOAD MONITOR METRICS PRODUCTION LIVE MONITOR LOGS MONITOR LOAD MONITOR METRICS MONITOR ERROR And we can improve our cycle time with a robust Continuous Delivery pipeline. Remember, it’s a big pipeline and takes many years to build. But it’s well worth the investment.
  195-198. CONTINUOUS DELIVERY CONTINUUM And hopefully through this talk I've convinced you that you should try to move your company further to the right on the Continuous Delivery Continuum.
  199-202. WE'RE HIRING If this sounds interesting and you'd like to work in an environment where this is encouraged, working with AWS and open source tech, and even contributing and open-sourcing your own projects, we're hiring!
  203. ARTHUR MALTSON
    Slides: https://speakerdeck.com/amaltson
    Stats: 70% Dev / 30% Ops, 110% DadOps
    Work: Capital One Canada
    Loves: Automation, Ruby, Ansible, Terraform
    Hates: Manual processes
    @amaltson | maltson.com | WE'RE HIRING!
  204. TOOLS I ❤
    ▸ Feature Flags: Rollout, Togglz
    ▸ Static Analysis: SonarQube (Java, Javascript, C#, Python, more)
    ▸ CI Servers: ConcourseCI, Jenkins, Bamboo
    ▸ Contract Testing: Pacto, Pact, Pact-JVM
    ▸ Artifact Storage: Nexus, Artifactory
    ▸ Security Analysis: Nexus Lifecycle
    ▸ Deployment: Terraform, Ansible, Pivotal Cloud Foundry
    ▸ Log Aggregation: Splunk, ELK (ElasticSearch, Logstash, Kibana)
    ▸ Monitoring: New Relic, Prometheus
    ▸ Alerting: PagerDuty, VictorOps
    ▸ Metrics: DataDog, InfluxDB, Grafana
    ▸ ChatOps: lita.io
  205. CREDITS
    ▸ Slide 1 - Arthur T. LaBar, Pipeline | Fairbanks, Alaska, https://flic.kr/p/daszgo
    ▸ Slide 4, Ben Simo, Pipeline, https://flic.kr/p/6pSjdF
    ▸ Slide 8, 52, dtrace.org, https://stackstorm.com/wp/wp-content/uploads/2014/05/dtrace_pony_xray-2.jpg
    ▸ Slide 11, PenthaCorp, http://www.panthacorp.com/continuous-delivery-for-business
    ▸ Slide 12, William Warby, Stopwatch, https://flic.kr/p/62hNF6
    ▸ Slide 15, SpongeBob excited, http://mashable.com/wp-content/uploads/2013/07/SpongeBob.gif
    ▸ Slide 16, Excited chimp, http://www.memecenter.com/fun/2547937/i-amp-039-m-so-excited
    ▸ Slide 25, Matthias Ripp, Long and winding road..., https://flic.kr/p/r8XeBN
    ▸ Slide 35, askideas.com, https://www.askideas.com/25-funny-safety-images-and-photos
    ▸ Slide 39, Lee, The Human Hamster Wheel, https://flic.kr/p/5rK2pY
    ▸ Slide 40, ThoughtWorks, https://www.thoughtworks.com/radar/techniques/decoupling-deployment-from-release
    ▸ Others: DepositPhotos