Software development has long delivered value in big chunks. Many teams make the problem worse by responding to stress with even bigger chunks of value, from deploying software less frequently to integrating less often.
The more things are deferred, the larger the chunk and the higher the risk. In contrast, the principle of flow suggests deploying smaller increments of value ever more frequently.
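To make the arithmetic behind "larger chunk, higher risk" concrete, here is a toy model (an illustration of the reasoning, not something taken from the quoted sources; the 2% per-change defect rate is an assumption): if each change carries a small independent chance of introducing a defect, both the odds of a bad release and the number of suspects to triage grow with batch size.

```python
# Toy model (illustrative assumption, not from the quoted sources):
# each change independently has probability p of introducing a defect.
# P(release is bad) = 1 - (1 - p)^n for a batch of n changes, and every
# change in the batch is a suspect when something breaks.

def p_release_bad(n_changes: int, p_defect: float = 0.02) -> float:
    """Probability that a batch of n changes contains at least one defect."""
    return 1 - (1 - p_defect) ** n_changes

for n in (1, 10, 50):
    print(f"{n:>3} changes per release -> "
          f"P(bad release) = {p_release_bad(n):.0%}, "
          f"suspects to triage on failure = {n}")
# 1 change   -> ~2% chance of a bad release, 1 suspect
# 10 changes -> ~18%, 10 suspects
# 50 changes -> ~64%, 50 suspects
```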
Small steps are expressed in practices like test-first programming, which proceeds one test at a time, and continuous integration, which integrates and tests a few hours' worth of changes at a time.
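As a minimal sketch of that one-test-at-a-time rhythm (my example, not code from any source quoted here): write a single failing test, make it pass with the smallest change that works, then integrate and repeat.

```python
# One cycle of test-first programming, one test at a time (illustrative sketch).
import unittest

def slugify(title: str) -> str:
    # Step 2: the smallest implementation that makes the test below pass.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Step 1: a single failing test for the next small increment.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Small Batches"), "small-batches")

if __name__ == "__main__":
    unittest.main()  # Step 3: green bar -> commit and integrate, then repeat.
```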
There are many barriers to deploying frequently. Some are psychological or social, like a deployment process so stressful that people don't want to go through it twice as often.
We have been conditioned to think that the large-batch, all-or-nothing approach to software development is good. It's time to recondition ourselves to think that it is the worst possible approach to good software development.
One solution — the easy but harmful one — is to slow down the release calendar. Like going to the dentist less frequently because it hurts, this response to the problem can only exacerbate the issue.
In lean manufacturing, it's generally more efficient to drive down the batch size. I try to encourage engineers to check in anytime they have the software in a working state in their sandbox. And it's way easier to deploy small bits of code, since if something goes wrong, the problem is automatically localized and easy to revert.
2008: Eric Ries * blog post, September 2008
When making very frequent small changes, if you've only changed one thing at a time, it is really easy to figure out what broke the site.
Robert Johnson * https://www.youtube.com/watch?v=nEmJ_5UHs1g
Suppose you released frequently, so the delta between what is currently in production and the new release is small. If that were true, the risk of release would be greatly diminished.
IMVU makes about fifty changes to its product every single day. Just as with the Toyota Production System, the key to being able to operate this quickly is to check for defects immediately, thus preventing bigger problems later.
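A minimal sketch of "check for defects immediately" as a deploy step (the script names deploy.sh and smoke_tests.sh are hypothetical placeholders, not IMVU's actual tooling): every small change is verified the moment it ships, and a failure stops the line and reverts before more changes pile on.

```python
# Illustrative sketch: every deploy is immediately verified, and a failing
# check halts further work and reverts. Script names are assumed placeholders.
import subprocess

def run(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0

def deploy_change(version: str, previous: str) -> bool:
    if not run(["./deploy.sh", version]):     # ship one small change
        return False
    if run(["./smoke_tests.sh", version]):    # check for defects right away
        return True
    run(["./deploy.sh", previous])            # defect found: revert immediately
    return False

# e.g. deploy_change("v42", previous="v41")
```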
…to 40 times a day. We like very short changes. If something breaks, you know exactly what breaks, really quickly.
Zach Holman * https://www.youtube.com/watch?v=qyz3jkOBbQY
You are reducing the amount of complexity that has to be dealt with at any one time by the people working on the batch. Break down large releases into small units of deployment.
2012: Damon Edwards * published March 2012 * http://dev2ops.org/2012/03/devops-lessons-from-lean-small-batches-improve-flow/
The longer you build up ‘inventory’, the more opportunities there are for defects.
GOTO Aarhus conference, October 2012 * https://www.youtube.com/watch?v=Luskg9ES9qI
Could you deploy a change that involved just one single line of code? Do you do this on a repeatable, reliable basis?
2012: Jez Humble * Berlin, October 2012 * https://www.youtube.com/watch?v=skLJuksCRTw
Having fewer changes per deployment is an inherent benefit of continuous delivery and helps mitigate risk by making it easier to identify and triage problems if things go south during a deployment.
2013: Netflix * published August 2013 * https://medium.com/netflix-techblog/deploying-the-netflix-api-79b6176cc3f0
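One common way to identify problems during a small deployment is a canary comparison. The sketch below is a generic illustration under assumed names and thresholds (canary_healthy, the 1.5x tolerance), not Netflix's actual system: route a sliver of traffic to the new build and proceed only if its error rate stays close to the baseline's. With few changes per deployment, a failing canary points at a handful of suspects rather than months of them.

```python
# Hypothetical canary check (a sketch, not any company's real tooling).
def canary_healthy(canary_errors: int, canary_requests: int,
                   baseline_errors: int, baseline_requests: int,
                   tolerance: float = 1.5) -> bool:
    """Allow the rollout only if the canary's error rate is within
    `tolerance` times the baseline's error rate."""
    canary_rate = canary_errors / max(canary_requests, 1)
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    return canary_rate <= baseline_rate * tolerance

# Route a sliver of traffic to the new build, compare error rates, and
# proceed with the rollout only while the canary stays healthy.
print(canary_healthy(3, 10_000, 20, 100_000))  # True: 0.03% vs 0.02% * 1.5
```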
You know what's changed in production. If it doesn't work, rollback is simple.
Surge conference, September 2013 * https://www.youtube.com/watch?v=8-6azNVq2X0
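Rollback stays simple when releases are small and kept side by side. Here is an illustrative blue/green-style sketch (the /srv/app layout and version names are assumptions for illustration): "current" is just a symlink, so deploy and rollback are the same atomic pointer flip.

```python
# Illustrative symlink-flip rollback (directory layout is assumed):
# releases live side by side and "current" is a symlink, so rolling back
# a small, well-understood release is a single atomic pointer swap.
import os

RELEASES = "/srv/app/releases"
CURRENT = "/srv/app/current"

def activate(version: str) -> None:
    target = os.path.join(RELEASES, version)
    tmp = CURRENT + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, CURRENT)  # atomic: deploy and rollback are the same op

# activate("v2.4.1")  # deploy
# activate("v2.4.0")  # something's wrong -> rollback is the same one-liner
```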
Using small batches means doing a lot of small releases with a few features rather than a small number of large releases with lots of features. To lower overall risk, it's better to do many small releases containing only a few features each.
The small batches principle is counter-intuitive because there is a human tendency to avoid risky behavior. Deploying software in production involves risk; therefore, businesses traditionally minimize the frequency of deployments.
While this makes them feel better, they are actually shooting themselves in the foot: the deployments that do happen are bigger and riskier, and the team doing them is out of practice by the time the next one rolls around.
…a day. And there's good reason for that: it's safer overall. Incremental deploys are easier to understand and fix than one gigantic deploy once a year.
2014: GitHub * published October 2014 * https://zachholman.com/talk/move-fast-break-nothing/
…due to understandable fear. Unfortunately, this means that our changes build up between releases.
2015: Building Microservices * book published February 2015
Short release cycles, optimally deploying daily. Small releases tend to have fewer bugs.
Zalando engineering principles * https://github.com/zalando/engineering-principles/blob/master/README.md
…updates to applications in large batches, with the belief that this is more efficient, less impactful, and a better return on the investment. Because we believed this, we built processes, compliance checks, architecture guidelines, and measures to support the large, complex releases. But this belief is incorrect.
2017: Salesforce * blog post, December 2017 * https://www.salesforce.com/blog/2017/12/smaller-batches-improves-application-roi.html
Suffice it to say that we now know that smaller and more frequent changes are much safer than larger and less frequent changes. Ship early, ship often, ship smaller change sets.
We could decide to write our software in big pieces and deploy infrequently. But that is a low-quality solution: each piece is complicated and risky. Or we could decide to break our problem down into small pieces. Each piece is smaller, simpler, and lower risk.
https://www.youtube.com/watch?v=NiTxzYA_qRQ