development has long delivered value in big chunks. Many teams make the problem worse under stress: they respond by making the chunks of value even bigger, deploying software less frequently and integrating less often.
conditioned to think that the large-batch, all-or-nothing approach to software development is good. It's time to recondition ourselves to recognize it as the worst possible approach to building good software.
lean manufacturing, it's generally more efficient to drive down the batch size. I try to encourage engineers to check in anytime they have the software in a working state in their sandbox. And it's way easier to deploy small bits of code, since if something goes wrong, the problem is automatically localized and easy to revert.

2008: Eric Ries * blog post, September 2008
IMVU makes about fifty changes to its product every single day. Just as with the Toyota Production System, the key to being able to operate this quickly is to check for defects immediately, thus preventing bigger problems later.
[Small batches] are reducing the amount of complexity that has to be dealt with at any one time by the people working on the batch. Break down large releases into small units of deployment.

2012: Damon Edwards * published March 2012
http://dev2ops.org/2012/03/devops-lessons-from-lean-small-batches-improve-flow/
[A smaller] number of changes per deployment is an inherent benefit of continuous delivery and helps mitigate risk by making it easier to identify and triage problems if things go south during a deployment.

2013: Netflix * published August 2013
https://medium.com/netflix-techblog/deploying-the-netflix-api-79b6176cc3f0
[Working in small] batches means doing a lot of small releases with a few features rather than a small number of large releases with lots of features. To lower overall risk, it's better to do many small releases containing only a few features each.
[The] small batches principle is counter-intuitive because there is a human tendency to avoid risky behavior. Deploying software to production involves risk; therefore businesses traditionally minimize the frequency of deployments.
[While] this makes them feel better, they are actually shooting themselves in the foot: the deployments that do happen are bigger and riskier, and the team doing them is out of practice by the time the next one rolls around.
a day. And there's good reason for that: it's safer overall. Incremental deploys are easier to understand and fix than one gigantic deploy once a year.

2014: GitHub * published October 2014
https://zachholman.com/talk/move-fast-break-nothing/
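One way to see why incremental deploys are easier to understand and fix: finding the offending change in a deploy is a search problem, and even an efficient bisection strategy (the idea behind tools like `git bisect`) needs more test-and-verify steps as the batch grows. A minimal sketch of that intuition, as an illustrative model rather than anything from the quoted talk:

```python
import math

# Illustrative model: isolating one bad change via bisection takes
# roughly log2(n) verification steps for a batch of n changes.
def bisect_steps(batch_size: int) -> int:
    """Worst-case bisection steps to isolate one bad change in a batch."""
    return math.ceil(math.log2(batch_size)) if batch_size > 1 else 0

print(bisect_steps(1))    # a one-change deploy: the culprit is already known
print(bisect_steps(365))  # a year of changes shipped at once: ~9 rounds of testing
```

The absolute step counts look small, but each step here is a full build-deploy-verify cycle under production pressure; with one-change deploys, that cycle never has to happen at all.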
2017: Intercom * blog post, June 2017
https://www.intercom.com/blog/moving-faster-with-smaller-steps/

Rather than shipping new product quarterly, monthly or weekly, we deploy new features to our customers up to 50 times a day.
[The probability] of a problem increases with every additional change. Furthermore, when something does go wrong, it is much easier to find the offending change within a small batch than a large one.

2017: Nextdoor * blog post, December 2017
https://engblog.nextdoor.com/3-hard-lessons-from-scaling-continuous-deployment-to-a-monolith-with-70-engineers-99fb6dfe3c38
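The claim that the probability of a problem increases with every additional change can be made concrete with a simple independence model (an illustrative assumption of this sketch, not something stated in the quoted post): if each change independently introduces a defect with small probability p, a deploy of n changes contains at least one defect with probability 1 − (1 − p)^n.

```python
# Illustrative model (assumed for this sketch): each change independently
# introduces a defect with probability p, so a deployment of n changes
# contains at least one defect with probability 1 - (1-p)^n.
def deploy_failure_probability(p: float, n: int) -> float:
    """Probability that a deploy of n changes contains at least one defect."""
    return 1 - (1 - p) ** n

# With an assumed 2% defect rate per change:
print(f"1-change deploy:  {deploy_failure_probability(0.02, 1):.2f}")   # ~0.02
print(f"50-change deploy: {deploy_failure_probability(0.02, 50):.2f}")  # ~0.64
```

Fifty one-change deploys carry the same total number of changes, but each failure is confined to a single known change, which is exactly the localization benefit these quotes describe.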
[We released] updates to applications in large batches in the belief that this is more efficient, less impactful, and a better return on investment. Because we believed this, we built processes, compliance checks, architecture guidelines, and measures to support large, complex releases. But this belief is incorrect.

2017: Salesforce * blog post, December 2017
https://www.salesforce.com/blog/2017/12/smaller-batches-improves-application-roi.html
2018: Rollout * published March 13, 2018
https://rollout.io/blog/deploy-to-production-5-tips-make-smoother/

Amplify feedback by increasing the number of times you deploy. It might sound risky, but it's not: the fewer changes you make per deploy, the easier it will be to know what's wrong.