
DevOps Lisbon
September 12, 2016

[2016.09 Meetup #5] [TALK #1] Diogo Oliveira - The OutSystems R&D Continuous Delivery Journey

OutSystems builds a complex software product. As the company and the product's complexity kept growing, ever faster, we moved to a model where we needed to release more frequently, and challenges appeared in the way we were doing automated testing, demanding significant changes and improvements to those processes. We will share our journey towards Continuous Delivery @ OutSystems R&D: where we were, where we are now (and how we are doing it), and where we want to go.


Transcript

  1. The OutSystems R&D Continuous Delivery Journey
     DevOps Lisbon Meetup - 2016/09/12
     Diogo Oliveira - SW Engineer @ OutSystems
  2. Agenda
     • OutSystems - Context
     • Product, Development and Quality
     • The Continuous Delivery journey
       ◦ Where we were 2 years ago
       ◦ What made us change
       ◦ What we did
       ◦ What we are doing now
     • The future
  3. Founded in 2001 in Linda-a-Velha, OutSystems provides a low-code rapid
     application development and delivery platform (plus integration of custom
     code). It consists of a complete application lifecycle system to develop,
     manage and change enterprise web & native mobile applications.
     431+ employees and hiring! 116 at Engineering.
  4. Product Development
     • C# is the main language we use
     • 80% of the Java code is translated from the .NET C# code (including unit tests)
     • All the front-end parts of the product are developed using the product itself
     • A lot of code is generated
  5. Product Quality
     • ~10,000 distinct, fully automated tests (per major version)
     • Non-functional requirements tests (e.g., security, performance)
     • Testing infrastructure and test orchestration management
     • Constant dogfooding
     • EAP, Alphas, Betas
  6. Release and Support Model
     • Support model:
       ◦ 1 major release per year
       ◦ 3 versions under support and corrective maintenance
  7. Team size
     • An Engineering team that was significantly smaller than it is now
       ◦ July 2014: 41 SW Engineers
       ◦ July 2016: 116 SW Engineers
  8. Development Model
     Teams working on separate environments
     • Achieved through branching (per team and/or project)
     • ~30 active branches as of November 2014
     REINTEGRATE HELL
  9. Quality Assurance
     How were we doing QA? Full Build + Full Test Run in Test Environments:
     1. Nightly, with Full Build + Unit / Integration Tests
     2. Weekly, with Full Build + System Tests
     3. Weekly, with Full Build + Full Run
     4. Disabled Tests, run at each milestone or not run at all
     Executed over multiple stack combinations (.NET / Java, Oracle / SQL Server /
     MySQL, IIS / WebLogic / JBoss, ...). Long Feedback Loop! (hours)
  10. Quality Assurance
     ~20 hours to run ~10,000 tests
     • No daily visibility
     • Slow Builds
     • Unreliable Environments
     • Flaky Tests
     • Long Feedback Loop
  11. Testing Infrastructure
     Almost all our testing infrastructure (~90%) was on-premises based, and to
     provision each test environment we had… a 49-page manual
  12. This model was nowhere near Continuous Delivery! But… for our release
     frequency and support model it worked well. We were living well with it!
     So where were we then?
  13. What made us change
     • No more corrective-maintenance-only releases
       ◦ We started to release features in “maintenance releases” (more frequent releases)
     • The number of developers working and the number of commits increased significantly
     So...
  14. What made us change
     … the need for faster feedback (and with quality!) started to grow, but our
     processes, tools and infrastructure were not in place for that
  15. What made us change
     We noticed the boiling water in time, before the frog started to die (our
     frog was smarter)! Which means… we understood that the need for faster
     feedback and more frequent releases would keep growing, so we started
     building our own journey towards Continuous Delivery!
  16. The Continuous Delivery Journey
     So, where did we start? After a root cause analysis, prioritization and
     alignment...
  17. The Continuous Delivery Journey
     We started with the Infrastructure! For our CD journey we would need an
     infrastructure that was:
     • Scalable
     • Elastic
     • Consistent
     • Reliable
     • Efficient
     • Easy to provision
     • Easy to recover
  18. The Continuous Delivery Journey
     • Automate the provisioning of our test environments (“infrastructure as code”)
       ◦ Saved ~3 person-days per test environment request
       ◦ Easier for development teams to keep infrastructure code updated
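The talk does not show OutSystems' actual tooling, but the core “infrastructure as code” idea in slide 18 can be sketched as desired-state provisioning: each test environment is a declarative, version-controlled spec, and an idempotent provisioner reconciles reality against it, replacing the 49-page manual. All names below are hypothetical, and the real cloud API call is replaced with a stand-in:

```python
# Hypothetical sketch of idempotent, declarative test-environment provisioning.
# EnvironmentSpec and provision() are invented names, not OutSystems' tooling.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentSpec:
    """Declarative description of one test environment (illustrative fields)."""
    stack: str        # e.g. ".NET" or "Java"
    database: str     # e.g. "SQL Server", "Oracle", "MySQL"
    app_server: str   # e.g. "IIS", "WebLogic", "JBoss"

def provision(spec: EnvironmentSpec, fleet: dict[str, EnvironmentSpec]) -> str:
    """Idempotent provisioning: reuse a matching environment or create one."""
    for name, env in fleet.items():
        if env == spec:
            return name                     # already provisioned: nothing to do
    parts = [spec.stack, spec.database, spec.app_server]
    name = "env-" + "-".join(parts).lower().replace(" ", "").replace(".", "")
    fleet[name] = spec                      # stand-in for the real cloud API call
    return name

fleet: dict[str, EnvironmentSpec] = {}
spec = EnvironmentSpec(stack=".NET", database="SQL Server", app_server="IIS")
first = provision(spec, fleet)
second = provision(spec, fleet)             # re-running is safe: same env back
```

The payoff named on the slide comes from idempotency: a request for an environment that already exists costs nothing, so teams can keep the spec in source control and re-apply it freely.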
  19. The Continuous Delivery Journey
     • “Nimbus” project - move our testing infrastructure to the cloud (AWS)
       ◦ Fully automated provisioning process (started with 1 stack)
       ◦ Easy to recycle machines
       ◦ Reliable and performant
  20. The Continuous Delivery Journey
     At this time… there was a big project, the “New Runtime”, on which almost
     all the teams were working (sharing milestones). Some teams started to
     continuously share code.
  21. The Continuous Delivery Journey
     Some developers (on their own initiative) went to their managers...
     “Hey! We are really excited about CD and we have this idea… It will take us
     1 week to implement the first version... We believe it will really improve
     our process!”
  22. The Continuous Delivery Journey
     • Automated incremental builds
     • Automated installations
     • Automated code translation (from .NET C# to Java)
     • A couple of automated tests (~1000 tests to start)
     • Automatic assignment to the right “culprits”!
  23. The Continuous Delivery Journey
     What did CINTIA bring? Build + Installation + Translation + ~1000 Tests in
     19 minutes, automatically triggered by commits. Fast feedback! Automatic
     “culprit” assignment, fast!
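The slides do not say how CINTIA picks its “culprits”, but a common scheme in commit-triggered pipelines is: the suspects for a red run are the authors of every commit that landed since the last green run. A minimal sketch of that idea, with invented names and data:

```python
# Hypothetical sketch of automatic "culprit" assignment in a commit-triggered
# pipeline. Not CINTIA's actual algorithm; runs/authors are invented data.

def assign_culprits(runs: list[dict], failing_run_id: str) -> set[str]:
    """runs is ordered oldest-to-newest; each has an id, status, and commit authors."""
    suspects: set[str] = set()
    for run in runs:
        if run["status"] == "green":
            suspects.clear()                 # everything up to here is exonerated
        else:
            suspects.update(run["authors"])  # red run: its commit authors stay suspect
        if run["id"] == failing_run_id:
            return suspects
    raise ValueError(f"unknown run {failing_run_id!r}")

history = [
    {"id": "41", "status": "green", "authors": {"ana"}},
    {"id": "42", "status": "red",   "authors": {"bruno"}},
    {"id": "43", "status": "red",   "authors": {"carla"}},
]
culprits = assign_culprits(history, "43")    # bruno and carla: commits since last green
```

Because every green run resets the suspect set, the blame window stays small when feedback is fast, which is exactly why the 19-minute loop makes this kind of assignment useful.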
  24. The Continuous Delivery Journey
     Pretty cool! But we still only have around 1/10 of our tests there… Can’t
     we go further?
  25. The Continuous Delivery Journey
     The Build Pipeline project:
     • More tests included
     • Feedback on different levels (with different feedback meanings/loop times)
     • Flaky tests detection
     • Leverage our “Nimbus” cloud infrastructure
  26. 3 test stages (~8,000 tests)
     • More stacks (Java)
     • Flaky tests detection
     • Quarantine tests
     • Test execution data
     • Monitoring
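The deck names flaky-test detection and quarantine but not the mechanism. One simple, widely used signal is mixed results on the same revision: a test that both passes and fails without the code changing cannot be a genuine regression, so it gets quarantined instead of blocking builds. A minimal sketch under that assumption (invented names and data):

```python
# Hypothetical sketch of one flaky-test signal: mixed pass/fail outcomes on the
# same code revision. Not necessarily the pipeline's actual detection rule.
from collections import defaultdict

def find_flaky(executions: list[tuple[str, str, bool]]) -> set[str]:
    """executions: (test_name, revision, passed). Flaky = mixed results per revision."""
    outcomes: dict[tuple[str, str], set[bool]] = defaultdict(set)
    for test, revision, passed in executions:
        outcomes[(test, revision)].add(passed)
    # A (test, revision) pair with both True and False outcomes is flaky.
    return {test for (test, _), results in outcomes.items() if len(results) > 1}

log = [
    ("test_login",  "rev1", True),
    ("test_login",  "rev1", False),   # passes and fails on the same revision
    ("test_export", "rev1", False),
    ("test_export", "rev1", False),   # consistently red: a real failure, not flaky
]
quarantine = find_flaky(log)
```

Quarantined tests keep running and keep producing execution data, but their failures stop paging “culprits”, which protects the fast feedback loop the pipeline exists to provide.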
  27. The Continuous Delivery Journey
     BEFORE (2 years ago):
     • ~10,000 tests in ~20 hours
     • No daily visibility
     • Slow Builds
     • Unreliable Environments
     • Flaky / badly designed Tests
     • Long Feedback Loop
     NOW:
     • ~8,000 tests in ~1h30
     • Full daily visibility
     • Fast / incremental Builds
     • Reliable Test Environment
     • Focus on creating fast, well-designed tests
     • Fast Feedback Loop (~150,000 test executions each day)
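Rough arithmetic on the slide's own numbers makes the improvement concrete (illustrative only, treating the quoted figures as exact):

```python
# Back-of-the-envelope throughput comparison using the figures from the slide:
# before, ~10,000 tests in ~20 hours; now, ~8,000 tests in ~1h30.
before_s_per_test = 20 * 3600 / 10_000   # 7.2 seconds of wall clock per test
now_s_per_test = 1.5 * 3600 / 8_000      # 0.675 seconds of wall clock per test
speedup = before_s_per_test / now_s_per_test  # roughly a 10.7x faster loop
```

So the feedback loop per test tightened by about an order of magnitude, on top of running the suite many times a day instead of nightly or weekly.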
  28. R&D Reorganization
     Why?
     • Have an organization that is able to scale
     • Adapt team structure to meet the desired product architecture
  29. Rethinking the R&D Development Model
     Ongoing strategic project to rethink the current R&D Development Model.
     Some topics being addressed:
     • Single-branch development
     • How to allow teams’ autonomy
     • Independent release cycles
     • (...)
  30. The future
     We are working on defining the next steps and our roadmap (we already have
     directions defined), but we know for sure that we want to:
     • Align the validation process with the product architecture
     • Achieve overall faster feedback
     • Improve the CD mindset (e.g., “you break it, you fix it, fast”)
     • Take developers out of the release decision
     • (...)
     Achieve Continuous Delivery! Whatever we do, we are building our own path.