How to Build a Performance Test Pipeline from Scratch

Video: https://www.youtube.com/watch?v=xKMIGN1WHgo

There comes a point in a company's evolution when the rush to build all the features as fast as possible subsides and the company realizes that performance should be prioritized too. The CEO publishes a document that says "a Slack client must be as fast as fuck," and the engineering team sets out to fix all the performance bottlenecks. But how does an engineer validate that their improvements actually work? More importantly, how does the team prevent future performance regressions?

Over a year ago, we asked these questions and decided to build a performance testing pipeline that would continuously validate every code change for performance impact. In this talk, I will introduce the basic building blocks of this pipeline and share the lessons learned from building and maintaining this infrastructure.

Valera Zakharov

October 12, 2018

Transcript

  1. Naive Approach: measure a value in the dev version, compare it against the baseline (latest master), and alert if they are different. The value can be execution time, frame metrics, resource usage, anything that can be measured.
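The naive approach above can be sketched in a few lines. This is a hypothetical illustration, not Slack's actual code; the class and method names are made up. It shows why a single-sample comparison is noisy: any tolerance tight enough to catch real regressions also fires on normal run-to-run variance.

```java
// Minimal sketch of the naive approach: one dev measurement vs. one
// master baseline, alert on any difference beyond a tolerance.
// All names here are hypothetical.
public class NaiveCheck {

    // Alert if the dev value differs from the master baseline by more
    // than tolerancePct percent. In practice a single sample per side
    // makes this fire constantly on ordinary measurement noise.
    public static boolean shouldAlert(double masterValue,
                                      double devValue,
                                      double tolerancePct) {
        double diffPct = Math.abs(devValue - masterValue) / masterValue * 100.0;
        return diffPct > tolerancePct;
    }
}
```

For example, with the slide's `channel_sync` value of 381 ms on master, a dev run of 420 ms trips a 5% tolerance even though run-to-run noise alone can exceed that, which is exactly the problem the next slides address with statistics.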
  2. Stats to the Rescue: compare the set of DEV BUILD VALUES against the set of MASTER BUILD VALUES.
  3. Statistical Approach: collect a set of N values from the dev version, test it against the data set from master, and alert if confidence > threshold.
  4. Statistical Approach, with the knobs highlighted: collect a set of N values from the dev version, test against the data set from master, alert if diff confidence > threshold. N and the threshold are the parameters WE CONTROL.
  5. Statistical Approach trade-offs: a higher number of values = better stats, but more device time. A higher alert threshold = a lower false-alert rate, but a lower chance of a valid detection.
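The deck doesn't say which statistical test Slack used, so as one concrete possibility, here is a sketch using Welch's t statistic: collect N samples per side and alert only when the difference is unlikely to be noise. The class name and the choice of test are assumptions; the critical value plays the role of the "alert threshold" knob (roughly, |t| > 2 corresponds to ~95% confidence for reasonably large N).

```java
import java.util.Arrays;

// Sketch of the statistical approach from the slides, using Welch's
// t statistic for two independent samples. The test choice and all
// names are illustrative assumptions, not Slack's implementation.
public class StatCheck {

    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(0);
    }

    // Sample variance (divide by n - 1).
    static double variance(double[] xs) {
        double m = mean(xs);
        return Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum()
                / (xs.length - 1);
    }

    // Welch's t statistic: difference of means over the combined
    // standard error. Does not assume equal variances.
    static double welchT(double[] a, double[] b) {
        double se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
        return (mean(a) - mean(b)) / se;
    }

    // Alert when |t| exceeds the critical value. Raising tCritical
    // lowers the false-alert rate but also the chance of a valid
    // detection; raising N tightens the standard error but costs
    // more device time -- exactly the trade-offs on slide 5.
    public static boolean shouldAlert(double[] master, double[] dev,
                                      double tCritical) {
        return Math.abs(welchT(master, dev)) > tCritical;
    }
}
```

With eight master samples around 381 ms, a dev build hovering around 421 ms produces a very large |t| and alerts, while a dev build drawn from the same distribution as master does not.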
  6. Pipeline overview: an open PR or a merge to master triggers a perf run; the PerfTest Job runs tests + gathers data; the perf data flows to the Backend, which feeds Trends and Alerts.
  7. Naive approach: a single Build Node acts as both runner and backend; the PerfTest Job runs tests + gathers data, and the same node aggregates the test metrics.
  8. Naive approach, scaled out: several runner Build Nodes each produce test metrics; a backend Build Node aggregates them.
  9. Naive-ish approach: the runner Build Nodes now get the release build from a device provider before running tests + gathering data; their test metrics still flow to a backend Build Node for aggregation.
  10. Cloud version: Build Node runners get the release from the device provider and run tests + gather data; their test metrics are shipped to a backend that aggregates the metrics.
  11. (Pipeline overview slide, repeated.)
  12. Instrumented Application / Instrumentation Test:

    EventTracker.startPerfTracking(Beacon.CHANNEL_SYNC)
    // code that does channel sync
    EventTracker.endPerfTracking(Beacon.CHANNEL_SYNC)

    Output: persist_rtm_start,44  process_rtm_start,19  ms_time_to_connect,703  channel_sync,381
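`EventTracker` and `Beacon` are Slack-internal classes, so as a stand-in, here is a minimal sketch of what that beacon-style instrumentation could look like: record a start timestamp per named beacon, and on end emit a `name,millis` line like the output on the slide. All names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the beacon-style instrumentation on slide 12:
// start() stamps a beacon, end() records elapsed wall-clock time and
// appends a "name,millis" line matching the slide's output format.
public class PerfTracker {
    private final Map<String, Long> starts = new HashMap<>();
    private final StringBuilder report = new StringBuilder();

    public void start(String beacon) {
        starts.put(beacon, System.nanoTime());
    }

    // Returns the elapsed time in milliseconds and records it.
    public long end(String beacon) {
        long elapsedMs = (System.nanoTime() - starts.remove(beacon)) / 1_000_000;
        report.append(beacon).append(',').append(elapsedMs).append('\n');
        return elapsedMs;
    }

    public String report() {
        return report.toString();
    }
}
```

A test would call `start("channel_sync")`, exercise the channel-sync code path, call `end("channel_sync")`, and upload the report lines as the per-run metric values.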
  13. Focus on the client: the network is highly unstable and variable, and backend regressions should not block client developers. Use record & replay: github.com/airbnb/okreplay
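To make the record & replay idea concrete without depending on okreplay's actual API, here is a toy illustration of the concept: in record mode, real backend responses are written to a "tape"; in replay mode, the tape answers instead of the network, so perf runs are immune to backend and network variance. Everything here is hypothetical and is not okreplay's interface.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the record & replay concept behind
// github.com/airbnb/okreplay. Not okreplay's API -- all names are
// invented for this sketch.
public class ReplayTape {
    private final Map<String, String> tape = new HashMap<>();
    private boolean recording = true;

    // Record mode: store the real backend's response for a request.
    public void record(String request, String response) {
        tape.put(request, response);
    }

    public void stopRecording() {
        recording = false;
    }

    // Replay mode: answer from the tape. A miss in replay mode is a
    // test bug, so fail fast rather than silently hitting the network.
    public String replay(String request) {
        String response = tape.get(request);
        if (response == null && !recording) {
            throw new IllegalStateException("no recording for: " + request);
        }
        return response;
    }
}
```

The payoff is determinism: every perf run sees byte-identical responses, so a change in measured client time can only come from the client code under test.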
  14. Keep it real: we want to catch regressions that represent the real world. Preserve the prod object graph, run against a release-like config, use LargeTest.
  15. Make it stable: perf tests will be executed a lot, so the stability bar is very high. Don't compromise on flakiness; use IdlingResource.
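Espresso's IdlingResource lets the app report when it is busy so the test framework waits for real idleness instead of sleeping a guessed amount, which is what kills flakiness. A framework-free sketch of that same contract (the class and method names are invented for illustration):

```java
// Framework-free sketch of the IdlingResource contract from slide 15:
// the app reports begin()/end() around in-flight work, and the test
// blocks in awaitIdle() until everything finishes, instead of using
// fixed sleeps. Names are hypothetical, not Espresso's API.
public class IdleGate {
    private int inFlight = 0;

    public synchronized void begin() {
        inFlight++;
    }

    public synchronized void end() {
        inFlight--;
        if (inFlight == 0) {
            notifyAll();
        }
    }

    // Block until all in-flight work finishes; returns false if the
    // timeout expires (or the wait is interrupted) first.
    public synchronized boolean awaitIdle(long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (inFlight > 0) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false;
            }
            try {
                wait(remaining);
            } catch (InterruptedException e) {
                return false;
            }
        }
        return true;
    }
}
```

Deterministic waiting also matters for the measurements themselves: a sleep that is "usually long enough" both adds noise and occasionally truncates the very operation being timed.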
  16. (Pipeline overview slide, repeated.)
  17. Backend Perf Data:

    {
      "build_info": {
        "platform": "android",
        "author_slack_id": "W1234567",
        "branch_name": "master",
        "build_cause": "Fixed sort order for starred unreads. (#9838)",
        "id": 8668,
        "jenkins_build_number": "9287",
        "author_name": "Kevin Lai",
        "job_name": "android-master-perf"
      },
      "tests": [
        {
          "status": "complete",
          "name": "com.Slack.ui.perf.SignInPerfTest#firstSignin_medium",
          "metric_results": [
            {"name": "inflate_flannel_start", "value": 263},
            {"name": "quickswitcher_show", "value": 30},
            {"name": "inflate_flannel_start", "value": 314},
            {"name": "quickswitcher_show", "value": 45}
          ]
        }
      ]
    }
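Note that `metric_results` repeats each metric name once per iteration. One plausible shape for the backend's aggregation step (the class name and reduction choice are assumptions, not Slack's code) is to group the repeated values by metric name and reduce each group to a summary statistic that the comparison consumes:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the aggregation step for the payload on slide 17: group
// repeated (name, value) metric results by name and reduce each group
// to a mean. A real pipeline would likely keep the raw samples too,
// since the statistical test needs the full distribution.
public class MetricAggregator {

    public static Map<String, Double> meansByName(
            List<Map.Entry<String, Double>> results) {
        Map<String, List<Double>> grouped = new LinkedHashMap<>();
        for (Map.Entry<String, Double> r : results) {
            grouped.computeIfAbsent(r.getKey(), k -> new ArrayList<>())
                   .add(r.getValue());
        }
        Map<String, Double> means = new LinkedHashMap<>();
        grouped.forEach((name, values) -> means.put(name,
                values.stream().mapToDouble(Double::doubleValue)
                      .average().orElse(0)));
        return means;
    }
}
```

Feeding in the four values from the slide yields a mean of 288.5 for `inflate_flannel_start` and 37.5 for `quickswitcher_show`.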
  18. Backend Stack: new shiny tech is great … but use whatever stack you already have in house.
  19. (Pipeline overview slide, repeated.)
  20.–23. (Screenshot slides with no transcript text.)
  24. (Pipeline overview slide, repeated.)
  25.–30. (Screenshot slides with no transcript text.)
  31. More on debugging: pre-merge alerting is great for experimenting, and detailed trace info would be nice. https://github.com/facebookincubator/profilo looks promising.
  32.–33. (Screenshot slides with no transcript text.)
  34. (Pipeline overview slide, repeated.)