
Benchmarking your application’s performance

Back in 1974, Donald Knuth said that “premature optimization is the root of all evil (or at least most of it) in programming”. But optimization without benchmarks can be even worse: it’s like reading a map without a compass, trusting your instinct that you’re going in the right direction. Whether during development or at runtime, many tools exist to let you measure the performance of your applications. These measurements can then be used to prioritize which parts to optimize first, as well as to quantify the performance gained (or lost) after a refactoring.

Xavier Gouchet

April 20, 2020

Transcript

  1. Mobile app performance is still critical today - Low end devices - Slow networks - Short attention span (@priscilladupreez)
  2. Performance (like Android) is fragmented. If you develop on a Pixel 4, make sure your app also works correctly on a Samsung Galaxy S7 (2016).
  3. Performance must be measured. You can’t just base your roadmap decisions on “a hunch”. (@youngprodigy3)
  4. Users will tell you it feels slow; measurements will tell you exactly which part is slow. #MeasurementsMatter Changes must be measured: measurements will tell you how much your PR improves (or degrades) your feature’s performance.
  5. “Premature optimization is the root of all evil (or at least most of it) in programming.” — Donald Knuth. Don’t optimize early, but optimize eventually.
  6. In your toolbox: Making sure the code works - Unit Testing - Instrumented Testing - Manual Testing - Mutation Testing - … Making sure it works fast - ???
  7. Overview. Get a full view of: - network - cpu - heap - rendering - … (@kmuza)
  8. Launching systrace: $ systrace.py Starting tracing (stop with enter) Tracing completed. Collecting output... Outputting Systrace results... Tracing complete, writing results
  9. (screenshot slide)
  10. Systrace / Perfetto. Pros: ▫ (Almost) no setup required ▫ Get full system state info ▫ Controlled scenario and environment Cons: ▫ Performance measurements are skewed ▫ Needs manual trigger ▫ Complex and dense UI/UX
  11. When should you use it? ▫ When you have a specific performance issue ▫ When you want to optimize a specific part of the code ▫ To debug a tricky algorithm (nested loops, concurrency)
  12. Jetpack Benchmark Library: android { // … defaultConfig { // … testInstrumentationRunner "androidx.benchmark.junit4.AndroidBenchmarkRunner" } }
  13. Benchmark Test Function: @Test fun benchmarkSomeWork() { benchmarkRule.measureRepeated { cleanState() val data = createData() doSomeWork(data) } }
  14. Benchmark Test Function: @Test fun benchmarkSomeWork() { benchmarkRule.measureRepeated { runWithTimingDisabled { cleanState() } val data = runWithTimingDisabled { createData() } doSomeWork(data) } }
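
     A minimal, self-contained version of the test function shown on these two slides could look like this (assuming the androidx.benchmark:benchmark-junit4 artifact; cleanState(), createData() and doSomeWork() are hypothetical stand-ins for your own code):

         import androidx.benchmark.junit4.BenchmarkRule
         import androidx.benchmark.junit4.measureRepeated
         import androidx.test.ext.junit.runners.AndroidJUnit4
         import org.junit.Rule
         import org.junit.Test
         import org.junit.runner.RunWith

         @RunWith(AndroidJUnit4::class)
         class SomeWorkBenchmark {

             // JUnit rule driving the warm-up and measurement loops
             @get:Rule
             val benchmarkRule = BenchmarkRule()

             @Test
             fun benchmarkSomeWork() {
                 benchmarkRule.measureRepeated {
                     // Setup excluded from the timings
                     runWithTimingDisabled { cleanState() }
                     val data = runWithTimingDisabled { createData() }
                     // Only this call is measured
                     doSomeWork(data)
                 }
             }

             // Hypothetical helpers standing in for the code under test
             private fun cleanState() { /* reset caches, databases, … */ }
             private fun createData(): List<Int> = List(1_000) { it }
             private fun doSomeWork(data: List<Int>): Int = data.sum()
         }
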
  15. Benchmark Results: ▫ Stored in a JSON file ▫ Full timing information in nanoseconds
  16. { "name": "benchmarkSomeWork", "className": "com.example.BenchmarkTest", "metrics": { "timeNs": { "minimum":

    2561000, "maximum": 7133000, "median": 3914000, "runs": [ … ] } } 41 Benchmark Result
  17. Running Benchmark in the CI: ▫ Emulators are unstable, use real devices ▫ Needs some tweaks in your Gradle config ▫ Use the JSON to: - Graph the evolution of performance - Fail if the measurements exceed a threshold
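
     One way (a sketch, not the official tooling) to implement the “fail if the measurements exceed a threshold” step is a small JVM script that reads the JSON report and compares the median against a budget. The file name, the 5 ms budget and the org.json dependency are assumptions; the JSON layout follows the previous slide.

         import org.json.JSONObject
         import java.io.File
         import kotlin.system.exitProcess

         // Hypothetical performance budget for benchmarkSomeWork, in nanoseconds
         const val MEDIAN_BUDGET_NS = 5_000_000L

         fun main(args: Array<String>) {
             // Path of the JSON report pulled from the device; adjust to your CI layout
             val reportPath = args.firstOrNull() ?: "benchmarkData.json"
             val result = JSONObject(File(reportPath).readText())

             // Layout as shown on the previous slide: metrics.timeNs.median
             val medianNs = result
                 .getJSONObject("metrics")
                 .getJSONObject("timeNs")
                 .getLong("median")

             println("${result.getString("name")}: median = $medianNs ns (budget = $MEDIAN_BUDGET_NS ns)")
             if (medianNs > MEDIAN_BUDGET_NS) {
                 System.err.println("Benchmark regression: median exceeds the budget")
                 exitProcess(1)
             }
         }
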
  18. Jetpack Benchmark. Pros: ▫ Reliable and consistent ▫ Measure just what you want ▫ Can be as high or low level as needed Cons: ▫ Needs to be run on a real device ▫ The feature to be tested must be repeatable and called from code ▫ Takes a long time in CI
  19. When should you use it? ▫ Monitor critical parts of your logic in CI ▫ When refactoring a specific feature ▫ When writing code executed on the main thread
  20. Going Further: ▫ Official Gradle plugin to lock the CPU clocks ▫ Sample Gradle plugin to fail on high measurements ▫ Using Jetpack Benchmark on Firebase Test Lab
  21. What about the real users? Get performance and runtime information from the production environment. (@robin_rednine)
  22. Using Datadog SDK for Android: repositories { maven { url "https://dl.bintray.com/datadog/datadog-maven" } } dependencies { implementation "com.datadoghq:dd-sdk-android:1.4.0" implementation "com.datadoghq:dd-sdk-android-ktx:1.4.0" }
  23. Sending Logs: val logger = Logger.Builder().build() logger.i( "Activity init to resume took $duration nanos", attributes = mapOf( "activity.time_to_resume" to duration, "activity.classname" to javaClass.simpleName ) )
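
     For context, the $duration above has to come from somewhere; one possible way to produce it is a base activity that timestamps onCreate and logs in onResume. The Logger call mirrors the slide, but the MonitoredActivity class and the System.nanoTime() bookkeeping are assumptions, and the SDK must already have been initialized in the Application class.

         import android.app.Activity
         import android.os.Bundle
         import com.datadog.android.log.Logger

         // Hypothetical base class timing the init-to-resume phase of each screen
         open class MonitoredActivity : Activity() {

             private val logger = Logger.Builder().build()
             private var initTimestampNs = 0L

             override fun onCreate(savedInstanceState: Bundle?) {
                 initTimestampNs = System.nanoTime()
                 super.onCreate(savedInstanceState)
             }

             override fun onResume() {
                 super.onResume()
                 val duration = System.nanoTime() - initTimestampNs
                 // Same call shape as on the slide: a message plus custom attributes
                 logger.i(
                     "Activity init to resume took $duration nanos",
                     attributes = mapOf(
                         "activity.time_to_resume" to duration,
                         "activity.classname" to javaClass.simpleName
                     )
                 )
             }
         }
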
  24. (screenshot slide)
  25. Metrics: ▫ Extract metrics information from logs ▫ Visualize critical KPIs in dashboards ▫ Get alerts on spikes / drops
  26. (screenshot slide)
  27. Tracing (beta in 1.4.0): ▫ Get traces across distributed architectures ▫ Performance information ▫ Debugging network errors
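
     The tracing beta is built on the OpenTracing API. As a hedged sketch (the AndroidTracer class name and the operation name here are assumptions to verify against the GitHub README for your SDK version), registering the tracer and wrapping a slow operation in a span could look like this:

         import com.datadog.android.tracing.AndroidTracer
         import io.opentracing.util.GlobalTracer

         // Register the Datadog tracer once, e.g. in Application.onCreate(),
         // after the SDK itself has been initialized
         fun setupTracing() {
             GlobalTracer.registerIfAbsent(AndroidTracer.Builder().build())
         }

         // Wrap a potentially slow operation in a span to see its timing in Datadog
         fun loadArticles() {
             val span = GlobalTracer.get().buildSpan("articles.load").start()
             try {
                 // … actual work: network call, parsing, database writes …
             } finally {
                 span.finish()
             }
         }
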
  28. (screenshot slide)
  29. App Performance Monitoring. Pros: ▫ Real data - real use cases - real devices - real network conditions Cons: ▫ Needs a large enough user base to have relevant data ▫ Can’t measure everything
  30. When should you use it? ▫ Monitor key aspects of your app in the wild ▫ Understand how performance impacts your users ▫ A/B testing measures
  31. Going Further: ▫ Library is Open Source on GitHub ▫ More features coming soon (Real User Monitoring)
  32. Key takeaways: “Measure twice, cut once.” Choose the tool relevant to your use case. Always analyse your measurements before taking action.
  33. Useful links: ▫ https://ui.perfetto.dev/ ▫ https://developer.android.com/studio/profile/benchmark ▫ https://developer.android.com/studio/profile/run-benchmarks-in-ci ▫ https://proandroiddev.com/jetpack-benchmark-on-firebase-test-lab-d14c5eae815f ▫ https://github.com/DataDog/dd-sdk-android ▫ https://docs.datadoghq.com/logs/log_collection/android
  34. Office hours @xgouchet: ▫ April 20th: 2 - 3 pm ▫ April 21st: 10 - 11 am, 3 - 4 pm #testing #benchmark #??? (@theunsteady5)
  35. CREDITS: Special thanks to all the people who made and released these awesome resources for free: ▫ Presentation template by SlidesCarnival ▫ Photographs from Unsplash