your application. You can regularly run benchmarks to help analyze and debug performance problems and ensure that you don't introduce regressions in recent changes.
ling", "software pro fi ling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls.
Android Profiler (Android Studio 3.0+)
• Replaces the Android Monitor tools
• CPU, Memory, Network and Energy profilers
• Profileable apps
• Useful for identifying performance bottlenecks
Jetpack Microbenchmark lets you benchmark your app code (Kotlin or Java) from within Android Studio.
• Recommendation: profile your code before writing a benchmark
• Useful for CPU work that is run many times in your app
• Examples: RecyclerView scrolling with one item shown at a time, data conversions/processing.
@get:Rule
val benchmarkRule = BenchmarkRule()

// use random with a fixed seed, so it generates the same data every run
private val random = Random(0)

// create the array once and just copy it in benchmarks
private val unsorted = IntArray(10_000) { random.nextInt() }

@Test
fun benchmark_quickSort() {
    // creating the variable outside of measureRepeated to be able to assert after done
    var listToSort = intArrayOf()

    benchmarkRule.measureRepeated {
        // copy the array with timing disabled to measure only the algorithm itself
        listToSort = runWithTimingDisabled { unsorted.copyOf() }
        // sort the array in place and measure how long it takes
        SortingAlgorithms.quickSort(listToSort)
    }

    // assert only once not to add overhead to the benchmarks
    assertTrue(listToSort.isSorted)
}
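The benchmark calls SortingAlgorithms.quickSort and an isSorted extension that belong to the sample project but are not shown on the slide. A minimal sketch of what they could look like, purely for illustration (not the sample's actual implementation):

object SortingAlgorithms {
    // in-place quicksort over array[low..high] using Lomuto partitioning
    fun quickSort(array: IntArray, low: Int = 0, high: Int = array.size - 1) {
        if (low >= high) return
        val pivot = array[high]
        var i = low - 1
        for (j in low until high) {
            if (array[j] <= pivot) {
                i++
                // swap array[i] and array[j]
                array[i] = array[j].also { array[j] = array[i] }
            }
        }
        // move the pivot into its final position
        array[i + 1] = array[high].also { array[high] = array[i + 1] }
        quickSort(array, low, i)
        quickSort(array, i + 2, high)
    }
}

// true when every element is <= the element that follows it
val IntArray.isSorted: Boolean
    get() = asSequence().zipWithNext().all { (a, b) -> a <= b }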
metrics = listOf(StartupTimingMetric()),
iterations = 5,
setupBlock = {
    // Press home button before each run to ensure the starting activity isn't visible.
    pressHome()
}
) {
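This fragment is the tail end of a measureRepeated call in a Macrobenchmark startup test. A hedged sketch of how the full test typically looks; the class name, package name, and the launch step in the measure block are assumptions, not taken from the slides:

import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {

    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",   // assumption: your app's application ID
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        setupBlock = {
            // Press home button before each run to ensure the starting activity isn't visible.
            pressHome()
        }
    ) {
        // Launch the default activity and wait for the first rendered frame.
        startActivityAndWait()
    }
}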
JankStats

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)

    // metrics state holder can be retrieved regardless of JankStats initialization
    val metricsStateHolder = PerformanceMetricsState.getForHierarchy(binding.root)

    // initialize JankStats for current window
    jankStats = JankStats.createAndTrack(
        window,
        Dispatchers.Default.asExecutor(),
        jankFrameListener,
    )

    // add activity name as state
    metricsStateHolder.state?.addState("Activity", javaClass.simpleName)
    // ...
}
private val jankFrameListener = JankStats.OnFrameListener { frameData ->
    // A real app could do something more interesting, like writing the info to
    // local storage and later on report it.
    Log.v("JankStatsSample", frameData.toString())
}
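JankStats keeps reporting frames for as long as tracking is enabled. A common companion pattern, sketched here in the same Activity and assuming a JankStats version that exposes the isTrackingEnabled property, is to pause tracking while the Activity is not visible:

override fun onResume() {
    super.onResume()
    // resume reporting frame data while the activity is visible
    jankStats.isTrackingEnabled = true
}

override fun onPause() {
    super.onPause()
    // stop reporting while in the background to avoid noise in the data
    jankStats.isTrackingEnabled = false
}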
class FrameData(
    /** The time at which this frame began (in nanoseconds) */
    val frameStartNanos: Long,

    /** The duration of this frame (in nanoseconds) */
    val frameDurationNanos: Long,

    /**
     * Whether this frame was determined to be janky, meaning that its duration exceeds the
     * duration determined by the system to indicate jank
     * (@see [JankStats.jankHeuristicMultiplier])
     */
    val isJank: Boolean,

    /**
     * The UI/app state during this frame. This is the information set by the app, or by
     * other library code, that can be used later, during analysis, to determine what
     * UI state was current when jank occurred.
     *
     * @see PerformanceMetricsState.addState
     */
    val states: List<StateInfo>
)
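As an illustration of how these fields can be consumed, here is a hedged sketch of a listener that reports only janky frames and converts the duration to milliseconds; the log tag and message format are made up for the example:

val onlyJankListener = JankStats.OnFrameListener { frameData ->
    if (frameData.isJank) {
        // convert the frame duration from nanoseconds to milliseconds
        val durationMs = frameData.frameDurationNanos / 1_000_000.0
        // states carries whatever the app registered, e.g. ("Activity", "MainActivity")
        val states = frameData.states.joinToString()
        Log.w("JankStatsSample", "Janky frame: ${"%.2f".format(durationMs)} ms, states=$states")
    }
}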
- Continuous Integration (CI): the practice of merging developer code into a main code base frequently.
- Regression: a return to a former or less developed state; in this context, a performance degradation.
like this:
- You're working on something
- Another team (usually QA) warns you about a critical performance issue
- You switch context and start digging into the codebase, unsure where to look
- Pain
- Manual profiling, benchmarking, etc.
than profiling.
- Catch problems before they hit users
- Running benchmarks manually is repetitive and error-prone
- The output is just a number
- Ideally, we should automate this process
(beware of resource cost)
- Or maybe every release

Where to run?
- Real devices yield more reliable results
- Firebase Test Lab (FTL)

What to store?
- The performance metric (time in ms)
- The corresponding build number or commit hash
detecting regressions
- The sliding window helps you make context-aware decisions
- Use the window width and threshold to fine-tune how confident you are that a change is a real regression

Detecting Regressions in CI
key points of your app
- Use step-fitting instead of naive approaches
- Helps you catch issues before they hit users
- When a new build result is ready, check its benchmark values within a window of the last 2 * width builds
- If there is a regression or improvement, fire an alert to investigate the performance of the last width builds (see the sketch below)
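To make the sliding-window idea concrete, here is a hedged sketch of a step-fitting style check: it compares the mean of the newest width results with the mean of the preceding width results and flags a change larger than a threshold. The types, names and numbers are illustrative, not from any specific library:

data class BuildResult(val commitHash: String, val durationMs: Double)

/**
 * Step-fitting over a sliding window: split the last 2 * width results into a
 * "before" half and an "after" half and compare their means. Returns the relative
 * change when it exceeds the threshold, or null when there is nothing to report.
 */
fun detectStep(results: List<BuildResult>, width: Int, threshold: Double): Double? {
    if (results.size < 2 * width) return null           // not enough history yet
    val window = results.takeLast(2 * width)
    val before = window.take(width).map { it.durationMs }.average()
    val after = window.takeLast(width).map { it.durationMs }.average()
    val relativeChange = (after - before) / before      // > 0 means slower, i.e. a regression
    return if (kotlin.math.abs(relativeChange) >= threshold) relativeChange else null
}

fun main() {
    val history = listOf(
        10.1, 10.3, 9.9, 10.0, 10.2,                     // stable baseline
        12.5, 12.4, 12.6, 12.3, 12.7                     // step up after a bad commit
    ).mapIndexed { i, ms -> BuildResult("commit-$i", ms) }

    detectStep(history, width = 5, threshold = 0.05)?.let { change ->
        println("Possible regression: +${"%.1f".format(change * 100)}% vs previous window")
    }
}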