When measuring web performance, we often try to distill it into a single number that we can trend over time: the median page load time, hero image render time, a page speed score, or a Core Web Vitals score. But is it really that simple?
Users seldom visit just a single page on a site, so how do we account for varying performance across multiple pages? How do we tell which page’s performance impacts the overall user experience? How do various cognitive biases affect the user’s perception of our site’s performance?
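To make the multi-page problem concrete, here is a minimal sketch in TypeScript, using a hypothetical Beacon record to stand in for real RUM data, of how a single sitewide percentile can look healthy while one page is slow:

```typescript
// Hypothetical RUM beacon: one record per page view (field names are illustrative).
interface Beacon {
  page: string;  // URL path of the page viewed
  lcpMs: number; // Largest Contentful Paint, in milliseconds
}

// Nearest-rank percentile over a sorted copy of the values.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(sorted.length, Math.max(1, rank)) - 1];
}

// Group beacons by page and compute each page's p75 LCP.
function perPageP75(beacons: Beacon[]): Map<string, number> {
  const byPage = new Map<string, number[]>();
  for (const b of beacons) {
    const values = byPage.get(b.page) ?? [];
    values.push(b.lcpMs);
    byPage.set(b.page, values);
  }
  const result = new Map<string, number>();
  for (const [page, values] of byPage) {
    result.set(page, percentile(values, 75));
  }
  return result;
}

// Toy data: the home page is fast, checkout is slow.
const beacons: Beacon[] = [
  { page: "/", lcpMs: 1100 },
  { page: "/", lcpMs: 1200 },
  { page: "/", lcpMs: 1400 },
  { page: "/checkout", lcpMs: 4800 },
];

console.log("sitewide p75 LCP:", percentile(beacons.map(b => b.lcpMs), 75)); // 1400
console.log("per-page p75 LCP:", perPageP75(beacons)); // "/" -> 1400, "/checkout" -> 4800
```

In this toy data set, the sitewide p75 LCP is 1400 ms, which looks fine, while the checkout page's p75 is 4800 ms. Trending only the sitewide number would hide exactly the page most likely to hurt the business.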
As developers and data analysts, we have our own biases that affect how we look at the data and which problems we end up trying to solve. Often, our measurements themselves are shaped by confirmation bias.
This talk is aimed at anyone who wants to understand the business impact of their site's performance, and how biases in the data can distort that understanding.
In this talk, we'll go into the biases that affect user perception, as well as those that affect our ability to measure that perception, and look at ways to identify whether our data exhibits these patterns.
References:
Ebbinghaus, Hermann (1913). On Memory: A Contribution to Experimental Psychology.
Kahneman, Daniel (2000). "Evaluation by moments, past and future."
Baumeister, Roy F.; Finkenauer, Catrin; Vohs, Kathleen D. (2001). "Bad is stronger than good."
Staw, Barry M. (1997). "The escalation of commitment: An update and appraisal."
Arkes, Hal R.; Ayton, Peter (1999). "The sunk cost and Concorde effects: Are humans less rational than lower animals?"
Ericsson ConsumerLab (2015). "The impact of network speed on emotional engagement" (neuro research).
Wikipedia paper on user satisfaction vs. performance.
"Toward a more civilized design: studying the effects of computers that apologize."
"The fastest way to pinpoint frustrating user experiences."