
Web Performance 101 [Chrome Dev Summit 2020]

Tammy Everts
December 07, 2020

What do we mean when we talk about "web performance"? Why should you care about it? How can you measure it? How do you get other people in your organization to care? In this workshop at the 2020 Chrome Dev Summit, I covered these questions – including an overview of the history of performance metrics, up to Core Web Vitals.


Transcript

  1. Web Performance 101: What is web performance and why should I care? @tameverts #ChromeDevSummit ¯\_(ツ)_/¯
  2. What is “web performance”? Why should I care about it? How do I measure it? How can I get other people in my company to care about it?
  3. “Web stress”: when apps or sites are slow, we have to concentrate up to 50% harder to stay on task. @tameverts
  4. (image-only slide)
  5. “We want you to be able to flick from one page to another as quickly as you can flick a page on a book. So, we’re really aiming very, very high here… at something like 100 milliseconds.” Urs Hölzle, SVP Engineering, Google
  6. Slow pages affect people’s perception of three things completely unrelated to time: 1. Content (“boring”) 2. Visual design (“tacky”, “confusing”) 3. Ease of navigation (“frustrating”, “hard-to-navigate”)
  7. Rebuilding Pinterest pages for performance resulted in a 40% decrease in wait time, a 15% increase in SEO traffic, and a 15% increase in signup conversion rate. Ancestry.com saw a 7% increase in conversions after improving render time by 68%, page weight by 46%, and load time by 64%. Staples reduced median page load time by 1 second and 98th-percentile load time by 6 seconds, resulting in a 10% conversion rate increase. @tameverts
  8. Collected 1M+ beacons of real user data across 93 attributes, including…
    • top-level – domain, timestamp, SSL
    • session – start time, length (in pages), total load time
    • user agent – browser, OS, mobile ISP
    • geo – country, city, organization, ISP, network speed
    • bandwidth
    • timers – base, custom, user-defined
    • custom metrics
    • HTTP headers
  9. “The real thing we are after is to create a user experience that people love and they feel is fast… and so we might be front-end engineers, we might be dev, we might be ops, but what we really are is perception brokers.” Steve Souders
  10. What tools do we use?
    Synthetic (lab): consistent baseline • mimics network & browser conditions • no installation • compare any sites • detailed analysis • waterfall charts • filmstrips and videos • limited URLs
    Real user monitoring (field): requires JavaScript installation • large sample size (up to 100%) • real network & browser conditions • geographic spread • correlation with other metrics (bounce rate) • no detailed analysis • only measures your own site
  11. Free tools to explore
    Synthetic: webpagetest.org • developers.google.com/speed/pagespeed/insights/
    Real user monitoring: github.com/bluesmoon/boomerang • developers.google.com/web/tools/chrome-user-experience-report
  12. The best UX metric…
    ❑ Correlates to what users actually see in the browser
    ❑ Is easy to use and accessible right out of the box
    ❑ Recognizes that not all pixels and page elements are equal
    ❑ Allows us to customize what we measure on specific pages
  13. Is it happening? Is it useful? Is it usable? Is it delightful? developers.google.com/web/fundamentals/performance/user-centric-performance-metrics
  14. Load Time: the time from the start of the initial navigation until the beginning of the window load event
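    In the browser, this maps onto the Navigation Timing API; a minimal TypeScript sketch (the entry type and fields are standard, the logging is illustrative):

      // Load Time, per the definition above: navigation start until the
      // beginning of the window load event.
      const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
      if (nav) {
        // loadEventStart is reported in ms relative to the start of navigation.
        console.log(`Load Time: ${nav.loadEventStart.toFixed(0)} ms`);
      }
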
  15. Start Render: the time from the start of the initial navigation until the first non-white content is painted
  16. First Contentful Paint (FCP): text and graphics start to render… BUT often catches non-meaningful paints (e.g. headers, nav bars)
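    A sketch of reading the browser’s paint timings via the Paint Timing API (supported in Chromium-based browsers):

      // Entries are named "first-paint" and "first-contentful-paint".
      for (const entry of performance.getEntriesByType("paint")) {
        console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
      }
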
  17. First Meaningful Paint (FMP): the paint after which the biggest above-the-fold (ATF) layout change has happened and web fonts have loaded
  18. Analysis of 40 top Alexa-ranked sites:
    95% of FP events occur before Start Render
    85% of FCP events occur before Start Render
    50% of FMP events occur before Start Render
    speedcurve.com/blog/an-analysis-of-chromiums-paint-timing-metrics/
  19. The best UX metric…
    ❑ Correlates to what users actually see in the browser
    ❑ Is easy to use and accessible right out of the box
    ❑ Recognizes that not all pixels and page elements are equal
    ❑ Allows us to customize what we measure on specific pages
  20. Custom metrics: measure performance with high-precision timestamps. Available in both synthetic and RUM (yay!) https://www.w3.org/TR/user-timing/ https://speedcurve.com/blog/user-timing-and-custom-metrics/
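    A minimal sketch of the User Timing API from the spec linked above; the mark names ("hero-start"/"hero-end") are hypothetical:

      performance.mark("hero-start");
      // … code that renders the element you care about …
      performance.mark("hero-end");

      // The resulting measure shows up in both synthetic waterfalls and RUM beacons.
      performance.measure("hero-visible", "hero-start", "hero-end");
      const [m] = performance.getEntriesByName("hero-visible");
      console.log(`hero-visible: ${m.duration.toFixed(0)} ms`);
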
  21. Time to First Tweet: the time from clicking the link to viewing the first tweet on each page’s timeline. Pinner Wait Time (PWT): the time from initiating an action (e.g., tapping a pin) until the action is complete (pin close-up view is loaded). Time to Interact (TTI). @tameverts
  22. Lighthouse: scores based on audits run on synthetic tests. Checks your page against “rules” for Performance, PWA, Best Practices, and SEO. For each category, you get a score out of 100 and recommendations for what to fix. developers.google.com/web/tools/lighthouse
  23. “Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and will be surfaced across all Google tools. Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome. The metrics that make up Core Web Vitals will evolve over time. The current set for 2020 focuses on three aspects of the user experience — loading, interactivity, and visual stability — and includes the following metrics…” web.dev/vitals/
  24. Largest Contentful Paint (LCP): the amount of time it takes for the largest visual element to render. Available in Chrome and Chromium-based browsers. Measurable via synthetic and RUM.
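    A sketch of observing it in RUM with PerformanceObserver (the entry type is standard; the last candidate reported before the user interacts is the page’s LCP):

      new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          console.log(`LCP candidate: ${entry.startTime.toFixed(0)} ms`);
        }
      }).observe({ type: "largest-contentful-paint", buffered: true });
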
  25. First Input Delay (FID): the amount of time it takes for the page to respond to user input (e.g. click, tap, key press). Only measurable via RUM.
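    A sketch of capturing it in the field via the "first-input" entry type (the inline typing below is an assumption for older DOM type libraries):

      new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          // Delay = when the browser could start handling the event,
          // minus when the user actually interacted.
          const e = entry as PerformanceEntry & { processingStart: number };
          console.log(`FID: ${(e.processingStart - e.startTime).toFixed(1)} ms`);
        }
      }).observe({ type: "first-input", buffered: true });
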
  26. FID can seem fast because user interactions take place later in the page’s rendering cycle… after CPU-hogging long tasks have completed. speedcurve.com/blog/first-input-delay-google-core-web-vitals/
  27. Long Tasks: measures JavaScript functions that take 50ms or longer. Long or excessive JS tasks can delay rendering, as well as cause page “jank”. Measurable via synthetic and RUM.
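    A sketch of watching for them with the Long Tasks API (entries have a 50ms minimum duration by definition):

      new PerformanceObserver((list) => {
        for (const task of list.getEntries()) {
          console.log(`Long task: ${task.duration.toFixed(0)} ms, starting at ${task.startTime.toFixed(0)} ms`);
        }
      }).observe({ type: "longtask", buffered: true });
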
  28. Cumulative Layout Shift (CLS): a score that reflects how much page elements shift during rendering. Available in Chrome and Chromium-based browsers. Measurable via synthetic and RUM.
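    A sketch of accumulating the score in RUM; LayoutShift isn’t declared in every DOM type library, so the interface below is a local assumption:

      interface LayoutShift extends PerformanceEntry {
        value: number;
        hadRecentInput: boolean;
      }

      let cls = 0;
      new PerformanceObserver((list) => {
        for (const entry of list.getEntries() as LayoutShift[]) {
          // Shifts right after user input are excluded from the score.
          if (!entry.hadRecentInput) cls += entry.value;
        }
        console.log(`CLS so far: ${cls.toFixed(3)}`);
      }).observe({ type: "layout-shift", buffered: true });
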
  29. Bounce rate gets worse as CLS degrades • bounce rate improves as CLS degrades • bounce rate stays the same as CLS degrades @tameverts
  30. How do these metrics correlate with my business goals? How fast should they be? How do we stay on track?
  31. “The largest hurdle to creating and maintaining stellar site performance is the culture of your organization.” Lara Hogan, designingforperformance.com
  32. “No matter the size or type of team, it can be a challenge to educate, incentivize, and empower those around you. Performance more often comes down to a cultural challenge, rather than simply a technical one.” Lara Hogan, designingforperformance.com
  33. 2009: improved average load time from 6s → 1.2s; 7-12% increase in conversion rate + 25% increase in PVs. 2010: average load time degraded to 5s; user feedback: “I will not come back to this site again.” 2011: re-focused on performance; 0.4% increase in conversion rate. @tameverts
  34. 1. No front-end measurement 2. Constant feature development 3. Badly implemented third parties 4. Waiting too long to tackle performance problems 5. Relying on performance sprints
  35. Making it up as you go is not always a good idea. (Actual photo taken yesterday of my family’s gingerbread village.)
  36. Embrace performance from the ground up. Embed engineers into other teams. Enlist performance ambassadors. Teach people how to use (or at least understand) the monitoring tools you use.
  37. “We first went to the engineering leaders, and then we went to our product leader. Our pitch was totally different...” Reefath Rajali // PayPal chasingwaterfalls.io/episodes/episode-two-with-reefath-rajali/
  38. “When we went to our product leaders, we spoke more about the business numbers and the business benefits. When we spoke to our engineering leaders, it was more about our consumer delight.” Reefath Rajali // PayPal chasingwaterfalls.io/episodes/episode-two-with-reefath-rajali/
  39. ❑ bounce rate ❑ cart size ❑ conversions ❑ revenue ❑ time on site ❑ page views ❑ SEO ❑ user happiness ❑ user retention ❑ competitors
  40. Who they are, what they care about, and what to show them:
    Executives: care about competition and business impact. Show them benchmarks (filmstrips and videos) and correlation charts (perf + KPIs).
    Marketing: care about third parties, traffic + engagement, SEO, and content. Show them third-party performance, correlation charts (perf + bounce rate), Lighthouse SEO audits, and image size.
    Devs / engineers: care about, well, lots of stuff, probably. Consult with the perf team.
  41. Thresholds YOU create for metrics that are meaningful for YOUR site: milestone timings (e.g. start render), quantity-based (e.g. image weight), rules-based (e.g. Lighthouse scores). addyosmani.com/blog/performance-budgets/
  42. A good performance budget should show you… what your budget is, when you go out of bounds, how long you’re out of bounds, and when you’re back within budget.
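    A hedged sketch of what such a check could look like; this is not any particular tool’s API, and the metric names and thresholds are illustrative:

      const budget: Record<string, number> = {
        startRender: 1500,   // ms (milestone timing)
        imageWeightKB: 300,  // quantity-based
        lighthousePerf: 90,  // rules-based score, treated as a floor below
      };

      function checkBudget(results: Record<string, number>): string[] {
        const violations: string[] = [];
        for (const [metric, limit] of Object.entries(budget)) {
          const value = results[metric];
          if (value === undefined) continue;
          // Scores are "higher is better"; timings and weights are "lower is better".
          const out = metric === "lighthousePerf" ? value < limit : value > limit;
          if (out) violations.push(`${metric}: ${value} vs budget ${limit}`);
        }
        return violations;
      }

      console.log(checkBudget({ startRender: 1800, imageWeightKB: 250, lighthousePerf: 92 }));
      // → ["startRender: 1800 vs budget 1500"]
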
  43. Super important! Look at your own data. Monitor your competitors. No sandbagging allowed. Take a step-by-step approach if necessary. Use synthetic and RUM (numbers will vary).
  44. Pro tips: create budgets for your popular and regularly changing pages. Review violations early and always. Compare before and after releases. Update budgets accordingly. zillow.com/engineering/bigger-faster-more-engaging-budget/
  45. Who, what, and which metric:
    Ops: back-end issues (TTFB)
    Marketing: most important content (Largest Contentful Paint), third parties (JS Long Tasks), SEO (Lighthouse SEO score & audits)
    Devs / engineers: how well pages are built and performance issues (Start Render, Web Vitals, Lighthouse Performance audits)
  46. “One of the original directives of the performance team was we weren’t going to set ourselves up to be performance cops.” Dan Chilton, Vox Media responsivewebdesign.com/podcast/vox-media-performance/
  47. “We weren’t going to go around slapping people on the wrist, saying, ‘You built an article that broke the page size budget! You have to take that down or change that immediately!’ Our goal setting out was to set up best practices, make recommendations, and be a resource within the company that people can turn to when they have to make performance-related decisions.” Dan Chilton, Vox Media responsivewebdesign.com/podcast/vox-media-performance/
  48. “We, as engineers, should learn how to show the impact on anything we do.” Malek Hakim // Priceline chasingwaterfalls.io/episodes/episode-one-with-malek-hakim/
  49. How often is often enough? Wall monitors and dashboards: 24/7. Alerts (to people who can make fixes): in realtime. Reports: no more than 1X/week. Meetups, hackathons, etc.: monthly (if possible).
  50. !!!

  51. “The dull boring stuff” ~Andy Davies: scripts (especially third parties), images, extraneous code. Defer assets where possible.