A deep dive into Google’s Core Web Vitals - AffiliateCon 2021
My talk from the 2021 AffiliateCon titled "A deep dive into Google’s Core Web Vitals" covering #webperf topics to make any website REALLY fast! Need help implementing? Reach out to us!
Same domain, two very different deliveries: 500 vs 34 requests, 140 vs 0 JS files, 6 vs 1 CSS files, 5.01 MB vs 356 KB in size, etc.

                    EU          US
Start Render        0.300 sec   1.700 sec
First Interactive   0.345 sec   3.604 sec
Load Time           0.995 sec   19.261 sec
Speed Index         443         8,792
Total Requests      34          859
Bytes in            356 KB      5,092 KB
Loading time is a major factor in page abandonment. According to a Nielsen report, 47% of people expect a website to load within two seconds, and 40% will leave a website if it does not load fully within three seconds.
Speed is a relative concept! Source: https://pa.ag/38jyW6a
▪ A site might be fast for one user (on a fast network with a powerful device) but slow for another user (on a slow network with a low-end device).
▪ Two sites may finish loading at the exact same time, yet one may seem to load faster (if it loads content progressively rather than waiting until the end).
▪ A site might appear to load quickly but then respond slowly (or not at all) to user interaction.
Core Web Vitals at a glance: the current set focuses on three aspects of user experience - loading, interactivity, and visual stability - and includes the following metrics (and their respective thresholds): Source: https://pa.ag/3irantb
▪ LCP measures loading performance. To provide a good UX, LCP should occur within 2.5 seconds.
▪ FID measures interactivity. To provide a good UX, pages should have an FID under 100 milliseconds.
▪ CLS measures visual stability. To provide a good UX, pages should maintain a CLS of less than 0.1.
But what do these numbers actually mean? And how are they different from what we have used before? Source: https://pa.ag/38jyW6a “Historically, web performance has been measured with the load event. However, even though load is a well-defined moment […], that moment doesn't necessarily correspond with anything the user cares about.”
There are several ways to install Google Lighthouse and run it to audit your webpages: Google Chrome, the Google Chrome Extension, or the Node Command Line Interface (CLI).

Google Chrome:
▪ Navigate to the webpage you wish to run the audit on.
▪ Right-click on the page and click “Inspect element” or press CTRL + SHIFT + I (for dev tools).
▪ Navigate to the audits tab, find the Lighthouse logo and the CTA which says “Perform an audit…”.
▪ Simply click on the audit button and it should show you 4 options to run; press “Run Audit”.

Google Chrome Extension:
▪ Download the extension (https://pa.ag/3cAXxqg) and add it to your Google Chrome browser.
▪ Navigate to the webpage you want to audit.
▪ Hit the Lighthouse Chrome extension icon and let Lighthouse run.

Node Command Line Interface (CLI):
▪ Download and install Google Chrome.
▪ Install a current and stable version of Node (>=6).
▪ Run the following command line to install the global Lighthouse npm package: npm install -g lighthouse
▪ Run Lighthouse by typing the following into the CLI: lighthouse https://bastiangrimm.com/
How to test in Chrome: press “CTRL+SHIFT+I”, find the “Lighthouse” tab, adjust the settings & click “Generate report”. Make sure to always run Lighthouse in Chrome Incognito mode!
The report depends on which categories you selected:
▪ Detailed performance measurement breakdown using the most common metrics, e.g. FCP/FMP
▪ Film strip view with various browser paint timings
▪ Import/export as well as various “pretty print” and JSON data formats
You can measure performance in two ways:
▪ In the lab: using tools to simulate a page load in a consistent, controlled environment
▪ In the field: on real users actually loading and interacting with the page
To find yours: Chrome > dev tools > performance > timings
What affects LCP, FID and CLS? An overview of the most common issues and respective fixes:

LCP is primarily affected by:
▪ Slow server response times
▪ Render-blocking JS/CSS
▪ Resource load times
▪ Client-side rendering

FID is primarily affected by:
▪ Third-party code
▪ JS execution time
▪ Main thread work/busyness
▪ Request count & transfer size

CLS is primarily affected by:
▪ Images without dimensions
▪ Ads, embeds and iframes without dimensions
▪ Web fonts (FOIT/FOUT)

Optimising for LCP:
▪ Server response times & routing
▪ CDNs, caching & compression
▪ Optimise the critical rendering path
▪ Reduce blocking times (CSS, JS, fonts)
▪ Images (format, compression, etc.)
▪ Preloading & pre-rendering
▪ Instant loading based on the PRPL pattern

Optimising for FID:
▪ Reduce JS execution (defer/async)
▪ Code-split large JS bundles
▪ Break up long JS tasks (>50 ms)
▪ Minimise unused polyfills
▪ Use web workers to run JS on a non-critical background thread

Optimising for CLS:
▪ Always include size attributes on images, videos, iframes, etc.
▪ Reserve required space in advance
▪ Reduce dynamic injections
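A minimal sketch pulling a few of these fixes together (the file names and the ad slot are placeholders, not from the talk): explicit dimensions reserve space so late-loading media doesn't shift the layout, and defer/async keep script execution off the critical path.

```html
<!-- CLS: explicit width/height lets the browser reserve space before the image loads -->
<img src="teaser.jpg" width="640" height="360" alt="Article teaser">

<!-- CLS: reserve the ad slot's space up front instead of letting it push content down -->
<div class="ad-slot" style="min-height: 250px"></div>

<!-- FID/LCP: keep JS from blocking parsing & rendering -->
<script src="app.js" defer></script>
<script src="analytics.js" async></script>
```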
The CSSOM is a “map” of the CSS styles found on a web page.
▪ It's much like the DOM (Document Object Model), but for CSS rather than HTML.
▪ The CSSOM combined with the DOM is used by browsers to display web pages.
The slide's example diagram boils down to: body font-size: 18px; h1 font-size: 22px; a font-size: 12px; div font-size: 16px; p font-size: 12px; p (inside the div) font-size: 16px.
Tooling to generate only the required CSS: “Critical” renders in multiple resolutions and builds a combined/compressed CRP CSS. Critical & criticalCSS on GitHub: http://pa.ag/2wJTZAu & http://pa.ag/2wT1ST9
▪ Minimum: a snapshot of CSS rules to render a default desktop resolution (e.g. 1280x1024).
▪ Better: various snapshots for mobile phones, tablets & desktops (doing this manually would be a lot of work!).
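As a rough sketch of how that generated critical CSS is typically wired up (the loadCSS-style pattern; /css/main.css is a placeholder): inline the critical rules, then load the full stylesheet without blocking rendering.

```html
<head>
  <!-- Inline the critical above-the-fold rules produced by critical/criticalCSS -->
  <style>
    /* ...generated critical CSS goes here... */
  </style>
  <!-- Fetch the full stylesheet asynchronously, apply it once loaded -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <!-- Fallback for users without JS -->
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```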
The new CSS property content-visibility enables the user agent to skip an element's rendering work, including layout & painting, until it is needed – and therefore makes the initial load much faster! Source: http://pa.ag/2Wxn399
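A small sketch of what that can look like (the class name and size estimate are placeholders); contain-intrinsic-size gives the skipped element an estimated height so the scrollbar stays stable:

```html
<style>
  .below-fold {
    content-visibility: auto;        /* skip layout & paint until near the viewport */
    contain-intrinsic-size: 1000px;  /* estimated size while rendering is skipped */
  }
</style>

<section class="below-fold">
  <!-- long, initially off-screen content -->
</section>
```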
Put your images on a diet! tinyPNG & tinyJPG offer smart (lossy) compression and remove metadata et al. API access, various plug-ins (WordPress, etc.) as well as direct integration into Photoshop. Source: http://tinypng.com | http://tinyjpg.com
WebP, a modern replacement for JPEG, PNG & GIF: lossy & lossless compression, transparency, metadata, colour profiles, animation, and much smaller files (30% vs. JPEG, 80% vs. PNG) – but only in Chrome, Opera & Android. Everything about WebP: http://pa.ag/1EpFWeN & WebP support: http://pa.ag/2FZK4XS
A drop-in replacement: swap PNG and JPEG images per rewrite (i.e. using your nginx/Apache configuration). Alternatively, the <picture> element allows you to manually specify multiple file types – see the sketch after the next slide.
AVIF, available since Chrome 85 & Firefox 80: developed by the Alliance for Open Media in collaboration with Google, Cisco, and Xiph.org to be an open-source and royalty-free image format. Source: https://pa.ag/3gK9Gdk AV1 (.avif) is basically a super-compressed image type. Netflix already considers .avif superior to JPEG, PNG, and even the newer WebP image format for its image quality to compressed file size ratio.
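Combining the last two slides, a minimal <picture> sketch (file names are placeholders): the browser picks the first source type it supports and falls back to the plain <img> otherwise.

```html
<picture>
  <!-- Served only where AVIF is supported -->
  <source srcset="hero.avif" type="image/avif">
  <!-- Next best: WebP -->
  <source srcset="hero.webp" type="image/webp">
  <!-- Fallback for everyone else; dimensions also prevent layout shifts -->
  <img src="hero.jpg" width="1200" height="675" alt="Hero image">
</picture>
```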
One image for all screen resolutions and devices is not enough, but an image per pixel is too much; responsivebreakpoints.com can help! More: https://pa.ag/2NNBvVm & https://pa.ag/2C6t6aQ
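A sketch of the srcset/sizes markup such breakpoints feed into (file names and breakpoints are placeholders): the browser picks the smallest candidate that satisfies the layout width.

```html
<img
  src="hero-800.jpg"
  srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  width="800" height="450"
  alt="Hero image">
```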
lazysizes, the high-performance lazy loader for images, iframes and more: it detects any visibility changes (e.g. through user interaction, CSS or JS) without configuration. More on GitHub: https://pa.ag/2VOywil This is especially important for mobile, since you only want to load images that are actually visible! You can even lazy load responsive images (with automatic sizes calculation).
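A minimal lazysizes sketch based on its GitHub docs (file names are placeholders): images get the lazyload class plus data-* attributes, and data-sizes="auto" lets the script calculate the sizes value for responsive images.

```html
<!-- Load the lazysizes script asynchronously -->
<script src="lazysizes.min.js" async></script>

<!-- Swapped to src/srcset only when the image approaches the viewport -->
<img
  class="lazyload"
  data-src="photo-800.jpg"
  data-srcset="photo-400.jpg 400w, photo-800.jpg 800w"
  data-sizes="auto"
  alt="Lazily loaded photo">
```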
Easy to use, but with one big disadvantage: it's render-blocking! The CSS (font) call to Google causes rendering to stop/block until the download has finished!
Make your fall-back font match the intended web font (letter spacing, heights, etc.), otherwise this will cause layout shifts. Give it a try: https://pa.ag/2qgE8EH
Something to play around with: various “font-display” strategies for CSS. More: http://pa.ag/2eUwVob “font-display enables the text to be displayed while the font itself is still loading.”
font-display: optional results in a 100ms blocking period, but no swap – even after the font has downloaded (it's only used on the “next page” view). This feels much faster! Go to your CSS file, look for @font-face and add ‘font-display: optional’ – there hasn't been a safer & easier gain in #webperf in a long time!

The strategies compared on the slide (invisible/fallback periods before the webfont shows):
▪ Block: 3s invisible, then the webfont
▪ Swap: 0s invisible, fallback shown until the webfont arrives
▪ Fallback: 100ms invisible, 3s fallback window, then the webfont
▪ Optional: 100ms invisible, then the fallback stays (no swap)
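In practice, that change is a one-liner in your @font-face rule (the font name and path are placeholders):

```html
<style>
  @font-face {
    font-family: 'Lato';
    src: url('/fonts/lato.woff2') format('woff2');
    font-display: optional; /* ~100ms block, no late swap */
  }
</style>
```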
You'll love this one: adding “display=swap” to the Google Fonts URL achieves the same result! However, I'd rather not rely on (external) web fonts at all. Source: https://pa.ag/2BbLK03
Before: https://fonts.googleapis.com/css?family=Lato
After: https://fonts.googleapis.com/css?family=Lato&display=swap
Does server response time matter at all? Spoiler: YES! And what's an acceptable result to aim for? More: http://pa.ag/2lKCIRH & http://pa.ag/2mkJTMY There are many possible causes of slow server responses, and therefore many possible ways to improve:
▪ Optimise the server's application logic to prepare pages faster.
▪ Optimise how your server queries databases (or migrate to faster database systems).
▪ Upgrade your server hardware to have more memory or CPU.
A CDN can be a great help: use CDNPerf.com to find the one that suits you best, depending on where you are and which regions/countries you're predominantly serving. This will positively impact TTFB! Give it a try: https://www.cdnperf.com/
When you're using a CDN, or getting resources from other external (sub-)domains, make sure to use the respective pre-* resource hints: a DNS lookup for the asset server (static.netdoktor.de) alone takes ~300 ms.
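A short sketch of those pre-* hints, using the asset server from the slide: preconnect performs the DNS lookup, TCP handshake and TLS setup ahead of time; dns-prefetch is the lightweight fallback for older browsers.

```html
<!-- Warm up the connection to the asset domain before any resource is requested -->
<link rel="preconnect" href="https://static.netdoktor.de">
<!-- Fallback: at least resolve DNS early in browsers without preconnect support -->
<link rel="dns-prefetch" href="https://static.netdoktor.de">
```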
You can use <link rel=preload> to optimise Core Web Vitals; specifically, how soon the primary imagery visible in the viewport loads, which positively impacts LCP. Source: https://pa.ag/31DGPmz <link rel="preload" as="image" href="your-hero-image.jpg"> “Preload can substantially improve LCP, especially if you need critical images (like hero images) to be prioritized over the loading of other images on a page. While browsers will try their best to prioritize the loading of images in the visible viewport, <link rel=preload> can offer a significant boost in priority.”
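If the hero image itself is responsive, preload also supports imagesrcset/imagesizes, so the browser preloads the same candidate it would later choose from the srcset (file names are placeholders):

```html
<link rel="preload" as="image"
      href="hero-800.jpg"
      imagesrcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
      imagesizes="100vw">
```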
In Lighthouse: to optimise Largest Contentful Paint, you should preload your critical images. Lighthouse 6.5 suggests how and where to do this. Source: https://pa.ag/3mBkqOi
In case you didn't notice: things are much better in GSC! Search Console now contains a “Core Web Vitals” report for desktop and mobile, based on current, real-world data from the Chrome UX Report. Note: thresholds are applied to a URL per device type, and if a URL is below the data threshold for a given metric, that metric is omitted from the report. More: https://pa.ag/3eKHpEe
A Node.js command line tool that crawls a domain and compiles a report with Lighthouse performance data for every page. Give it a try: http://pa.ag/2WAAiWu “With a single command, the tool will crawl an entire site, run a Lighthouse report for each page, and then output a spreadsheet with the aggregated data. […] Each row in the spreadsheet is a page on the site, and each individual performance metric is a column. This makes it very easy to perform high-level analysis because you can sort the rows by whichever metric you are analysing.”
A comparison of desktop vs phone results – or yourself vs the competition, etc. More: https://crux-compare.netlify.app/ | Source: http://pa.ag/3nE02NB Q: Is there a difference between desktop and mobile ranking? A: “At this time, using page experience as a signal for ranking will apply only to mobile search.”
Identify problematic layout shifts in the viewport on mobile and desktop. Available as a simple command line tool or as an online tool. Source: https://pa.ag/3iIJdOU