
Summary of the Modern Measurement Playbook (by Google)

The Modern Measurement Playbook by Google is a milestone in marketing measurement. If you haven't had time to read it, this deck is for you.

Gianluca Campo

June 11, 2024

Transcript

  1. Head of Analytics & SEO @ Nucleus
    https://www.linkedin.com/in/gianluca-campo/
    https://www.giancampo.com/
    Proud to be a MeasureCamp Italy organizer!

  2. What we're going to see:
    1. Fundamentals of MEM* (Chapter 1)
    2. MEM guidelines and best practices (Chapter 2)
    What we're NOT going to see:
    1. Build your own MEM strategy (Chapter 3), since it's the most boring part IMHO
    2. The Appendix, apart from an image I found useful to explain some notions
    *MEM stands for Media Effectiveness Measurement
    Some premises:
    1. I always learn a lot by preparing a slide deck: thanks MeasureCamp!
    2. I'm just a media measurement marketer wannabe :)
    3. The playbook is publicly available here
    This is for you if you:
    1. Haven't had time to read the paper
    2. Think the concepts in the playbook are crucial for Digital Analytics
    3. Don't think Digital Analytics is only data collection
    Introduction

  3. Fundamentals // Introduction to MEM
    Why should you care?
    1. Impact of media investments?
    2. How to optimise media investments?
    Recommended workflow

  4. Fundamentals // Differences between MEM tools
    We should expect discrepancies in the outcomes of these tools, but they can work together. For instance, for digital click-based channels:
    1. Attribution can be used as the upper bound
    2. Incrementality experiments can be used as the lower bound
    3. MMM should fall between the two

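To make the bounds concrete, here is a minimal Python sketch of this consistency check; the function name and the ROAS figures are invented for illustration and are not from the playbook:

```python
# Hypothetical sanity check: for a digital click-based channel,
# attribution ROAS should be the upper bound, the incrementality
# experiment iROAS the lower bound, and MMM ROAS should sit in between.

def roas_bounds_ok(attribution_roas: float,
                   incrementality_iroas: float,
                   mmm_roas: float) -> bool:
    """Return True if MMM ROAS falls between the experiment
    (lower bound) and attribution (upper bound)."""
    return incrementality_iroas <= mmm_roas <= attribution_roas

# Illustrative numbers only:
print(roas_bounds_ok(attribution_roas=5.0,
                     incrementality_iroas=2.5,
                     mmm_roas=3.8))  # True: the three tools are consistent
```
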
  5. Fundamentals // Maturity stages for the MEM framework
    1. SSOT (single source of truth) in need of complementary tools
    2. Calibration based on results
    3. Continuous test-and-learn plan

  6. Fundamentals // Planning and budget allocation*
    The MEM framework helps planning by categorising marketing portfolio and business decisions into three levels: portfolio, planning, optimisation.
    *In the playbook this topic is part of the next chapter

  7. INC & ATT // Incrementality to complement attribution #1
    [Planning for early stage]
    Incrementality experiments for specific channels or campaigns: a 3rd-party tool records a poor ROAS vs. the ROAS from the 1st-party tool, and an experiment confirms the value of the campaign.

  8. INC & ATT // Incrementality to complement attribution #2
    [Planning for intermediate/advanced stage]
    Calibrate attribution results based on incrementality experiments (Incremental Impact vs. Attributed Impact). Channel 2 seems the best performer; however, the calibrated iROAS reveals Channel 1 is the best.

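A minimal sketch of what this calibration could look like, assuming per-channel multipliers derived from incrementality experiments; the channel names and figures are invented:

```python
# Hypothetical calibration of attributed ROAS with incrementality results.

attributed_roas = {"channel_1": 4.0, "channel_2": 6.0}

# Per-channel calibration multiplier from incrementality experiments:
# multiplier = experiment iROAS / attributed ROAS.
calibration_multiplier = {"channel_1": 0.9, "channel_2": 0.4}

calibrated_iroas = {
    channel: roas * calibration_multiplier[channel]
    for channel, roas in attributed_roas.items()
}
# channel_2 looked best on attributed ROAS (6.0 vs 4.0), but after
# calibration channel_1 wins: {'channel_1': 3.6, 'channel_2': 2.4}
print(calibrated_iroas)
```
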
  9. INC & ATT // Incrementality & attribution to set new bids
    [Optimisation for intermediate/advanced stage]
    The calibration multiplier suggests the new target ROAS needed to reach the iROAS goal.

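One way to read this slide as arithmetic: if the multiplier is iROAS / attributed ROAS, the attributed target ROAS to enter in the bidding platform is the iROAS goal divided by the multiplier. A hedged worked example with invented figures:

```python
# Hypothetical bid-setting step: derive the attributed target ROAS
# from an iROAS goal and a calibration multiplier (iROAS / ROAS).

iroas_goal = 3.0               # the incremental return we want
calibration_multiplier = 0.6   # from a past experiment: iROAS / ROAS

# Since iROAS = multiplier * attributed ROAS, to hit the iROAS goal:
target_roas = iroas_goal / calibration_multiplier
print(target_roas)  # 5.0 -> set a target ROAS of 5 to aim for an iROAS of 3
```
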
  10. INC & ATT // Incrementality & attribution to validate optimisations
    [Optimisation for all stages]
    The optimisation generated a lower ROAS compared to the previous experiment; however, the iROAS in the second experiment is higher than in the first.

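A tiny numeric illustration (invented figures) of why a lower attributed ROAS can still validate the optimisation:

```python
# Attributed ROAS can drop while the incremental ROAS, which is what
# actually matters, improves. All figures are invented.

experiment_1 = {"roas": 5.0, "iroas": 2.0}
experiment_2 = {"roas": 4.2, "iroas": 2.6}  # after the optimisation

roas_dropped = experiment_2["roas"] < experiment_1["roas"]
iroas_improved = experiment_2["iroas"] > experiment_1["iroas"]
print(roas_dropped and iroas_improved)  # True: the optimisation is validated
```
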
  11. ATT & MMM // MMM to calibrate attribution
    [Optimisation for intermediate/advanced stage]
    The calibration multiplier is calculated by dividing MMM ROAS by DDA ROAS. After calibration, Display general has shown a higher drop than Search.

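A minimal sketch of the MMM-vs-DDA calibration arithmetic; the channel names and ROAS values are invented, not the slide's actual data:

```python
# Hypothetical calibration of DDA (data-driven attribution) with MMM.

channels = {
    # channel: (mmm_roas, dda_roas)
    "search":  (3.0, 3.5),
    "display": (1.2, 2.8),
}

for channel, (mmm_roas, dda_roas) in channels.items():
    multiplier = mmm_roas / dda_roas   # calibration multiplier
    print(f"{channel}: multiplier={multiplier:.2f}, "
          f"drop after calibration={1 - multiplier:.0%}")
# search: multiplier=0.86, drop after calibration=14%
# display: multiplier=0.43, drop after calibration=57%
# Display drops far more than Search, mirroring the slide's example.
```
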
  12. ATT & MMM // Attribution to rule out MMM models
    [Planning for early stage]
    These models have a similar Mean Absolute Percentage Error, so they are considered equally accurate. Model 1 has consistently lower iCPAs than CPAs, which is not possible, so the model can be discarded.

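The check works because incremental conversions are a subset of attributed conversions, so the cost per incremental conversion (iCPA) can never be lower than the attributed CPA. A minimal sketch of the rule-out logic, with invented figures:

```python
# Hypothetical plausibility check: an MMM model that predicts iCPA < CPA
# on a channel can be ruled out, since iCPA >= CPA must always hold.

def model_is_plausible(icpa_by_channel: dict[str, float],
                       cpa_by_channel: dict[str, float]) -> bool:
    return all(icpa_by_channel[c] >= cpa_by_channel[c]
               for c in icpa_by_channel)

# Invented figures: model 1 claims iCPA < CPA everywhere, so discard it.
cpa          = {"search": 20.0, "display": 35.0}
icpa_model_1 = {"search": 15.0, "display": 30.0}
icpa_model_2 = {"search": 26.0, "display": 48.0}
print(model_is_plausible(icpa_model_1, cpa))  # False -> discard
print(model_is_plausible(icpa_model_2, cpa))  # True  -> keep
```
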
  13. INC & MMM // Incrementality to test MMM results
    [Planning for intermediate/advanced stage]
    1. Use cases
      a. Channels with wide confidence intervals of MMM ROAS should be validated with incrementality experiments
      b. Budget shifts based on MMM forecasts should be validated with incrementality experiments, if the shift is > 10%
      c. Search in MMM should regularly be validated with incrementality experiments, because it's an always-on channel
    2. Thresholds
      a. Discrepancy of MMM results vs. incrementality experiment < 10%: no need for validation
      b. Discrepancy of MMM results vs. incrementality experiment > 10%: calibrate MMM based on incrementality experiments

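A sketch of the 10% threshold as a decision rule; how the discrepancy is normalised (here, relative to the experiment iROAS) is my assumption, and all figures are invented:

```python
# Hypothetical decision rule built on the playbook's 10% threshold.

def mmm_action(mmm_roas: float, experiment_iroas: float,
               threshold: float = 0.10) -> str:
    """Compare MMM ROAS with the experiment iROAS and decide
    whether the MMM needs calibration."""
    discrepancy = abs(mmm_roas - experiment_iroas) / experiment_iroas
    if discrepancy < threshold:
        return "OK: discrepancy under 10%, no calibration needed"
    return "Calibrate MMM based on the incrementality experiment"

print(mmm_action(mmm_roas=3.2, experiment_iroas=3.0))  # ~6.7% -> OK
print(mmm_action(mmm_roas=4.0, experiment_iroas=3.0))  # ~33% -> calibrate
```
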
  14. INC & MMM // Calibration of MMM via incrementality tests
    [Planning for advanced stage]
    Frequentist MMM (only allows for calibration after the fact):
    1. Calibration multiplier = incrementality test iROAS / MMM ROAS (as seen in the previous pages)
    2. Ruling out models based on the least similar results between MMM ROAS and incrementality test iROAS (as seen in the previous pages)
    Bayesian MMM (allows incorporating priors about the effectiveness of media channels): after an incrementality experiment, the MMM return curve can be calibrated.

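A minimal sketch of the frequentist-style calibration applied to a channel's modelled return curve; the curve shape and all numbers are invented:

```python
# Hypothetical after-the-fact calibration of an MMM response curve
# using an experiment result.

experiment_iroas = 2.4
mmm_roas = 3.0
multiplier = experiment_iroas / mmm_roas  # 0.8

# Scale the channel's modelled return curve by the multiplier so its
# implied ROAS matches the experiment.
spend_grid = [10_000, 20_000, 30_000]
modelled_return = [30_000, 54_000, 72_000]          # from the MMM
calibrated_return = [r * multiplier for r in modelled_return]
print(calibrated_return)  # [24000.0, 43200.0, 57600.0]
```
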
  15. Incrementality // Checklist for experiment design
    1. An experiment should have a clear hypothesis, based on evidence from:
      a. Attribution or MMM results
      b. Industry research
    2. An experiment should have comparable KPIs, so for instance it's important to know that:
      a. The amount of sales in attribution depends on the attribution model, the lookback window, etc.
      b. The amount of sales in MMM requires 2-3 years of historical sales
      c. The amount of sales in incrementality tests depends on the chosen methodology (Conversion Lift, Geo Experiments, etc.)
    3. An experiment should be designed with a clear methodology and objective in mind. For example:
      a. Conversion Lift based on geography (in the GAds UI) is optimal for calibration
      b. Conversion Lift based on users (in the GAds UI) is the least comparable across MEM tools
      c. Geo Experiments (open-source code) use 1st-party data, allowing for any comparison, but are resource-intensive
    4. An experiment should have a comparable scope. In other terms, there should be parity between the scope of the experiment and the scope of the corresponding attribution model or MMM.

  16. Incrementality // When not to do an incrementality test
    1. Strive for simplicity: the business question can be answered with a simple analysis or a pre-post test
    2. An A/B experiment is more adequate: A/B experiments are better suited for testing variations
    3. Awareness is the marketing goal: incrementality tests are based on short-term sales, which is not enough for measuring awareness
    4. There are tech limitations: TV or OOH can't easily be split by region
    5. It is not statistically feasible: the amount of sales is too low to get significant results
    Pre-post vs. optimisation vs. incrementality (available in the Appendix)

  17. Brand // Understanding its value
    1. Decide the KPI and its corresponding target to track brand performance, based on:
      a. Industry research
      b. Your own ratios between brand KPIs and their revenue impact
    2. Define the measurement tools to track KPIs, for example:
      a. Actively collected data:
        i. 3rd-party brand trackers
        ii. Brand Lift surveys
      b. Observed data: Share of Search
      c. Full-funnel MMM with nested brand-equity MMM
    A brand KPI is used to add brand awareness to a traditional MMM.

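Share of Search is commonly computed as the brand's search volume divided by the total search volume of its category; this definition is not spelled out in the deck, so treat the sketch below as an assumption, with invented volumes:

```python
# Hypothetical Share of Search computation: brand search volume over
# the total search volume of the category.

category_search_volume = {
    "our_brand": 12_000,
    "competitor_a": 30_000,
    "competitor_b": 18_000,
}

total = sum(category_search_volume.values())
share_of_search = category_search_volume["our_brand"] / total
print(f"{share_of_search:.1%}")  # 20.0%
```
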
  18. Brand // Planning for brand KPI outcomes
    1. Decide the overall budget mix, balancing the brand and performance marketing portfolio
    2. Plan investment moments using the insights from Share of Search; peers and seasonality should be considered in planning
    3. Connect the baseline to the brand performance growth target to set budgets; in other terms, the investments should be calculated to reach the goals
    4. Use the insights from full-funnel MMM
    In Q4 brand Share of Search loses strength, thus the investment plan should consider an increase in budget.

  19. Brand // Optimise for brand KPI outcomes
    1. [High priority] Boost creatives, as they are responsible for at least 50% of the average sales effect
    2. [Medium priority] Media tactics determine 36% of the average sales effect
    3. [Low priority] Brand associations and relevancy account for 15% of the average sales effect