Slide 1

Summary of the Modern Measurement Playbook (by Google)
MeasureCamp Zurich 2024
Gianluca Campo

Slide 2

Head of Analytics & SEO @ Nucleus
https://www.linkedin.com/in/gianluca-campo/
https://www.giancampo.com/
Proud to be a MeasureCamp Italy organizer!

Slide 3

Introduction

What we're going to see
1. Fundamentals of MEM* (Chapter 1)
2. MEM guidelines and best practices (Chapter 2)

What we're NOT going to see
1. Build your own MEM strategy (Chapter 3), since it's the most boring part IMHO
2. The Appendix, apart from an image I found useful to explain some notions

*MEM stands for Media Effectiveness Measurement

Some premises
1. I always learn a lot by preparing a slide deck: thanks, MeasureCamp!
2. I'm just a media measurement marketer wannabe :)
3. The playbook is publicly available here

This is for you if you:
1. Haven't had time to read the paper
2. Think the concepts in the playbook are crucial for Digital Analytics
3. Don't think Digital Analytics is only data collection

Slide 4

Fundamentals of MEM (Chapter 1)

Slide 5

Fundamentals // Introduction to MEM

Why should you care?
1. What is the impact of media investments?
2. How can media investments be optimised?

Recommended workflow

Slide 6

Fundamentals // The MEM toolbox

Slide 7

Fundamentals // Differences between MEM tools

We should expect discrepancies in the outcomes of these tools, but they can work together. For instance, for digital click-based channels:
1. Attribution can be used as the upper bound
2. Incrementality experiments can be used as the lower bound
3. MMM should fall between the two
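The expected ordering above can be expressed as a quick sanity check. This is my own sketch with hypothetical numbers, not code from the playbook:

```python
def triangulation_check(attribution_roas, incrementality_roas, mmm_roas):
    """For digital click-based channels the playbook expects:
    incrementality (lower bound) <= MMM <= attribution (upper bound)."""
    return incrementality_roas <= mmm_roas <= attribution_roas

# Hypothetical estimates for one channel:
print(triangulation_check(attribution_roas=4.0,
                          incrementality_roas=2.5,
                          mmm_roas=3.1))  # True: the three tools are consistent
```

If the check fails, it does not say which tool is wrong; it only flags that the three estimates disagree more than the framework expects.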

Slide 8

Fundamentals // Triangulation in Marketing Measurement

Slide 9

Fundamentals // Maturity stages for the MEM framework
1. SSOT in need of complementary tools
2. Calibration based on results
3. A continuous test-and-learn plan

Slide 10

Fundamentals // Planning and portfolio*

The MEM framework helps planning by categorising marketing portfolio and business decisions into three levels:
1. Budget allocation
2. Planning
3. Optimisation

*In the playbook this topic is part of the next chapter

Slide 11

MEM guidelines and best practices (Chapter 2)

Slide 12

Getting back to triangulation

Slide 13

Incrementality & Attribution

Slide 14

INC & ATT // Incrementality to complement attribution #1
[Planning for early stage]

Run incrementality experiments for specific channels or campaigns. Example: a 3rd-party tool records a poor ROAS vs. the ROAS reported by the 1st-party tool; an experiment confirms the value of the campaign.

Slide 15

INC & ATT // Incrementality to complement attribution #2
[Planning for intermediate/advanced stage]

Calibrate attribution results based on incrementality experiments (incremental impact vs. attributed impact). Channel 2 seems the best performer; however, the calibrated iROAS reveals that Channel 1 is the best.

Slide 16

INC & ATT // Incrementality & attribution to set new bids
[Optimisation for intermediate/advanced stage]

The calibration multiplier suggests the new target ROAS needed to reach the iROAS goal.
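One way to read this step (my own sketch with hypothetical numbers, not from the playbook): the bidding system optimises towards attributed ROAS, so the attributed target has to be derived from the incremental goal via the calibration multiplier (experiment iROAS / attributed ROAS):

```python
def new_target_roas(iroas_goal, calibration_multiplier):
    """calibration_multiplier = experiment iROAS / attributed ROAS.
    The bidder only sees attributed ROAS, so scale the incremental
    goal back into 'attributed' units."""
    return iroas_goal / calibration_multiplier

# Hypothetical: attribution over-credits (multiplier 0.8), iROAS goal 3.0
print(round(new_target_roas(3.0, 0.8), 2))  # 3.75
```

In this illustration a target ROAS of 3.75 in the bidding tool would correspond to the incremental goal of 3.0.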

Slide 17

INC & ATT // Incrementality & attribution to validate optimisations
[Optimisation for all stages]

The optimisation generated a lower ROAS compared to the previous experiment; however, the iROAS in the second experiment is higher than in the first.

Slide 18

Attribution & MMM

Slide 19

ATT & MMM // MMM to calibrate attribution
[Optimisation for intermediate/advanced stage]

The calibration multiplier is calculated by dividing the MMM ROAS by the DDA ROAS (in the example, 2.13). After calibration, Display general has shown a higher drop than Search.
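As a minimal sketch of this mechanic (my own illustration with hypothetical numbers, not from the playbook): compute a per-channel multiplier from an MMM snapshot, then apply it to ongoing DDA readings:

```python
def calibration_multiplier(mmm_roas, dda_roas):
    """Per-channel multiplier, as on the slide: MMM ROAS / DDA ROAS."""
    return mmm_roas / dda_roas

def calibrated_roas(dda_roas, multiplier):
    """Apply the stored multiplier to a current DDA reading."""
    return dda_roas * multiplier

# Hypothetical channel where DDA over-credits:
m = calibration_multiplier(mmm_roas=1.7, dda_roas=3.4)  # 0.5
print(calibrated_roas(3.0, m))  # 1.5
```

A multiplier below 1 means attribution over-credits the channel; above 1, it under-credits it.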

Slide 20

ATT & MMM // Attribution to rule out MMM models
[Planning for early stage]

These models have similar Mean Absolute Percentage Error, so they are considered equally accurate. However, Model 1 has consistently lower iCPAs than CPAs, which is not possible, so the model can be discarded.
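The plausibility argument can be sketched as follows (my own illustration with hypothetical numbers, not from the playbook). Incremental conversions are a subset of attributed conversions, so for the same spend the iCPA should be greater than or equal to the CPA:

```python
def should_discard(icpa_by_channel, cpa_by_channel):
    """Incremental conversions <= attributed conversions implies
    iCPA >= CPA for the same spend. A model whose iCPAs sit
    consistently below the CPAs is implausible and can be discarded."""
    return all(icpa_by_channel[ch] < cpa_by_channel[ch]
               for ch in icpa_by_channel)

# Hypothetical readings for 'Model 1':
print(should_discard({"search": 8.0, "display": 12.0},
                     {"search": 10.0, "display": 15.0}))  # True -> discard
```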

Slide 21

Incrementality & MMM

Slide 22

INC & MMM // Incrementality to test MMM results
[Planning for intermediate/advanced stage]

1. Use cases
   a. Channels with wide confidence intervals on MMM ROAS should be validated with incrementality experiments
   b. Budget shifts based on MMM forecasts should be validated with incrementality experiments if the shift is > 10%
   c. Search in MMM should be validated regularly with incrementality experiments, because it is an always-on medium
2. Thresholds
   a. Discrepancy of MMM results vs. incrementality experiment < 10%: no need for calibration
   b. Discrepancy of MMM results vs. incrementality experiment > 10%: calibrate MMM based on the incrementality experiments
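The 10% threshold rule can be written down directly (my own sketch with hypothetical numbers; the slide does not specify how the discrepancy is normalised, so I assume it is relative to the experiment iROAS):

```python
def needs_calibration(mmm_roas, experiment_iroas, threshold=0.10):
    """Relative discrepancy above the 10% threshold -> calibrate the MMM
    on the incrementality experiment; at or below -> no action needed."""
    discrepancy = abs(mmm_roas - experiment_iroas) / experiment_iroas
    return discrepancy > threshold

print(needs_calibration(3.4, 3.0))  # True: ~13% discrepancy
print(needs_calibration(3.1, 3.0))  # False: ~3% discrepancy
```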

Slide 23

INC & MMM // Calibration of MMM via incrementality tests
[Planning for advanced stage]

Frequentist MMM (only allows calibration after the fact):
1. Calibration multiplier = incrementality test iROAS / MMM ROAS (as seen on the previous pages)
2. Rule out the models whose MMM ROAS is least similar to the incrementality test iROAS (as seen on the previous pages)

Bayesian MMM (allows incorporating priors about the effectiveness of media channels): after an incrementality experiment, the MMM return curve can be calibrated.

Slide 24

Best practices for incrementality experiments

Slide 25

Incrementality // Checklist for experiment design

1. An experiment should have a clear hypothesis, based on evidence from:
   a. Attribution or MMM results
   b. Industry research
2. An experiment should have comparable KPIs; for instance, it is important to know that:
   a. The amount of sales in attribution depends on the attribution model, the lookback window, etc.
   b. The amount of sales in MMM requires 2-3 years of historical sales
   c. The amount of sales in incrementality tests depends on the chosen methodology (Conversion Lift, geo experiments, etc.)
3. An experiment should be designed with a clear methodology and objective in mind. For example:
   a. Conversion Lift based on geography (in the GAds UI) is optimal for calibration
   b. Conversion Lift based on users (in the GAds UI) is the least comparable across MEM tools
   c. Geo experiments (open-source code) use 1st-party data, allowing for any comparison, but are resource-intensive
4. An experiment should have a comparable scope; in other terms, there should be parity between the scope of the experiment and the scope of the corresponding attribution model or MMM.

Slide 26

Incrementality // When not to run an incrementality test

1. Strive for simplicity: the business question can be answered with a simple analysis or a pre-post test
2. An A/B experiment is more adequate: A/B experiments are better suited for testing variations
3. Awareness is the marketing goal: incrementality tests are based on short-term sales, which is not enough for measuring awareness
4. There are tech limitations: TV or OOH can't easily be split by region
5. It is not statistically feasible: the amount of sales is too low to get significant results

Pre-post vs. optimisation vs. incrementality (available in the Appendix)

Slide 27

Brand measurement within the framework

Slide 28

Brand // Understanding its value

1. Decide the KPI and its corresponding target to track brand performance, based on:
   a. Industry research
   b. Your own ratios between brand KPIs and their revenue impact
2. Define the measurement tools to track KPIs, for example:
   a. Actively collected data:
      i. 3rd-party brand trackers
      ii. Brand Lift surveys
   b. Observed data: Share of Search
   c. Full-funnel MMM with nested brand-equity MMM (a brand KPI is used to add brand awareness to a traditional MMM)

Slide 29

Brand // Planning for brand KPI outcomes

1. Decide the overall budget mix, balancing the brand and performance marketing portfolios
2. Plan investment moments using insights from Share of Search; peers and seasonality should be considered in planning
3. Connect the baseline to the brand performance growth target to set budgets; in other terms, investments should be calculated to reach the goals
4. Use the insights from full-funnel MMM

Example: in Q4 the brand's Share of Search loses strength, so the investment plan should consider an increase in budget.

Slide 30

Brand // Optimise for brand KPI outcomes

1. [High priority] Boost creatives, as they are responsible for at least 50% of the average sales effect
2. [Medium priority] Media tactics determine 36% of the average sales effect
3. [Low priority] Brand associations and relevance account for 15% of the average sales effect

Slide 31

Thank you!

Slide 32

And please, take care of your mental health