Slide 1

Product Experimentation Workshop
Workshop Kickoff + Overview

Slide 2

Today’s Agenda
1. Introductions + Getting Started
2. Hands-On Case Study: Existing Experiment
3. Hands-On Lab: Creating An Experiment
4. What’s Possible When Experimentation is Built-In?
5. AMA with Split Data Scientist Lizzie Eardley

Slide 3

Introductions
● Dave Karow, Continuous Delivery Evangelist
● Henry Jewkes, Experimentation Architect
● Lizzie Eardley, Data Scientist + Experimentation PM

Slide 4

Workshop Housekeeping
Diving in together

Slide 5

Zoom Etiquette
● Mute your microphone when not speaking
● Make sure your name is displayed as your Zoom username
● Questions and comments are encouraged; we’ll stop often, but feel free to raise a hand in Zoom, DM an instructor or assistant, or simply speak up!

Slide 6

Get ready
● Follow along in Zoom

Slide 7

Get ready
● Follow along in Zoom
● Review steps in the handout

Slide 8

Get ready
● Follow along in Zoom
● Review steps in the handout
● Ask questions in Slack or Zoom

Slide 9

Get ready
● Follow along in Zoom
● Review steps in the handout
● Ask questions in Slack or Zoom
● Use Zoom reactions

Slide 10

Get ready
● Follow along in Zoom
● Review steps in the handout
● Ask questions in Slack or Zoom
● Use Zoom reactions
● Before the second Hands-On session, find your personal lab credentials in Slack (workshop+demoorg-***@split.io)

Slide 11

Let’s Get Started!

Slide 12

Why We Need Experimentation
New Release → Metrics Change
“Can’t we just change things and monitor what happens?”

Slide 13

Problem: Separating Signal From Noise
New Release → Metrics Change ← Everything else in the world:
● Product changes
● Marketing campaigns
● Global Pandemics
● Nice Weather

Slide 14

Solution: Cancel Out External Influence With Experimentation
(Think noise cancelling headphones, but for your metrics)
Control: 50% | Treatment: 50%
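
Why does a random 50/50 split cancel the noise? Because assignment is random with respect to everything else in the world, external influences land evenly on both groups and subtract out when the groups are compared. Assignment is also deterministic per user, so each user keeps seeing the same treatment. A generic illustration of deterministic bucketing in TypeScript (this is not Split’s actual hashing algorithm):

```typescript
import { createHash } from 'crypto';

// Deterministically assign a user to control or treatment by hashing
// their key into one of 100 buckets. Same key -> same bucket, always.
function assignTreatment(userKey: string): 'control' | 'treatment' {
  const digest = createHash('sha256').update(userKey).digest();
  const bucket = digest.readUInt32BE(0) % 100; // stable bucket in [0, 100)
  return bucket < 50 ? 'control' : 'treatment';
}
```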

Slide 15

Hands-On Case Study Session
Hotel Booking Travel Site

Slide 16

Agenda
▶ Scenario Set-Up
  ▶ Your Role and Current Focus
  ▶ One of Your Experiments
▶ Hands-On Tour
  ▶ Your Website: Split Reservations
  ▶ Split: Your Experiment
  ▶ Split: Metrics and Events
▶ Debrief

Slide 17

Scenario Setup
▶ Your Role as PM, Bookings:
  ▶ Grow User Engagement and Revenue
▶ Your Current Focus:
  ▶ Bookings Per User
  ▶ Bookings Per Platinum User
  ▶ Average Price Paid Per User

Slide 18

One of Your Experiments
What If We Show More Info By Default?

Slide 19

Let’s Find Out!
Time To Go Hands-On

Slide 20

Open Two Browser Tabs:
● www.splitreservations.com
● app.split.io

Slide 21

1. Log Out of Split (if currently logged in)
2. Log in to Split as train@sunflower.com
   ● Login Page: app.split.io/login
   ● Email: train@sunflower.com
   ● Password: Sunflower1!

Slide 22

Events
▶ Event = evidence of a user behavior or system response
▶ Something a user did or experienced along their journey
▶ Not always a user action; it can be a system response too:
  ▶ Errors
  ▶ Response time
  ▶ Number of rows returned
▶ Ideally, you already have event telemetry. If so, Split lets you harvest it.
▶ If you need to capture new events, you can create them via our SDK or API (sketched below)
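
As a sketch of capturing a new event with Split’s JavaScript SDK (shown in TypeScript): the SDK key, traffic key, event name, and properties below are placeholders, not values from this workshop.

```typescript
import { SplitFactory } from '@splitsoftware/splitio';

// Initialize the SDK for one user (the traffic key).
const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_SDK_KEY', // placeholder
    key: 'user-123',                  // traffic key: UUID, account ID, etc.
  },
});
const client = factory.client();

client.on(client.Event.SDK_READY, () => {
  // track(trafficType, eventType, value?, properties?)
  client.track('user', 'booking_completed', 249.99, { platinum: true });
});
```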

Slide 23

Back to Split
Let’s Look At An Event

Slide 24

Debrief
What You Saw

Slide 25

Case Study Debrief 1/2
▶ What You Saw
  ▶ Split: a feature flag in your code, gatekeeping code execution flow into one or more treatments (i.e. code paths); see the sketch below
  ▶ Targeting rules: the per-split rules which determine which treatment a user is routed to at runtime
  ▶ Events: collected data points containing evidence of user behaviors or system responses; inspect/debug them with live tail
  ▶ Metric Definition: the formula for deriving a metric from a series of events
  ▶ Metrics Impact: the per-split dashboard showing the statistically significant differences in user and system behavior between treatments
  ▶ Metric Trends + Details: a “drill-down” into the details behind a single metric on a split’s dashboard
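
To make the first bullet concrete, here is a minimal sketch of a flag gatekeeping two code paths, reusing the `client` from the earlier SDK example; the flag name and render functions are hypothetical:

```typescript
// Hypothetical UI handlers for the two code paths.
function renderExpandedDetails() { /* show full details by default */ }
function renderCollapsedDetails() { /* show summary, expand on click */ }

// Route this user down one treatment (code path) for the flag.
const treatment = client.getTreatment('show_more_info_by_default');

if (treatment === 'on') {
  renderExpandedDetails();  // treatment code path
} else {
  // Covers 'off' and the 'control' value returned when evaluation fails.
  renderCollapsedDetails(); // control code path
}
```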

Slide 26

Case Study Debrief 2/2
▶ The Data Attribution Model Behind What You Saw
  ▶ An Impression is the record of a Treatment being served to a specific user (who got what treatment, when, and why).
  ▶ The user got that Impression because they passed through a Split which routed them to a specific Treatment using Targeting rules.
  ▶ An Event is a data point containing evidence of a user behavior or system response at a point in time. Events are not specific to one Split: like Metrics, they are global and automatically calculated for every Split.
  ▶ A Metric is calculated per user*, based on Impressions and Events seen for that user, using the formula constructed in a Metric Definition.
  ▶ Metrics Impact is the statistical result of comparing the distributions of Metric values across different Treatments in a Split.
*Per User, per Account, or per any other Traffic Type
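
A conceptual sketch of that attribution flow, with hypothetical data shapes and a hypothetical metric; it mirrors the description above but is not Split’s implementation:

```typescript
interface Impression { key: string; treatment: string; time: number; }
interface TrackedEvent { key: string; eventType: string; value: number; time: number; }

// For each user: find their first impression, attribute the matching events
// seen afterwards, and compute one metric value labeled by treatment.
function metricPerUser(
  impressions: Impression[],
  events: TrackedEvent[],
): Map<string, { treatment: string; value: number }> {
  const first = new Map<string, Impression>();
  for (const imp of impressions) {
    const seen = first.get(imp.key);
    if (!seen || imp.time < seen.time) first.set(imp.key, imp);
  }
  const perUser = new Map<string, { treatment: string; value: number }>();
  for (const [key, imp] of first) {
    const total = events
      .filter((e) => e.key === key && e.eventType === 'booking_value' && e.time >= imp.time)
      .reduce((sum, e) => sum + e.value, 0);
    perUser.set(key, { treatment: imp.treatment, value: total });
  }
  return perUser; // one value per user; compare distributions across treatments
}
```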

Slide 27

Visual Debrief: Targeting
[Diagram: Your App + SDK · Targeting Rules · User Attributes (age, account balance) · Feature Flags and Targeting · Data Layer & Analytics · Management Console]

Slide 28

Visual Debrief: Impressions
[Diagram: Your App + SDK · Feature Flags and Targeting · Data Layer & Analytics · Management Console · Impression Data flowing out of the SDK]
Impression Data:
● Traffic Key (UUID, account ID, etc)
● Split name
● Version (Treatment) of split
● Rule used
● Timestamp
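
The impression payload above, sketched as a TypeScript shape (field names are illustrative; see Split’s docs for the exact schema):

```typescript
interface ImpressionData {
  trafficKey: string; // UUID, account ID, etc.
  splitName: string;  // which feature flag was evaluated
  treatment: string;  // which version of the split was served
  ruleUsed: string;   // the targeting rule that matched
  timestamp: number;  // when the treatment was served
}
```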

Slide 29

Visual Debrief: Events
[Diagram: Your App + SDK · Feature Flags and Targeting · Data Layer & Analytics · Management Console · Event Data flowing in]
Event Data:
● Via SDK, Split API, or packaged integration
  ○ Traffic Key
  ○ Event Type
  ○ Properties
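
And the event payload, as an illustrative shape; whichever route it arrives by (SDK, Split API, or packaged integration), the same fields identify it:

```typescript
interface EventData {
  trafficKey: string; // same key as impressions, which is what enables attribution
  eventType: string;  // e.g. 'booking_completed' (hypothetical name)
  properties?: Record<string, string | number | boolean>; // optional metadata
}
```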

Slide 30

Visual Debrief: Metrics Impact (Attribution)
Impressions + Metrics ⇒ Attribution
[Diagram: timeline of Impression Events and Metric Events joined per user for attribution]

Slide 31

Hands-On Experiment Lab
Onboarding Metrics

Slide 32

Log Out from the Case Study

Slide 33

Log in to Lab
● https://app.split.io/login
● Credentials shared in Slack
● Message us if you need help
  ○ Missing credentials
  ○ Failed login
  ○ Loading spinner

Slide 34

Break Time
Hands-On Lab is Next!

Slide 35

Let’s run an experiment!
Our travel site has a problem with onboarding

Slide 36

Problem
▶ Losing customers during the onboarding process
▶ Customer feedback is that it takes too long
▶ Internally, we need more visibility

Slide 37

Current Onboarding
Searching → Trip Planning → Comparisons → History report

Slide 39

Removed
● Travel history report
  ○ Showed where you have visited
  ○ Cool feature!
  ○ For power users

Slide 40

Removed
● Travel history report
  ○ Showed where you have visited
  ○ Cool feature!
  ○ For power users
Added
● New user survey
● Tracking
  ○ Time to completion
  ○ Progress through onboarding
  ○ Product interactions

Slide 41

From Feature To Experiment

Slide 42

Hypothesis
▶ Removing reporting from onboarding will reduce onboarding time and increase the number of users who complete onboarding

Slide 43

Problem
▶ Losing customers during the onboarding process
▶ Customer feedback is that it takes too long
Action
▶ Remove reporting from onboarding process
Hypothesis
▶ Removing reporting from onboarding will reduce onboarding time and increase the number of users who complete onboarding

Slide 44

Metrics Planning
https://miro.com/app/board/o9J_lQWrXsA=/

Slide 45

Time to take a look!

Slide 46

Break Time
“What’s Possible When Experimentation is Built-In?” is Next!

Slide 47

What’s Possible When Experimentation is Built-In?
Uptime, Safety, Flow, and a Culture of Continuous Learning

Slide 48

What changes when the lift to do experimentation is small?
● What if automated data science could help you move faster, more easily, and more safely, instead of requiring “prioritization” to get coverage?
● What happens when you escape the limitation of impact analysis being performed on “critical” releases only?

Slide 49

Do you do this each time your team releases to production?

Slide 50

Can you remember a release night that went like this?

Slide 51

How You Release Matters (Progressive Delivery Techniques)
Approaches compared: Blue/Green Deployment · Canary Release (container based) · Feature Flags · Feature Flags + Experimentation
Benefits rated (shown as Harvey balls): Avoid Downtime · Limit The Blast Radius · Limit WIP / Achieve Flow · Learn During The Process
Harvey Balls by Sschulte at English Wikipedia [CC BY-SA (https://creativecommons.org/licenses/by-sa/3.0)]

Slide 52

Problem: Detecting Early Signs of Trouble
Feature Exposed to 100% of users

Slide 53

Problem: Detecting Early Signs of Trouble
● Feature Exposed to 100% of users
● Feature Enabled for 5% of users

Slide 54

Solution: Monitoring Guardrail Metrics Automatically
Treatment: 5% | Control: 95%

Slide 55

Feature Flags and Experimentation: A Layered Approach to Productivity and Psychological Safety
Decouple Deploy From Release With Feature Flags
● Incremental Feature Development for Flow
● Testing In Production
● Kill Switch (big red button; sketched below)
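
A sketch of the kill-switch pattern, again reusing the `client` from the earlier example; the flag name and handlers are hypothetical. Because the SDK picks up rule changes at runtime, turning the flag off in the console disables the new path without a redeploy:

```typescript
// Hypothetical handlers for the new and legacy paths.
function renderNewCheckout() { /* risky new flow */ }
function renderLegacyCheckout() { /* proven fallback flow */ }

// Guard the risky path behind a flag that doubles as a kill switch.
function renderCheckout() {
  if (client.getTreatment('new_checkout_flow') === 'on') {
    renderNewCheckout();    // can be shut off instantly from the console
  } else {
    renderLegacyCheckout(); // safe fallback
  }
}
```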

Slide 56

Feature Flags and Experimentation: A Layered Approach to Productivity and Psychological Safety
Decouple Deploy From Release With Feature Flags
Automate Guardrails
● Alert on Exception / Performance Early In Rollout
● “Limit The Blast Radius” w/o Manual Heroics

Slide 57

Feature Flags and Experimentation: A Layered Approach to Productivity and Psychological Safety
Decouple Deploy From Release With Feature Flags
Automate Guardrails
Measure Release Impact
● Iteration w/o Measurement = Feature Factory 😡
● Direct Evidence of Our Efforts → Pride 😎

Slide 58

Feature Flags and Experimentation: A Layered Approach to Productivity and Psychological Safety
Decouple Deploy From Release With Feature Flags
Automate Guardrails
Measure Release Impact
● Take Bigger Risks, Safely
● Learn Faster With Less Investment
  ○ Dynamic Config
  ○ Painted Door Test to Learn (A/B Test)

Slide 59

What Software Delivery Looks Like When Experimentation is Built-In
1. DEPLOY: Code deployed, no exposure
2. ERROR MITIGATION: 0-50% ramp, identify bugs/crashes
3. MEASURE: Maximum power ramp, understand impact
4. SCALE MITIGATION: 50-100% ramp, identify scaling issues
5. RELEASE: Complete rollout

Slide 60

Break Time
AMA With Split Data Scientist Lizzie Eardley is Next!

Slide 61

Thank you