
Navigating the tool jungle - How to pick the right tools for your hypothesis experiments

You've lined up everything for your experiment: the type of experiment, hypotheses & success criteria, team & time. But how do you get started? How do you pick the right tools to actually build, run, and evaluate the experiment?

In this talk, I help you pick the best tools for your user research and prototyping experiments. I'll share what we learned from running experiments and from going through the test-fail-learn cycle with our tool choices numerous times.

Our three key learnings are:
- Iterate within your experiment
- User-test your user-testing tools
- The simpler, the better - and faster

For each learning I provide a real-life case with plenty of insights.

Thomas Leitermann

May 24, 2019

Transcript

1. Navigating the tool jungle - How to pick the right tools for your experiments. Thomas Leitermann, Product Manager. MTP Engage Hamburg, 24.05.2019. @thomasleiterman
2. (image-only slide, no transcript text)

3. Set target & success criteria (test card © Business Model Foundry AG). Example: a "Read your plants" app.
  - Hypothesis (assumption): Users appreciate our value proposition so much, they are willing to pay for it.
  - Experiment: Insert a paywall after a few example questions.
  - Metric (measurement): The conversion rate to purchasing the whole course.
  - Criteria (success): We are super happy with 3%, okay with 1.3%, disappointed with 0.5%.
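To make the test card format concrete, here is a minimal sketch in Python (my illustration, not from the talk): a `TestCard` holding the slide's hypothesis, experiment, metric, and success thresholds, plus an `evaluate` helper that maps a measured conversion rate to the slide's three verdicts. The class and helper names, and the purchase numbers in the usage line, are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCard:
    hypothesis: str
    experiment: str
    metric: str
    happy_at: float  # "super happy" at or above this conversion rate
    okay_at: float   # "okay" at or above this rate; below it: "disappointed"

    def evaluate(self, conversion_rate: float) -> str:
        """Map a measured conversion rate to the slide's verdicts."""
        if conversion_rate >= self.happy_at:
            return "super happy"
        if conversion_rate >= self.okay_at:
            return "okay"
        return "disappointed"

# The paywall experiment from the slide, with its 3% / 1.3% / 0.5% thresholds.
paywall = TestCard(
    hypothesis="Users appreciate our value proposition so much, "
               "they are willing to pay for it.",
    experiment="Insert a paywall after a few example questions.",
    metric="Conversion rate to purchasing the whole course.",
    happy_at=0.03,
    okay_at=0.013,
)

# Made-up numbers: 14 purchases out of 900 paywall views (~1.6%).
print(paywall.evaluate(14 / 900))  # -> "okay"
```
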
4. Evaluative or generative experiment? (Inspired by "The Real Startup Book", www.realstartupbook.com)
  - Generative / Market: question "Customer"; methods: discovery interview, ethnography, surveys & focus groups.
  - Generative / Product: question "Problem"; methods: solution interview, ethnography, concierge test, demo pitch.
  - Evaluative / Market: question "Revenue"; methods: landing page test, sales pitch & pre-sales, fake door, 5-second test, flyer.
  - Evaluative / Product: question "Solution"; methods: prototypes (paper, clickable mockup), Wizard of Oz test, analytics, usability test, finished product.
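The matrix above can also be read as a lookup table: given whether an experiment is generative or evaluative, and whether it targets the market or the product side, it suggests the question to answer and candidate methods. A minimal sketch, assuming nothing beyond the slide's contents (the `EXPERIMENT_MATRIX` and `suggest_methods` names are hypothetical):

```python
# Encoding of the 2x2 matrix above; each cell holds the question
# that experiment quadrant answers and the slide's candidate methods.
EXPERIMENT_MATRIX = {
    ("generative", "market"): ("Customer",
        ["Discovery interview", "Ethnography", "Surveys, focus groups"]),
    ("generative", "product"): ("Problem",
        ["Solution interview", "Ethnography", "Concierge test", "Demo pitch"]),
    ("evaluative", "market"): ("Revenue",
        ["Landing page test", "Sales pitch & pre-sales", "Fake door",
         "5-second test", "Flyer"]),
    ("evaluative", "product"): ("Solution",
        ["Prototypes (paper, clickable mockup)", "Wizard of Oz test",
         "Analytics", "Usability test", "Finished product"]),
}

def suggest_methods(kind: str, focus: str) -> list[str]:
    """Return candidate methods for an experiment kind and focus."""
    question, methods = EXPERIMENT_MATRIX[(kind, focus)]
    print(f"{kind}/{focus} -> question: {question}")
    return methods

for method in suggest_methods("evaluative", "product"):
    print(" -", method)
```
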
5. Strength of evidence vs. time-to-insight (derived from A. Osterwalder, Strategyzer: https://blog.strategyzer.com/posts/2017/3/21/how-strong-is-your-innovation-evidence). Chart plotting experiment types by strength of evidence (weak to strong) against the time needed to set up and conduct the experiment (short to long): landing page test, Wizard of Oz, concierge, pre-sales, validation interview, discovery interview, crowdfunding.
6. Fulfilled all prerequisites?
  - Test card: hypothesis, metric, success criteria
  - Matching type of experiment
  - Budget (team, time, money)
7. Iterate within the experiment:
  - 18 - 9 o'clock: advertisement live
  - 9 - 11 o'clock: analysis of replays & data
  - 11 - 12 o'clock: review & planning
  - 12 - 17 o'clock: write, design and implement changes & extensions
  - 17 - 18 o'clock: QA & bug fixing
8. Learnings to navigate the tool jungle
  - Phrase a revisable hypothesis
  - Find the type of experiment
  - Acquire budget
  › You are not the user
  › Automate user research recruitment
  › The simpler, the better - and faster
  › User-test your user-testing tools
  › Iterate within the experiment