
The Data Errors we Make by Sean Taylor at Big Data Spain 2017

Where statistical errors come from, how they cause us to make bad decisions, and what to do about it.

https://www.bigdataspain.org/2017/talk/the-data-errors-we-make

Big Data Spain 2017
16th - 17th November, Kinépolis Madrid


Transcript

  1. About Me
     • 5 years at Facebook as a Research Scientist
     • PhD in Information Systems from New York University
     • Research interests: field experiments, forecasting, sports and sports fans
     https://facebook.github.io/prophet/
  2. [Diagram: Data, Algorithm, and Human Choices feed into an Estimate, which leads to a Decision and then an Outcome. The gap between the Estimate and the Truth is statistical error; the gap between the Decision and Outcome and the Optimal Decision and Optimal Outcome is practical error.]
  3. The classic hypothesis-testing decision matrix:

                                           H0 is True (Product is Bad)     H1 is True (Product is Good)
     Accept Null (Don't ship product)      Right decision                  Type II Error (wrong decision)
     Reject Null (Ship product)            Type I Error (wrong decision)   Right decision
  4. A Receiver Operating Characteristic (ROC) curve tells us the Type I and Type II error rates. [Figure: ROC curve; x-axis: Type I error rate, y-axis: 1 - Type II error rate]
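A minimal Python sketch of that reading of the curve (the scores are simulated and numpy/scikit-learn are my assumptions, neither is named in the talk):

```python
# Read Type I and Type II error rates off an ROC curve built
# from simulated classifier scores (all data here is made up).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Scores for 1,000 bad products (H0) and 1,000 good products (H1);
# good products tend to score higher.
scores_h0 = rng.normal(0.0, 1.0, size=1000)
scores_h1 = rng.normal(1.5, 1.0, size=1000)
y_true = np.concatenate([np.zeros(1000), np.ones(1000)])
y_score = np.concatenate([scores_h0, scores_h1])

# fpr is the Type I error rate; tpr is 1 - Type II error rate (the power).
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Each threshold is one point on the curve: a Type I / Type II tradeoff.
for i in np.linspace(0, len(fpr) - 1, 6).astype(int):
    print(f"Type I rate: {fpr[i]:.2f}   Type II rate: {1 - tpr[i]:.2f}")
```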
  5. Outline
     1. Refinements to the Type I/II error model
     2. A simple causal model of how we make errors
     3. What we can effectively do about errors
  6. Refinement 1: Assign Costs to Errors

                                           H0 is True (Product is Bad)     H1 is True (Product is Good)
     Accept Null (Don't ship product)      Right decision                  Type II Error (wrong decision)
     Reject Null (Ship product)            Type I Error (wrong decision)   Right decision
  7. Refinement 1: Assign Costs to Errors

                                           H0 is True (Product is Bad)     H1 is True (Product is Good)
     Accept Null (Don't ship product)      0                               -100
     Reject Null (Ship product)            -200                            +100
  8. Example: Expected value of a product launch
     P(Type I) is 1% and P(Type II) is 20%, with P(good) = 0.5.
     P(good) * (100 * 0.80 + -100 * 0.20) + (1 - P(good)) * (-200 * 0.01 + 0 * 0.99)
     = (0.5 * 60) + (0.5 * -2)
     = 30 - 1 = 29
     (A code version of this calculation follows Example 2 below.)
  9. Allowing more Type I errors lowers the Type II error rate. The optimal choice depends on the payoffs and on P(H1).
  10. Example 2: Expected value of a product launch
      P(Type I) is 5% and P(Type II) is 7%, again with P(good) = 0.5.
      P(good) * (100 * 0.93 + -100 * 0.07) + (1 - P(good)) * (-200 * 0.05 + 0 * 0.95)
      = (0.5 * 86) + (0.5 * -10)
      = 43 - 5 = 38 > 29
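Both examples run through the same formula; here is a small Python version (the payoffs, error rates, and P(good) = 0.5 come from the slides, while the function name is my own):

```python
# Expected value of a ship/don't-ship decision under Type I/II error rates.
# Payoffs from the slides: ship a good product +100, don't ship a good
# product -100, ship a bad product -200, don't ship a bad product 0.

def expected_value(p_good, p_type1, p_type2,
                   ship_good=100, miss_good=-100, ship_bad=-200, skip_bad=0):
    good_term = p_good * (ship_good * (1 - p_type2) + miss_good * p_type2)
    bad_term = (1 - p_good) * (ship_bad * p_type1 + skip_bad * (1 - p_type1))
    return good_term + bad_term

# Example 1: P(Type I) = 1%, P(Type II) = 20%  ->  29
print(expected_value(p_good=0.5, p_type1=0.01, p_type2=0.20))
# Example 2: P(Type I) = 5%, P(Type II) = 7%   ->  38
print(expected_value(p_good=0.5, p_type1=0.05, p_type2=0.07))
```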
  11. Refinement 2: Opportunity Cost
      Key idea: if we devote resources to minimizing Type I and II errors for one problem, we will have fewer resources for other problems.
      • Few organizations make a single decision; we usually make many of them.
      • Acquiring more data and investing more time into a problem has diminishing marginal returns.
  12. Refinement 3: Mosteller’s Type III Errors
      Type III error: “correctly rejecting the null hypothesis for the wrong reason” -- Frederick Mosteller
      More clearly: the process you used worked this time, but is unlikely to continue working in the future.
  13. Good Process vs. Good Outcome

                      Good Outcome       Bad Outcome
      Good Process    Deserved Success   Bad Break
      Bad Process     Dumb Luck          Poetic Justice
  14. Refinement 4: Kimball’s Type III Errors
      Type III error: “the error committed by giving the right answer to the wrong problem” -- Allyn W. Kimball
  15. Common Pattern
      • A high volume of a cheap, easy-to-measure “surrogate” (e.g. steps, clicks)
      • The surrogate is correlated with the true measurement of interest (e.g. overall health, purchase intention)
      • Key question: the sign and magnitude of the “interpretation bias” (a toy simulation follows below)
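To make the idea concrete, here is a toy simulation of my own construction (not from the talk; the lift sizes are invented) where the surrogate effect overstates the true effect:

```python
# Interpretation bias, simulated: a treatment moves the cheap surrogate
# (clicks) much more than it moves the true outcome of interest (purchases),
# so reading the surrogate effect as the true effect overstates the impact.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
treated = rng.integers(0, 2, size=n)              # random assignment

# Hypothetical data-generating process: a large lift on clicks,
# only a small lift on purchases.
clicks = 5 + 2.0 * treated + rng.normal(0, 3, size=n)
purchases = 1 + 0.3 * treated + 0.1 * clicks + rng.normal(0, 1, size=n)

surrogate_effect = clicks[treated == 1].mean() - clicks[treated == 0].mean()
true_effect = purchases[treated == 1].mean() - purchases[treated == 0].mean()

print(f"effect on surrogate (clicks):  {surrogate_effect:.2f}")
print(f"effect on truth (purchases):   {true_effect:.2f}")
# Interpretation bias: how far off you are treating the surrogate as truth.
print(f"ratio surrogate/truth:         {surrogate_effect / true_effect:.2f}")
```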
  16. Cause 2: Algorithms
      • The model/procedure we choose primarily determines which side of the bias-variance tradeoff we land on.
      • Common mistakes:
        • Using a model that’s too complex for the data.
        • Focusing too much on algorithms instead of gathering the right data or ensuring correctness.
  17. Optimizing models
      Reducing bias:
      • Choose a more flexible model.
      Reducing variance:
      • Choose a less flexible model.
      • Get more data.
      (A small illustration of this flexibility knob follows below.)
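A toy illustration, using my own simulated example rather than anything from the talk: fit polynomials of increasing degree to noisy data and measure test error. Too little flexibility underfits (bias); too much overfits the noise (variance).

```python
# Bias-variance tradeoff via polynomial degree on simulated data.
import numpy as np

rng = np.random.default_rng(2)

def avg_test_mse(degree, n_train=30, n_trials=200):
    errors = []
    for _ in range(n_trials):
        x = rng.uniform(-1, 1, size=n_train)
        y = np.sin(3 * x) + rng.normal(0, 0.3, size=n_train)   # signal + noise
        coefs = np.polyfit(x, y, degree)
        x_test = rng.uniform(-1, 1, size=200)
        y_test = np.sin(3 * x_test) + rng.normal(0, 0.3, size=200)
        errors.append(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))
    return np.mean(errors)

# With only 30 noisy points, very low and very high degrees both tend
# to do worse on held-out data than a moderate one.
for degree in [1, 3, 9]:
    print(f"degree {degree}: average test MSE = {avg_test_mse(degree):.3f}")
```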
  18. Tree Induction vs. Logistic Regression: A Learning-Curve Analysis (Perlich et al., 2003)
      • Logistic regression is better for smaller training sets, and tree induction for larger data sets.
      • Logistic regression is usually better when the signal-to-noise ratio is lower.
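A sketch in the spirit of that comparison, on a synthetic noisy dataset of my own (not the paper's data); where the curves cross depends on the data and its signal-to-noise ratio:

```python
# Learning curves: logistic regression vs. a decision tree as training
# data grows, evaluated on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y randomly flips 20% of labels, i.e. a lowish signal-to-noise ratio.
X, y = make_classification(n_samples=20_000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

for n in [100, 1_000, 10_000]:
    lr = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    tree = DecisionTreeClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n:6d}  logistic acc: {lr.score(X_test, y_test):.3f}"
          f"  tree acc: {tree.score(X_test, y_test):.3f}")
```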
  19. Cause 3: Human Choices
      “Many analysts, one dataset: Making transparent how variations in analytic choices affect results” (Silberzahn et al. 2017)
      • 29 teams involving 61 analysts used the same dataset to address the same research question:
      • Are soccer ⚽ referees more likely to give red cards to dark-skin-toned players than to light-skin-toned players?
  20. • Effect sizes ranged from 0.89 to 2.93 in odds-ratio units
      • 20 teams (69%) found a statistically significant positive effect
      • 9 teams (31%) observed a nonsignificant relationship
  21. Ways Forward
      • Prevent errors:
        • opinionated analysis development
        • test-driven data analysis (a tiny example follows below)
      • Be honest about uncertainty:
        • estimate uncertainty using the bootstrap
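One tiny, hypothetical flavor of test-driven data analysis (the checks and column names are my own invention): assert the properties your analysis relies on before estimating anything from the data.

```python
import pandas as pd

def check_experiment_data(df: pd.DataFrame) -> pd.DataFrame:
    assert df["user_id"].is_unique, "duplicate users inflate the sample size"
    assert df["outcome"].notna().all(), "missing outcomes can bias estimates"
    assert set(df["group"]) <= {"control", "treatment"}, "unknown group label"
    # Randomization sanity check: assignment should be roughly 50/50.
    share_treated = (df["group"] == "treatment").mean()
    assert abs(share_treated - 0.5) < 0.05, "unbalanced assignment"
    return df

df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "group": ["control", "treatment", "control", "treatment"],
    "outcome": [0.1, 0.4, 0.2, 0.5],
})
check_experiment_data(df)   # raises AssertionError if the data looks suspect
```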
  22. The Bootstrap
      [Diagram: from All Your Data, generate 500 random sub-samples R1, R2, …, R500; compute statistics or estimate model parameters s1, s2, …, s500. Histogram: x-axis Statistic, y-axis Count.]
      You get a distribution over the statistic of interest (usually the prediction):
      • take the mean as the estimate
      • CIs == 95% quantiles
      • SEs == standard deviation
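A minimal Python sketch following the slide (the data here is simulated and the statistic is a simple mean):

```python
# The bootstrap: resample the data 500 times, recompute the statistic,
# and read the SE and 95% CI off the resulting distribution.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=1.0, scale=2.0, size=300)   # stand-in for "all your data"

n_resamples = 500
stats = np.empty(n_resamples)
for i in range(n_resamples):
    resample = rng.choice(data, size=len(data), replace=True)
    stats[i] = resample.mean()                    # any statistic works here

print(f"estimate (mean of the bootstrap statistics): {stats.mean():.3f}")
print(f"SE (standard deviation of the statistics):   {stats.std(ddof=1):.3f}")
lo, hi = np.quantile(stats, [0.025, 0.975])
print(f"95% CI (2.5% and 97.5% quantiles):           [{lo:.3f}, {hi:.3f}]")
```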
  23. Summary
      Think about errors!
      • What kind of errors are we making?
      • Where did they come from?
      Prevent errors!
      • Use a reasonable and reproducible process.
      • Test your analysis as you test your code.
      Estimate uncertainty!
      • Models that estimate uncertainty are more useful than those that don’t.
      • They facilitate better learning and experimentation.