
Data Science and Decisions 2022: Week 5

Will Lowe
April 06, 2022

  1. RECAP

    We've looked at normative theories of decision making
    → How should uncertainty be represented? With probability
    → How should consequences be represented? With preferences and utility functions
    These theories do not care what kind of actor you are
    → Human
    → Human plus machine
    → Completely machine
    → Organization
    → Frontal cortex
    But it has to be 'implemented' differently in each
  2. PLAN

    Is it hopeless to expect humans and organisations to realise all this fine theory? So. many. biases!
    Case studies:
    → Selection
    → Measurement stability
    → Three case studies in probability estimation
    → Risk and uncertainty
    → Thinking fast, slow, and in groups
    Bayesians all the way down?
  3. THE VERY IDEA

    Logic, probability theory, utility, etc.
    → the common sense of 'educated people'
    → "common sense, reduced to calculation" (Condorcet, Poisson, Laplace)
    → "Man is not a conservative Bayesian. He is not a Bayesian at all." (Kahneman & Tversky, 1972)
  4. HOW TO THINK ABOUT BIAS

    It's useful to distinguish among behaviour that
    → falsifies a theory of decision making
    → misapplies a theory of decision making
    → reveals implementation details
  5. HOW TO THINK ABOUT BIAS

    It's useful to distinguish among behaviour that
    → falsifies a theory of decision making
    → misapplies a theory of decision making
    → reveals implementation details
    Implementation details:
    → memory or processing power constraints
    → representational constraints
    → incomplete or approximate model
    Example: IEEE machine precision
  6. HOW TO THINK ABOUT COGNITIVE BIASES

    'Suboptimal' decision making behaviour is an implementation issue when it constitutes a shortcut that would not matter in the majority of decisions a system was designed or selected to make
    This perspective is naturally (though not necessarily) evolutionary, and works for other deliberate design and selection processes
    Examples: fondness for cheating, honour, heroic sacrifice, concentrated sugar
    → This is not a Panglossian view!
    → No guarantee that the 'right' decision in one environment is virtuous, healthy or even non-lethal in the current one
    Tinbergen's gulls preferred to feed things more chick-like than their chicks (ten Cate, 2009)
  7. WHAT'S IT FOR?

    Selection, e.g. of decision mechanisms, is always simultaneously
    → Selection 'for', e.g. size
    → Selection 'of', e.g. colour
    The difference is causal / counterfactual
    → If we were to repaint the balls, then they would not behave differently
    → If we were to make them smaller, then they would
    For guiding action, colour is as reliable a signal as size
    → In stable environments, this relationship may be enough without causal inference (recall Gandrud's remarks last week)
  8. CASE STUDIES IN POSTERIOR PROBABILITY

    → Base rate neglect in posterior probability estimation
    → Bias in proportion estimation
    → Posterior probability when it really matters
    Kim is shy. Is it more likely that they work in a library or in sales?
    → Sales, probably
    → Many more people work in sales than in libraries
  9. BASE RATE NEGLECT

    You take a test for a disease. What should you conclude about the probability you have the disease (D = 1) from a positive test (T = 1)?

    P(D = 1 | T = 1) = P(T = 1 | D = 1) P(D = 1) / P(T = 1)

    or alternatively the probability ratio

    P(D = 1 | T = 1) / P(D = 0 | T = 1) = [P(T = 1 | D = 1) / P(T = 1 | D = 0)] × [P(D = 1) / P(D = 0)]

    Base rates are shown in red.
  10. BASE RATE NEGLECT

    You take a test for a disease. What should you conclude about the probability you have the disease (D = 1) from a positive test (T = 1)?

    P(D = 1 | T = 1) = P(T = 1 | D = 1) P(D = 1) / P(T = 1)

    or alternatively the probability ratio

    P(D = 1 | T = 1) / P(D = 0 | T = 1) = [P(T = 1 | D = 1) / P(T = 1 | D = 0)] × [P(D = 1) / P(D = 0)]

    Base rates are shown in red.
    → These are the 'priors' that we discussed in more detail in previous weeks. We'll mostly discuss the case when they are available but ignored
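The posterior calculation on the slide can be sketched as a short function. The numeric inputs below are illustrative assumptions (the slide's actual values are not preserved in this transcript), chosen to show how a rare disease keeps the posterior low even for a good test.

```python
# Bayes' rule for a binary test, as on the slide.
# prevalence  = P(D=1), sensitivity = P(T=1|D=1), fpr = P(T=1|D=0).

def posterior(prevalence: float, sensitivity: float, fpr: float) -> float:
    """P(D=1 | T=1): probability of disease given a positive test."""
    p_positive = sensitivity * prevalence + fpr * (1 - prevalence)  # P(T=1)
    return sensitivity * prevalence / p_positive

# Illustrative numbers: 1% prevalence, 90% sensitivity, 9% false positive rate.
p = posterior(prevalence=0.01, sensitivity=0.90, fpr=0.09)
print(round(p, 3))  # ~0.092: most positive tests are false positives
```

Neglecting the base rate here means reading the 90% sensitivity as if it were the answer, when the posterior is roughly 9%.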
  11. IN MEDICAL REASONING

    → P(D = 1): 'prevalence'
    → P(T = 1 | D = 1): 'true positive rate' / 'sensitivity' / 'recall'
    → P(T = 1 | D = 0): 'false positive rate' / 'type I error rate' / 1 − 'specificity'
    For reference: [All the jargon in one place]
  12. IN MEDICAL REASONING

    → P(D = 1) = …
    → P(T = 1 | D = 1) = …
    → P(T = 1 | D = 0) = …
  13. IN MEDICAL REASONING

    → P(D = 1) = …
    → P(T = 1 | D = 1) = …
    → P(T = 1 | D = 0) = …
    Study: gynecologists (Gigerenzer et al., 2007)
    → …% thought P(D = 1 | T = 1) was between … and …
    → …% thought it was about …
  14. IN MEDICAL REASONING

    → P(D = 1) = …
    → P(T = 1 | D = 1) = …
    → P(T = 1 | D = 0) = …
    What is P(D = 1 | T = 1) actually?

    P(D = 1 | T = 1) = P(T = 1 | D = 1) P(D = 1) / [P(T = 1 | D = 1) P(D = 1) + P(T = 1 | D = 0) P(D = 0)]
  15. IN MEDICAL REASONING

    → P(D = 1) = …
    → P(T = 1 | D = 1) = …
    → P(T = 1 | D = 0) = …

    P(D = 1 | T = 1) / P(D = 0 | T = 1) = [P(T = 1 | D = 1) / P(T = 1 | D = 0)] × [P(D = 1) / P(D = 0)]
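The ratio form above is often easier to compute in your head: posterior odds = likelihood ratio × prior odds. A sketch with the same illustrative numbers as before (not the slide's actual values):

```python
# Odds form of Bayes' rule for a binary test.

def posterior_odds(prevalence: float, sensitivity: float, fpr: float) -> float:
    """P(D=1|T=1) / P(D=0|T=1) = likelihood ratio x prior odds."""
    likelihood_ratio = sensitivity / fpr        # P(T=1|D=1) / P(T=1|D=0)
    prior_odds = prevalence / (1 - prevalence)  # P(D=1) / P(D=0)
    return likelihood_ratio * prior_odds

odds = posterior_odds(0.01, 0.90, 0.09)  # LR of 10 x prior odds of ~1/99
prob = odds / (1 + odds)                 # convert odds back to a probability
```

With these inputs the test multiplies the odds by 10, but a prior odds of about 1/99 still leaves the posterior near 9%, matching the direct calculation.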
  16. QUESTION FORMAT

    Case closed? Maybe not. Gigerenzer and Hoffrage (1995) argue that

    We assume that as humans evolved, the "natural" format was frequencies as actually experienced in a series of events, rather than probabilities or percentages

    This suggests that the question can be reformulated in more ecologically valid terms to get better results
    We might think of this as sampling 'in the direction of causation'
  18. QUESTION FORMAT

    This does improve performance, though still perhaps less than you'd hope...
    Two important points:
    → Information format matters and can be used to make people more rational
    → People still find (normatively desirable) inference hard
    Not mentioned here, but you may find practically useful:
    → Gigerenzer's later research program: 'simple heuristics' – simpler techniques and models that help people be more nearly rational
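The frequency reformulation amounts to restating the same probabilities as counts in a concrete reference population. A sketch, again with illustrative numbers rather than the slide's:

```python
# Natural frequency format: translate rates into counts for a
# hypothetical population of 10,000 people.

population = 10_000
prevalence, sensitivity, fpr = 0.01, 0.90, 0.09  # illustrative values

sick = round(population * prevalence)              # 100 people have the disease
true_pos = round(sick * sensitivity)               # 90 of them test positive
false_pos = round((population - sick) * fpr)       # 891 healthy people also test positive

# The question becomes: "of the 981 people who test positive,
# how many actually have the disease?" -- 90, i.e. about 9%.
print(true_pos, true_pos + false_pos)
```

Nothing in the mathematics changes; only the presentation does, which is exactly the point of the ecological-validity argument.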
  19. ESTIMATING A PROPORTION

    In politics, perceived size is apparently more important than actual size
    Unfortunately (e.g. Landy et al., 2018):
    → People have very inaccurate estimates of subpopulation sizes
    → Non-linearly related to the true proportion
    → Biased towards …
    Recall the log odds: log [P(X) / (1 − P(X))]
  20. ESTIMATING A PROPORTION

    Alternatively, this is an application of Bayes theorem
    → Work linearly in the log odds (like we do when we fit logistic regression models!)
    → Shrink estimates towards a population mean
    → Condition on any extra information available
    Example: tree size estimation (Landy et al., 2018)
  21. ESTIMATING A PROPORTION

    Alternatively, this is an application of Bayes theorem
    → Work linearly in the log odds (like we do when we fit logistic regression models!)
    → Shrink estimates towards a population mean
    → Condition on any extra information available
    Example: proportion markers are extra information (Hollands & Dyre, 2000)
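The recipe above can be sketched in a few lines: move to the log-odds scale, take a weighted average with the population mean, and move back. The weight and the population mean below are made-up illustration values, not estimates from any study:

```python
import math

def logit(p: float) -> float:
    """Proportion -> log odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Log odds -> proportion."""
    return 1 / (1 + math.exp(-x))

def shrink(raw_estimate: float, population_mean: float, weight: float = 0.7) -> float:
    """Shrink a proportion estimate towards a population mean in log odds."""
    x = weight * logit(raw_estimate) + (1 - weight) * logit(population_mean)
    return inv_logit(x)

# A wild guess of 40% for a group that is really about 10% of the
# population gets pulled towards the base rate:
print(round(shrink(0.40, 0.10), 3))
```

Working on the log-odds scale keeps the shrunken estimate inside (0, 1) and treats proportions near 0 and 1 symmetrically, which a plain average of proportions would not.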
  22. EXPLAINING AWAY

    Physically, the brightness B of a surface is a combination of
    → surface illumination (I, larger when not in shadow)
    → intrinsic reflectance (R, larger for lighter colours)
    So what to conclude about reflectance?

    P(R | B, I) ∝ P(B | I, R) [mechanism] × P(I, R) [environment]

    → B_A = B_B = b
    → I_B < I_A because B is in shadow
    Conditional on this information
    P(R_A | B_A = b, I_A = high) < P(R_B | B_B = b, I_B = low)
  23. SUBPERSONAL BAYESIANS

    Why would I care that my brain can do this if I can't?
    → Well, maybe there's hope!
    → although we saw that aggregating rational actors into rational groups was harder than expected...
    When things really really matter, animals can get pretty optimal...
    The accuracy of 'low level' inference processes and the unreliability of 'high level' inference processes might make us wonder about another Kahneman and Tversky idea...
  24. RISK VERSUS 'UNCERTAINTY'

    Ellsberg (1961) calls decisions where probabilities are available risky and where they are not uncertain (better: taken under ignorance)
    For Bayes everything is either known or risky. But consider...
    Urn 1: … balls, some red, some black (proportions unknown)
    Urn 2: … balls, half red, half black
  25. RISK VERSUS 'UNCERTAINTY'

    Ellsberg (1961) calls decisions where probabilities are available risky and where they are not uncertain (better: taken under ignorance)
    For Bayes everything is either known or risky. But consider...
    Urn 1: … balls, some red, some black (proportions unknown)
    Urn 2: … balls, half red, half black
    and three questions (with their typical responses)
    1. a prize if red from Urn 1 vs the same prize if black from Urn 1? (mostly indifferent)
    2. a prize if red from Urn 2 vs the same prize if black from Urn 2? (mostly indifferent)
    Theory says "you must think the probabilities of Urn 1 are the probabilities of Urn 2!"
    3. a prize if red from Urn 1 vs the same prize if red from Urn 2? Most choose the urn with known proportions.
    Is this evidence that 'uncertainty' cannot be ignored?
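The tension in the three questions can be sketched numerically. The prize value is an illustrative assumption; the point is that indifference in questions 1 and 2 pins subjective P(red) to 0.5 for both urns, after which probabilities alone cannot justify a strict preference in question 3:

```python
# Expected values of red/black bets given a subjective P(red).
# Prize of 100 is an illustrative assumption.

def bet_values(p_red: float, prize: float = 100) -> tuple[float, float]:
    """(EV of betting red, EV of betting black) given P(red) = p_red."""
    return p_red * prize, (1 - p_red) * prize

# Indifference between red and black on an urn implies P(red) = 0.5:
red1, black1 = bet_values(0.5)   # Urn 1, inferred from question 1
red2, black2 = bet_values(0.5)   # Urn 2, known half-and-half composition
assert red1 == black1 and red2 == black2

# So question 3 has equal expected value either way -- the common strict
# preference for the known urn is not explained by probability alone.
assert red1 == red2
```

This is Ellsberg's point: the behaviour is consistent only if 'uncertainty' (ambiguity) carries weight beyond any single probability assignment.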
  26. FRAMING

    With no action … people will die. Choose between the following options (see Tversky & Kahneman, 1986)
    Frame 1:
    A. … people will be saved
    B. P(… saved) = …, P(nobody is saved) = …
    Frame 2:
    A. … people will die
    B. P(nobody dies) = …, P(… will die) = …
  27. FRAMING

    With no action … people will die. Choose between the following options (see Tversky & Kahneman, 1986)
    Frame 1:
    A. … people will be saved
    B. P(… saved) = …, P(nobody is saved) = …
    Frame 2:
    A. … people will die
    B. P(nobody dies) = …, P(… will die) = …
    Results:
    → Frame 1: About … prefer A over B
    → Frame 2: About … prefer B over A
    But these are the same two choices, differently described!
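The equivalence of the two frames can be checked by arithmetic. The specific numbers below (600 at risk, 200 saved, 1/3 and 2/3 chances) are the classic illustrative values from this style of problem, used here as assumptions since the slide's own numbers are not preserved in the transcript:

```python
# Two descriptions of the same pair of lotteries (illustrative numbers).

at_risk = 600

# Frame 1 (gains): expected number SAVED
saved_A = 200                        # certain option
saved_B = (1/3) * 600 + (2/3) * 0    # gamble: 1/3 chance all saved

# Frame 2 (losses): expected number who DIE
die_A = 400                          # certain option
die_B = (1/3) * 0 + (2/3) * 600      # gamble: 2/3 chance all die

# Same outcomes, differently described:
assert saved_A == saved_B == 200
assert die_A == die_B == 400
assert at_risk - saved_A == die_A
```

Since the options are outcome-identical, any reversal of preference between frames is driven purely by the gain/loss description, which is the framing effect.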
  28. PROSPECT THEORY

    People are
    → Risk averse when gains are considered
    → Risk seeking when losses are considered
  29. THE SPEED OF THOUGHT

    Two systems of thinking (Kahneman, 2011)
    → 'System 1': fast, instinctive, and emotional (where the 'heuristics and biases' come in)
    → 'System 2': slower, deliberative, and (more) logical
    Implication for decision making (very roughly stated): If the decision is important, slow down and let System 2 kick in.
    Note: there are some studies of doubtful replicability reported there [link] – mostly priming, but be aware.
  30. COGNITIVE BIASES IN GROUPS

    Why do those people believe such obvious falsehoods?
    → They've activated System 1 and not thought any more about it. They should activate System 2!
  32. COGNITIVE BIASES IN GROUPS

    Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether humans are causing global warming; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. (e.g. Kahan et al., 2012)
    Causes?
    → wishful thinking
    → social cost to disagreement
    → signalling, e.g. 'bond testing' (Zahavi, 1977)
    The dark side of ecologically valid reasoning...
  33. SUMMING UP

    Not everything that seems like a bias will make you more likely to be wrong in 'regular' situations (see, e.g. Gigerenzer & Todd, 1999)
    → 'Full rationality' can be (arbitrarily) expensive in terms of computational resources (space, time, processing power)
    → 'Bounded rationality' is a more tractable goal for finite organisms like people and organisations
    → Problems arise when tasks are 'irregular' or unusual for the situations that decision making tools have been designed or selected for
    This will (provably) always be true
    → 'No free lunch' theorems (e.g. Wolpert, 2002; Wolpert & Macready, 1997)
    Some cognitive biases are fundamentally social
    Much bias can be mitigated, but only if we understand it first...
  34. REFERENCES

    Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics.
    Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. (2007). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest.
    Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review.
    Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. Oxford University Press.
    Hollands, J. G., & Dyre, B. P. (2000). Bias in proportion judgments: The cyclical power model. Psychological Review.
    Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change.
    Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
    Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology.
  35. REFERENCES

    Landy, D., Guay, B., & Marghetis, T. (2018). Bias and ignorance in demographic perception. Psychonomic Bulletin & Review.
    Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science.
    McDermott, R., Fowler, J. H., & Smirnov, O. (2008). On the evolutionary origin of Prospect Theory preferences. The Journal of Politics.
    Sober, E. (1984). The nature of selection. Chicago University Press.
    ten Cate, C. (2009). Niko Tinbergen and the red patch on the herring gull's beak. Animal Behaviour.
    Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science.
    Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. The Journal of Business.
    Wolpert, D. H. (2002). The supervised learning no-free-lunch theorems. In R. Roy, M. Köppen, S. Ovaska, T. Furuhashi, & F. Hoffmann (Eds.), Soft Computing and Industry. Springer.
  36. REFERENCES

    Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation.
    Zahavi, A. (1977). The testing of a bond. Animal Behaviour.