
Decisions: Week 8


Will Lowe

August 31, 2021



Transcript

  1. We've looked at normative theories of decision making
     → How should uncertainty be represented? With probability
     → How should consequences be represented? With preferences and utility functions
     These theories do not care about what kind of actor you are
     → Human → Human plus machine → Completely machine → Organization → Frontal cortex
     But the theory has to be 'implemented' differently in each
  2. Is it hopeless to expect humans and organisations to realise all this fine theory?
     So. many. biases! Three case studies:
     → Risk vs uncertainty
     → Framing effects and Prospect Theory
     → Base rate neglect in medical contexts
     Bayesians all the way down? Thinking, fast, slow, and in groups
  3. The Enlightenment view (Condorcet, Poisson, Laplace):
     → Probability theory, utility, etc. is the common sense of 'educated people', or common sense "reduced to calculation"
     Tversky and Kahneman (1974):
     → "Man is not a conservative Bayesian. He is not a Bayesian at all." (Kahneman & Tversky, 1972)
     Normative and descriptive become decoupled
     → Circa the 1970s: do we need to rethink everything?
  4. It's useful to distinguish between
     → bias as behaviour that compromises (falsifies?) theory
     → bias as behaviour comprehensible as 'implementation failure'
     Consider the bounds on 'bounded rationality'
     → representation constraints, e.g. finite machine precision
     → memory or processing power constraints
     → incomplete or approximate mental models
     Are these theoretically compromising or 'just' implementation details?
  5. An 'ecological' or 'design' perspective:
     → Theoretically suboptimal behaviour is an implementation issue when it constitutes a shortcut that would have little effect in the majority of decisions a system was designed or selected to make
     → Implicitly, though not necessarily, evolutionary
     Selection is always both 'for' and 'of' (see right). The difference is counterfactual
     → repainting balls doesn't change anything, but shrinking them will
     [Figure: the 'selection machine' (Sober, 1984): selection is for size but of colour]
  6. One big caveat
     → There is no guarantee that the 'right' decision in an environment is virtuous, healthy or even non-lethal
     Consider human pursuit of and fondness for cheating, concentrated sugar sources, and heroic sacrifice
  7. Case 1: 'Risk vs uncertainty'
     Ellsberg (1961) calls decisions where probabilities are available risky, and decisions where they are not, uncertain (better: taken under ignorance)
     For Bayes everything is either known or risky. But consider...
     Urn 1: 100 balls, some red, some black
     Urn 2: 100 balls, 50 red, 50 black
  8. Case 1: 'Risk vs uncertainty'
     Ellsberg (1961) calls decisions where probabilities are available risky, and decisions where they are not, uncertain (better: taken under ignorance)
     For Bayes everything is either known or risky. But consider...
     Urn 1: 100 balls, some red, some black
     Urn 2: 100 balls, 50 red, 50 black
     and three questions (with their typical responses)
     1. a prize if red from Urn 1 vs the prize if black from Urn 1? (mostly indifferent)
     2. a prize if red from Urn 2 vs the prize if black from Urn 2? (mostly indifferent)
     Theory says "you must think the probabilities of Urn 1 are the probabilities of Urn 2!"
     3. a prize if red from Urn 1 vs the prize if red from Urn 2? Most choose Urn 2.
     Is this evidence that 'uncertainty' cannot be ignored?
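     A minimal sketch, in Python, of why this pattern is un-Bayesian (the prize value of 1 and the candidate beliefs are illustrative assumptions, not from the deck):

         # Ellsberg's two-urn problem: a Bayesian must assign SOME subjective
         # probability p to drawing red from Urn 1, despite its unknown mix.

         def expected_payoff(p_win, prize=1.0):
             """Expected value of a bet paying `prize` if you win, else nothing."""
             return p_win * prize

         for p_red_urn1 in (0.3, 0.5, 0.7):            # candidate beliefs about Urn 1
             red1 = expected_payoff(p_red_urn1)         # bet on red from Urn 1
             black1 = expected_payoff(1 - p_red_urn1)   # bet on black from Urn 1
             red2 = expected_payoff(0.5)                # Urn 2 is a known 50/50
             print(p_red_urn1, red1, black1, red2)

         # Indifference in question 1 forces p = 0.5, which makes red1 == red2;
         # strictly preferring Urn 2 in question 3 is then inconsistent with
         # ANY single subjective probability assignment.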
  9. Case 2: Framing
     With no action 600 people will die. Choose between the following options (see Tversky & Kahneman, 1986)
     Frame 1
     A. 200 people will be saved
     B. P(600 saved) = 1/3, P(nobody is saved) = 2/3
     Frame 2
     A. 400 people will die
     B. P(nobody dies) = 1/3, P(600 will die) = 2/3
  10. Case 2: Framing
      With no action 600 people will die. Choose between the following options (see Tversky & Kahneman, 1986)
      Frame 1
      A. 200 people will be saved
      B. P(600 saved) = 1/3, P(nobody is saved) = 2/3
      Frame 2
      A. 400 people will die
      B. P(nobody dies) = 1/3, P(600 will die) = 2/3
      Results
      → Frame 1: About 3/4 prefer A over B
      → Frame 2: About 3/4 prefer B over A
      But these are the same two choices, differently described!
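      The two frames really are one choice problem; a quick sketch makes the equivalence mechanical (the dictionary encoding is mine, the numbers are the slide's):

          from fractions import Fraction

          third = Fraction(1, 3)

          # Frame 1 (gains): lotteries over the number of people SAVED
          frame1 = {"A": {200: Fraction(1)}, "B": {600: third, 0: 1 - third}}
          # Frame 2 (losses): lotteries over the number of people who DIE
          frame2 = {"A": {400: Fraction(1)}, "B": {0: third, 600: 1 - third}}

          # Re-express frame 2 in "saved" terms: saved = 600 - died
          frame2_as_saved = {
              option: {600 - died: p for died, p in lottery.items()}
              for option, lottery in frame2.items()
          }
          print(frame1 == frame2_as_saved)  # True: identical lotteries, different words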
  11. Prospect Theory
      People are
      → Risk averse when gains are considered
      → Risk seeking when losses are considered
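      A sketch of the kind of value function behind this pattern; the curvature (0.88) and loss-aversion (2.25) parameters are Tversky and Kahneman's later published estimates, assumed here rather than taken from this deck:

          def value(x, alpha=0.88, lam=2.25):
              """Prospect-theory-style value: concave for gains, convex and steeper for losses."""
              return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

          # A 50/50 gamble on +/-100 versus its expected value, +/-50:
          sure_gain = value(50)
          gamble_gain = 0.5 * value(100) + 0.5 * value(0)
          sure_loss = value(-50)
          gamble_loss = 0.5 * value(-100) + 0.5 * value(0)

          print(sure_gain > gamble_gain)  # True: take the sure gain (risk averse)
          print(sure_loss < gamble_loss)  # True: gamble on the loss (risk seeking)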
  12. Case 3: Priors
      A classic example of people's supposedly non-Bayesian reasoning:
      Kim is shy. Is it more likely that they work in a library or in sales?
  13. Case 3: Priors
      A classic example of people's supposedly non-Bayesian reasoning:
      Kim is shy. Is it more likely that they work in a library or in sales?
      Sales, probably, because there are generally a lot more salespeople than librarians...
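      The Bayesian logic, sketched numerically (every number here is invented for illustration; only the direction of the argument matters):

          # Posterior odds = likelihood ratio x prior odds, restricting attention
          # to the two occupations in the question.
          p_librarian = 0.002          # assumed base rate: librarians are rare
          p_sales = 0.05               # assumed base rate: salespeople are common
          p_shy_given_librarian = 0.7  # assumed: most librarians are shy
          p_shy_given_sales = 0.1      # assumed: few salespeople are shy

          posterior_odds = (p_shy_given_librarian / p_shy_given_sales) \
              * (p_librarian / p_sales)
          print(round(posterior_odds, 2))  # 0.28: sales is ~3.6x more likely, despite shyness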
  14. Case 3: Priors
      You take a test for a disease. What should you conclude about the probability you have the disease from, say, a positive test?

      P(disease = 1 | test) = P(test | disease = 1) P(disease = 1) / [ P(test | disease = 1) P(disease = 1) + P(test | disease = 0) P(disease = 0) ]

      or alternatively the probability ratio

      P(disease = 1 | test) / P(disease = 0 | test) = [ P(test | disease = 1) / P(test | disease = 0) ] × [ P(disease = 1) / P(disease = 0) ]

      Base rates, the P(disease = ·) terms, are shown in red on the slide.
  15. Case 3: Priors
      You take a test for a disease. What should you conclude about the probability you have the disease from, say, a positive test?

      P(disease = 1 | test) = P(test | disease = 1) P(disease = 1) / [ P(test | disease = 1) P(disease = 1) + P(test | disease = 0) P(disease = 0) ]

      or alternatively the probability ratio

      P(disease = 1 | test) / P(disease = 0 | test) = [ P(test | disease = 1) / P(test | disease = 0) ] × [ P(disease = 1) / P(disease = 0) ]

      Base rates, the P(disease = ·) terms, are shown in red on the slide.
      → These are the 'priors' that we discussed in more detail in previous weeks. We'll mostly discuss the case when they are available but ignored
  16. → P(disease = 1) = 0.01, the 'prevalence'
      → P(test = 1 | disease = 1) = 0.90, the 'true positive rate' / 'sensitivity' / 'recall'
      → P(test = 1 | disease = 0) = 0.09, the 'false positive rate' / 'type I error rate' / 1 − 'specificity'
      For reference: [All the jargon in one place]
  17. → P(disease = 1) = 0.01
      → P(test = 1 | disease = 1) = 0.90
      → P(test = 1 | disease = 0) = 0.09
  18. → P(disease = 1) = 0.01
      → P(test = 1 | disease = 1) = 0.90
      → P(test = 1 | disease = 0) = 0.09
      Study: 160 gynecologists (Gigerenzer et al., 2007)
      → 60% thought P(disease = 1 | test = 1) was between 0.8 and 0.9
      → 21% thought it was about 0.1
  19. → P(disease = 1) = 0.01
      → P(test = 1 | disease = 1) = 0.90
      → P(test = 1 | disease = 0) = 0.09
      What is P(disease = 1 | test = 1) actually?

      P(disease = 1 | test = 1) = (0.90 × 0.01) / (0.90 × 0.01 + 0.09 × 0.99) ≈ 0.09
  20. → P(disease = 1) = 0.01
      → P(test = 1 | disease = 1) = 0.90
      → P(test = 1 | disease = 0) = 0.09

      P(disease = 1 | test = 1) / P(disease = 0 | test = 1) = (0.90 / 0.09) × (0.01 / 0.99) ≈ 10 × 0.01 = 0.1
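      The same arithmetic in code, a sketch using the figures reconstructed above:

          prevalence = 0.01    # P(disease = 1)
          sensitivity = 0.90   # P(test = 1 | disease = 1)
          fpr = 0.09           # P(test = 1 | disease = 0)

          # Bayes' theorem directly:
          posterior = (sensitivity * prevalence) / (
              sensitivity * prevalence + fpr * (1 - prevalence)
          )
          print(round(posterior, 3))  # 0.092: under a 10% chance of disease

          # Or via the probability ratio from the slide:
          odds = (sensitivity / fpr) * (prevalence / (1 - prevalence))
          print(round(odds, 3))               # 0.101
          print(round(odds / (1 + odds), 3))  # 0.092, the same answer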
  21. Case closed? Gigerenzer and Hoffrage (1995) argue that

      "We assume that as humans evolved, the 'natural' format was frequencies as actually experienced in a series of events, rather than probabilities or percentages"

      This suggests that the question can be reformulated in more ecologically valid terms to get better results
  22. Case closed? Gigerenzer and Hoffrage (1995) argue that

      "We assume that as humans evolved, the 'natural' format was frequencies as actually experienced in a series of events, rather than probabilities or percentages"

      This suggests that the question can be reformulated in more ecologically valid terms to get better results
      We might think of this as sampling 'in the direction of causation'
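      The same question in a natural frequency format, sampling in the direction of causation (a sketch; the population size of 1000 is an arbitrary choice):

          population = 1000
          diseased = round(population * 0.01)                 # 10 people have the disease
          true_pos = round(diseased * 0.90)                   # 9 of them test positive
          false_pos = round((population - diseased) * 0.09)   # 89 healthy people test positive

          # "Of the 98 people who test positive, how many have the disease?"
          print(true_pos, true_pos + false_pos)               # 9 98
          print(round(true_pos / (true_pos + false_pos), 3))  # 0.092: same answer, easier to see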
  23. This does improve performance, though still perhaps less than you'd hope...
      Two important points:
      → Information format matters and can be used to make people more rational
      → People still find (normatively desirable) inference hard
      Not mentioned here, but you may find practically useful:
      → Gigerenzer's later research program: 'simple heuristics' – simpler techniques and models that help people be more nearly rational
  24. This kind of sensitivity to formulation is, arguably, a bias in itself
      → But apparently not a fatal one
      Eminently manipulable!
  25. [Image: the checker-shadow illusion, tiles A and B]

  26. [Image: the checker-shadow illusion, tiles A and B]

  27. The brightness of a tile is determined by (at least) two factors
      → reflectance: lighter tiles are more bright
      → illumination: tiles in shadow are less bright
      Tiles A and B are the same brightness but you perceive the reflectance, because that's more important for decisions
      [Diagram: reflectance and illumination jointly determine brightness]
  28. The brightness of a tile is determined by (at least) two factors
      → reflectance: lighter tiles are more bright
      → illumination: tiles in shadow are less bright
      Tiles A and B are the same brightness but you perceive the reflectance, because that's more important for decisions
      [Diagram: reflectance and illumination jointly determine brightness]
      This is Bayes theorem: brightness is the 'test', tile reflectance is the 'disease'

      P(reflect. | brightness, illum.) ∝ P(brightness | reflect., illum.) P(reflect.) P(illum.)

      Intuition:
      → B is less illuminated because it's in shadow
      → A is more illuminated, but has the same brightness as B
      → So B 'must' be a higher reflectance (lighter) tile than A
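      A toy numeric version of that formula (all numbers invented; 'brightness' is modelled as reflectance × illumination with a Gaussian-shaped likelihood):

          from math import exp

          reflectances = {"dark": 0.3, "light": 0.8}  # assumed reflectance levels
          illum_sun, illum_shadow = 1.0, 0.4          # assumed illumination levels

          def likelihood(brightness, reflectance, illumination, sigma=0.05):
              """How well a tile hypothesis explains the observed brightness."""
              return exp(-((brightness - reflectance * illumination) ** 2) / (2 * sigma ** 2))

          observed = 0.32  # both tiles send the same brightness to the eye

          for name, r in reflectances.items():
              print(name,
                    round(likelihood(observed, r, illum_sun), 3),     # if the tile is in sun
                    round(likelihood(observed, r, illum_shadow), 3))  # if it is in shadow
          # dark 0.923 0.0  : a dark tile explains the brightness only in full sun
          # light 0.0 1.0   : a light tile explains it only in shadow
          # Knowing B is in shadow therefore pushes the inference toward 'light',
          # exactly the intuition above.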
  29. Why would I care that my brain can do this if I can't?
      → Well, maybe there's hope!
      → although we saw that aggregating rational actors into rational groups was harder than expected...
      When things really, really matter, animals can get pretty optimal...
      The accuracy of 'low level' inference processes and the unreliability of 'high level' inference processes might make us wonder about another Kahneman and Tversky idea...
  30. Two systems of thinking (Kahneman, 2011)
      → 'System 1': fast, instinctive, and emotional (where the 'heuristics and biases' come in)
      → 'System 2': slower, deliberative, and (more) logical
      Implication for decision making (very roughly stated): if the decision is important, slow down and let System 2 kick in.
      Note: there are some studies of doubtful replicability reported there [link] – mostly priming, but be aware.
  31. Why do those people believe such obvious falsehoods?
      → They've activated System 1 and not thought any more about it. They should activate System 2!
  32. Why do those people believe such obvious falsehoods?
      → They've activated System 1 and not thought any more about it. They should activate System 2!
  33. Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether humans are causing global warming; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. (e.g. Kahan et al., 2012)
      Causes?
      → wishful thinking
      → social cost to disagreement
      → signalling, e.g. 'bond testing' (Zahavi, 1977)
      The dark side of ecologically valid reasoning...
  34. Not everything that seems like a bias will make you more likely to be wrong in 'regular' situations (see, e.g. Gigerenzer & Todd, 1999)
      → 'Full rationality' can be (arbitrarily) expensive in terms of computational resources (space, time, processing power)
      → 'Bounded rationality' is a more tractable goal for finite organisms like people and organisations
      → Problems arise when tasks are 'irregular' or unusual for the situations that decision making tools have been designed or selected for. This will (provably) always be true
      → 'No free lunch' theorems (e.g. Wolpert, 2002; Wolpert & Macready, 1997)
      Some cognitive biases are fundamentally social
      Much bias can be mitigated, but only if we understand it first...
  35. The desire to be more rational is admirable but can sometimes get pathological
      → There is a small subculture of people interested in becoming 'more Bayesian'
      → 'Rationalists', 'Skeptics', 'Less Wrong', 'Effective Altruism'
      → Proceed cautiously... the social context and implications of reasoning are usually under-appreciated in these discussions
  36. References
      Ellsberg, D. (1961). 'Risk, ambiguity, and the Savage axioms'. The Quarterly Journal of Economics, 75(4), 643–669.
      Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M. & Woloshin, S. (2007). 'Helping doctors and patients make sense of health statistics'. Psychological Science in the Public Interest, 8(2), 53–96.
      Gigerenzer, G. & Hoffrage, U. (1995). 'How to improve Bayesian reasoning without instruction: Frequency formats'. Psychological Review, 102(4), 684–704.
      Gigerenzer, G. & Todd, P. M. (1999). Simple Heuristics That Make Us Smart. Oxford University Press.
      Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D. & Mandel, G. (2012). 'The polarizing impact of science literacy and numeracy on perceived climate change risks'. Nature Climate Change, 2(10), 732–735.
  37. References
      Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
      Kahneman, D. & Tversky, A. (1972). 'Subjective probability: A judgment of representativeness'. Cognitive Psychology, 3(3), 430–454.
      McDermott, R., Fowler, J. H. & Smirnov, O. (2008). 'On the evolutionary origin of Prospect Theory preferences'. The Journal of Politics, 70(2), 335–350.
      Sober, E. (1984). The Nature of Selection. University of Chicago Press.
      Tversky, A. & Kahneman, D. (1974). 'Judgment under uncertainty: Heuristics and biases'. Science, 185(4157), 1124–1131.
      Tversky, A. & Kahneman, D. (1986). 'Rational choice and the framing of decisions'. The Journal of Business, 59(4), S251–S278.
      Wolpert, D. H. (2002). 'The supervised learning no-free-lunch theorems'. In R. Roy, M. Köppen, S. Ovaska, T. Furuhashi & F. Hoffmann (Eds.), Soft Computing and Industry. Springer.
  38. References
      Wolpert, D. H. & Macready, W. G. (1997). 'No free lunch theorems for optimization'. IEEE Transactions on Evolutionary Computation, 1(1), 67–82.
      Zahavi, A. (1977). 'The testing of a bond'. Animal Behaviour, 25(1), 246–247.