Weighing the Costs and Benefits of Mental Effort


Sebastian Musslick

March 15, 2019

Transcript

  1. 2.

    Cognitive control – the capacity to reconfigure information processing away from default (automatic) settings (Cohen et al., 1990; Botvinick & Cohen, 2015). Example on the slide: reading email vs. following the talk.
  2. 3.

    Constraints on Control Allocation to a Single Task

    § Costs attached to increases in control signal intensity (Shenhav, Botvinick & Cohen, 2013; Shenhav et al., 2017)
  3. 4–6.

    Cognitive Control is Costly

    § We make cognitively frugal choices in choosing how to solve problems (e.g., Simon, 1956; Stanovich & West, 1998; Kahneman, 2003; Todd & Gigerenzer, 2012; Fiske & Taylor, 1991; Gilbert & Hixon, 1991; Petty & Wegener, 1999), e.g., cocktail sort vs. merge sort (Lieder et al., 2014)
    § All else equal, we tend to choose easier over harder tasks (Kool et al., 2010; 2013)
    § We are willing to forgo rewards to avoid investing cognitive effort (Westbrook & Braver, 2015)
  6. 8–10.

    Why is Cognitive Control Costly? Shenhav et al. (2017); Posner & Snyder (1975); Shiffrin & Schneider (1977); Baumeister & Heatherton (1996); Holroyd (2015); Feng et al. (2014); Musslick et al. (2016, 2017)
  9. 12–13.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Decomposing Individual Differences in Cognitive Control
    C. Motivation and Cognitive Control in Depression
    D. Estimating the Cost of Mental Effort from Behavior
  11. 14.

    RED (Stroop stimulus)

  12. 15.

    A Theory of Control Allocation: Expected Value of Control

    GREEN (Stroop stimulus)

    EVC(signal, state) = Σᵢ [ Pr(outcomeᵢ | signal, state) · Value(outcomeᵢ) ] − Cost(signal)

    Shenhav, Botvinick, & Cohen (2013)
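The EVC formula on this slide amounts to a one-line computation. A minimal sketch, with hypothetical outcome probabilities, values, and cost (none of these numbers are from the talk):

```python
def evc(outcome_probs, outcome_values, cost):
    """Expected Value of Control for one control signal in one state:
    sum over outcomes of Pr(outcome | signal, state) * Value(outcome),
    minus the cost of the control signal itself."""
    expected_payoff = sum(p * v for p, v in zip(outcome_probs, outcome_values))
    return expected_payoff - cost

# Hypothetical example: 80% chance of a correct response worth 10 points,
# 20% chance of an error worth 0, with a control cost of 3.
print(evc([0.8, 0.2], [10.0, 0.0], cost=3.0))  # 0.8*10 - 3 = 5.0
```

For a single task with a correct/error outcome pair and zero-valued errors, the sum collapses to P(correct) · Value(correct) − Cost(signal), the form used in the model slides later in the talk.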
  13. 21.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Decomposing Individual Differences in Cognitive Control
    C. Motivation and Cognitive Control in Depression
    D. Estimating the Cost of Cognitive Control from Behavior
  14. 22–36.

    Computational Model: How is Control Implemented? (Agent, Task Environment)

    Drift Diffusion Model (DDM; Ratcliff, 1978; Bogacz et al., 2006): starting at t0, evidence accumulates toward a “red” or a “green” response boundary.

    drift = drift_CONTROL + drift_AUTOMATIC
    drift = u_WORD · a_WORD + u_COLOR · a_COLOR + a_WORD + a_COLOR

    Here u is the control signal intensity allocated to a pathway and a is the automatic strength of that pathway. Worked example with a_WORD = −10 and a_COLOR = 1:

    No control (u_WORD = 0, u_COLOR = 0): drift = 0 · (−10) + 0 · 1 + (−10) + 1 = −9
    Control over the color pathway (u_WORD = 0, u_COLOR = 15): drift = 0 · (−10) + 15 · 1 + (−10) + 1 = 6

    The DDM then yields P(correct | u, S) and RT(correct | u, S) as a function of the drift rate.
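The drift decomposition and the worked example above can be sketched directly; the names u_WORD, u_COLOR (control signal intensities) and a_WORD, a_COLOR (automatic pathway strengths) are this transcript's reconstruction of the slide's symbols:

```python
def stroop_drift(u_word, u_color, a_word=-10.0, a_color=1.0):
    """DDM drift rate: a controlled component (control intensity times
    pathway strength) plus an automatic component (pathway strengths alone):
    drift = u_WORD*a_WORD + u_COLOR*a_COLOR + a_WORD + a_COLOR."""
    controlled = u_word * a_word + u_color * a_color
    automatic = a_word + a_color
    return controlled + automatic

print(stroop_drift(u_word=0, u_color=0))   # -9.0: the word pathway dominates
print(stroop_drift(u_word=0, u_color=15))  # 6.0: color control flips the drift
```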
  29. 37–44.

    Computational Model: How is Control Allocated? Musslick, Shenhav, Botvinick & Cohen (2015, RLDM)

    Example control signal: u_WORD = 0, u_COLOR = 15.

    1. Simulate performance for each candidate control signal (via the DDM).
    2. Compute the Expected Value of Control:
       EVC(u, S_task) = P(correct | S_task, u) · Reward − Cost(u),
       with Cost(u) = ImplementationCost(u) + ReconfigurationCost(Δu).
    3. Select the control signal that maximizes EVC: u* = argmax_u EVC(S_task, u).
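The three-step allocation loop above can be sketched as a grid search. The accuracy formula 1 / (1 + exp(−2 · drift · threshold)) is the standard closed-form two-boundary DDM choice probability (unit noise, unbiased start); the linear cost function, the reward, and the threshold value are illustrative assumptions, chosen here so the sketch lands on the slide's u_COLOR = 15:

```python
import math

def p_correct(u_color, a_word=-10.0, a_color=1.0, threshold=0.2):
    """Accuracy for a given color-pathway control signal. The drift is the
    same sum as on the preceding slides (u_WORD fixed at 0); accuracy uses
    the closed-form two-boundary DDM choice probability for unit noise."""
    drift = u_color * a_color + a_word + a_color
    return 1.0 / (1.0 + math.exp(-2.0 * drift * threshold))

def allocate_control(reward=10.0, cost_per_unit=0.3, candidates=range(31)):
    """Steps 1-3 of the slide: simulate performance for every candidate
    control signal, score it by EVC(u) = P(correct | u) * Reward - Cost(u),
    and return the EVC-maximizing signal."""
    return max(candidates, key=lambda u: p_correct(u) * reward - cost_per_unit * u)

print(allocate_control())  # 15, matching u_COLOR = 15 on the slide
```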
  37. 45.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Decomposing Individual Differences in Cognitive Control
    C. Motivation and Cognitive Control in Depression
    D. Estimating the Cost of Cognitive Control from Behavior
  38. 46.

    Cognitive Control Phenomena Basic Phenomena Incongruency Costs (Stroop, 1935) Switch

    Costs (Allport, 1994) Effects of Incentives on Task Performance Distractor Interference/Facilitation (Padmala & Pessoa) Reward & Switch Costs (Umemoto & Holroyd, 2015) Effects of Incentives on Task Choice Demand Avoidance (Kool et al., 2010) Cognitive Effort Discounting (Westbrook & Braver, 2015) Reward and Voluntary Task Switching (Arrington & Braun, 2017) Adaptation to Task Difficulty Congruency Sequence Effect (Gratton, 1992) Proportion Congruency Effect (Logan & Zbrodoff, 1979) Non-Monotonicity in Task Engagement (Gilzenrat, 2010) Stability-Flexibility Tradeoff (Goschke, 2000)
  39. 47.

    Cognitive Control Phenomena Basic Phenomena Incongruency Costs (Stroop, 1935) Switch

    Costs (Allport, 1994) Effects of Incentives on Task Performance Distractor Interference/Facilitation (Padmala & Pessoa) Reward & Switch Costs (Umemoto & Holroyd, 2015) Effects of Incentives on Task Choice Demand Avoidance (Kool et al., 2010) Cognitive Effort Discounting (Westbrook & Braver, 2015) Reward and Voluntary Task Switching (Arrington & Braun, 2017) Adaptation to Task Difficulty Congruency Sequence Effect (Gratton, 1992) Proportion Congruency Effect (Logan & Zbrodoff, 1979) Non-Monotonicity in Task Engagement (Gilzenrat, 2010) Stability-Flexibility Tradeoff (Goschke, 2000)
  40. 49–52.

    Effects of Incentives on Task Performance: Padmala & Pessoa (2011)

    [Figures, Data vs. EVC Model: reaction time (s) and P(Error) for interference and facilitation under No Reward vs. Reward; control signal intensity for the picture and word control signals, and the reward-driven change in control signal intensity.]
  44. 54–58.

    Effects of Incentives on Task Choice: Arrington & Braun (2018); Braun & Arrington (2018)

    [Figures, Data vs. EVC Model: probability of task switches (0–1) as a function of Current Value (Constant vs. Decrease) and Other Value (Constant vs. Increase), along with the model's expected value of switching.]
  49. 60–65.

    Adaptation to Task Difficulty: Gilzenrat et al. (2010)

    [Figures, Human Participants (Data) vs. EVC Model: Accuracy (%), Points, Pupil Dilation (mm), and Control Signal Intensity plotted against Trial Relative To Escape (−4 to 4), plus the model's Expected Value of Control.]
  55. 66.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Motivation and Cognitive Control in Depression
    C. Decomposing Individual Differences in Cognitive Control
    D. Estimating the Cost of Cognitive Control from Behavior
  56. 67.

    Cognitive Control and Depression

    § Depression is characterized by impairments in attention, memory, and cognitive control (Millan et al., 2012; Snyder, 2013)
    § The origins of these cognitive control deficits remain poorly understood
    § Most existing models view cognitive control deficits in depression as a reduced ability to exert control (for a review see Grahek et al., 2018)
    § Existing accounts are descriptive, lacking a mechanistic understanding
  57. 76.

    Cognitive Control and Depression - Control Efficacy - Grahek, Shenhav,

    Musslick, Krebs & Koster (under review); BLUE vs. BLUE; Cognitive Effort Discounting (Westbrook & Braver, 2015)
  58. 78.

    Cognitive Control and Depression - Reward Sensitivity - Grahek, Shenhav,

    Musslick, Krebs & Koster (under review); BLUE vs. BLUE; Cognitive Effort Discounting (Westbrook & Braver, 2015)
  59. 80.

    Cognitive Control and Depression - Control Cost - Grahek, Shenhav,

    Musslick, Krebs & Koster (under review); BLUE vs. BLUE; Cognitive Effort Discounting (Westbrook & Braver, 2015)
  60. 81.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Motivation and Cognitive Control in Depression
    C. Decomposing Individual Differences in Cognitive Control
    D. Estimating the Cost of Cognitive Control from Behavior
  61. 83.

    Decomposing Individual Differences in Cognitive Control: Musslick, Cohen & Shenhav (under review)

    EVC agents differ in: Cognitive Control Capacity, Task Automaticity, Control Cost, Learning Rate, Reward Sensitivity
  62. 86–92.

    Decomposing Individual Differences in Cognitive Control: Musslick, Cohen & Shenhav (under review); Stroop Task (BLUE vs. BLUE); Overall Error Rate
  69. 93–95.

    Decomposing Individual Differences in Cognitive Control: Musslick, Cohen & Shenhav (under review); Stroop Task effects: Congruency Effect (Stroop Effect; Stroop, 1935), Congruency Sequence Effect (Gratton, 1992), Proportion Congruency Effect (Logan & Zbrodoff, 1979)
  72. 96.

    Decomposing Individual Differences in Cognitive Control

    Can we reliably measure a person’s capacity to exert control? Not really.
  73. 97.

    Decomposing Individual Differences in Cognitive Control

    Which effects best index the capacity for cognitive control? The effects are largely explained by the amount of control exerted.
  74. 98.

    Weighing the Costs and Benefits of Mental Effort

    A. Expected Value of Control Theory (1. The Theory, 2. The Model, 3. Simulations & Predictions)
    B. Motivation and Cognitive Control in Depression
    C. Decomposing Individual Differences in Cognitive Control
    D. Estimating the Cost of Cognitive Control from Behavior
  75. 99.

    Why Estimate the Cost of Cognitive Control?

    § Exerting cognitive control is costly (Botvinick & Braver, 2015; Shenhav et al., 2017)
    § The cost of cognitive control imposes limitations on task performance (Kool et al., 2010)
    § Individual differences in the cost of control explain behavior in the real world more generally and are linked to clinical symptoms (Westbrook, Kester & Braver, 2013; Gold et al., 2016)
    ! But: low predictive validity for individual differences in control cost estimates (conversations with L. Bustamante, W. Kool, C. Sayali & A. Westbrook, 2017–present)
  76. 100–104.

    I. Compute EVC for every control signal:

       EVC(u, S) = P(correct | u, S) · V(correct) − Cost(u)

       u … control signal intensity; S … current state
       P(correct | u, S) … probability of a correct outcome given the control signal
       V(correct) … subjective value of the offered reward for a correct response
       Cost(u) … cost of cognitive control

    II. Select the control signal that maximizes EVC: u* = argmax_u EVC(u, S).
  81. 105.

    Estimating the Cost of Cognitive Control from Performance

    EVC(u, S) = P(correct | u, S) · V(correct) − Cost(u)

    Differentiate with respect to u:
    dEVC(u, S)/du = [dP(correct | u, S)/du] · V(correct) − dCost(u)/du

    Consider the cost at the optimal control signal u*, where the derivative is zero:
    0 = [dP(correct | u*, S)/du] · V(correct) − dCost(u*)/du

    Solve for the derivative of the control cost:
    dCost(u*)/du = [dP(correct | u*, S)/du] · V(correct)
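The derivation above can be checked numerically: simulate an agent whose true cost is quadratic, observe its optimal control signal u* under different reward values, and recover dCost/du at u* from dP/du · V(correct). The logistic psychometric curve and the cost coefficient below are hypothetical choices for the sketch, not from the talk:

```python
import math

def p_correct(u):
    # Hypothetical psychometric curve: accuracy is logistic in control intensity.
    return 1.0 / (1.0 + math.exp(-(u - 2.0)))

def dp_du(u):
    p = p_correct(u)
    return p * (1.0 - p)  # derivative of the logistic

def optimal_u(value, c=0.05):
    """What the agent does: pick u* = argmax P(correct|u)*V - Cost(u),
    with a true quadratic cost Cost(u) = c * u**2 (fine grid search)."""
    grid = [i / 1000.0 for i in range(10001)]
    return max(grid, key=lambda u: p_correct(u) * value - c * u * u)

# What the experimenter does: observe u* at each reward level and recover
# the cost slope as dP/du at u* times V(correct); compare with the truth.
for value in (1.0, 2.0, 4.0):
    u_star = optimal_u(value)
    estimated_slope = dp_du(u_star) * value
    true_slope = 2 * 0.05 * u_star  # derivative of the true quadratic cost
    print(round(estimated_slope, 3), round(true_slope, 3))
```

At each interior optimum the two slopes agree up to the grid resolution, which is exactly the identity derived on the slide.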
  82. 106.

    Estimating the Cost of Cognitive Control from Performance: Theoretical Validation

    Simulated agent: accuracy P(correct | u, S) as a function of the control signal u; subjective value of the offered reward V(correct); cost Cost(u); control allocation u* = argmax_u EVC(u, S).

    Task environment: agents perform a control-demanding task (e.g., Stroop: RED) with fixed task difficulty under varying reward ($, $$, $$$) for a correct response.

    At each observed u*, [dP(correct | u*, S)/du] · V(correct) = dCost(u*)/du yields the slope of the cost function.
  83. 107.

    Estimating the Cost of Cognitive Control from Performance: Theoretical Validation

    Does the estimated control cost function, Cost(u) as a function of control signal intensity u, recover the true control cost function?
  84. 108–109.

    Estimating the Cost of Cognitive Control from Performance Under Perfect Knowledge

    [Figures: true vs. estimated control cost functions over control signal intensity u for exponential, quadratic, and linear costs; true vs. measured probability of a rewarded outcome as a function of control signal intensity.]
  87. 111.

    Estimating the Cost of Cognitive Control from Performance Under Imperfect Knowledge

    Same setup as the theoretical validation (simulated EVC agents performing a control-demanding task, e.g., Stroop, with fixed difficulty under varying reward $, $$, $$$), but now the experimenter's assumptions about the agent may be wrong.
  88. 112–114.

    Estimating the Cost of Cognitive Control from Performance Under Imperfect Knowledge

    Agents (Alice, Bob) can differ from the assumed agent in:
    § Task automaticity a: P(correct | u, S) = 1 / (1 + e^−(u + a))
    § Reward sensitivity v and accuracy bias b: V(correct) = v · R(correct) + b

    [Figures: accuracy vs. control signal intensity and subjective value vs. reward ($) for Alice, Bob, and the assumed agent; true vs. estimated control cost functions for Alice and Bob.]
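A short sketch of how misspecification biases the estimate: if the experimenter assumes the wrong task automaticity a in the logistic accuracy model on this slide, the slope dP/du evaluated at the observed u* is wrong, and so is the recovered cost slope. All numbers here (quadratic cost, the two a values, the reward) are illustrative, and the exact logistic form is this transcript's reconstruction:

```python
import math

def p_correct(u, a):
    # Logistic accuracy with automaticity a (reconstructed from the slide).
    return 1.0 / (1.0 + math.exp(-(u + a)))

def dp_du(u, a):
    p = p_correct(u, a)
    return p * (1.0 - p)

def optimal_u(value, a, c=0.05):
    # The agent's true allocation under a quadratic cost c * u**2 (grid search).
    grid = [i / 1000.0 for i in range(10001)]
    return max(grid, key=lambda u: p_correct(u, a) * value - c * u * u)

a_true, a_assumed = -1.0, -3.0  # the agent is more automatic than assumed
value = 2.0
u_star = optimal_u(value, a_true)
true_slope = dp_du(u_star, a_true) * value       # equals dCost/du at u* by the derivation
biased_slope = dp_du(u_star, a_assumed) * value  # experimenter's estimate under the wrong a
print(round(true_slope, 3), round(biased_slope, 3))
```

With the wrong automaticity the estimated cost slope is noticeably inflated, which is the intuition behind the validity analyses on the following slides.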
  91. 115–116.

    Estimating the Cost of Cognitive Control from Performance Under Imperfect Knowledge

    Validity of estimated costs: correlation between the true and the estimated cost of control, as a function of unaccounted variability in agent parameters.

    [Figures: correlation between true and estimated control costs (0–1) plotted against the standard deviation (0–10) of task automaticity a, reward sensitivity v, and accuracy bias b.]
  93. 117–118.

    Estimating the Cost of Cognitive Control from Performance Under Imperfect Knowledge

    Do Proxy A for the cost of cognitive control (Paradigm A) and Proxy B (Paradigm B) agree?

    [Figures: correlation between control cost estimates across experiments (−1 to 1) plotted against the cross-experiment correlation (0–1) of task automaticity a, reward sensitivity v, and accuracy bias b, compared against the true correlation.]
  95. 119.

    A Theory of Control Allocation: Expected Value of Control

    GREEN (Stroop stimulus)

    EVC(signal, state) = Σᵢ [ Pr(outcomeᵢ | signal, state) · Value(outcomeᵢ) ] − Cost(signal)

    Shenhav, Botvinick, & Cohen (2013)