Structured Decision-making and Adaptive Management For The Control Of Infectious Disease



Chris Fonnesbeck

March 01, 2012

Transcript

  1. Structured Decision-making and Adaptive Management For The Control Of Infectious Disease. Christopher Fonnesbeck (Vanderbilt University), Matthew Ferrari (Penn State University), Katriona Shea (Penn State University), Michael Tildesley (University of Edinburgh), Michael Runge (USGS Patuxent), Petra Klepac (Princeton University), Dylan George (US Department of Defense), Scott Isard (Penn State University), Andrew Flack (Defense Threat Reduction Agency). Thursday, March 1, 12
  2. (image slide)

  3. (image slide)

  4. (image slide)

  5. (image slide)

  6. (image slide)

  7. (image slide)

  8. (image slide)

  9. Decisions under Uncertainty

  10. Foot-and-mouth Disease (Tildesley et al., Nature 2006)

  11. 2001 UK outbreak

  12. “What should we do?”

  13. (image slide)

  14. Imperial College Model: deterministic, fast, approximates spatial structure

  15. Keeling Model: stochastic, spatial, and flexible, but difficult to parameterize
  16. DEFRA InterSpread Model: a “black box” model used for predicting Swine Fever in New Zealand (Vet Record, 2001)
  17. (image slide)

  18. (image slide)

  19. Costs of 2001 Outbreak: ‣ 6.5 million or more livestock destroyed ‣ Loss of export markets ‣ Closure of markets, shows and footpaths ‣ £2 billion direct, £3 billion indirect costs ‣ Public distress (60 suicides) ‣ Political upheaval ‣ Debate and recriminations
  20. Objective? Stop the epidemic as quickly as possible? Minimize non-livestock economic losses? Minimize losses to farmers? Minimize political impact?

  21. What keeps us from making optimal decisions?
  22. Structured Decision-making

  23. 1 Objectives

  24. 2 Decision Alternatives

  25. 3 Valuation of Outcomes

  26. 4 Models of System Response to Actions

  27. Uncertainty

  28. aleatoric uncertainty

  29. epistemic uncertainty

  30. (diagram: State → Action → New State, with Observation and Stochasticity)

  31. (diagram, highlighting stochasticity)

  32. (diagram, adding process uncertainty)

  33. (diagram, adding partial observability)

  34. (diagram, adding partial controllability)
  35. (image slide)

  36. “...as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns - the ones we don't know we don't know...” - D. Rumsfeld
  37. Sequential Decision Analysis

  38. passive management

  39. reactive management

  40. passive adaptive management

  41. active adaptive management

  42. (diagram: Objectives, Alternative Actions, and Models feed into Decide)

  43. (diagram, adding Monitor)

  44. (diagram, adding Learn and Update, closing the adaptive loop)

  45. Adaptive Resource Management

  46. “Optimal decision-making in the face of uncertainty, with an aim to reducing uncertainty using informed management”

  47. (diagram: State, Action, New State, Reward)

  48. Pr(x_{t+1} = j | x_t = i, a_t(x_t) = d), d ∈ A_t (diagram: State(t), Action(t), Random Effects(t), Reward(t), unrolled for t, t+1, t+2)

  49. (diagram: as slide 48, with an Information State(t) added)
  50. (decision tree: from state x_0 = 0, action a_0 = vaccinate or no vaccinate; an outbreak (x_1 = 1) occurs with probability p, no outbreak (x_1 = 0) with 1 − p; terminal branches labeled with New Cases of 0 or 100 and Costs of 0, 100, or 1000)
  51. (the same decision tree, with the indifference probability p* = 0.111 marked)
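The break-even probability on the slide above can be checked in a few lines. The exact cost labels are hard to recover from the transcript, so this sketch assumes a vaccination cost of 100, a further outbreak cost of 100 if vaccinated, and an outbreak cost of 1000 if unvaccinated; these hypothetical figures reproduce the slide's p* = 0.111:

```python
# Break-even outbreak probability for the one-step vaccinate / no-vaccinate
# decision tree. Cost figures are ASSUMPTIONS reconstructed to match the
# slide's p* = 0.111, not values confirmed by the talk.
c_vax, c_outbreak_vax, c_outbreak_novax = 100.0, 100.0, 1000.0

def expected_cost(p, vaccinate):
    """Expected cost of each action when the outbreak probability is p."""
    if vaccinate:
        return c_vax + p * c_outbreak_vax
    return p * c_outbreak_novax

# Indifference: c_vax + p * c_outbreak_vax = p * c_outbreak_novax
p_star = c_vax / (c_outbreak_novax - c_outbreak_vax)
print(round(p_star, 3))  # 0.111

# Below p*, not vaccinating has the lower expected cost; above it, vaccinating does.
assert expected_cost(0.05, False) < expected_cost(0.05, True)
assert expected_cost(0.20, True) < expected_cost(0.20, False)
```

Under these numbers the decision flips exactly at p* = 1/9 ≈ 0.111, matching the slide.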
  52. (two copies of the decision tree, labeled Model 1 and Model 2: competing models of the system's response)
  53. Single Model Optimal Control

  54. Policy: action-value function

  55. Total expected reward: Q^π(s_t, a_t) = E[ Σ_{i=t}^{T} r(a_i | s_i) | s_t ]

  56. Q^π(s_t, a_t) = E[ r(a_t | s_t) + Σ_{i=t+1}^{T} r(a_i | s_i) | s_t ]
      = r(a_t | s_t) + Σ_{s_{t+1}} p(s_{t+1} | s_t, a_t) × E[ Σ_{i=t+1}^{T} r(a_i | s_i) | s_{t+1} ]
      = r(a_t | s_t) + Σ_{s_{t+1}} p(s_{t+1} | s_t, a_t) Q^π(s_{t+1}, a_{t+1})

  57. Bellman equation: Q^π(s_t, a_t) = r(a_t | s_t) + Σ_{s_{t+1}} p(s_{t+1} | s_t, a_t) Q^π(s_{t+1}, a_{t+1}) (current reward plus expected future reward)
  58. optimal policy

  59. Q*(s_t, a_t) = max_{a_t} [ r(a_t | s_t) + Σ_{s_{t+1}} p(s_{t+1} | s_t, a_t) Q*(s_{t+1}, a_{t+1}) ]
  60. Optimality: for a given pathway to be optimal, all subsections of that path must also be optimal
  61. Multiple Models Optimal Control

  62. Average cumulative reward: Q̄^π(s_t, a_t | p) = E[ Σ_m p_m(t) Σ_{i=t}^{T} r(a_i | s_i) | s_t ] = Σ_m p_m(t) Q_m^π(s_t, a_t), where p_m(t) is the weight on model m
  63. Q̄^π(s_t, a_t | p) = Σ_m p_m(t) [ r(a_t | s_t) + Σ_{s_{t+1}} p_m(s_{t+1} | s_t, a_t) Q^π(s_{t+1}, a_{t+1}) ]
      = r̄(a_t | s_t) + Σ_m Σ_{s_{t+1}} p_m(t) p_m(s_{t+1} | s_t, a_t) Q^π(s_{t+1}, a_{t+1})
  64. Bayes’ Theorem: p_m(t+1) = p_m(t) p_m(s_{t+1} | s_t, a_t) / [ Σ_m p_m(t) p_m(s_{t+1} | s_t, a_t) ] = p_m(t) p_m(s_{t+1} | s_t, a_t) / p̄(s_{t+1} | s_t, a_t)
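The Bayes update of the model weights is mechanical: multiply each weight by its model's predictive probability for the observed transition, then renormalize by the weighted average p̄. A minimal sketch, with two illustrative models and made-up likelihoods:

```python
# Bayes' theorem update of model weights p_m(t), as on the slide.
# The two models and their transition likelihoods are illustrative only.

def update_weights(weights, likelihoods):
    """p_m(t+1) = p_m(t) * p_m(s_{t+1} | s_t, a_t) / p_bar(s_{t+1} | s_t, a_t)."""
    p_bar = sum(w * l for w, l in zip(weights, likelihoods))
    return [w * l / p_bar for w, l in zip(weights, likelihoods)]

# Example: two models start with equal weight; a transition is observed
# that model 1 deems likely (0.6) and model 2 deems unlikely (0.1).
weights = [0.5, 0.5]
likelihoods = [0.6, 0.1]
weights = update_weights(weights, likelihoods)
print([round(w, 3) for w in weights])  # [0.857, 0.143]
```

Repeated over time steps, this is the learning loop of adaptive management: monitoring data shift weight toward the model that predicts best.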
  65. Optimal policy: Q̄*(s_t, a_t | p) = max_{a_t} [ r̄(a_t | s_t, p) + Σ_{s_{t+1}} p̄(s_{t+1} | s_t, a_t) Q̄*(s_{t+1}, a_{t+1}) ]

  66. Finding an Optimal Policy

  67. Exhaustive Search

  68. Discretized Problem: 3 state variables with 10 levels each, 2 decision variables with 5 levels each, 2 stochastic variables with 5 levels each
  69. ≈1.23 × 10^172 years using exhaustive search

  70. Dynamic Programming

  71. Stochastic DP (T = time horizon):
      V*(x_T) = max_{a ∈ A(x_T)} E[ r_T(a_T | x_T) ]
      V*(x_{T−1}) = max_{a ∈ A(x_{T−1})} E[ r(a_{T−1} | x_{T−1}) + γ V*(x_T) ]
      ...
      V*(x_t) = max_{a ∈ A(x_t)} E[ r(a_t | x_t) + γ V*(x_{t+1}) ]
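The backward induction on this slide can be sketched concretely. The two-state outbreak / no-outbreak system, its transition probabilities, and its costs below are all invented for illustration, not the talk's models; only the recursion V*(x_t) = max_a E[ r(a | x_t) + γ V*(x_{t+1}) ] is from the slide:

```python
# Finite-horizon stochastic dynamic programming by backward induction,
# on a TOY two-state system (0 = no outbreak, 1 = outbreak) with
# hypothetical transition probabilities and costs.
states = [0, 1]
actions = [0, 1]          # 0 = do nothing, 1 = vaccinate
gamma = 0.95
T = 50                    # time horizon

def p_next(s_next, s, a):
    """Transition probabilities: vaccination makes outbreaks less likely."""
    p_outbreak = {(0, 0): 0.3, (0, 1): 0.05, (1, 0): 0.7, (1, 1): 0.2}[(s, a)]
    return p_outbreak if s_next == 1 else 1.0 - p_outbreak

def reward(a, s):
    """Negative cost: an outbreak costs 10, vaccination costs 1."""
    return -(10.0 * s + 1.0 * a)

V = {s: 0.0 for s in states}              # terminal value V*(x_T) = 0
policy = {}
for t in reversed(range(T)):              # sweep from T-1 back to 0
    V_new, pol = {}, {}
    for s in states:
        q = {a: reward(a, s) + gamma * sum(p_next(s2, s, a) * V[s2] for s2 in states)
             for a in actions}
        best = max(q, key=q.get)
        V_new[s], pol[s] = q[best], best
    V, policy = V_new, pol

print(policy)  # {0: 1, 1: 1}: vaccinate in both states under these numbers
```

Each backward sweep costs |states| × |actions| × |states| operations, which is why DP scales so much better than enumerating every action pathway.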
  72. 7.38 hours using dynamic programming

  73. More Complex Problem: 6 state variables with 30 levels each, 9 decision variables with 5 levels each, 3 stochastic variables with 9 levels each

  74. ≈2.43 × 10^12 years using dynamic programming

  75. Alternatives?

  76. reinforcement learning (Sutton & Barto 1998)

  77. The optimal strategy is learned by receiving reinforcement from a dynamic environment
  78. Temporal difference learning: Q′(s_t, a_t) = Q(s_t, a_t) + α[ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ], where α is the learning rate and the bracketed term is the difference between estimates

  79. Equivalently: Q′(s_t, a_t) = (1 − α) Q(s_t, a_t) + α[ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) ]
  80. Exploration vs. exploitation: take a random action with probability ϵ (exploration) and the optimal action with probability 1 − ϵ (exploitation)
  81. SARSA. Initialization: initialize Q(s,a); initialize s_0; choose initial action a_0 from π. Repeat until convergence: execute a; observe s′; choose next action a′ from π; update Q(s,a) with r; advance s,a = s′,a′

  82. (as slide 81, with the update rule shown: Q′(s, a) = Q(s, a) + α[ r + γ Q(s′, a′) − Q(s, a) ])
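The SARSA loop on these slides can be sketched end to end. The two-state environment, its transition probabilities, and its costs are invented for illustration; only the ε-greedy choice and the TD update rule come from the slides:

```python
# Minimal SARSA sketch on a TOY outbreak / no-outbreak problem
# (states, rewards, and transitions are hypothetical, not the talk's models).
# Loop per the slide: choose a, execute, observe s' and r, choose a',
# update Q(s,a), advance.
import random

random.seed(42)
states, actions = [0, 1], [0, 1]     # state 1 = outbreak; action 1 = vaccinate
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in states for a in actions}

def step(s, a):
    """Environment: vaccination lowers the chance of entering the outbreak state."""
    p_outbreak = {(0, 0): 0.3, (0, 1): 0.05, (1, 0): 0.7, (1, 1): 0.2}[(s, a)]
    s_next = 1 if random.random() < p_outbreak else 0
    return s_next, -(10.0 * s_next + 1.0 * a)   # outbreak costs 10, vaccine 1

def choose(s):
    """ε-greedy policy: random action with probability ε, else greedy."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

s, a = 0, choose(0)
for _ in range(50_000):
    s2, r = step(s, a)
    a2 = choose(s2)
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])  # TD update
    s, a = s2, a2

# Greedy policy implied by the learned action values.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in states})
```

Unlike the DP solution, SARSA never touches the transition probabilities directly; it learns the action values from simulated (or monitored) experience alone.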
  83. Learning

  84. Value of Information

  85. Expected Value of Perfect Information

  86. EVPI = Σ_i p_i(t) V_i*(x_t) − V̄*(x_t, p_t)
  87. Foot-and-mouth Disease

  88. Objective: minimize cost of cattle + cost of vaccination
  89. Decision Alternatives: IP (Infected Premises), DC (Dangerous Contacts), CP (Contiguous Premises), RC (Ring Culling), V (Vaccination)

  90. Kernel Models (transmission risk as a function of distance from source): fat & shallow, UK, thin & steep

  91. Vaccine Effectiveness: 90% effective; 10% susceptibility / 80% transmission; 50% effective
  92. Kernel EVPI (expected cost of each strategy under three kernel models):

      Belief   Kernel   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      25.0%    1        9.94   5.07    3.42       2.99       2.2       9.94    2.2
      50.0%    2        5.1    1.9     1.41       2.59       1.90      5.1     1.41
      25.0%    3        5.11   0.54    0.71       1.6        1.29      5.11    0.54
      Average           6.31   2.35    1.74       2.44       1.82      6.31    1.74

      Weighted minimum cost: 1.39; Partial EVPI: 0.3475 (20%)
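The partial EVPI for the kernel uncertainty can be recomputed directly from the costs in this table (a Python sketch using only the numbers shown; cost units are as on the slide):

```python
# Partial EVPI for kernel uncertainty, recomputed from the slide's table.
# Rows are the three kernel models, columns the five control strategies.
beliefs = [0.25, 0.50, 0.25]
strategies = ["IP", "IP/DC", "IP/DC/CP", "IP/DC/RC", "IP/DC/V"]
costs = [
    [9.94, 5.07, 3.42, 2.99, 2.20],   # kernel 1
    [5.10, 1.90, 1.41, 2.59, 1.90],   # kernel 2
    [5.11, 0.54, 0.71, 1.60, 1.29],   # kernel 3
]

# Without resolving uncertainty: one strategy must be chosen for all
# kernels, so take the lowest belief-weighted expected cost.
expected = [sum(b * row[j] for b, row in zip(beliefs, costs))
            for j in range(len(strategies))]
best_expected = min(expected)          # 1.7375, attained by IP/DC/CP

# With perfect information: pick the best strategy per kernel, then
# average over beliefs (the "weighted minimum cost" row).
weighted_min = sum(b * min(row) for b, row in zip(beliefs, costs))  # 1.39

evpi = best_expected - weighted_min
print(round(evpi, 4))  # 0.3475, i.e. 20% of the expected cost
```

This is the slide-86 formula instantiated: the difference between the value achievable knowing which kernel is true and the value of the best single strategy under current beliefs.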
  93. Vaccine Effectiveness EVPI:

      Belief   Effectiveness   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      33.3%    90%             6.31   2.35    1.74       2.44       1.64      6.31    1.64
      33.3%    10%/80%         6.31   2.35    1.74       2.44       1.70      6.31    1.70
      33.3%    50%             6.31   2.35    1.74       2.44       2.14      6.31    1.74
      Average                  6.31   2.35    1.74       2.44       1.82      6.31    1.74

      Weighted minimum cost: 1.69; Partial EVPI: 0.0475 (2.7%)
  94. Combined EVPI:

      Belief   Kernel   Vaccine   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      8.3%     1        90%       9.94   5.07    3.42       2.99       2.19      9.94    2.19
      16.7%    2        90%       5.1    1.9     1.41       2.59       1.68      5.1     1.41
      8.3%     3        90%       5.11   0.54    0.71       1.6        1         5.11    0.54
      8.3%     1        10/80%    9.94   5.07    3.42       2.99       2.19      9.94    2.19
      16.7%    2        10/80%    5.1    1.9     1.41       2.59       1.7       5.1     1.41
      8.3%     3        10/80%    5.11   0.54    0.71       1.6        1.19      5.11    0.54
      8.3%     1        50%       9.94   5.07    3.42       2.99       2.22      9.94    2.22
      16.7%    2        50%       5.1    1.9     1.41       2.59       2.33      5.1     1.41
      8.3%     3        50%       5.11   0.54    0.71       1.6        1.68      5.11    0.54
      Average                     6.31   2.35    1.74       2.44       1.82      6.31    1.74

      Weighted minimum cost: 1.39; Partial EVPI: 0.3475 (20%)
  95. Measles

  96. (image slide)

  97. (image slide)

  98. SEIR Model: Susceptible → Exposed → Infectious → Recovered, with Vaccination

  99. Objective: min Σ_{t=1}^{T} f(cases_t, cost_t) = min Σ_{t=1}^{T} [ w(cases_t) + cost_t ]
  100. Management Actions:

      Action   Strategy              Cost
      V0       No vaccination        $0
      V5       Vaccinate <5 years    $100
      V10      Vaccinate <10 years   $175
      V15      Vaccinate <15 years   $250

  101. Who Acquires Infection From Whom (WAIFW):

      assortative:    non-assortative:
      [ 6 2 1 ]       [ 3 3 3 ]
      [ 2 6 2 ]       [ 3 3 3 ]
      [ 1 2 6 ]       [ 3 3 3 ]
  102. Expected benefit of each vaccination strategy under three susceptibility models:

      Model                Weight   <5 years   <10 years   <15 years   Best Action
      <5 years at risk     .5       100        57          40          100
      <10 years at risk    .25      25         86          60          86
      <15 years at risk    .25      -12.5      42.8        70          70
      Expected Benefit              53.1       60.7        52.5        88.9
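The table above can be recomputed from its own entries (a Python sketch; only the slide's numbers are used, and the slide's 88.9 appears to come from unrounded inputs, since the tabulated values give 89.0):

```python
# Expected benefit of each static measles vaccination strategy under the
# three susceptibility models, using the weights and benefits tabulated
# on the slide.
weights = [0.5, 0.25, 0.25]            # model weights
benefit = [
    [100.0, 57.0, 40.0],   # <5 years at risk
    [25.0, 86.0, 60.0],    # <10 years at risk
    [-12.5, 42.8, 70.0],   # <15 years at risk
]
strategies = ["<5 years", "<10 years", "<15 years"]

# Belief-weighted benefit of each static strategy.
expected = [sum(w * row[j] for w, row in zip(weights, benefit))
            for j in range(3)]
print([round(e, 1) for e in expected])   # [53.1, 60.7, 52.5]

# Best static strategy: vaccinate <10 years.
best = strategies[expected.index(max(expected))]

# Expected benefit if the true model were known (best action per model).
perfect = sum(w * max(row) for w, row in zip(weights, benefit))
print(round(perfect, 1))  # 89.0 (slide reports 88.9)
```

The gap between the known-model value and the best static strategy (about 89.0 − 60.7 ≈ 28) is the value of learning which susceptibility model is correct, which is exactly what the adaptive strategies below exploit.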
  103. Best Static Strategy (figure: plot over the three model weights, with axes risk <5, risk <10, risk <15 and gridlines from 0.2 to 0.8)

  104. (image slide)

  105. (image slide)

  106. Adaptive Monitoring

  107. When is adaptive management useful?

  108. When is adaptive management useful? ➊ Sequential decision-making

  109. When is adaptive management useful? ➊ Sequential decision-making ➋ Decisions influence system behavior

  110. When is adaptive management useful? ➊ Sequential decision-making ➋ Decisions influence system behavior ➌ There is uncertainty regarding the system and the expected consequences of decisions

  111. Limitations of AM: ➊ Requires informative monitoring ➋ Management body involved in all steps ➌ Institutional challenges ➍ Requires flexibility

  112. Promise of AM: ➊ Forces articulation of objectives, constraints, costs ➋ Explicit quantification of uncertainty ➌ Transparent mechanism for choosing among decision alternatives