Structured Decision-making and Adaptive Management For The Control Of Infectious Disease

Chris Fonnesbeck

March 01, 2012

Transcript

  1. Structured Decision-making and Adaptive Management For The Control Of Infectious Disease
     Christopher Fonnesbeck (Vanderbilt University), Matthew Ferrari (Penn State University), Katriona Shea (Penn State University), Michael Tildesley (University of Edinburgh), Michael Runge (USGS Patuxent), Petra Klepac (Princeton University), Dylan George (US Department of Defense), Scott Isard (Penn State University), Andrew Flack (Defense Threat Reduction Agency)
  2. DEFRA InterSpread Model
     A “black box” predictive model for swine fever in New Zealand. [Figure: predicted levels of culling and the spatial distribution of hotspots. Vet Record, 2001.]
  3. Costs of 2001 Outbreak
     ‣ 6.5 million or more livestock destroyed
     ‣ Loss of export markets
     ‣ Closure of markets, shows and footpaths
     ‣ £2 billion direct, £3 billion indirect costs
     ‣ Public distress (60 suicides)
     ‣ Political upheaval
     ‣ Debate and recriminations
  4. Objective?
     ‣ Stop the epidemic as quickly as possible?
     ‣ Minimize non-livestock economic losses?
     ‣ Minimize losses to farmers?
     ‣ Minimize political impact?
  5. [Diagram: State, Action, Stochasticity, New State, Observation. Three kinds of uncertainty are indicated: Process Uncertainty, Partial Controllability, Partial Observability.]
  6. “...as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns - the ones we don't know we don't know...” - D. Rumsfeld
  7. “Optimal decision-making in the face of uncertainty, with the aim of reducing uncertainty through informed management”
  8. $\Pr(x_{t+1} = j \mid x_t = i, a_t(x_t) = d), \quad d \in A_t$
     [Influence diagram: State (t) → State (t+1) → State (t+2), with a Reward, Random Effects, and an Action at each time step.]
  9. [The same influence diagram, with an Information State (t) node added.]
  10. A one-step decision tree: from state $x_0 = 0$, choose action $a_0$ (vaccinate or do not vaccinate); an outbreak ($x_1 = 1$) then occurs with probability $p$ and does not ($x_1 = 0$) with probability $1 - p$.
      Action         Outcome       New Cases   Cost
      vaccinate      no outbreak   0           0
      vaccinate      outbreak      100         100
      no vaccinate   no outbreak   0           0
      no vaccinate   outbreak      100         1000
  11. The same decision tree, showing the break-even outbreak probability at which the preferred action switches: $p^* = 0.111$.
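      A minimal sketch of this break-even calculation. The exact cost layout is hard to recover from the transcript, so the numbers below are an assumption chosen to reproduce the slide's threshold: vaccination costs 100 up front, and an outbreak adds a further 100 in cost if the population was vaccinated versus 1000 if it was not.

        # Hypothetical costs (assumed, not read off the slide): vaccinating costs
        # 100 regardless of outcome; an outbreak adds 100 if vaccinated, 1000 if not.
        def expected_cost(p, vaccinate):
            """Expected cost of an action given outbreak probability p."""
            if vaccinate:
                return 100 + p * 100
            return p * 1000

        # The preferred action switches where the expected costs cross:
        # 100 + 100 p = 1000 p  =>  p* = 100 / 900
        p_star = 100 / 900
        print(round(p_star, 3))  # 0.111, matching the slide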
  12. The same decision tree evaluated under two competing models, Model 1 and Model 2, which differ in the outbreak probability $p$.
  13. Total expected reward under policy $\pi$:
      $Q^\pi(s_t, a_t) = E\left[ \sum_{i=t}^{T} r(a_i \mid s_i) \,\middle|\, s_t \right]$
  14. $Q^\pi(s_t, a_t) = E\left[ r(a_t \mid s_t) + \sum_{i=t+1}^{T} r(a_i \mid s_i) \,\middle|\, s_t \right]$
      $\qquad = r(a_t \mid s_t) + \sum_{s_{t+1}} p(s_{t+1} \mid s_t, a_t) \times E\left[ \sum_{i=t+1}^{T} r(a_i \mid s_i) \,\middle|\, s_{t+1} \right]$
      $\qquad = r(a_t \mid s_t) + \sum_{s_{t+1}} p(s_{t+1} \mid s_t, a_t)\, Q^\pi(s_{t+1}, a_{t+1})$
  15. Bellman equation
      $Q^\pi(s_t, a_t) = \underbrace{r(a_t \mid s_t)}_{\text{current reward}} + \underbrace{\sum_{s_{t+1}} p(s_{t+1} \mid s_t, a_t)\, Q^\pi(s_{t+1}, a_{t+1})}_{\text{expected future reward}}$
  16. Optimal policy
      $Q^*(s_t, a_t) = \max_{a_t}\left[ r(a_t \mid s_t) + \sum_{s_{t+1}} p(s_{t+1} \mid s_t, a_t)\, Q^*(s_{t+1}, a_{t+1}) \right]$
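      A minimal sketch of solving this recursion by backward induction on a toy finite-horizon problem; the two states, two actions, rewards, and transition probabilities are hypothetical illustration values, not taken from the talk.

        import numpy as np

        T = 10                                   # time horizon
        n_states, n_actions = 2, 2               # states: 0 = no outbreak, 1 = outbreak; actions: 0 = do nothing, 1 = vaccinate

        # r[s, a]: immediate reward (negative cost) of taking action a in state s (assumed values)
        r = np.array([[0.0, -1.0],
                      [-10.0, -3.0]])

        # P[a, s, s']: transition probabilities (assumed values)
        P = np.array([[[0.90, 0.10],             # do nothing
                       [0.30, 0.70]],
                      [[0.95, 0.05],             # vaccinate
                       [0.60, 0.40]]])

        # Backward induction: Q*(t, s, a) = r(a|s) + sum_s' p(s'|s,a) max_a' Q*(t+1, s', a')
        Q = np.zeros((T + 1, n_states, n_actions))
        for t in reversed(range(T)):
            V_next = Q[t + 1].max(axis=1)        # optimal value of the next state
            Q[t] = r + np.einsum('asn,n->sa', P, V_next)

        policy = Q[0].argmax(axis=1)             # optimal first-period action in each state
        print(Q[0], policy)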
  17. Optimality: for a given pathway to be optimal, all subsections of that path must also be optimal.
  18. Average cumulative reward
      $\bar{Q}^\pi(s_t, a_t \mid p) = E\left[ \sum_m p_m(t) \sum_{i=t}^{T} r(a_i \mid s_i) \,\middle|\, s_t \right] = \sum_m p_m(t)\, Q^\pi_m(s_t, a_t)$
      where $p_m(t)$ is the weight on model $m$ at time $t$.
  19. Substituting the Bellman form:
      $\bar{Q}^\pi(s_t, a_t \mid p) = \sum_m p_m(t) \left[ r(a_t \mid s_t) + \sum_{s_{t+1}} p(s_{t+1} \mid s_t, a_t)\, Q^\pi_m(s_{t+1}, a_{t+1}) \right]$
      $\qquad = \bar{r}(a_t \mid s_t, p) + \sum_m \sum_{s_{t+1}} p_m(t)\, p(s_{t+1} \mid s_t, a_t)\, Q^\pi_m(s_{t+1}, a_{t+1})$
  20. Bayes’ Theorem
      $p_m(t+1) = \dfrac{p_m(t)\, p(s_{t+1} \mid s_t, a_t)}{\sum_m p_m(t)\, p(s_{t+1} \mid s_t, a_t)} = \dfrac{p_m(t)\, p(s_{t+1} \mid s_t, a_t)}{\bar{p}(s_{t+1} \mid s_t, a_t)}$
      where $p(s_{t+1} \mid s_t, a_t)$ is the probability of the observed transition under model $m$.
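      A minimal sketch of this weight update, assuming each candidate model supplies a likelihood for the observed transition; the models and numbers are hypothetical.

        import numpy as np

        def update_model_weights(weights, likelihoods):
            """Bayes update of model weights, given each model's probability
            of the observed transition p(s_{t+1} | s_t, a_t)."""
            posterior = np.asarray(weights, float) * np.asarray(likelihoods, float)
            return posterior / posterior.sum()   # denominator is p_bar(s_{t+1} | s_t, a_t)

        # Three models with equal prior weights; the observed transition is most
        # probable under the second model.
        print(update_model_weights([1/3, 1/3, 1/3], [0.1, 0.6, 0.3]))  # [0.1 0.6 0.3]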
  21. Optimal policy
      $\bar{Q}^*(s_t, a_t \mid p) = \max_{a_t}\left[ \bar{r}(a_t \mid s_t, p) + \sum_{s_{t+1}} \bar{p}(s_{t+1} \mid s_t, a_t)\, \bar{Q}^*(s_{t+1}, a_{t+1}) \right]$
  22. Discretized Problem
      State variables: 3 (10 levels each)
      Decision variables: 2 (5 levels each)
      Stochastic variables: 2 (5 levels each)
  23. Stochastic DP
      $V^*(x_T) = \max_{a \in A_{x_T}} E\left[ r_T(a_T \mid x_T) \right]$
      $V^*(x_{T-1}) = \max_{a \in A_{x_{T-1}}} E\left[ r(a_{T-1} \mid x_{T-1}) + \gamma V^*(x_T) \right]$
      $\qquad \vdots$
      $V^*(x_t) = \max_{a \in A_{x_t}} E\left[ r(a_t \mid x_t) + \gamma V^*(x_{t+1}) \right]$
      where $T$ is the time horizon.
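      A minimal sketch of this backward sweep over a discretized state grid, sized like the discretized problem above (10^3 states, 5^2 actions); the reward and transition arrays are random placeholders, not the talk's model.

        import numpy as np

        rng = np.random.default_rng(0)
        T, gamma = 20, 0.95
        n_states, n_actions = 10 ** 3, 5 ** 2                 # grid from the discretized problem

        r = rng.normal(size=(n_states, n_actions))            # placeholder rewards
        P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']

        V = np.zeros(n_states)                                # V*(x_T) terminal values
        policy = np.zeros((T, n_states), dtype=int)
        for t in reversed(range(T)):
            Q = r + gamma * (P @ V)                           # r(a|x) + gamma * E[V*(x_{t+1})]
            policy[t] = Q.argmax(axis=1)
            V = Q.max(axis=1)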
  24. More Complex Problem
      State variables: 6 (30 levels each)
      Decision variables: 9 (5 levels each)
      Stochastic variables: 3 (9 levels each)
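      Reading each row above as the number of variables and the number of levels per variable, a quick calculation shows why the larger problem cannot be tackled by exhaustive backward induction, which motivates the approximate methods on the following slides.

        # Discretized problem: enumeration is feasible
        states_small = 10 ** 3        # 3 state variables, 10 levels each
        actions_small = 5 ** 2        # 2 decision variables, 5 levels each
        print(states_small * actions_small)   # 25,000 Q-table entries

        # More complex problem: enumeration is hopeless
        states_big = 30 ** 6          # 6 state variables, 30 levels each
        actions_big = 5 ** 9          # 9 decision variables, 5 levels each
        print(states_big * actions_big)       # about 1.4e15 Q-table entries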
  25. Temporal difference learning
      $Q'(s_t, a_t) = Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right]$
      where $\alpha$ is the learning rate and the bracketed term is the difference between successive estimates.
  26. Equivalently,
      $Q'(s_t, a_t) = (1 - \alpha)\, Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) \right]$
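      A minimal sketch of this update; the two algebraically equivalent forms above reduce to the same function.

        def td_update(q_sa, q_next, reward, alpha=0.1, gamma=0.95):
            """One temporal-difference update of Q(s_t, a_t).

            Equivalent to (1 - alpha) * q_sa + alpha * (reward + gamma * q_next).
            """
            return q_sa + alpha * (reward + gamma * q_next - q_sa)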
  27. SARSA
      Initialization: initialize Q(s,a) and s0; choose initial action a0 from π.
      Repeat until convergence:
        ‣ Execute a and observe s'
        ‣ Choose next action a' from π
        ‣ Update Q(s,a) using r and Q(s',a')
        ‣ Advance: s, a = s', a'
  28. SARSA (continued)
      The same loop as on the previous slide, with the explicit update rule
      $Q'(s, a) = Q(s, a) + \alpha \left[ r + \gamma Q(s', a') - Q(s, a) \right]$
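      A minimal SARSA sketch with an epsilon-greedy behaviour policy. The environment interface (env.reset() and env.step(a) returning the next state, reward, and a done flag) and all parameter values are assumptions for illustration, not part of the talk.

        import numpy as np

        def sarsa(env, n_states, n_actions, episodes=500,
                  alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
            """On-policy SARSA with an epsilon-greedy behaviour policy."""
            rng = np.random.default_rng(seed)
            Q = np.zeros((n_states, n_actions))

            def choose(s):
                if rng.random() < epsilon:
                    return int(rng.integers(n_actions))    # explore
                return int(Q[s].argmax())                  # exploit

            for _ in range(episodes):
                s = env.reset()
                a = choose(s)
                done = False
                while not done:
                    s_next, reward, done = env.step(a)     # execute a, observe s' and r
                    a_next = choose(s_next)                # choose a' from the policy
                    target = reward + (0.0 if done else gamma * Q[s_next, a_next])
                    Q[s, a] += alpha * (target - Q[s, a])  # temporal-difference update
                    s, a = s_next, a_next                  # advance s, a = s', a'
            return Q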
  29. Expected value of perfect information
      $\mathrm{EVPI} = \sum_i p_i(t)\, V^*_i(x_t) - \bar{V}^*(x_t, p_t)$
      the expected optimal value if the true model were known, minus the optimal value achievable under the current model weights.
  30. Decision Alternatives
      ‣ IP: Infected Premises
      ‣ DC: Dangerous Contacts
      ‣ CP: Contiguous Premises
      ‣ RC: Ring Culling
      ‣ V: Vaccination
  31. Kernel EVPI
      Belief   Kernel   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      25.0%    1        9.94   5.07    3.42       2.99       2.2       9.94    2.2
      50.0%    2        5.1    1.9     1.41       2.59       1.90      5.1     1.41
      25.0%    3        5.11   0.54    0.71       1.6        1.29      5.11    0.54
      Average           6.31   2.35    1.74       2.44       1.82      6.31    1.74
      Weighted minimum cost: 1.39
      Partial EVPI: 0.3475 (20%)
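      A minimal sketch of the partial EVPI calculation on this slide, using the table's beliefs and per-kernel costs: the value of information is the drop in expected cost from learning which transmission kernel is correct before committing to a strategy.

        import numpy as np

        beliefs = np.array([0.25, 0.50, 0.25])               # weights on kernels 1-3
        # Cost of each strategy under each kernel (columns: IP, IP/DC, IP/DC/CP,
        # IP/DC/RC, IP/DC/V), taken from the table above.
        costs = np.array([[9.94, 5.07, 3.42, 2.99, 2.20],
                          [5.10, 1.90, 1.41, 2.59, 1.90],
                          [5.11, 0.54, 0.71, 1.60, 1.29]])

        best_without_info = (beliefs @ costs).min()           # best single strategy: 1.74
        best_with_info = (beliefs * costs.min(axis=1)).sum()  # weighted minimum cost: 1.39
        print(round(best_without_info - best_with_info, 4))   # 0.3475, the slide's partial EVPI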
  32. Vaccine Effectiveness EVPI
      Belief   Effectiveness   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      33.3%    90%             6.31   2.35    1.74       2.44       1.64      6.31    1.64
      33.3%    10%/80%         6.31   2.35    1.74       2.44       1.70      6.31    1.70
      33.3%    50%             6.31   2.35    1.74       2.44       2.14      6.31    1.74
      Average                  6.31   2.35    1.74       2.44       1.82      6.31    1.74
      Weighted minimum cost: 1.69
      Partial EVPI: 0.0475 (2.7%)
  33. Combined EVPI
      Belief   Kernel   Vaccine   IP     IP/DC   IP/DC/CP   IP/DC/RC   IP/DC/V   Worst   Best
      8.3%     1        90%       9.94   5.07    3.42       2.99       2.19      9.94    2.19
      16.7%    2        90%       5.1    1.9     1.41       2.59       1.68      5.1     1.41
      8.3%     3        90%       5.11   0.54    0.71       1.6        1         5.11    0.54
      8.3%     1        10/80%    9.94   5.07    3.42       2.99       2.19      9.94    2.19
      16.7%    2        10/80%    5.1    1.9     1.41       2.59       1.7       5.1     1.41
      8.3%     3        10/80%    5.11   0.54    0.71       1.6        1.19      5.11    0.54
      8.3%     1        50%       9.94   5.07    3.42       2.99       2.22      9.94    2.22
      16.7%    2        50%       5.1    1.9     1.41       2.59       2.33      5.1     1.41
      8.3%     3        50%       5.11   0.54    0.71       1.6        1.68      5.11    0.54
      Average                     6.31   2.35    1.74       2.44       1.82      6.31    1.74
      Weighted minimum cost: 1.39
      Partial EVPI: 0.3475 (20%)
  34. Objective
      $\min \sum_{t=1}^{T} f(\text{cases}_t, \text{cost}_t) = \min \sum_{t=1}^{T} \left[ w(\text{cases}_t) + \text{cost}_t \right]$
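      A minimal sketch of this objective, assuming the simplest case in which w is a constant converting cases into the same units as cost; the value of w and the data are placeholders.

        def total_objective(cases, costs, w=2.5):
            """Weighted sum of cases and control costs over the time horizon."""
            return sum(w * c + k for c, k in zip(cases, costs))

        print(total_objective(cases=[10, 40, 5], costs=[100, 175, 100]))  # 512.5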
  35. Management Actions
      Action   Description            Cost
      V0       No vaccination         $0
      V5       Vaccinate <5 years     $100
      V10      Vaccinate <10 years    $175
      V15      Vaccinate <15 years    $250
  36. Who Acquires Infection From Whom (WAIFW)
      assortative: $\begin{bmatrix} 6 & 2 & 1 \\ 2 & 6 & 2 \\ 1 & 2 & 6 \end{bmatrix}$    non-assortative: $\begin{bmatrix} 3 & 3 & 3 \\ 3 & 3 & 3 \\ 3 & 3 & 3 \end{bmatrix}$
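      A minimal sketch of how a WAIFW matrix is used: the force of infection on each age group is the matrix product of the mixing matrix and the vector of infectious prevalence by age group (presumably the <5, <10, and <15 year groups of the surrounding slides). The prevalence values are placeholders.

        import numpy as np

        assortative = np.array([[6, 2, 1],
                                [2, 6, 2],
                                [1, 2, 6]])
        non_assortative = np.full((3, 3), 3)

        prevalence = np.array([0.02, 0.01, 0.005])   # hypothetical infectious prevalence by age group

        print(assortative @ prevalence)       # [0.145 0.11  0.07 ]
        print(non_assortative @ prevalence)   # [0.105 0.105 0.105]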
  37. Susceptibility models and vaccination strategies
      Model                Model weight   Vaccinate <5 years   Vaccinate <10 years   Vaccinate <15 years   Best Action
      <5 years at risk     0.5            100                  57                    40                    100
      <10 years at risk    0.25           25                   86                    60                    86
      <15 years at risk    0.25           -12.5                42.8                  70                    70
      Expected Benefit                    53.1                 60.7                  52.5                  88.9
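      A minimal sketch reproducing the expected benefits in this table, together with the expected benefit attainable if the true susceptibility model were known (the same value-of-information logic as the EVPI tables earlier in the deck).

        import numpy as np

        weights = np.array([0.50, 0.25, 0.25])       # model weights
        # Benefit of vaccinating <5, <10, <15 years under each susceptibility model
        benefits = np.array([[100.0, 57.0, 40.0],
                             [25.0, 86.0, 60.0],
                             [-12.5, 42.8, 70.0]])

        expected = weights @ benefits
        print(expected)                        # [53.125 60.7 52.5], the table's 53.1 / 60.7 / 52.5
        print(expected.max())                  # 60.7: best static strategy
        print(weights @ benefits.max(axis=1))  # 89.0, the table's 88.9 (rounding)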
  38. [Figure: Best Static Strategy as a function of the weights on the risk <5, risk <10, and risk <15 models; axis ticks from 0.2 to 0.8.]
  39. When is adaptive management useful?
      ➊ Sequential decision-making
      ➋ Decisions influence system behavior
      ➌ There is uncertainty regarding the system and the expected consequences of decisions
  40. Limitations of AM
      ➊ Requires informative monitoring
      ➋ Management body involved in all steps
      ➌ Institutional challenges
      ➍ Requires flexibility
  41. Promise of AM
      ➊ Forces articulation of objectives, constraints, costs
      ➋ Explicit quantification of uncertainty
      ➌ Transparent mechanism for choosing among decision alternatives