Slide 1

Mean Field Games with Applications

August 15, 2020

Slide 2

Applications: 5G communication, AI, UAVs, COVID-19 pandemic control.

Slide 3

Mean field games

Slide 4

Mean field game system

A typical time-dependent MFG system (derived from (9)) has the form

$$
\begin{cases}
-\phi_t - \dfrac{\nu^2}{2}\Delta\phi + H(x, D_x\phi(x,t), \rho(x,t), t) = 0 \\[4pt]
\rho_t - \dfrac{\nu^2}{2}\Delta\rho - \operatorname{div}_x\!\big(\rho\, D_p H(x, D_x\phi(x,t), \rho(x,t), t)\big) = 0 \\[4pt]
\rho \ge 0, \quad \displaystyle\int_{\mathbb{T}^d} \rho(x,t)\,dx = 1, \quad t \in [0,T] \\[4pt]
\rho(x,T) = \rho_T(x), \quad \phi(x,0) = \phi_0(x,\rho(x,0)), \quad x \in \mathbb{T}^d.
\end{cases}
\tag{10}
$$

Given:
- $H : \mathbb{T}^d \times \mathbb{R}^d \times X \times \mathbb{R} \to \mathbb{R}$ is a Hamiltonian, periodic in space and convex in momentum, where $X = \mathbb{R}_+$, $X = L^1(\mathbb{T}^d;\mathbb{R}_+)$, or $X = \mathbb{R}_+ \times L^1(\mathbb{T}^d;\mathbb{R}_+)$; $H = L^*$
- $\nu \ge 0$ is a diffusion parameter
- $\phi_0 : \mathbb{T}^d \times X \to \mathbb{R}$ and $\rho_T : \mathbb{T}^d \to \mathbb{R}$ are the given initial-terminal conditions

Unknowns: $\phi, \rho : \mathbb{T}^d \times [0,T] \to \mathbb{R}$.
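As a concrete special case (an added illustration, not on the original slide): taking a separable quadratic Hamiltonian makes the structure of (10) explicit, pairing a backward Hamilton-Jacobi-Bellman equation with a forward transport equation.

```latex
% Special case of (10) with H(x,p,\rho,t) = \tfrac12|p|^2 + f(\rho),
% so that D_pH = p = D_x\phi and the optimal velocity is -D_x\phi:
\begin{cases}
-\phi_t - \tfrac{\nu^2}{2}\Delta\phi + \tfrac12 |D_x\phi|^2 + f(\rho) = 0, \\
\rho_t - \tfrac{\nu^2}{2}\Delta\rho - \operatorname{div}_x(\rho\, D_x\phi) = 0.
\end{cases}
```

The first equation is the HJB equation of a representative agent; the second transports the population density along the resulting optimal controls.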

Slide 5

Machine Learning Framework for High-Dimensional Mean Field Games

Lars Ruthotto, Stanley Osher, Wuchen Li, Levon Nurbekyan, and Samy Wu Fung, 2020

Slide 6

Variational Formulation for Potential MFGs

$$
\inf_{\rho, v}\; J_{\mathrm{MFG}}(v,\rho) = \int_0^T\!\!\int_{\mathbb{R}^d} L(x,v)\,\rho(x,t)\,dx\,dt + \int_0^T F(\rho(\cdot,t))\,dt + G(\rho(\cdot,T)),
$$
$$
\text{s.t.}\quad \partial_t\rho(x,t) + \nabla\cdot(\rho(x,t)\,v(x,t)) = 0, \qquad \rho(x,0) = \rho_0(x). \tag{1}
$$

Our contribution: solve these types of problems in high dimensions.

Our approach:
- use Lagrangian coordinates to discretize the continuity equation
- parameterize the potential function $\Phi$ (where $v = -\nabla\Phi$) by a neural network

Slide 7

Lagrangian-based ML Formulation

$$
\min_\theta\; \mathbb{E}_{\rho_0}\Big[c_L(x,T) + c_F(x,T) + G(z(x,T))\Big]
$$
subject to
$$
\partial_t
\begin{pmatrix} z(x,t) \\ l(x,t) \\ c_L(x,t) \\ c_F(x,t) \\ c_{\mathrm{HJB}}(x,t) \end{pmatrix}
=
\begin{pmatrix}
-\nabla\Phi(z(x,t),t;\theta) \\
-\Delta\Phi(z(x,t),t;\theta) \\
L\big(z(x,t), -\nabla\Phi(z(x,t),t;\theta)\big) \\
F(z(x,t),t) \\
c_{\mathrm{HJB}}(z(x,t),t)
\end{pmatrix},
\tag{2}
$$

where $z(x,0) = x$ and the remaining states are initialized with zero.

- completely grid free
- the Laplacian of $\Phi$ is used to estimate the determinant of the induced transformation
- $\Phi(z(x,t),t;\theta)$ is parameterized by a neural network with weights $\theta$
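A minimal PyTorch sketch of this Lagrangian integration, assuming the quadratic running cost $L(z,v)=\tfrac12|v|^2$, a plain MLP for $\Phi$, a Gaussian $\rho_0$, and a toy terminal cost; all of these are illustrative stand-ins, not the paper's setup (the paper's network is the structured model of the next slide).

```python
# Sketch (not the authors' code): forward-Euler integration of the
# characteristics in Eq. (2), tracking z and the running cost c_L only.
import torch

d, T, n_steps, n_samples = 2, 1.0, 20, 128
h = T / n_steps

Phi = torch.nn.Sequential(              # Phi(s; theta) with s = (x, t)
    torch.nn.Linear(d + 1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def velocity(z, t):
    """v = -grad_x Phi(z, t), computed with autograd."""
    s = torch.cat([z, t], dim=1).requires_grad_(True)
    g = torch.autograd.grad(Phi(s).sum(), s, create_graph=True)[0]
    return -g[:, :d]

x = torch.randn(n_samples, d)           # samples from rho_0
z, c_L = x.clone(), torch.zeros(n_samples)
for k in range(n_steps):
    t = torch.full((n_samples, 1), k * h)
    v = velocity(z, t)
    c_L = c_L + h * 0.5 * (v ** 2).sum(dim=1)  # accumulate L = |v|^2 / 2
    z = z + h * v                              # move the particle

loss = (c_L + (z ** 2).sum(dim=1)).mean()      # toy G(z) = |z|^2 (an assumption)
loss.backward()                                # an outer loop would update theta
```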

Slide 8

Fast Computation of $\Delta\Phi$

The neural network is parameterized as

$$
\Phi(s,\theta) = w^\top N(s,\theta_N) + \tfrac{1}{2}\, s^\top\big(A + A^\top\big)s + c^\top s + b, \qquad \theta = (w, \theta_N, \operatorname{vec}(A), c, b), \tag{3}
$$

where $s = (x,t) \in \mathbb{R}^{d+1}$ is the input feature and $N(s,\theta_N) : \mathbb{R}^{d+1} \to \mathbb{R}^m$ is a ResNet:

$$
\begin{aligned}
u_0 &= \sigma(K_0 s + b_0) \\
u_1 &= u_0 + h\,\sigma(K_1 u_0 + b_1) \\
&\;\;\vdots \\
u_M &= u_{M-1} + h\,\sigma(K_M u_{M-1} + b_M).
\end{aligned}
\tag{4}
$$
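A NumPy sketch of the forward pass in Eqs. (3)-(4), purely for shape-level intuition; the choice $\sigma = \tanh$ and all sizes below are assumptions, since the slide does not fix them.

```python
# Sketch of Eqs. (3)-(4): ResNet N(s) plus the quadratic correction.
import numpy as np

def resnet_forward(s, Ks, bs, h):
    """u_M from Eq. (4): opening layer, then M residual layers of step h."""
    u = np.tanh(Ks[0] @ s + bs[0])          # u_0 = sigma(K_0 s + b_0)
    for K, b in zip(Ks[1:], bs[1:]):
        u = u + h * np.tanh(K @ u + b)      # u_i = u_{i-1} + h sigma(K_i u_{i-1} + b_i)
    return u

def potential(s, w, Ks, bs, h, A, c, b):
    """Phi(s, theta) from Eq. (3)."""
    return (w @ resnet_forward(s, Ks, bs, h)
            + 0.5 * s @ (A + A.T) @ s + c @ s + b)

d, m, M, h = 2, 16, 3, 0.5
rng = np.random.default_rng(0)
Ks = [rng.normal(size=(m, d + 1))] + [rng.normal(size=(m, m)) for _ in range(M)]
bs = [rng.normal(size=m) for _ in range(M + 1)]
w, A = rng.normal(size=m), rng.normal(size=(d + 1, d + 1))
c, b0 = rng.normal(size=d + 1), 0.0
s = np.ones(d + 1)                          # s = (x, t)
print(potential(s, w, Ks, bs, h, A, c, b0))
```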

Slide 9

Fast Computation of $\Delta\Phi$

The Laplacian of the potential can be computed as

$$
\Delta\Phi(s,\theta) = \operatorname{tr}\Big(E^\top\big(\nabla_s^2\big(w^\top N(s,\theta_N)\big) + (A + A^\top)\big)E\Big),
$$

where the columns of $E \in \mathbb{R}^{(d+1)\times d}$ are given by the first $d$ standard basis vectors in $\mathbb{R}^{d+1}$.

Slide 10

Fast Computation of $\Delta\Phi$ (continued)

The trace of the first layer of $N(s,\theta_N)$ can be computed as

$$
t_0 = \big(\sigma''(K_0 s + b_0) \odot z_1\big)^\top \big((K_0 E) \odot (K_0 E)\big)\,\mathbf{1},
$$

and we continue with the remaining rows in reverse order to obtain

$$
\Delta\big(w^\top N(s,\theta_N)\big) = t_0 + h\sum_{i=1}^M t_i, \qquad
t_i = \big(\sigma''(K_i u_{i-1} + b_i) \odot z_{i+1}\big)^\top \big((K_i J_{i-1}) \odot (K_i J_{i-1})\big)\,\mathbf{1},
$$

where $J_{i-1} = \nabla_s u_{i-1}^\top \in \mathbb{R}^{m\times d}$.
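To make the $t_0$ formula concrete, here is a NumPy sketch for the single-layer case $M = 0$ (where, by my reading of the recursion, $z_1 = w$), checked against a finite-difference Laplacian; $\sigma = \tanh$ is again an illustrative assumption.

```python
# Sketch: the t_0 term of the trace recursion for a one-layer network,
# verified against a finite-difference Laplacian over the x-coordinates.
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 8
K0, b0, w = rng.normal(size=(m, d + 1)), rng.normal(size=m), rng.normal(size=m)
E = np.eye(d + 1)[:, :d]                       # first d standard basis vectors

def f(s):                                      # w^T N(s) with N(s) = tanh(K_0 s + b_0)
    return w @ np.tanh(K0 @ s + b0)

s = rng.normal(size=d + 1)
th = np.tanh(K0 @ s + b0)
sig2 = -2 * th * (1 - th ** 2)                 # tanh''(a) = -2 tanh(a)(1 - tanh(a)^2)
KE = K0 @ E
t0 = (sig2 * w) @ ((KE * KE) @ np.ones(d))     # t_0 from the slide's formula

eps, lap_fd = 1e-5, 0.0                        # centered second differences
for i in range(d):
    e = np.zeros(d + 1); e[i] = eps
    lap_fd += (f(s + e) - 2 * f(s) + f(s - e)) / eps ** 2
print(t0, lap_fd)                              # the two values should agree closely
```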

Slide 11

OT Example for Different Dimensions

[Figure: initial density $\rho_0$ and target density $\rho_1$; pull-back, push-forward, and characteristics for d = 2, 10, 50, and 100.]

Qualitatively similar results in all dimensions.

Slide 12

OT Example for Different Dimensions

[Figure: convergence over 500 iterations, for d = 2, 10, 50, 100, of the objective $J_{\mathrm{MFG}} + C_{\mathrm{HJB}}$, the mean field objective $J_{\mathrm{MFG}}$, and the penalty $C_{\mathrm{HJB}}$.]

d     N       L         G         C_HJB     time/iter (s)
2      1,024  1.08e+01  1.41e-01  1.53e+00    1.437
10     4,096  1.08e+01  1.85e-01  1.25e+00    8.408
50    16,384  1.10e+01  2.41e-01  4.85e+00   65.706
100   36,864  1.11e+01  3.22e-01  7.37e+00  283.259

Moderate growth in runtime across dimensions.

Slide 13

Verifying (2d) Solutions

[Figure: two test cases, each showing the initial density $\rho_0$, target density $\rho_1$, pull-back, push-forward, and characteristics, comparing the Lagrangian ML solution with an Eulerian finite-volume solution (Haber, E. and Horesh, R., 2015).]

Slide 14

APAC-Net: Alternating the Population and Agent Control Neural Networks

Alex Tong Lin, Samy Wu Fung, Wuchen Li, Levon Nurbekyan, Stanley Osher, 2020

Slide 15

Mean Field Games

- A mean field game seeks to model the behavior of a very large number of small interacting agents, each of which seeks to optimize its own value function.

Slide 16

Variational Mean-Field Games

- For some mean-field games, the equilibrium solution can be found by minimizing an overall "energy" (e.g., multiply the value function for a single agent by $\rho$):

$$
A(\rho, v) = \min_{\rho, v} \int_0^T\!\!\int_\Omega \rho(x,t)\,L(v) + F(x,\rho)\,dx\,dt + \int_\Omega \psi(x)\,\rho(x,T)\,dx,
$$

where $x = x(t)$, $v = v(t)$ above, and the optimization carries the constraint

$$
\partial_t\rho - \nu\Delta\rho + \operatorname{div}(\rho v) = 0.
$$

Slide 17

It's Just Sampling

We can raise the constraints into the objective, perform integration by parts, and push the minimization over $v$ inside to obtain

$$
\max_{\varphi}\,\min_{\rho}\;
\int_0^T\!\!\int_\Omega F(x,\rho(x,t))\,dx\,dt
+ \int_\Omega \big(\psi(x) - \varphi(x,T)\big)\rho(x,T)\,dx
+ \int_\Omega \varphi(x,0)\,\rho(x,0)\,dx
+ \int_0^T\!\!\int_\Omega \Big(\partial_t\varphi(x,t) + \nu\Delta\varphi(x,t) - L^*\big(-\nabla\varphi(x,t)\big)\Big)\rho(x,t)\,dx\,dt,
$$

which means we end up with a sampling problem. The idea is then to turn $\rho$ and $\varphi$ into neural networks and train as in GANs (Generative Adversarial Networks).

Slide 18

GAN Training

- Example GAN loss function:

$$
\min_G \max_D\; \mathbb{E}_{x\sim \mathrm{real}}[D(x)] - \mathbb{E}_{z\sim \mathcal{N}}[D(G(z))] \quad \text{s.t. } \|\nabla D\| \le 1
$$
$$
= \min_G \max_D\; \mathbb{E}_{x\sim \mathrm{real}}[D(x)] - \mathbb{E}_{y\sim G(z),\, z\sim \mathcal{N}}[D(y)] \quad \text{s.t. } \|\nabla D\| \le 1
$$

- Training alternates between a discriminator and a generator. The generator produces samples (analogous to our $\rho$), and the discriminator evaluates the quality of those samples (analogous to our $\varphi$).
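A minimal sketch of this alternating training in the WGAN style matching the loss above; everything here is an illustrative stand-in (the toy data distribution, the architectures, and the crude weight clipping used to mimic $\|\nabla D\| \le 1$), and APAC-Net's actual losses additionally carry the MFG terms.

```python
# Minimal WGAN-style alternating update sketch (illustrative only).
import torch

d_z, d_x = 2, 2
G = torch.nn.Sequential(torch.nn.Linear(d_z, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, d_x))           # generator ~ rho
D = torch.nn.Sequential(torch.nn.Linear(d_x, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 1))             # critic ~ phi
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

def real_batch(n):                  # stand-in "real" data: shifted Gaussian
    return torch.randn(n, d_x) + 2.0

for step in range(1000):
    # critic step (max over D), with weight clipping as Lipschitz control
    x, z = real_batch(64), torch.randn(64, d_z)
    loss_D = -(D(x).mean() - D(G(z).detach()).mean())
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    for p in D.parameters():
        p.data.clamp_(-0.1, 0.1)
    # generator step (min over G)
    z = torch.randn(64, d_z)
    loss_G = -D(G(z)).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```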

Slide 19

Math and GANs

Recall that the solution of the mean-field game turned into the min-max problem

$$
\max_{\varphi}\,\min_{\rho}\;
\int_\Omega \big(\psi(x) - \varphi(x,T)\big)\rho(x,T)\,dx
+ \int_\Omega \varphi(x,0)\,\rho(x,0)\,dx
+ \int_0^T\!\!\int_\Omega F(x,\rho(x,t))\,dx\,dt
+ \int_0^T\!\!\int_\Omega \Big(\partial_t\varphi(x,t) + \nu\Delta\varphi(x,t) - L^*\big(-\nabla\varphi(x,t)\big)\Big)\rho(x,t)\,dx\,dt.
$$

Supposing a certain form for $F$, this can be expressed in expectation form:

$$
\max_{\varphi}\,\min_{\rho}\;
\mathbb{E}_{y\sim\rho(\cdot,T)}\big[\psi(y) - \varphi(y,T)\big]
+ \mathbb{E}_{y\sim\rho(\cdot,0)}\big[\varphi(y,0)\big]
+ \int_0^T \mathbb{E}_{y\sim\rho(\cdot,t)}\big[F(y)\big]\,dt
+ \int_0^T \mathbb{E}_{y\sim\rho(\cdot,t)}\big[\partial_t\varphi(y,t) + \nu\Delta\varphi(y,t) - L^*\big(-\nabla\varphi(y,t)\big)\big]\,dt.
$$

Slide 20

High Dimensions and Stochastic

- First to solve high-dimensional, stochastic mean-field games.
- No more need to discretize space; we only need to solve for trajectories.

Slide 21

Numerical Results - Obstacles

- $H(p) = \|p\|^2$
- Left: $\nu = 0$ (deterministic); right: $\nu = 0.4$ (stochastic)
- 100-dimensional obstacle problem with an obstacle in the first two dimensions (a "cylindrical" obstacle).

Slide 22

Numerical Results - Obstacles & Congestion

- $H(p) = \|p\|^2$
- 100-dimensional congestion problem with congestion in the first two dimensions; $\nu = 0$ (left) and $\nu = 0.1$ (right)
- Congestion term: $G(T,\rho) = \displaystyle\int_\Omega\!\int_\Omega \frac{1}{\|x-y\|^2 + 1}\,\rho(x,T)\,\rho(y,T)\,dx\,dy$

Slide 23

Dynamic Optimal Transport of Nonlinear Control-Affine Systems

Karthik Elamvazhuthi, Siting Liu, Wuchen Li, and Stanley Osher

Slide 24

Contributions

- Extend the dynamical formulation of optimal transport to control-affine systems.
- Establish existence results for the dynamical formulation for general Lagrangian costs and control constraints.
- Numerically solve the optimal transport problem for the quadratic cost using the primal-dual hybrid gradient (PDHG) method. Also consider the homogeneous-of-degree-one cost for driftless systems.
- Numerically solve the homogeneous-of-degree-one case by eliminating the time variable, thereby significantly reducing the dimensionality of the problem.

Slide 25

Motivations

Transport swarms of robots from one configuration to another, with each robot modeled by the following differential equation:

$$
\begin{cases}
\dot\theta = u_1 \\
\dot x_1 = u_2\cos\theta \\
\dot x_2 = u_2\sin\theta,
\end{cases}
$$

$u_1$: turning rate, $u_2$: forward/backward velocity.

Technical challenge: number of controls < dimension of the state space.
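A small simulation sketch of this unicycle model (forward Euler with arbitrary illustrative controls; the function names and control values are assumptions):

```python
# Sketch: Euler simulation of the unicycle dynamics above.
import numpy as np

def unicycle_step(state, u1, u2, dt):
    """One Euler step of (theta, x1, x2); u1 = turning rate, u2 = speed."""
    theta, x1, x2 = state
    return np.array([theta + dt * u1,
                     x1 + dt * u2 * np.cos(theta),
                     x2 + dt * u2 * np.sin(theta)])

state, dt = np.array([0.0, 0.0, 0.0]), 0.01
for k in range(500):            # constant turn + speed: drives in a circle
    state = unicycle_step(state, u1=1.0, u2=1.0, dt=dt)
print(state)                    # note: only 2 controls steer a 3-d state
```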

Slide 26

Fluid Dynamical Formulation of Optimal Transport

$$
\inf_{u(t,x),\,\rho(t,x)}\; \int_0^T\!\!\int_{\mathbb{R}^d} L(t,x,u(t,x))\,\rho(t,x)\,dx\,dt
$$

subject to the continuity equation constraint

$$
\partial_t\rho + \nabla\cdot(g_0\rho) + \sum_{i=1}^n \nabla\cdot(u_i\, g_i\, \rho) = 0,
$$

and initial and terminal constraints on the density

$$
\rho(0,\cdot) = \rho_0, \qquad \rho(T,\cdot) = \rho_T.
$$

$g_0$: drift vector field; $g_i$: control vector fields.

Goal: find an optimal control in feedback form $u(t,x)$ such that the solution $\rho(t,x)$ satisfies the initial and terminal constraints and minimizes the expected value of the control cost

$$
\int_0^T\!\!\int_{\mathbb{R}^d} L(t,x,u(t,x))\,\rho(t,x)\,dx\,dt.
$$
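As an added illustration tying this to the previous slide: the unicycle is a driftless control-affine system, with (in the notation above)

```latex
% Unicycle as a control-affine system \dot{x} = g_0(x) + \sum_i u_i g_i(x),
% state x = (\theta, x_1, x_2), no drift, two control vector fields:
g_0(x) = 0, \qquad
g_1(x) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad
g_2(x) = \begin{pmatrix} 0 \\ \cos\theta \\ \sin\theta \end{pmatrix}.
```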

Slide 27

Theoretical Contributions

Under assumptions on the reachability properties of the controlled ODE system

$$
\dot x = g_0(x) + \sum_{i=1}^n v_i(t)\, g_i(x)
$$

and continuity, coercivity, and convexity of the running cost $L(t,x,u)$:

- Establish existence of solutions for the optimal transport problem.
- Extend previous theoretical results that only considered linear control systems and the quadratic running cost $L(t,x,u) = |u|^2$.

Slide 28

Primal-Dual Formulations and Numerical Algorithms

We consider the optimal transport cost $W_p(\rho_0,\rho_T)$, where $p = 1, 2$:

$$
W_p(\rho_0,\rho_T)^p = \inf_{u,\rho}\Big\{ \int_0^T\!\!\int_\Omega \frac{1}{p}\sum_{i=1}^n \rho(t,x)\,|u_i(t,x)|^p\,dx\,dt \;\;\text{s.t. the continuity equation holds}\Big\}.
$$

When $p = 2$, we formulate an unconstrained min-max problem $\inf_{m,\rho}\sup_\phi L(m,\rho,\phi)$ with

$$
L(m,\rho,\phi) = \int_{\mathbb{R}^d} \rho_T(x)\,\phi(T,x) - \rho_0(x)\,\phi(0,x)\,dx
+ \int_0^T\!\!\int_{\mathbb{R}^d} \sum_{i=1}^n \frac{|m_i(t,x)|^2}{2\rho}
- \rho\,\big(\phi_t + \nabla\phi\cdot g_0(t,x)\big) - \nabla\phi\cdot\sum_{i=1}^n m_i(t,x)\, g_i(x)\,dx\,dt.
$$

We apply the G-prox primal-dual hybrid gradient (PDHG) algorithm to solve the saddle-point problem on the discretized domain. Its rate of convergence is independent of the grid size, which is important for large-scale problems.

Slide 29

Example: Unicycle

Figure: the initial and terminal distributions of the unicycle model for two scenarios: the initial density (green) splits into two parts (yellow) with different orientations; Case 1 (left) and Case 2 (right). The red arrows indicate the first coordinate $\theta$.

Recall the control system:

$$
\begin{cases}
\dot\theta = u_1 \\
\dot x_1 = u_2\cos\theta \\
\dot x_2 = u_2\sin\theta,
\end{cases}
$$

$u_1$: turning rate, $u_2$: forward/backward velocity.

Slide 30

Example: Unicycle, $W_2$

Different final density orientations lead to two qualitatively different controls. In Case 1, the densities steer and then split toward their final positions. In Case 2, the density first reaches its $x_1$-direction position and then splits to move horizontally toward the final positions.

Slide 31

Example: Unicycle, $W_1$

[Figure: $W_1$ solutions.]

For the sub-Riemannian $W_1$ cost with a driftless system, we formulate the optimization problem to be independent of the time variable, which reduces the dimensionality of the problem significantly.

Slide 32

Computational Methods for Non-potential Mean Field Game Systems with Mixed Couplings

Levon Nurbekyan, Siting Liu, Matthew Jacobs, Wuchen Li, and Stanley Osher

Slide 33

Contributions

- A novel method to model and solve MFG systems with mixed couplings (nonlocal and local interactions).
- Demonstrate that our approach is compatible with existing methods for solving single-agent control problems.
- Combine ideas from convex optimization, kernel methods in machine learning, and variational MFG.
- Solve a class of non-potential MFGs by casting them as primal-dual pairs of monotone inclusions.

Slide 34

Nonlocal Mean Field Game System

$$
\begin{cases}
-\phi_t + H(x, \nabla\phi) = f_0\big(t, x, \rho(x,t)\big) + f_1\Big(t, x, \displaystyle\int_\Omega K(x,y)\,\rho(y,t)\,dy\Big) \\[4pt]
\rho_t - \nabla\cdot\big(\rho\,\nabla_p H(x,\nabla\phi)\big) = 0 \\[4pt]
\rho(x,0) = \rho_0(x), \quad \phi(x,1) = g_0\big(x, \rho(x,1)\big) + g_1\Big(x, \displaystyle\int_\Omega S(x,y)\,\rho(y,1)\,dy\Big)
\end{cases}
$$

- Initial-terminal conditions: $\rho_0(x)$ and $g_0\big(x,\rho(x,1)\big) + g_1\big(x, \int_\Omega S(x,y)\rho(y,1)\,dy\big)$
- Unknowns: $\rho, \phi : \Omega \times [0,1] \to \mathbb{R}$
- Nonlocal interactions: $f_1\big(t, x, \int_\Omega K(x,y)\rho(y,t)\,dy\big)$
- Local interactions: $f_0\big(t, x, \rho(x,t)\big)$

Slide 35

Nonlocal Interactions of MFG

Potential MFG:

$$
K_1(x,y) = \sum_i \lambda_i x_i y_i \quad \text{(maximal spread)}
$$
$$
K_2(x,y) = \prod_i \mu_i \exp\Big(-\frac{|x_i - y_i|^2}{2\sigma_i^2}\Big) \quad \text{(Gaussian repulsion)}
$$

Non-potential MFG:

$$
\eta_{\sigma_-,\sigma_+}(x) = e^{-\frac{x^2}{2\sigma_-^2}}\,\mathbb{1}_{x<0} + e^{-\frac{x^2}{2\sigma_+^2}}\,\mathbb{1}_{x\ge 0}
$$
$$
K_3(x,y) = \mu \prod_i \eta_{\sigma_{i,-},\,\sigma_{i,+}}(y_i - x_i), \qquad
K_4(x,y) = K_3(Q^{-1}x, Q^{-1}y),
$$

where $Q$ is a coordinate-transformation matrix.

Feature expansions:

$$
K(x,y) = \sum_{i,j} k_{i,j}\,\zeta_i(x)\,\zeta_j(y)
$$

$\zeta_j(x)$: feature functions that capture the interactions between agents, e.g., Fourier series, polynomials, etc. This is a computationally efficient way of modeling the nonlocal interactions (see the sketch below).
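A NumPy sketch of why the feature expansion is cheap (all names and the particular Fourier features are illustrative assumptions): the nonlocal term $\int_\Omega K(x,y)\rho(y)\,dy$ factors through $r$ generalized moments $\langle\zeta_j,\rho\rangle$, so one evaluation costs $O(Nr + r^2)$ instead of the $O(N^2)$ of a dense kernel on $N$ grid points.

```python
# Sketch: nonlocal term via a rank-r feature expansion of K, with a
# brute-force dense-kernel check.
import numpy as np

N, r = 200, 8                        # grid points, number of features
x = np.linspace(0, 1, N)
rho = np.exp(-50 * (x - 0.3) ** 2); rho /= rho.sum() / N   # density on [0, 1]
zeta = np.array([np.cos(2 * np.pi * j * x) for j in range(r)])  # Fourier features
k = np.diag(1.0 / (1.0 + np.arange(r)))                    # coefficients k_ij

moments = zeta @ rho / N             # <zeta_j, rho>, one pass over the grid
nonlocal_term = zeta.T @ (k @ moments)   # int K(x, y) rho(y) dy at every x

K = zeta.T @ k @ zeta                # dense N x N kernel, for comparison only
print(np.allclose(nonlocal_term, K @ rho / N))   # True
```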

Slide 36

Example 1: Maximal Spread

$$
K(x,y) = \sum_i \lambda_i x_i y_i
$$

Larger $\lambda_i$ prompts a larger spread in the $x_i$ direction.

Slide 37

Example 2: Inhomogeneous Gaussian Repulsion

$$
K_2(x,y) = \mu \prod_i \exp\Big(-\frac{|x_i - y_i|^2}{2\sigma_i^2}\Big)
$$

Smaller $\sigma_i$ yields stronger repulsion in the $x_i$ direction. Obstacles are modeled through a soft penalty $u(x)$; entering such a region incurs a relatively high cost:

$$
L(x,v) = \tfrac{1}{2}v^2 + u(x), \qquad u(x) = 100.
$$

Slide 38

Example 3: Mixed Couplings with Static Obstacles

- Both local and nonlocal interactions; uneven splitting caused by an asymmetric nonlocal kernel (non-potential).
- Density constraint over space $\times$ time: $\rho(x,t) \le c_{\mathrm{UpperBound}}$, $(x,t) \in \Omega \times [0,T]$.
- Obstacles (yellow) modeled via a density constraint: $\rho(x,t) = 0$ for $x$ in the obstacle region.

Slide 39

A mean-field game inverse problem

Lisang Ding, Wuchen Li, Stan Osher, Wotao Yin

2020.07.04

Slide 40

Introduction

In the Benamou-Brenier format, a mean-field game can be formulated as:

$$
\begin{aligned}
\underset{\rho, v}{\text{minimize}}\quad & \int_0^T\!\Big(\int_{\mathbb{T}^d} \tfrac{1}{2}\rho\, v^\top G_M v\,dx + F(\rho(\cdot,t))\Big)dt + G(\rho(\cdot,T)) \\
\text{subject to}\quad & \rho_t + \nabla\cdot(\rho v) = 0, \qquad \rho(\cdot,0) = \rho_0. 
\end{aligned}
\tag{1}
$$

We call (1) the forward problem, and call the problem of recovering the ground metric $G_M$ and the cost functional $F$ from observations of $\rho$ and $v$ the inverse problem. In particular, in the maximum mean discrepancy (MMD) case,

$$
F(\rho(\cdot,t)) = \int_{\mathbb{T}^d} \tfrac{1}{2}\rho(x,t)\,(K*\rho)(x,t)\,dx. \tag{2}
$$

We are interested in the convolution kernel $K$ in (2).
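A small NumPy sketch of evaluating the MMD interaction (2) on a 1-d torus (grid size, kernel, and density below are illustrative choices): since $K*\rho$ is a circular convolution, it can be computed with the FFT.

```python
# Sketch: F(rho) = (1/2) int rho (K * rho) dx on the periodic unit interval.
import numpy as np

n = 128
x = np.linspace(0, 1, n, endpoint=False)
dx = 1.0 / n
K = np.exp(-np.minimum(x, 1 - x) ** 2 / 0.02)   # periodic bump kernel
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)         # probability density (integrates to 1)

conv = np.real(np.fft.ifft(np.fft.fft(K) * np.fft.fft(rho))) * dx  # (K * rho)(x)
F = 0.5 * np.sum(rho * conv) * dx
print(F)
```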

Slide 41

Introduction

Observations:
- $\rho$: density distribution of the players during the game
- $v$: velocity field of the agents

Target parameters (parameters to be recovered):
- Ground metric $G_M$: a metric function on the sample space, depicting the geometric contour of the transport space.
- Convolution kernel $K$ in MMD: depicts the pairwise impact between the particles, which depends only on the relative distance between any two players.

Slide 42

Formulations

The inverse problem can be formulated as:

$$
\begin{aligned}
\underset{\theta,\rho,v}{\text{minimize}}\quad &
\frac{\alpha}{2}\|\rho - \hat\rho\|^2 + \frac{\beta}{2}\|v - \hat v\|^2
+ \frac{\alpha_0}{2}\big(\|\rho_0 - \hat\rho_0\|^2 + \|\rho_T - \hat\rho_T\|^2\big)
+ \frac{\gamma}{p}\|\nabla\theta\|_p^p \\
\text{subject to}\quad &
\nabla\cdot(\rho v) + \rho_t = 0 \\
& (G_M v)_t + \nabla\Big(\tfrac{1}{2} v^\top G_M v - \tfrac{\delta F(\rho)}{\delta\rho}\Big) = 0 \\
& \frac{\partial (G_M v)_i}{\partial x_j} = \frac{\partial (G_M v)_j}{\partial x_i}, \quad i \ne j \\
& \int_{s_i(\hat x_i,\cdot)} (G_M v)_i\,dS_x = 0, \quad \hat x_i \in \mathbb{T}^{d-1}, \quad i = 1, 2, \dots, d.
\end{aligned}
\tag{3}
$$

The objective function consists of the $L^2$ distances between the density distribution, the velocity field, and their observations, plus an additional $H^p$ norm on the target parameter $\theta$. The constraints restrict the solution to satisfy the MFG PDEs.

Slide 43

Ground Metric Recreation (2D)

We solve the inverse problem (3) with a primal-dual algorithm and Bregman iteration. Numerical results from the primal-dual algorithm:

[Figure: ground metric recreated in a 2-dimensional basis under $H^2$ regularization. Parameters: $\alpha = 1000/\|\hat\rho\|^2$, $\beta = 1/\|\hat v\|^2$, $\alpha_0 = 0$, $\gamma = 10^{-2}$. From left to right: $g_0$ learned from the observations, the real $\bar g_0$, and the absolute difference $|g_0 - \bar g_0|$.]

Slide 44

Convolution Kernel Recreation (2D)

[Figure: convolution kernel recreated in a 2-dimensional basis under $H^2$ regularization. Parameters: $\alpha = 100/\|\hat\rho\|^2$, $\beta = 1/\|\hat v\|^2$, $\alpha_0 = 0$, $\gamma = 10^{-3}$. From left to right: kernel $K(x,0)$ learned from the observations, the real kernel $K_r(x,0)$, and the absolute difference $|K(x,0) - K_r(x,0)|$.]

Slide 45

Noise on the Observations (1D)

The observations $\hat\rho$, $\hat v$ were corrupted to different extents by i.i.d. additive noise on each pixel:

$$
\epsilon_\rho \sim \epsilon^*\,\|\hat\rho\|\cdot U[-0.5, 0.5] \;\text{ i.i.d.}, \qquad
\epsilon_v \sim \epsilon^*\,\|\hat v\|\cdot U[-0.5, 0.5]^d \;\text{ i.i.d.}, \tag{4}
$$

where $\epsilon^* > 0$ is the noise factor.

[Figure: the noised $\hat\rho$ (first row) and $\hat v$ (second row) at noise levels $\epsilon^* = 0.1, 0.4, 1$.]

Slide 46

Results for the Noised Case (1D)

Select $\alpha = 1/\|\hat\rho\|^2$, $\alpha_0 = 0$, $\beta = 1/\|\hat v\|^2$, and different $\gamma$ to recreate the ground metric in the 1-dimensional case.

[Figure: recreated ground metric $G_M$ when the observations were corrupted with noise. In each sub-figure, the red curve is the learned ground metric and the blue curve is the real metric. From top to bottom, the noise factor $\epsilon^* = 0.1, 0.4, 1$; from left to right, $\gamma$ varies from $10^{-8}$ to $10^{-3}$.]

Slide 47

Bregman Iteration (1D)

[Figure: the recreated metric kernel from noised data ($\epsilon^* = 1$) over the first 7 Bregman iterations, with $\gamma = 10^{-2}$. The last panel shows the ground metric recreated by the primal-dual algorithm with $\gamma = 10^{-4}$ taken as an approximately optimal value.]

[Figure: the recreated convolution kernel from noised data ($\epsilon^* = 1$) over the first 11 Bregman iterations, with $\gamma = 10^{-1}$. The last panel shows the convolution kernel recreated by the primal-dual algorithm with $\gamma = 10^{-3}$ taken as an approximately optimal value.]

Slide 48

Controlling Propagation of Epidemics: Mean-Field SIR Games

Stanley Osher

Joint work with Wonjun Lee, Siting Liu, Hamidou Tembine, and Wuchen Li

Slide 49

Classic Epidemic Model

The classical epidemic model is the SIR model (Kermack and McKendrick, 1927):

$$
\begin{cases}
\dfrac{dS}{dt} = -\beta S I \\[4pt]
\dfrac{dI}{dt} = \beta S I - \gamma I \\[4pt]
\dfrac{dR}{dt} = \gamma I
\end{cases}
$$

where $S, I, R : [0,T] \to [0,1]$ represent the densities of the susceptible, infected, and recovered populations, respectively, at time $t$. The nonnegative constants $\beta$ and $\gamma$ represent the rates of susceptible becoming infected and of infected becoming recovered.
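A short sketch of integrating this system (forward Euler with illustrative rates; the specific values of beta, gamma, and the initial densities are assumptions):

```python
# Sketch: Euler integration of the classic SIR model.
beta, gamma, T, n = 0.5, 0.1, 100.0, 10000
dt = T / n
S, I, R = 0.99, 0.01, 0.0            # initial densities, S + I + R = 1
for k in range(n):
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(S, I, R)                        # S + I + R is conserved at 1
```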

Slide 50

Spatial SIR

To model the spatial effect of virus spreading, the spatial SIR model is considered:

$$
\begin{cases}
\partial_t \rho_S(t,x) + \beta\,\rho_S(t,x)\displaystyle\int_\Omega K(x,y)\,\rho_I(t,y)\,dy - \frac{\eta_S^2}{2}\Delta\rho_S(t,x) = 0 \\[4pt]
\partial_t \rho_I(t,x) - \beta\,\rho_I(t,x)\displaystyle\int_\Omega K(x,y)\,\rho_S(t,y)\,dy + \gamma\,\rho_I(t,x) - \frac{\eta_I^2}{2}\Delta\rho_I(t,x) = 0 \\[4pt]
\partial_t \rho_R(t,x) - \gamma\,\rho_I(t,x) - \frac{\eta_R^2}{2}\Delta\rho_R(t,x) = 0
\end{cases}
$$

Here $\Omega$ is a given spatial domain and $K(x,y)$ is a symmetric positive definite kernel modeling physical distancing; e.g., $\int K\,d\rho_I$ is the exposure to infectious agents.

Slide 51

[Figure: $v_I$, the velocity field of the infected population.]

Slide 52

Variational Formulation

Denote $m_i = \rho_i v_i$. Define the Lagrangian functional for the mean field game SIR problem by

$$
L\big((\rho_i, m_i, \phi_i)_{i=S,I,R}\big) = P\big((\rho_i, m_i)_{i=S,I,R}\big)
- \int_0^T\!\!\int_\Omega \sum_{i=S,I,R} \phi_i\Big(\partial_t\rho_i + \nabla\cdot m_i - \frac{\eta_i^2}{2}\Delta\rho_i\Big)\,dx\,dt
+ \int_0^T\!\!\int_\Omega \beta\,\phi_I\,\rho_I\,(K*\rho_S) - \beta\,\phi_S\,\rho_S\,(K*\rho_I) + \gamma\,\rho_I\,(\phi_R - \phi_I)\,dx\,dt,
$$

where $P$ denotes the objective functional of the underlying minimization problem. Using this Lagrangian functional, we convert the minimization problem into a saddle-point problem:

$$
\inf_{(\rho_i, m_i)_{i=S,I,R}}\;\sup_{(\phi_i)_{i=S,I,R}}\; L\big((\rho_i, m_i, \phi_i)_{i=S,I,R}\big).
$$

Slide 53

Algorithm

Algorithm: PDHG for the mean field game SIR system
Input: $\rho_i(0,\cdot)$ $(i = S, I, R)$
Output: $\rho_i, m_i, \phi_i$ $(i = S, I, R)$ for $x \in \Omega$, $t \in [0,T]$

While relative error > tolerance:
$$
\begin{aligned}
\rho_i^{(k+1)} &= \operatorname*{argmin}_\rho\; L\big(\rho, m_i^{(k)}, \phi_i^{(k)}\big) + \tfrac{1}{2\tau_i}\big\|\rho - \rho_i^{(k)}\big\|_{L^2}^2 \\
m_i^{(k+1)} &= \operatorname*{argmin}_m\; L\big(\rho^{(k+1)}, m, \phi_i^{(k)}\big) + \tfrac{1}{2\tau_i}\big\|m - m_i^{(k)}\big\|_{L^2}^2 \\
\phi_i^{(k+\frac12)} &= \operatorname*{argmax}_\phi\; L\big(\rho^{(k+1)}, m_i^{(k+1)}, \phi\big) - \tfrac{1}{2\sigma_i}\big\|\phi - \phi_i^{(k)}\big\|_{H^2}^2 \\
\phi_i^{(k+1)} &= 2\phi_i^{(k+\frac12)} - \phi_i^{(k)}
\end{aligned}
$$
end
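For intuition, a self-contained sketch of the same primal descent / dual ascent / extrapolation pattern on a toy saddle problem; everything here is an illustrative stand-in, not the MFG-SIR discretization.

```python
# Sketch: generic PDHG on the toy saddle problem
#   min_x max_y <Ax - b, y> - (1/2)||y||^2,  equivalent to min_x (1/2)||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
tau = sigma = 0.9 / np.linalg.norm(A, 2)   # steps with tau * sigma * ||A||^2 < 1

x, y = np.zeros(10), np.zeros(30)
x_bar = x.copy()
for k in range(5000):
    y = (y + sigma * (A @ x_bar - b)) / (1 + sigma)  # proximal dual ascent
    x_new = x - tau * (A.T @ y)                      # proximal primal descent
    x_bar = 2 * x_new - x                            # extrapolation, as on the slide
    x = x_new

x_star = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares reference
print(np.linalg.norm(x - x_star))                    # -> small
```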

Slide 54

Example I

[Figure: simulation with a small recovery rate.]

Slide 55

Example II

[Figure: simulation with a large recovery rate.]

Slide 56

Example III

[Figure: simulation with a small recovery rate.]

Slide 57

Discussions

Importance of spatial SIR variational problems:
- Consider more population statuses, going beyond S, I, R.
- Construct discrete spatial domain models, including airports, train stations, hospitals, schools, etc.
- Propose inverse mean field SIR problems: learn the parameters in the model from daily-life data.
- Combine mean field game SIR models with AI and machine learning algorithms, including APAC-Net, neural variational ODEs, neural Fokker-Planck equations, etc.