
Transport information dynamics with applications

Wuchen Li
January 28, 2021


In this talk, I briefly review several dynamical equations arising in optimal transport, information geometry, and mean field game modeling.



Transcript

  1. Motivation. Transport information dynamics are considered in optimal transport, information
     geometry, and mean field games. Nowadays they play vital roles in physics, pandemic control,
     5G communications, Lie group control, and finance, with applications in AI optimization and
     Bayesian sampling problems.
  2. Data-poor situations: scientific computing; AI: optimization & inference; social science;
     pandemic control; robotics; & mean field games.
  3. In this talk, we will design fast numerics for the mean field game system, focusing on the
     following examples:
     - Mean field games;
     - Earth Mover's distance;
     - Schrödinger bridge problem.
  4. Mean field: one to all, all for one.
     - Strategy set: $S = \{C, D\}$;
     - Players: infinitely many, i.e. players form a distribution $(\rho_C, \rho_D)$ with
       $\rho_C + \rho_D = 1$;
     - Payoffs: $F(\rho) = (F_C(\rho), F_D(\rho))^T = W\rho$, where
       $W = \begin{pmatrix} 3 & 0 \\ 2 & 2 \end{pmatrix}$,
       meaning a deer is worth 6 and a rabbit is worth 2.
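As a quick illustration (my own addition, not from the slides), a few lines of Python evaluate the mean field payoff $F(\rho) = W\rho$ for different population states; the mixed state where both strategies earn the same payoff is the interior equilibrium of this stag-hunt-type game.

```python
import numpy as np

# Payoff matrix W from the slide (a deer is worth 6, a rabbit is worth 2).
W = np.array([[3.0, 0.0],
              [2.0, 2.0]])

def payoff(rho):
    """Mean field payoff F(rho) = W @ rho for a population state rho = (rho_C, rho_D)."""
    rho = np.asarray(rho, dtype=float)
    assert np.isclose(rho.sum(), 1.0), "rho must be a probability vector"
    return W @ rho

print(payoff([1.0, 0.0]))    # all cooperate (hunt deer):   F = (3, 2)
print(payoff([0.0, 1.0]))    # all defect (hunt rabbits):   F = (0, 2)
print(payoff([2/3, 1/3]))    # mixed state where both strategies pay 2
```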
  5. Finite player potential games. All players minimize the potential:
     $$\inf_{X,u}\ \frac{1}{N}\int_0^t \Big(\sum_{i=1}^N L(X_i(s), u_i(s)) - F(X_1(s),\cdots,X_N(s))\Big)ds
       + G(X_1(0),\cdots,X_N(0)),$$
     where $F$, $G$ are given potential and terminal functions, and the infimum is taken among all
     players' control (strategy) vectors $u_i(s)$ and positions $X_i(s)$:
     $$\frac{d}{ds}X_i = u_i(s), \quad 0\le s\le t, \quad X_i(t) = x_i.$$
  6. Mean field potential games. If the number of players goes to infinity, and $F$, $G$ satisfy
     certain symmetry properties, then one approximates the game by the following minimization
     problem:
     $$\inf_{\rho,u}\ \int_0^t\Big\{\int_{\mathbb{T}^d} L(x, u(s,x))\,\rho(s,x)\,dx - F(\rho(s,\cdot))\Big\}ds
       + G(\rho(0,\cdot)),$$
     where the infimum is taken among all vector fields $u(s,x)$ and densities $\rho(s,x)$:
     $$\frac{\partial\rho}{\partial s} + \nabla\cdot(\rho u) = 0, \quad 0\le s\le t, \quad \rho(t,\cdot)=\rho(\cdot).$$
  7. Analogs. [Figure: density snapshots at $t = 0$ and $t = 1$.] In the above two systems, many
     similar structures have been discovered:
     - Primal-dual PDEs [Lasry, Lions];
     - Hamilton-Jacobi equations on the probability set [Gangbo].
  8. Dynamics. A typical time-dependent MFG system (derived from the mean field potential game on
     slide 6) has the form
     $$\begin{cases}
       -\phi_t - \frac{\nu}{2}\Delta\phi + H(x, D_x\phi(x,t), \rho(x,t), t) = 0,\\
       \rho_t - \frac{\nu}{2}\Delta\rho - \mathrm{div}_x\big(\rho\, D_pH(x, D_x\phi(x,t), \rho(x,t), t)\big) = 0,\\
       \rho \ge 0, \quad \int_{\mathbb{T}^d}\rho(x,t)\,dx = 1, \quad t\in[0,T],\\
       \rho(x,T) = \rho_T(x), \quad \phi(x,0) = \phi_0(x, \rho(x,0)), \quad x\in\mathbb{T}^d.
     \end{cases}$$
     Given: $H : \mathbb{T}^d\times\mathbb{R}^d\times X\times\mathbb{R}\to\mathbb{R}$ is a
     Hamiltonian, periodic in space and convex in momentum, where $X = \mathbb{R}_+$ or
     $X = L^1(\mathbb{T}^d;\mathbb{R}_+)$ or $X = \mathbb{R}_+\times L^1(\mathbb{T}^d;\mathbb{R}_+)$;
     $H = L^*$; $\nu \ge 0$ is a diffusion parameter; $\phi_0 : \mathbb{T}^d\times X\to\mathbb{R}$ and
     $\rho_T : \mathbb{T}^d\to\mathbb{R}$ are given initial-terminal conditions.
     Unknowns: $\phi, \rho : \mathbb{T}^d\times[0,T]\to\mathbb{R}$.
  9. Goal. We plan to numerically solve mean field games, with applications in Lie group control,
     inverse problems, and pandemics.
     Difficulties:
     - Curse of dimensionality (infinite dimension);
     - Structure-keeping spatial discretization (time reversible).
     Main tools:
     - Hopf-Lax formula in probability density space + AI;
     - Primal-dual algorithms;
     - Neural ODEs;
     - Generative adversarial networks;
     - Markov chain Monte Carlo methods.
  10. Minimal flux problem. Denote $m(s,x) = \rho(s,x)u(s,x)$. The variational problem becomes
     $$\inf_{\rho,m}\ \int_0^t\Big\{\int_{\mathbb{T}^d} L\Big(x, \frac{m(s,x)}{\rho(s,x)}\Big)\rho(s,x)\,dx
       - F(\rho(s,\cdot))\Big\}ds + G(\rho(0,\cdot)),$$
     where the infimum is taken among all flux functions $m(s,x)$ and densities $\rho(s,x)$:
     $$\frac{\partial\rho}{\partial s} + \nabla\cdot m = 0, \quad 0\le s\le t, \quad \rho(t,\cdot)=\rho(\cdot).$$
  11. Finite volume discretization. To mimic the minimal flux problem, we consider the discrete
     divergence of the flux function,
     $$\mathrm{div}(m)\big|_i = \frac{1}{\Delta x}\sum_{v=1}^d\big(m_{i+\frac{1}{2}e_v} - m_{i-\frac{1}{2}e_v}\big),$$
     and the cost functional
     $$\mathcal{L}(m,\rho) = \begin{cases}
        \sum_{i+\frac{e_v}{2}\in E} L\Big(\dfrac{m_{i+\frac{1}{2}e_v}}{g_{i+\frac{1}{2}e_v}}\Big)\, g_{i+\frac{1}{2}e_v}
          & \text{if } g_{i+\frac{e_v}{2}} > 0;\\
        0 & \text{if } g_{i+\frac{e_v}{2}} = 0 \text{ and } m_{i+\frac{e_v}{2}} = 0;\\
        +\infty & \text{otherwise,}
     \end{cases}$$
     where $g_{i+\frac{1}{2}e_v} := \frac{1}{2}(\rho_i + \rho_{i+e_v})$ is the discrete probability on
     the edge $i+\frac{e_v}{2}\in E$. The time interval $[0,1]$ is divided into $N$ intervals,
     $\Delta t = 1/N$.
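As a concrete one-dimensional sketch of this discretization (my own addition; the periodic grid, function names, and quadratic $L$ are illustrative choices, not part of the slides), the snippet below implements the discrete divergence, the edge densities $g_{i+1/2}$, and the edge-wise cost.

```python
import numpy as np

def discrete_div(m, dx):
    """div(m)|_i = (m_{i+1/2} - m_{i-1/2}) / dx on a periodic 1D grid (m[i] stores m_{i+1/2})."""
    return (m - np.roll(m, 1)) / dx

def edge_density(rho):
    """g_{i+1/2} = (rho_i + rho_{i+1}) / 2, the discrete probability on each edge."""
    return 0.5 * (rho + np.roll(rho, -1))

def discrete_cost(m, rho, L=lambda u: u ** 2):
    """Edge-wise cost sum_e L(m_e / g_e) g_e, with 0 when g_e = m_e = 0 and +inf otherwise."""
    g = edge_density(rho)
    total = 0.0
    for m_e, g_e in zip(m, g):
        if g_e > 0:
            total += L(m_e / g_e) * g_e
        elif m_e != 0:
            return np.inf
    return total

rho = np.array([0.1, 0.4, 0.4, 0.1])
m = np.array([0.0, 0.05, -0.05, 0.0])
print(discrete_div(m, dx=0.25), discrete_cost(m, rho))
```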
  12. Computational mean field games. Consider the discrete optimal control system:
     $$\tilde U(t,\rho) := \inf_{m,\rho}\ \sum_{n=1}^N \mathcal{L}(m^n, \rho^n) - \sum_{n=1}^N F(\rho^n) + G(\rho^0),$$
     where the minimizer is taken among $\{\rho^n_i\}$, $\{m^n_{i+\frac{e_v}{2}}\}$, such that
     $$\rho^{n+1}_i - \rho^n_i + \Delta t\cdot\mathrm{div}(m)\big|_i = 0, \qquad \rho^N_i = \rho_i.$$
  13. Primal-dual structure. Let $H$ be the Legendre transform of $L$. With dual variables $\phi^n_i$,
     $$\sup_{\phi}\inf_{m,\rho}\ \Big\{\sum_n \mathcal{L}(m^n,\rho^n)\,\Delta t - \sum_n \Delta t\, F(\{\rho^n_i\})
        + G(\{\rho^0_i\}) + \sum_n\sum_i \phi^n_i\big(\rho^{n+1}_i - \rho^n_i + \Delta t\cdot\mathrm{div}(m)\big|_i\big)\Big\}$$
     $$= \sup_{\phi}\inf_{\rho}\ \Big\{\inf_m \sum_n\Big(\mathcal{L}(m^n,\rho^n)
        + \sum_{i+\frac{e_v}{2}\in E}\frac{1}{\Delta x}(\phi^n_i - \phi^n_{i+e_v})\, m_{i+\frac{1}{2}e_v}\Big)\Delta t
        - \sum_n \Delta t\, F(\{\rho^n_i\}) + G(\{\rho^0_i\}) + \sum_n\sum_i \phi^n_i\big(\rho^{n+1}_i - \rho^n_i\big)\Big\}$$
     $$= \sup_{\phi}\inf_{\rho}\ \Big\{-\sum_n\sum_{i+\frac{e_v}{2}\in E} H\Big(\frac{1}{\Delta x}(\phi^n_i - \phi^n_{i+e_v})\Big)
        g_{i+\frac{1}{2}e_v}\,\Delta t - \sum_n \Delta t\, F(\{\rho^n_i\}) + G(\{\rho^0_i\})
        + \sum_n\sum_i \phi^n_i\big(\rho^{n+1}_i - \rho^n_i\big)\Big\}$$
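For a concrete instance of the Legendre transform step (my own worked example, not on the slide): with the kinetic Lagrangian $L(u) = \|u\|^2$ used on the next slides,
$$H(p) = \sup_u\big(\langle p, u\rangle - \|u\|^2\big) = \tfrac{1}{4}\|p\|^2,$$
so the inner infimum over $m$ above produces the explicit term
$$-\sum_n\sum_{i+\frac{e_v}{2}\in E}\tfrac{1}{4}\Big(\tfrac{\phi^n_i - \phi^n_{i+e_v}}{\Delta x}\Big)^2 g_{i+\frac{1}{2}e_v}\,\Delta t .$$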
  14. Example 1: kinetic energy. A typical Lagrangian is the kinetic energy $L(x,u) = \|u\|^2$.
     Consider
     $$\inf_{m,\rho}\ \int_0^t\int_{\mathbb{T}^d} \frac{\|m(s,x)\|^2}{\rho(s,x)}\,dx - F(\rho(s,\cdot))\,ds + G(\rho(0,\cdot))$$
     such that
     $$\frac{\partial\rho(s,x)}{\partial s} + \nabla\cdot m(s,x) = 0, \qquad \rho(t,\cdot) = \rho.$$
  15. Transport metric.
     $$d_W(\rho,\mu)^2 = \inf_{\rho_s, u_s}\Big\{\int_0^1\int_{\mathbb{R}} u_s^2\,\rho_s\,dx\,dt :\ 
       \partial_s\rho_s + \nabla\cdot(\rho_s u_s) = 0,\ \rho_0 = \rho,\ \rho_1 = \mu\Big\}$$
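As a sanity check on this metric (my own addition), the snippet below estimates $d_W$ in one dimension using the standard static quantile/sorting formula rather than the dynamic formulation above, and compares it with the closed form for two Gaussians, $d_W(\mathcal{N}(m_1,\sigma_1^2),\mathcal{N}(m_2,\sigma_2^2))^2 = (m_1-m_2)^2 + (\sigma_1-\sigma_2)^2$.

```python
import numpy as np

def w2_1d(xs, ys):
    """1D Wasserstein-2 distance between two equal-size samples via sorted (monotone) matching.
    This is the static quantile shortcut, not the dynamic formulation on the slide."""
    xs, ys = np.sort(np.asarray(xs, float)), np.sort(np.asarray(ys, float))
    return np.sqrt(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 200_000)   # N(0, 1)
q = rng.normal(2.0, 0.5, 200_000)   # N(2, 0.25)
print(w2_1d(p, q) ** 2)             # approx (0 - 2)^2 + (1 - 0.5)^2 = 4.25
```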
  16. Case 1. [Figure: the density $\rho$, the optimal $\phi$, and $\nabla_x\phi$ (three panels).]
  17. Case 1: evolution of density. [Figure: density snapshots at $t = 0, 0.2, 0.4, 0.6, 0.8, 1$.]
  18. Case 2. [Figure: the density $\rho$, the optimal $\phi$, and $\nabla_x\phi$ (three panels).]
  19. Case 2: evolution of density. [Figure: density snapshots at $t = 0, 0.2, 0.4, 0.6, 0.8, 1$.]
  20. Robots and UAVs. Robot experiment description: Step 1. Eight robots are initialized randomly
     in the region. Following the MFG, they move towards the center, forming a Gaussian
     distribution. Step 2. The swarm is separated into two parts, as two Gaussian distributions at
     the boundary of the region. Theoretically, the number of robots can be infinite; in practice,
     the method can also be applied to UAVs. This experiment was carried out at the Georgia
     Institute of Technology's Robotarium.
  21. Example 2: Earth Mover's distance. Special attention is paid to the homogeneous degree-one
     Lagrangian $L(x,u) = \|u\|$. Consider
     $$\inf_{m,\rho}\ \int_0^1\int_{\mathbb{T}^d} \|m(t,x)\|\,dx\,dt$$
     such that
     $$\frac{\partial\rho(t,x)}{\partial t} + \nabla\cdot m(t,x) = 0, \qquad \rho(0,\cdot) = \rho^0,\ \rho(1,\cdot) = \rho^1.$$
     By Jensen's inequality in time, letting $\tilde m(x) = \int_0^1 m(t,x)\,dt$, a minimizer is
     attained by the time-independent optimization
     $$\inf_{\tilde m}\Big\{\int_{\mathbb{T}^d}\|\tilde m(x)\|\,dx :\ \nabla\cdot\tilde m(x) + \rho^1(x) - \rho^0(x) = 0\Big\}.$$
     This is an $L_1$ minimization problem, which shares many similarities with the one in
     compressed sensing.
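To make the $L_1$ minimization concrete (my own sketch, using a 1D grid of spacing one and zero-flux boundaries, none of which is specified on the slide), the discretized problem can be handed directly to a linear programming solver after the standard split $\tilde m = m_+ - m_-$.

```python
import numpy as np
from scipy.optimize import linprog

def emd_1d(p0, p1):
    """EMD as the L1-minimal flux problem on a 1D grid:
    minimize sum_e |m_e|  subject to  div(m) + p1 - p0 = 0 (zero flux at the boundary)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n, e = len(p0), len(p0) - 1
    D = np.zeros((n, e))                  # discrete divergence: div(m)_i = m_i - m_{i-1}
    for j in range(e):
        D[j, j], D[j + 1, j] = 1.0, -1.0
    A_eq = np.hstack([D, -D])             # acts on the split variable [m_plus, m_minus]
    c = np.ones(2 * e)                    # L1 objective: sum(m_plus + m_minus)
    res = linprog(c, A_eq=A_eq, b_eq=p0 - p1, bounds=(0, None), method="highs")
    return res.fun

# One unit of mass moved three cells to the right: EMD = 3.
print(emd_1d([1, 0, 0, 0, 0], [0, 0, 0, 1, 0]))
```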
  22. L1 primal-dual system. In this setting, the discretized minimization problem is
     $$\text{minimize}_m\ \|m\| \quad\text{subject to}\quad \mathrm{div}(m) + p^1 - p^0 = 0.$$
     We solve it by looking at its saddle point structure. Denote $\Phi = (\Phi_i)_{i=1}^N$ as a
     Lagrange multiplier:
     $$\min_m\max_\Phi\ \|m\| + \Phi^T\big(\mathrm{div}(m) + p^1 - p^0\big).$$
     The iteration steps are as follows (using Chambolle and Pock):
     $$m^{k+1} = \arg\min_m\ \|m\| + (\Phi^k)^T\mathrm{div}(m) + \frac{\|m - m^k\|_2^2}{2\mu};$$
     $$\Phi^{k+1} = \arg\max_\Phi\ \Phi^T\big(\mathrm{div}(2m^{k+1} - m^k) + p^1 - p^0\big) - \frac{\|\Phi - \Phi^k\|_2^2}{2\tau}.$$
  23. Algorithm: two lines of code. Primal-dual method for EMD:
     1. For $k = 1, 2, \cdots$, iterate until convergence:
     2. $m^{k+1}_{i+\frac{1}{2}} = \mathrm{shrink}_2\big(m^k_{i+\frac{1}{2}} + \mu\nabla\Phi^k_{i+\frac{1}{2}},\ \mu\big)$;
     3. $\Phi^{k+1}_i = \Phi^k_i + \tau\big\{\mathrm{div}(2m^{k+1}_i - m^k_i) + p^1_i - p^0_i\big\}$;
     4. End.
     Here the shrink operator for the Euclidean metric is
     $$\mathrm{shrink}_2(y,\alpha) := \frac{y}{\|y\|_2}\max\{\|y\|_2 - \alpha,\ 0\}, \quad y\in\mathbb{R}^2.$$
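Below is a one-dimensional transcription of this two-line algorithm (my own sketch, not the authors' code); in 1D the $\mathrm{shrink}_2$ operator reduces to scalar soft-thresholding, and the grid spacing, step sizes, and boundary treatment are illustrative choices.

```python
import numpy as np

def shrink(y, alpha):
    """1D analogue of shrink_2: soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

def emd_pdhg_1d(p0, p1, mu=0.3, tau=0.3, iters=20000):
    """Chambolle-Pock iteration for EMD on a 1D grid with spacing 1 and zero boundary flux."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    m = np.zeros(len(p0) - 1)                             # fluxes m_{i+1/2} on interior edges
    phi = np.zeros(len(p0))                               # dual variable Phi_i on cells
    grad = lambda f: f[1:] - f[:-1]                       # gradient: cells -> edges
    div = lambda g: np.diff(g, prepend=0.0, append=0.0)   # divergence: edges -> cells
    for _ in range(iters):
        m_old = m
        m = shrink(m + mu * grad(phi), mu)                # step 2 of the slide
        phi = phi + tau * (div(2 * m - m_old) + p1 - p0)  # step 3 of the slide
    return np.sum(np.abs(m))

print(emd_pdhg_1d([1, 0, 0, 0, 0], [0, 0, 0, 1, 0]))      # approaches 3.0
```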
  24. Classical epidemic model. The classical epidemic model is the SIR model (Kermack and
     McKendrick, 1927):
     $$\begin{cases}
        \dfrac{dS}{dt} = -\beta SI\\[4pt]
        \dfrac{dI}{dt} = \beta SI - \gamma I\\[4pt]
        \dfrac{dR}{dt} = \gamma I
     \end{cases}$$
     where $S, I, R : [0,T]\to[0,1]$ represent the densities of the susceptible, infected, and
     recovered populations, respectively, at time $t$. The nonnegative constants $\beta$ and
     $\gamma$ represent the rates of susceptibles becoming infected and of the infected becoming
     recovered.
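As a numerical illustration (my own addition; the slide does not give parameter values), the SIR system can be integrated directly, e.g. with $\beta = 0.3$ and $\gamma = 0.1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # illustrative infection / recovery rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0], max_step=1.0)
S, I, R = sol.y[:, -1]
print(f"t = 160: S = {S:.3f}, I = {I:.3f}, R = {R:.3f}")   # most of the population recovers
```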
  25. Spatial SIR. To model the spatial effect of virus spreading, the spatial SIR model is
     considered:
     $$\begin{cases}
        \partial_t\rho_S(t,x) + \beta\rho_S(t,x)\displaystyle\int_\Omega K(x,y)\rho_I(t,y)\,dy - \frac{\eta_S^2}{2}\Delta\rho_S(t,x) = 0\\
        \partial_t\rho_I(t,x) - \beta\rho_I(t,x)\displaystyle\int_\Omega K(x,y)\rho_S(t,y)\,dy + \gamma\rho_I(t,x) - \frac{\eta_I^2}{2}\Delta\rho_I(t,x) = 0\\
        \partial_t\rho_R(t,x) - \gamma\rho_I(t,x) - \frac{\eta_R^2}{2}\Delta\rho_R(t,x) = 0
     \end{cases}$$
     Here $\Omega$ is a given spatial domain and $K(x,y)$ is a symmetric positive definite kernel
     modeling physical distancing, e.g. $\int K\,d\rho_I$ is the exposure to infectious agents.
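A rough one-dimensional explicit-Euler sketch of these dynamics is given below (my own discretization; the Gaussian kernel, periodic boundaries, and all parameter values are illustrative stand-ins, and a single diffusion constant replaces $\eta_S, \eta_I, \eta_R$).

```python
import numpy as np

n, dt, steps = 100, 1e-4, 2000
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)   # stand-in distancing kernel K(x, y)
beta, gamma, eta2 = 3.0, 1.0, 0.01                   # illustrative rates and diffusion

def lap(f):                                          # 1D Laplacian with periodic boundaries
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx ** 2

rho_S = np.ones(n)                                   # uniform susceptible density
rho_I = 0.1 * np.exp(-(x - 0.5) ** 2 / 0.01)         # infection seeded in the middle
rho_R = np.zeros(n)
for _ in range(steps):
    exposure = beta * (K @ rho_I) * dx               # beta * int K(x, y) rho_I(t, y) dy
    dS = -rho_S * exposure + 0.5 * eta2 * lap(rho_S)
    dI = beta * rho_I * (K @ rho_S) * dx - gamma * rho_I + 0.5 * eta2 * lap(rho_I)
    dR = gamma * rho_I + 0.5 * eta2 * lap(rho_R)
    rho_S, rho_I, rho_R = rho_S + dt * dS, rho_I + dt * dI, rho_R + dt * dR

print(float((rho_R * dx).sum()))                     # recovered mass after steps * dt time units
```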
  26. Variational formulation. Denote $m_i = \rho_i v_i$. Define the Lagrangian functional for the
     mean field game SIR problem by
     $$\mathcal{L}\big((\rho_i, m_i, \phi_i)_{i=S,I,R}\big) = P\big((\rho_i, m_i)_{i=S,I,R}\big)
       - \int_0^T\!\!\int_\Omega \sum_{i=S,I,R}\phi_i\Big(\partial_t\rho_i + \nabla\cdot m_i - \frac{\eta_i^2}{2}\Delta\rho_i\Big)dx\,dt$$
     $$\qquad + \int_0^T\!\!\int_\Omega \beta\phi_I\,\rho_I\, K*\rho_S - \beta\phi_S\,\rho_S\, K*\rho_I
       + \gamma\rho_I(\phi_R - \phi_I)\,dx\,dt.$$
     Using this Lagrangian functional, we convert the minimization problem into a saddle point
     problem:
     $$\inf_{(\rho_i, m_i)_{i=S,I,R}}\ \sup_{(\phi_i)_{i=S,I,R}}\ \mathcal{L}\big((\rho_i, m_i, \phi_i)_{i=S,I,R}\big).$$
  27. Algorithm: PDHG for the mean field game SIR system.
     Input: $\rho_i(0,\cdot)$ ($i = S, I, R$).
     Output: $\rho_i, m_i, \phi_i$ ($i = S, I, R$) for $x\in\Omega$, $t\in[0,T]$.
     While relative error > tolerance:
       $\rho_i^{(k+1)} = \arg\min_\rho\ \mathcal{L}(\rho, m_i^{(k)}, \phi_i^{(k)}) + \frac{1}{2\tau_i}\|\rho - \rho_i^{(k)}\|^2_{L^2}$
       $m_i^{(k+1)} = \arg\min_m\ \mathcal{L}(\rho^{(k+1)}, m, \phi_i^{(k)}) + \frac{1}{2\tau_i}\|m - m_i^{(k)}\|^2_{L^2}$
       $\phi_i^{(k+\frac{1}{2})} = \arg\max_\phi\ \mathcal{L}(\rho^{(k+1)}, m_i^{(k+1)}, \phi) - \frac{1}{2\sigma_i}\|\phi - \phi_i^{(k)}\|^2_{H^2}$
       $\phi_i^{(k+1)} = 2\phi_i^{(k+\frac{1}{2})} - \phi_i^{(k)}$
     end