
The Active Subspace is Nearly Stationary

Presentation at 15th International Conference on Approximation Theory, San Antonio, May 2016

Paul Constantine

May 24, 2016

Transcript

  1. ACTIVE SUBSPACES for dimension reduction in parameter studies. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program, under Award Number DE-SC-0011077. PAUL CONSTANTINE, Ben L. Fryrear Assistant Professor, Applied Mathematics & Statistics, Colorado School of Mines. activesubspaces.org, @DrPaulynomial. SLIDES: DISCLAIMER: These slides are meant to complement the oral presentation; use them out of context at your own risk. In collaboration with: RACHEL WARD (UT Austin) and ARMIN EFTEKHARI (UT Austin).
  2. Ridge approximation: $f(x) \approx g(U^T x)$, where $U^T : \mathbb{R}^m \to \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}$.
  3. Ridge approximation: $f(x) \approx g(U^T x)$. What is $U$? What is $g$? What is the approximation error?
  4. Ridge approximation: $f(x) \approx g(U^T x)$. What is the approximation error? Given a weight function $\rho$, use the weighted root-mean-squared error:
    $\| f(x) - g(U^T x) \|_{L^2(\rho)} = \left( \int \big( f(x) - g(U^T x) \big)^2 \, \rho(x) \, dx \right)^{1/2}$
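As a rough sketch, this weighted RMS error can be estimated by Monte Carlo, assuming we can sample from $\rho$. The helper names here (`ridge_error`, `sample_rho`) are illustrative, not from the talk:

```python
import numpy as np

def ridge_error(f, g, U, sample_rho, n_samples=100000, rng=None):
    """Monte Carlo estimate of || f - g(U^T x) ||_{L2(rho)}."""
    rng = np.random.default_rng(rng)
    X = sample_rho(n_samples, rng)          # draws from the weight rho
    r = f(X) - g(X @ U)                     # pointwise ridge-approximation error
    return np.sqrt(np.mean(r ** 2))

# Toy check: f is itself a ridge, so the error is zero up to floating point.
U = np.array([[0.6], [0.8]])
f = lambda X: np.cos(X @ U[:, 0])
g = lambda Y: np.cos(Y[:, 0])
err = ridge_error(f, g, U, lambda n, rng: rng.standard_normal((n, 2)), rng=0)
```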
  5. Ridge approximation: $f(x) \approx g(U^T x)$. What is $g$? Use the conditional average:
    $\mu(y) = \int f(U y + V z) \, \pi(z \mid y) \, dz$
    where $y = U^T x$ are the subspace coordinates, $V$ and $z$ are the complement subspace and its coordinates, and $\pi(z \mid y)$ is the conditional density. The conditional average $\mu(U^T x)$ is the best approximation (Pinkus, 2015).
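A minimal sketch of computing $\mu(y)$ by Monte Carlo, assuming a standard Gaussian $\rho$ (so that $z \mid y$ is itself standard normal when $[U\ V]$ is orthogonal); the function name `mu` and the toy $f$ are illustrative:

```python
import numpy as np

def mu(f, U, V, y, n_z=20000, rng=None):
    """Monte Carlo conditional average mu(y) = E[ f(U y + V z) | y ],
    assuming a standard Gaussian rho, so z given y is standard normal."""
    rng = np.random.default_rng(rng)
    Z = rng.standard_normal((n_z, V.shape[1]))
    return np.mean(f(y @ U.T + Z @ V.T))    # average f over the inactive z

# Toy: f(x) = x0^2 + x1. With U = e0, V = e1: mu(y) = y^2 + E[z] = y^2.
U = np.array([[1.0], [0.0]])
V = np.array([[0.0], [1.0]])
f = lambda X: X[:, 0] ** 2 + X[:, 1]
val = mu(f, U, V, np.array([2.0]), rng=0)   # should be close to 4
```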
  6. Ridge approximation: $f(x) \approx g(U^T x)$. What is $U$? Define the error function
    $R(U) = \frac{1}{2} \int \big( f(x) - \mu(U^T x) \big)^2 \, \rho(x) \, dx$
    and minimize it:
    minimize $R(U)$ subject to $U \in G(n, m)$,
    the Grassmann manifold of $n$-dimensional subspaces of $\mathbb{R}^m$.
  7. Define the active subspace. Consider a function, its gradient vector, and a weight function:
    $f = f(x)$, $x \in \mathbb{R}^m$, $\nabla f(x) \in \mathbb{R}^m$, $\rho : \mathbb{R}^m \to \mathbb{R}_+$.
    Take the average outer product of the gradient and its eigendecomposition,
    $C = \int \nabla f \, \nabla f^T \, \rho \, dx = W \Lambda W^T.$
    Partition the eigendecomposition,
    $\Lambda = \begin{bmatrix} \Lambda_1 & \\ & \Lambda_2 \end{bmatrix}, \quad W = [\, W_1 \ \ W_2 \,], \quad W_1 \in \mathbb{R}^{m \times n}.$
    Rotate and separate the coordinates,
    $x = W W^T x = W_1 W_1^T x + W_2 W_2^T x = W_1 y + W_2 z,$
    where $y$ are the active variables and $z$ are the inactive variables.
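In practice $C$ is typically estimated by a Monte Carlo sample average. A minimal sketch, assuming a standard Gaussian weight $\rho$ and a callable gradient (`grad_f` and `active_subspace` are placeholder names, not from the talk):

```python
import numpy as np

def active_subspace(grad_f, m, n, n_samples=10000, rng=None):
    """Estimate C = E[grad f grad f^T] by Monte Carlo, eigendecompose it,
    and split the eigenvectors into active (W1) and inactive (W2) blocks."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n_samples, m))   # draws from rho (Gaussian here)
    G = np.array([grad_f(x) for x in X])      # gradient at each sample
    C = G.T @ G / n_samples                   # average outer product
    lam, W = np.linalg.eigh(C)                # eigh returns ascending order
    lam, W = lam[::-1], W[:, ::-1]            # reorder to descending
    return lam, W[:, :n], W[:, n:]

# Toy: f(x) = (a^T x)^2 / 2 has grad f = (a^T x) a, so C is rank one and
# the active subspace is the one-dimensional span of a.
a = np.array([3.0, 4.0, 0.0])
lam, W1, W2 = active_subspace(lambda x: (a @ x) * a, m=3, n=1, rng=0)
```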
  8. $\lambda_i = \int (\nabla f^T w_i)^2 \, \rho \, dx, \quad i = 1, \dots, m.$
    The eigenpairs identify perturbations that change the function more, on average.
    LEMMA: $\int (\nabla_y f)^T (\nabla_y f) \, \rho \, dx = \lambda_1 + \cdots + \lambda_n$
    LEMMA: $\int (\nabla_z f)^T (\nabla_z f) \, \rho \, dx = \lambda_{n+1} + \cdots + \lambda_m$
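These identities are easy to check numerically: $\lambda_i$ is the mean squared directional derivative along $w_i$, and the two lemma sums together recover the mean squared gradient norm (the trace of $C$). A sketch with a toy function, assuming a standard Gaussian $\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy function f(x) = sin(x0) + 0.01 * x1**2, standard Gaussian weight rho.
grad_f = lambda x: np.array([np.cos(x[0]), 0.02 * x[1]])

X = rng.standard_normal((20000, 2))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)
lam, W = np.linalg.eigh(C)          # ascending eigenvalues
lam, W = lam[::-1], W[:, ::-1]      # reorder to descending

# lambda_i = E[(grad f^T w_i)^2]: check against the definition directly.
lam_direct = np.mean((G @ W) ** 2, axis=0)

# With n = 1, the two lemmas say the mean squared gradient norm splits as
# lambda_1 (active block) plus lambda_2 (inactive block).
total = np.mean(np.sum(G * G, axis=1))
```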
  9. An approximation result:
    $\| f(x) - \mu(W_1^T x) \|_{L^2(\rho)} \le C \, (\lambda_{n+1} + \cdots + \lambda_m)^{1/2},$
    where $\mu$ is the conditional average, $W_1^T x$ are the active subspace coordinates, $C$ is a Poincaré constant, and $\lambda_{n+1}, \dots, \lambda_m$ are the eigenvalues associated with the inactive subspace.
  10. The active subspace is nearly stationary. Recall
    $R(U) = \frac{1}{2} \int \big( f(x) - \mu(U^T x) \big)^2 \, \rho(x) \, dx.$
    Assume (1) a Lipschitz continuous function and (2) a Gaussian density function. Then
    $\| \bar{\nabla} R(W_1) \|_F \le L \left( 2 m^{1/2} + (m - n)^{1/2} \right) (\lambda_{n+1} + \cdots + \lambda_m)^{1/2},$
    where $\bar{\nabla}$ is the gradient on the Grassmann manifold, $W_1$ spans the active subspace, $\| \cdot \|_F$ is the Frobenius norm, $m$ and $n$ are the dimensions, $L$ is the Lipschitz constant, and $\lambda_{n+1}, \dots, \lambda_m$ are the eigenvalues associated with the inactive subspace.
  11. IDEA: Use the active subspace as the starting point for numerical ridge approximation. Given an initial subspace $U_0$ and samples $\{ x_i, f(x_i) \}$:
    (1) Compute $y_i = U_0^T x_i$.
    (2) Fit a polynomial $p_N(y, \theta)$ with the pairs $\{ y_i, f(x_i) \}$:
      $\theta^* = \operatorname{argmin}_\theta \sum_i \big( f(x_i) - p(y_i, \theta) \big)^2.$
    (3) Minimize the residual over subspaces:
      $U^* = \operatorname{argmin}_{U \in G(n,m)} \sum_i \big( f(x_i) - p(U^T x_i, \theta^*) \big)^2.$
    (4) Set $U_0 = U^*$ and repeat.
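A minimal sketch of this alternating scheme for $m = 2$, $n = 1$, with a crude grid search over the subspace angle standing in for a proper Grassmann optimizer (all names here are illustrative; the talk's numerical experiments use a real optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic exact ridge: f depends only on the direction w = [0.8, 0.6].
w = np.array([0.8, 0.6])
f = lambda X: (X @ w) ** 3 + (X @ w)
X = rng.standard_normal((500, 2))
fX = f(X)

def fit_poly(U, deg=5):
    """Steps (1)-(2): project onto U and fit a 1-D polynomial by least squares."""
    return np.polyfit(X @ U, fX, deg)

def residual(U, theta):
    return np.sum((fX - np.polyval(theta, X @ U)) ** 2)

U = np.array([1.0, 0.0])                  # deliberately poor starting subspace
for _ in range(5):                        # step (4): alternate
    theta = fit_poly(U)                   # steps (1)-(2)
    # Step (3): minimize the residual over 1-D subspaces of R^2 by a grid
    # search over the angle (a stand-in for Grassmann optimization).
    angles = np.linspace(0.0, np.pi, 1801)
    cands = np.column_stack([np.cos(angles), np.sin(angles)])
    U = cands[int(np.argmin([residual(u, theta) for u in cands]))]

final_res = residual(U, fit_poly(U))      # small: U has found the ridge
```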
  12. An example where it doesn't work:
    $f(x_1, x_2) = 5 x_1 + \sin(10 \pi x_2), \quad C = \begin{bmatrix} 25 & 0 \\ 0 & 526 \end{bmatrix}.$
    Active subspace: $U = [0; 1]$. Inactive subspace: $U = [1; 0]$.
    [Figure: error versus subspace angle, from 0 to $\pi$.]
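The failure mode is easy to reproduce numerically: the high-frequency sine has the larger mean-squared gradient, so the dominant eigenvector of $C$ is $[0; 1]$, yet the ridge approximation along that direction is much worse than along $[1; 0]$. A sketch, assuming $\rho$ uniform on $[-1, 1]^2$ (the exact constants in $C$ depend on the weight):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200000, 2))   # assuming rho uniform on [-1,1]^2

f = lambda X: 5 * X[:, 0] + np.sin(10 * np.pi * X[:, 1])
grad = lambda X: np.column_stack([np.full(len(X), 5.0),
                                  10 * np.pi * np.cos(10 * np.pi * X[:, 1])])

G = grad(X)
C = G.T @ G / len(X)            # ~ diag(25, ~493): the x2 entry dominates
lam, W = np.linalg.eigh(C)      # ascending; W[:, 1] is the active direction

# Ridge errors using the conditional average along each coordinate:
# along [0; 1], mu(x2) = sin(10 pi x2); along [1; 0], mu(x1) = 5 x1.
fX = f(X)
err_active = np.mean((fX - np.sin(10 * np.pi * X[:, 1])) ** 2)
err_inactive = np.mean((fX - 5 * X[:, 0]) ** 2)
```

Here `err_inactive < err_active`: the better ridge direction is the *inactive* subspace, which is exactly the pathology the slide illustrates.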
  13. An example where it works: the DRAG COEFFICIENT as a function of 18 shape parameters, uniform on a hypercube, computed with the SU2 CFD solver with an adjoint solver for gradients.
  14. An example where it works: the DRAG COEFFICIENT as a function of 18 shape parameters, uniform on a hypercube, computed with the SU2 CFD solver with an adjoint solver for gradients. Recall:
    $C = \int \nabla f \, \nabla f^T \, \rho \, dx = W \Lambda W^T.$
  15. Residual as a function of alternating iteration for different starting subspaces (RANDOM, IDENTITY, ACTIVE SUBSPACE), with increasing polynomial (total) degree and increasing subspace dimension. [Figure: residual versus alternating iteration.]
  16. Residual as a function of alternating iteration for different starting subspaces (RANDOM, IDENTITY, ACTIVE SUBSPACE), with increasing polynomial (total) degree and increasing subspace dimension.
  17. QUESTIONS? How do active subspaces relate to [insert method]? What if I don't have gradients? What kinds of models does this work on? PAUL CONSTANTINE, Ben L. Fryrear Assistant Professor, Colorado School of Mines. activesubspaces.org, @DrPaulynomial. Active Subspaces, SIAM (2015).