
ASME V&V 2016 talk on probabilistic active subspaces

Gaussian processes with built-in dimensionality reduction.

Rohit Tripathy

May 18, 2016


Transcript

  1. Probabilistic Active Subspaces: Learning High-dimensional Noisy Functions without Gradients. Rohit Tripathy, School of Mechanical Engineering, Purdue University. predictivesciencelab.org
  2. The Team: Rohit Tripathy [email protected], PhD student; Marcial Gonzalez [email protected], Assistant Professor, Mechanics; Ilias Bilionis [email protected], Assistant Professor, Uncertainty Quantification. predictivesciencelab.org
  3. The Paper: Submitted to the Journal of Computational Physics as “Gaussian processes with built-in dimensionality reduction: Applications in high-dimensional uncertainty propagation”. http://arxiv.org/abs/1602.04550
  4. Monte Carlo: • Simplest way to propagate uncertainty. • Convergence rate independent of the number of dimensions. • Realistic problems require tens of thousands of simulations. • “Monte Carlo is fundamentally unsound” - O’Hagan (1987).
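The Monte Carlo bullets above can be made concrete in a few lines. This is a toy sketch, not from the talk: the one-line `model` stands in for an expensive simulator, and the N(0, 1) input and sample count are made-up choices.

```python
import numpy as np

# Monte Carlo uncertainty propagation through a toy model.
# With x ~ N(0, 1), E[f(x)] = E[x^2] = 1 exactly, so we can check the estimate.
def model(x):
    return x ** 2

rng = np.random.default_rng(0)
n_samples = 100_000
samples = model(rng.standard_normal(n_samples))

mean_estimate = samples.mean()
# The standard error shrinks like 1/sqrt(N) regardless of the input dimension,
# which is why so many simulations are needed for a realistic (slow) model.
std_error = samples.std(ddof=1) / np.sqrt(n_samples)
print(mean_estimate, std_error)
```

The dimension-independent 1/sqrt(N) rate is exactly the trade-off the slide describes: robust, but prohibitively expensive when each model evaluation is a full simulation.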
  5. The Surrogate Idea: • Do a finite number of simulations. • Replace the model with an approximation. • The surrogate is usually cheap to evaluate. • Solve the UQ problem with the surrogate.
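The three steps above can be sketched as follows. Everything here is illustrative: `expensive_model` is a made-up stand-in for a simulator, and a polynomial fit plays the surrogate role for brevity (the talk's actual surrogates are Gaussian processes).

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly simulation code.
    return np.sin(x) + 0.5 * x

rng = np.random.default_rng(1)

# Step 1: a finite number of "simulations".
x_train = np.linspace(-2, 2, 15)
y_train = expensive_model(x_train)

# Step 2: replace the model with a cheap approximation.
coeffs = np.polyfit(x_train, y_train, 5)
surrogate = np.poly1d(coeffs)

# Step 3: solve the UQ problem with the surrogate - Monte Carlo is now cheap,
# since evaluating a polynomial costs almost nothing per sample.
x_mc = rng.normal(0.0, 0.5, size=50_000)
mean_est = surrogate(x_mc).mean()
print(mean_est)
```

The true mean of sin(x) + 0.5x under a symmetric zero-mean input is 0, so the estimate should land near zero; all of the model-evaluation cost is in the 15 training runs.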
  6. Examples of Surrogates: • Generalized linear models. • Generalized polynomial chaos. • Neural networks. • Support vector machines. • Gaussian processes.
  7. Problems: • The response is very expensive. • The input is high-dimensional (> 20). • We cannot compute derivatives of the response. • The response may be noisy. “How do you build a surrogate under these conditions?”
  8. Active Subspaces: Input parameters → physical model → quantities of interest. The inputs are mapped to a reduced-dimensional space by an orthogonal projection matrix, and the quantity of interest is modeled there with a Gaussian process.
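For context, the classic way to find the active subspace uses gradients of the response: eigendecompose the gradient covariance C = E[∇f ∇fᵀ]. A minimal sketch on a made-up test function f(x) = sin(wᵀx), whose active subspace is one-dimensional by construction (all names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 20
w_true = np.ones(dim) / np.sqrt(dim)   # true active direction

def grad_f(x):
    # f(x) = sin(w^T x)  =>  grad f(x) = cos(w^T x) * w
    return np.cos(x @ w_true) * w_true

# Monte Carlo estimate of C = E[grad f grad f^T], then eigendecompose.
xs = rng.standard_normal((500, dim))
grads = np.array([grad_f(x) for x in xs])
C = grads.T @ grads / len(xs)
eigvals, eigvecs = np.linalg.eigh(C)

w_est = eigvecs[:, -1]                 # dominant eigenvector spans the subspace
print(abs(w_est @ w_true))             # close to 1: direction recovered
```

This construction is exactly what becomes unavailable under the "Problems" slide: it needs ∇f, which the probabilistic approach of the talk dispenses with.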
  9. Probabilistic Approach to Active Subspaces: the projection matrix needs to remain orthogonal; maximize the log-likelihood, with the line search performed by efficient global optimization (Bayesian global optimization).
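The orthogonality constraint means W lives on a Stiefel manifold. One standard way to enforce it after an update step is a QR retraction; the sketch below shows only that retraction on random data, not the paper's actual likelihood maximization or EGO line search.

```python
import numpy as np

rng = np.random.default_rng(3)
D, d = 20, 2                                       # input dim, active dim (made up)

# Random starting point with orthonormal columns (W^T W = I).
W = np.linalg.qr(rng.standard_normal((D, d)))[0]

# Some ascent direction from the likelihood (here: just random, for illustration).
step = 0.1 * rng.standard_normal((D, d))

# W + step leaves the manifold; the QR factorization retracts it back,
# since Q has orthonormal columns by construction.
W_new, _ = np.linalg.qr(W + step)

print(np.allclose(W_new.T @ W_new, np.eye(d)))     # True: still orthogonal
```

Keeping every iterate exactly orthogonal is what lets the log-likelihood be maximized directly over valid projection matrices.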
  10. Comparison to the Classic Approach: the classic approach uses 200 observations plus their gradients to recover W; the probabilistic approach uses just the 200 observations. The two recovered W matrices are (almost) identical.
  11. How do we find the reduced-space dimension? Use the Bayesian information criterion (BIC), a very crude approximation to the model evidence.
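The BIC trades fit against complexity: BIC = k·ln(n) − 2·ln(L̂), with k free parameters, n data points, and L̂ the maximized likelihood. A toy sketch of using it for model selection; here the complexity knob is a polynomial degree rather than the active-subspace dimension d of the talk, and the data are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-1, 1, n)
# Noisy quadratic data: the "right" model complexity is degree 2.
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(n)

def bic(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.poly1d(coeffs)(x)
    sigma2 = resid.var()                 # MLE of the Gaussian noise variance
    k = degree + 2                       # polynomial coefficients + noise variance
    # Maximized Gaussian log-likelihood: -n/2 * (log(2*pi*sigma2) + 1).
    log_like = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return k * np.log(n) - 2.0 * log_like

scores = {deg: bic(deg) for deg in range(1, 6)}
best = min(scores, key=scores.get)
print(scores, best)
```

As on the next slide, the score drops sharply until the true complexity is reached and then flattens (the k·ln(n) penalty outweighing tiny likelihood gains), which is the "elbow" used to pick d.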
  12. Does BIC find the right active dimension? True d = 1: BIC becomes flat after d = 1. True d = 2: BIC becomes flat after d = 2. True d = 3: BIC becomes flat after d = 3.
  13. Granular Crystals: • Unique, highly non-linear dynamical properties. • Formation and propagation of highly localized elastic stress waves. • Dynamics described by a fully elastic model known as the Hertzian contact model.
  14. Next Steps: • Extend to non-linear dimensionality reduction. • Flat architecture: infinite mixture of AS-GP models. • Deep architecture: deep GPs with AS-GP components. • Epistemic uncertainty on W (MCMC on the Stiefel manifold; Girolami).