
Deep UQ: A framework for high dimensional uncertainty quantification using deep neural networks

We present a deep neural network approach to learning surrogate models for high-dimensional stochastic systems under limited data conditions. The proposed framework is demonstrated on a challenging elliptic PDE problem with heterogeneous stochastic permeability.

Rohit Tripathy

March 01, 2019

Transcript

  1. Deep UQ: A framework for high dimensional uncertainty quantification using

    deep neural networks Rohit Tripathy, Ilias Bilionis Predictive Science Lab School of Mechanical Engineering Purdue University
  2. UNCERTAINTY PROPAGATION

    Model problem: an elliptic PDE with an uncertain coefficient,
    −∇ · (a(x, ξ)∇u(x, ξ)) = s(x, ξ).
  3. UNCERTAINTY PROPAGATION

    The inputs ξ to the function f are not known exactly, so we formalize our beliefs about them with a suitable probability distribution: ξ ∼ p(ξ). (1)
    Given our beliefs about ξ, we wish to characterize the statistical properties of the output f(ξ):
    QoI mean: μ_f = ∫ f(ξ) p(ξ) dξ, (2)
    QoI variance: σ_f² = ∫ (f(ξ) − μ_f)² p(ξ) dξ,
    QoI density: p_f(y) = ∫ δ(y − f(ξ)) p(ξ) dξ.
    This is formally known as the uncertainty propagation problem.
  4. THE SURROGATE IDEA

    • These expectations have to be computed numerically.
    • Monte Carlo converges at a rate independent of the input dimensionality, but very slowly in the number of samples of f.
    • Idea -> replace the simulator of f with a cheap-to-evaluate surrogate model.
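The Monte Carlo estimator invoked on this slide can be sketched in a few lines of numpy. This is a minimal illustration only: the helper name `mc_estimate` and the toy chi-squared integrand standing in for the expensive simulator are assumptions, not part of the deck.

```python
import numpy as np

def mc_estimate(f, sample_xi, n_samples, seed=0):
    """Plain Monte Carlo estimate of the mean and variance of f(xi).

    The O(1/sqrt(N)) standard error is dimension-independent but shrinks
    slowly with N, which is what motivates surrogate models.
    """
    rng = np.random.default_rng(seed)
    samples = np.array([f(sample_xi(rng)) for _ in range(n_samples)])
    return samples.mean(), samples.var()

# Toy stand-in for a simulator: f(xi) = sum(xi**2) with xi ~ N(0, I_10),
# so the exact mean is 10 and the exact variance is 20.
mean, var = mc_estimate(lambda xi: np.sum(xi ** 2),
                        lambda rng: rng.standard_normal(10),
                        n_samples=20_000)
```

Even with 20,000 samples the estimate still carries noticeable sampling error, which is the slow convergence the slide refers to.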
  5. REVIEW: DIMENSIONALITY REDUCTION

    • Truncated Karhunen-Loeve expansion (also known as linear principal component analysis) [1].
    • Kernel PCA (non-linear model reduction) [4].
    • Active subspaces (output space + linear dimensionality reduction) [2, 3].
    In general, we need non-linear projections, want to utilize all available information, and must do it without gradients.
    Refs.: [1] Ghanem and Spanos. Stochastic finite elements: a spectral approach (2003). [2] Constantine et al. Active subspace methods in theory and practice: applications to kriging surfaces (2014). [3] Tripathy et al. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation (2016). [4] Ma and Zabaras. Kernel principal component analysis for stochastic input model generation (2011).
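The truncated Karhunen-Loeve / linear-PCA idea of [1] can be sketched as an eigendecomposition of the sample covariance. The helper name and the toy low-rank data below are illustrative assumptions:

```python
import numpy as np

def truncated_kle(samples, d):
    """Truncated KL expansion (linear PCA) of random-field samples.

    samples: (n_samples, D) array, each row one realization of the field.
    Returns the mean, the d leading eigenpairs of the sample covariance,
    and the reduced (d-dimensional) coordinates of each sample.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    cov = centered.T @ centered / (len(samples) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
    idx = np.argsort(eigvals)[::-1][:d]         # keep the d largest
    lam, phi = eigvals[idx], eigvecs[:, idx]
    coords = centered @ phi                     # reduced coordinates
    return mean, lam, phi, coords

rng = np.random.default_rng(0)
# Toy data with an effectively 3-dimensional structure embedded in D = 50.
latent = rng.standard_normal((500, 3))
basis = rng.standard_normal((3, 50))
data = latent @ basis
mean, lam, phi, coords = truncated_kle(data, d=3)
```

For this toy data the three retained eigenvalues capture essentially all of the variance; for a genuinely non-linear response this is exactly where linear reduction falls short, motivating the non-linear projections the slide calls for.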
  6. DEEP NEURAL NETWORKS (DNN)

    o Universal function approximators [1].
    o Layered representation of information [2].
    o Linear regression can be thought of as a special case of DNNs (no hidden layers).
    o Tremendous recent success in applications such as image classification [2] and autonomous driving [3].
    o AD-capable libraries: TensorFlow, Keras, PyTorch, MXNet, etc.
    References: [1] Hornik. Approximation capabilities of multilayer feedforward networks (1991). [2] Krizhevsky et al. Imagenet classification with deep convolutional neural networks (2012). [3] Chen et al. Deepdriving: Learning affordance for direct perception in autonomous driving (2015).
  7. DEEP NEURAL NETWORKS (DNN)

    Setup: f maps the stochastic input ξ to a scalar output in Y ⊆ R. Typically, f depends on the solution of some PDE parameterized by ξ. Furthermore, f is unknown in closed form, and information can only be obtained by querying the simulator at feasible input values. We allow for the possibility that the output observation y may be a noisy version of the true solution f(ξ), i.e., y = f(ξ) + ε, where ε is noise.
    Activation function: σ(z) = z / (1 + exp(−z)).
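The activation appearing on this slide, σ(z) = z / (1 + exp(−z)) (the swish/SiLU function), and a fully connected forward pass can be sketched in numpy. The layer sizes and random weights below are illustrative assumptions, not the architecture used in the deck:

```python
import numpy as np

def swish(z):
    """The activation from the slide: sigma(z) = z / (1 + exp(-z))."""
    return z / (1.0 + np.exp(-z))

def dnn_forward(xi, weights, biases):
    """Forward pass of a fully connected DNN surrogate xi -> y.

    With no hidden layers this reduces to linear regression, as noted
    on the previous slide.
    """
    h = xi
    for W, b in zip(weights[:-1], biases[:-1]):
        h = swish(h @ W + b)
    return h @ weights[-1] + biases[-1]   # linear output layer

rng = np.random.default_rng(0)
D, hidden = 8, 16
weights = [rng.standard_normal((D, hidden)) * 0.1,
           rng.standard_normal((hidden, 1)) * 0.1]
biases = [np.zeros(hidden), np.zeros(1)]
y = dnn_forward(rng.standard_normal(D), weights, biases)
```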
  8. STOCHASTIC ELLIPTIC PARTIAL DIFFERENTIAL EQUATION

    PDE: −∇ · (a(x)∇u(x)) = 0, x = (x₁, x₂) ∈ Ω = [0, 1]².
    Boundary conditions: u = 0 ∀ x₁ = 1; u = 1 ∀ x₁ = 0; ∂u/∂n = 0 ∀ x₂ ∈ {0, 1}.
    Uncertain diffusion: log a is a random field with exponential covariance.
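The exponential covariance of the log-conductivity field can be sketched as follows. The separable (per-axis) form and the parameter values are assumptions for illustration; the deck later states lengthscales ℓ_x = ℓ_y = 0.3:

```python
import numpy as np

def exponential_covariance(X1, X2, variance=1.0, lx=0.3, ly=0.3):
    """Separable exponential covariance for log a(x) on [0, 1]^2:

    k(x, x') = variance * exp(-|x1 - x1'|/lx - |x2 - x2'|/ly)
    X1: (n, 2) and X2: (m, 2) arrays of spatial points; returns (n, m).
    """
    d1 = np.abs(X1[:, None, 0] - X2[None, :, 0]) / lx
    d2 = np.abs(X1[:, None, 1] - X2[None, :, 1]) / ly
    return variance * np.exp(-(d1 + d2))

# Covariance matrix over a small grid of points in the unit square.
grid = np.array([[x, y] for x in np.linspace(0, 1, 5)
                        for y in np.linspace(0, 1, 5)])
K = exponential_covariance(grid, grid)
```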
  9. Random conductivity a(x, ξ) → FVM solver → response u(x, ξ). Computing

    the QoI statistics this way becomes computationally infeasible without resorting to dimensionality reduction.
  10. [Figure 9: Comparisons of the DNN prediction of the PDE solution u(x, ξ)

    to the correct solution for 4 randomly …]
  11. WHAT KIND OF DATA ARE WE USING?

    • ~6,000 random conductivities generated by sampling length-scales and then sampling the conductivity on a 32x32 grid.
    • For each of the sampled conductivities, we solved the PDE on a 32x32 grid.
    • This generated ~6,000,000 input-output pairs.
    • 33% used for training, 33% for validation error, 33% for testing predictions.
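The three-way split described above can be sketched generically; the deck does not specify how the split was performed, so the shuffling procedure and helper name here are assumptions (shown on a small n for illustration):

```python
import numpy as np

def three_way_split(n, seed=0):
    """Shuffle n sample indices and split them into roughly equal
    train / validation / test thirds, as on the slide."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    third = n // 3
    return idx[:third], idx[third:2 * third], idx[2 * third:]

train, val, test = three_way_split(6_000)
```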
  12. NETWORK ARCHITECTURE

    Surrogate = projection + link function: f(ξ) = g(h(ξ)).
    Active subspace: f(ξ) = g(Wᵀξ), W ∈ R^{D×d}, d ≪ D.
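The active-subspace link structure f(ξ) = g(Wᵀξ) on this slide can be sketched directly. The orthonormal projection, the dimensions, and the toy link function below are illustrative assumptions (in the deck, g itself is a DNN):

```python
import numpy as np

def active_subspace_surrogate(xi, W, g):
    """Surrogate of the form f(xi) = g(W^T xi), W in R^{D x d}, d << D.

    W projects the high-dimensional input onto a low-dimensional
    'active' subspace; g is the link function acting on the projection.
    """
    return g(W.T @ xi)

rng = np.random.default_rng(0)
D, d = 1024, 10                 # e.g. a 32x32 input field, 10 active dims
# Orthonormal projection matrix (Q factor of a random Gaussian matrix).
W = np.linalg.qr(rng.standard_normal((D, d)))[0]
g = lambda z: np.sum(z ** 2)    # toy link function standing in for a DNN
fval = active_subspace_surrogate(rng.standard_normal(D), W, g)
```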
  13. TRAINING THE DNN FOR FIXED ARCHITECTURE

    All network parameters (weights and biases): θ = {W_i, b_i}, i = 1, …, L.
    Loss function = discrepancy (log-likelihood term) + regularizer (log-prior term).
    (Hyper-parameters chosen by Bayesian global optimization on the validation error.)
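The discrepancy-plus-regularizer loss on this slide can be made concrete under common assumptions the deck does not spell out: a Gaussian likelihood (giving a mean-squared-error discrepancy) and a Gaussian prior on θ (giving an L2 weight penalty). The helper name and weight-decay value are illustrative:

```python
import numpy as np

def dnn_loss(y_pred, y_true, theta_flat, weight_decay=1e-4):
    """Loss = discrepancy (MSE ~ negative Gaussian log likelihood)
            + regularizer (L2 ~ negative Gaussian log prior on theta)."""
    discrepancy = np.mean((y_pred - y_true) ** 2)
    regularizer = weight_decay * np.sum(theta_flat ** 2)
    return discrepancy + regularizer

y_true = np.array([1.0, 2.0, 3.0])
loss_perfect = dnn_loss(y_true, y_true, np.zeros(5))   # zero loss
loss_noisy = dnn_loss(y_true + 0.5, y_true, np.zeros(5))
```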
  14. DEMONSTRATION ON THE PROBLEM

    [Figure: Bayesian global optimization of the hyper-parameters; negative validation error vs. hyper-parameter. Each point is a different DNN trained with ADAM; the lowest validation error gives the selected DNN.]
  15. Fig.: (From left to right) (a) Input random field; (b) True (FVM)

    solution; (c) Predicted solution.
  16. ARBITRARY LENGTHSCALE PREDICTIONS Relative errors in predicted solution. • Blue

    dot – Lengthscales not represented in the training set. • Black x – Lengthscales represented in the training set. OBSERVATION: Higher relative error for inputs with smaller lengthscales. 17
  17. UNCERTAINTY PROPAGATION: SPECIFIC LENGTHSCALE

    Fig.: Comparison of Monte Carlo* (left) mean and variance and surrogate (right) mean and variance for the PDE solution. (* 10⁶ MC samples.)
  18. UNCERTAINTY PROPAGATION: SPECIFIC LENGTHSCALE

    Fig.: Comparison of the solution pdf at x = (0.484, 0.484) obtained from MCS* and the DNN surrogate.
    Fig.: Comparison of the solution pdf at x = (0.328, 0.641) obtained from MCS* and the DNN surrogate. (* 10⁶ MC samples.)
  19. FUTURE WORK

    • Physics-informed models (e.g., leveraging the variational principle associated with the stochastic PDE).
    • How to induce function-space priors in parametric models?
    • Architectural considerations, e.g., fully convolutional architectures.
    THANK YOU!
    Slides: https://speakerdeck.com/rohitkt10/dnn-for-hd-uq
    Paper: https://www.sciencedirect.com/science/article/pii/S0021999118305655
  20. MULTIFIDELITY CASE

    Setting: we have a suite of simulators of varying fidelity, f₁, f₂, …, fₙ (increasing accuracy), with corresponding datasets D₁, D₂, …, Dₙ (decreasing size: the higher the fidelity, the fewer the samples).
  21. ELLIPTIC PDE REVISITED

    Lengthscales: ℓ_x = 0.3, ℓ_y = 0.3.
    KL expansion: log a(x) = Σ_{i=1}^{N} √λ_i φ_i(x) ξ_i, with N = 350 terms.
    Bi-fidelity dataset size: N_low = 900, N_high = 300.
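Sampling from the truncated KL expansion above is direct once the eigenpairs (λ_i, φ_i) are available. The toy decaying eigenvalues and random orthonormal modes below are stand-ins for the true eigenpairs of the exponential covariance, and the small N is for illustration only (the deck uses N = 350):

```python
import numpy as np

def sample_log_conductivity(lam, phi, rng):
    """Draw one realization of log a(x) from its truncated KL expansion:
    log a(x) = sum_{i=1}^{N} sqrt(lam_i) * phi_i(x) * xi_i, xi_i ~ N(0, 1)."""
    xi = rng.standard_normal(len(lam))
    return phi @ (np.sqrt(lam) * xi)

rng = np.random.default_rng(0)
n_points, N = 64, 10
lam = 1.0 / (1.0 + np.arange(N)) ** 2        # toy decaying eigenvalues
phi = np.linalg.qr(rng.standard_normal((n_points, N)))[0]  # toy orthonormal modes
log_a = sample_log_conductivity(lam, phi, rng)
a = np.exp(log_a)                            # conductivity stays positive
```

Exponentiating the field is what guarantees a positive conductivity, which is why the KL expansion is placed on log a rather than on a itself.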
  22. 24 Fig. : How many samples of the purely high

    fidelity dataset would we need to converge to the reduce the error obtained through the multifidelity case ?