Slide 1

Slide 1 text

Probabilistic Programming
Christopher Fonnesbeck
Department of Biostatistics
Vanderbilt University Medical Center

Slide 2

Slide 2 text

Probabilistic Programming

Slide 3

Slide 3 text

Stochastic language "primitives"

Distribution over values:
X ~ Normal(μ, σ)
x = X.random(n=100)

Distribution over functions:
Y ~ GaussianProcess(mean_func(x), cov_func(x))
y = Y.predict(x2)

Conditioning:
p ~ Beta(1, 1)
z ~ Bernoulli(p)  # z|p
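A minimal PyMC3 sketch of the same conditioning pattern (the model name and the toy observations are illustrative assumptions, not from the slides):

import pymc3 as pm

with pm.Model() as coin_model:
    p = pm.Beta('p', alpha=1, beta=1)               # prior over the latent probability
    z = pm.Bernoulli('z', p=p, observed=[0, 1, 1])  # z | p, conditioned on observed data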

Slide 4

Slide 4 text

Bayesian Inference

Slide 5

Slide 5 text

Inverse Probability

Slide 6

Slide 6 text

Why Bayes? ❝The Bayesian approach is attractive because it is useful. Its usefulness derives in large measure from its simplicity. Its simplicity allows the investigation of far more complex models than can be handled by the tools in the classical toolbox.❞ —Link and Barker 2010

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

Probabilistic Programming in three easy steps

Slide 10

Slide 10 text

1. Encode a Probability Model in Python

Slide 11

Slide 11 text

Prior distribution Quantifies the uncertainty in latent variables

Slide 12

Slide 12 text

Prior distribution Quantifies the uncertainty in latent variables

Slide 13

Slide 13 text

Prior distribution Quantifies the uncertainty in latent variables

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

Likelihood function Conditions our model on the observed data

Slide 17

Slide 17 text

Likelihood function Conditions our model on the observed data

Slide 18

Slide 18 text

Models the distribution of hits observed from at-bats.
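A hedged sketch of such a likelihood (the variable names and data below are illustrative assumptions): a Binomial distribution for the number of hits out of a given number of at-bats, with a Beta prior on the hit probability.

import pymc3 as pm

at_bats, hits = 250, 68   # assumed example data

with pm.Model() as batting_model:
    p = pm.Beta('p', 1, 1)                                # prior on the latent batting average
    y = pm.Binomial('y', n=at_bats, p=p, observed=hits)   # hits observed out of at_bats trials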

Slide 19

Slide 19 text

Counts per unit time
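Counts per unit time are naturally modeled with a Poisson likelihood; a minimal sketch (the names and data are assumptions):

import pymc3 as pm

counts = [3, 1, 4, 2, 0]   # assumed example counts per unit time

with pm.Model() as count_model:
    λ = pm.Exponential('λ', 1)                   # prior on the latent event rate
    y = pm.Poisson('y', mu=λ, observed=counts)   # observed counts per unit time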

Slide 20

Slide 20 text

2. Infer Values for latent variables

Slide 21

Slide 21 text

Posterior distribution

Slide 22

Slide 22 text

Posterior distribution

Slide 23

Slide 23 text

Posterior distribution

Slide 24

Slide 24 text

Probabilistic programming abstracts the inference procedure
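In PyMC3 the entire inference step reduces to a single call; a minimal sketch, assuming PyMC3's names are star-imported as on the later slides and a model context named model:

with model:
    trace = sample(1000)   # PyMC3 selects an appropriate sampler automatically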

Slide 25

Slide 25 text

3. Check your Model

Slide 26

Slide 26 text

Model checking
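A common check is the posterior predictive check; a minimal sketch using PyMC3's sample_ppc (the model, trace, and variable names are assumptions):

with model:
    ppc = sample_ppc(trace, samples=500)   # simulate replicated data from the posterior predictive
# compare simulated data, e.g. ppc['y'], against the observed data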

Slide 27

Slide 27 text

WinBUGS

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

model {
  for (j in 1:J){
    y[j] ~ dnorm(theta[j], tau.y[j])
    theta[j] ~ dnorm(mu.theta, tau.theta)
    tau.y[j] <- pow(sigma.y[j], -2)
  }
  mu.theta ~ dnorm(0.0, 1.0E-6)
  tau.theta <- pow(sigma.theta, -2)
  sigma.theta ~ dunif(0, 1000)
}

Slide 30

Slide 30 text

PyMC3
☞ started in 2003
☞ PP framework for fitting arbitrary probability models
☞ based on Theano
☞ implements "next generation" Bayesian inference methods
☞ NumFOCUS sponsored project
github.com/pymc-devs/pymc3

Slide 31

Slide 31 text

Calculating Gradients in Theano
>>> from theano import function, tensor as tt
>>> x = tt.dmatrix('x')
>>> s = tt.sum(1 / (1 + tt.exp(-x)))
>>> gs = tt.grad(s, x)
>>> dlogistic = function([x], gs)
>>> dlogistic([[3, -1], [0, 2]])
array([[ 0.04517666,  0.19661193],
       [ 0.25      ,  0.10499359]])

Slide 32

Slide 32 text

Calculating Gradients in Theano
>>> from theano import function, tensor as tt
>>> x = tt.dmatrix('x')
>>> s = tt.sum(1 / (1 + tt.exp(-x)))
>>> gs = tt.grad(s, x)
>>> dlogistic = function([x], gs)
>>> dlogistic([[3, -1], [0, 2]])
array([[ 0.04517666,  0.19661193],
       [ 0.25      ,  0.10499359]])

Slide 33

Slide 33 text

Example: Radon exposure✴
✴ Gelman et al. (2013) Bayesian Data Analysis

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

Unpooled model
Model radon in each county independently:
y_i ~ N(α_j[i] + β x_i, σ_y²), where j = 1, …, J (counties)

Slide 36

Slide 36 text

Priors
with Model() as unpooled_model:
    α = Normal('α', 0, sd=1e5, shape=counties)
    β = Normal('β', 0, sd=1e5)
    σ = HalfCauchy('σ', 5)

Slide 37

Slide 37 text

>>> type(β)
pymc3.model.FreeRV

Slide 38

Slide 38 text

>>> type(β)
pymc3.model.FreeRV
>>> β.distribution.logp(-2.1).eval()
array(-12.4318639983954)

Slide 39

Slide 39 text

>>> type(β)
pymc3.model.FreeRV
>>> β.distribution.logp(-2.1).eval()
array(-12.4318639983954)
>>> β.random(size=4)
array([ -10292.91760326,   22368.53416626,  124851.2516102 ,   44143.62513182])

Slide 40

Slide 40 text

Transformed variables
with unpooled_model:
    θ = α[county] + β*floor

Slide 41

Slide 41 text

Likelihood
with unpooled_model:
    y = Normal('y', θ, sd=σ, observed=log_radon)

Slide 42

Slide 42 text

Model graph

Slide 43

Slide 43 text

Calculating Posteriors
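The posterior follows from Bayes' theorem (notation assumed here, with θ for the latent variables and y for the observed data):

Pr(θ | y) = Pr(y | θ) Pr(θ) / Pr(y),  where Pr(y) = ∫ Pr(y | θ) Pr(θ) dθ

The integral in the denominator is what makes exact calculation intractable for most models, motivating the approximations that follow.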

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

Bayesian approximation
☞ Maximum a posteriori (MAP) estimate
☞ Laplace (normal) approximation
☞ Rejection sampling
☞ Importance sampling
☞ Sampling importance resampling (SIR)
☞ Approximate Bayesian Computing (ABC)

Slide 47

Slide 47 text

MCMC Markov chain Monte Carlo simulates a Markov chain for which some function of interest is the unique, invariant, limiting distribution.

Slide 48

Slide 48 text

MCMC Markov chain Monte Carlo simulates a Markov chain for which some function of interest is the unique, invariant, limiting distribution. This is guaranteed when the Markov chain is constructed so that it satisfies the detailed balance equation:
π(θ) T(θ → θ′) = π(θ′) T(θ′ → θ)

Slide 49

Slide 49 text

Metropolis sampling

Slide 50

Slide 50 text

Metropolis sampling**
** 2000 iterations, 1000 tuning
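A minimal sketch of how a Metropolis run like this is invoked, assuming the unpooled_model and star-imported names from the earlier radon slides (the step assignment and counts follow the slide's footnote):

with unpooled_model:
    step = Metropolis()                          # random-walk Metropolis steps
    trace = sample(2000, tune=1000, step=step)   # 2000 iterations, 1000 tuning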

Slide 51

Slide 51 text

Hamiltonian Monte Carlo
Uses a physical analogy of a frictionless particle moving on a hyper-surface
Requires an auxiliary variable to be specified
☞ position (unknown variable value)
☞ momentum (auxiliary)

Slide 52

Slide 52 text

Hamiltonian MC
➀ Sample a new velocity from a univariate Gaussian
➁ Perform n leapfrog steps to obtain the new state
➂ Perform an accept/reject move for the proposed state
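A bare-bones leapfrog integrator illustrates step ➁; this is an illustrative NumPy sketch, not PyMC3's implementation, and grad_logp, the step size, and the number of steps are assumptions:

import numpy as np

def leapfrog(q, p, grad_logp, step_size, n_steps):
    # alternate half/full updates of momentum (p) and position (q)
    p = p + 0.5 * step_size * grad_logp(q)
    for _ in range(n_steps):
        q = q + step_size * p
        p = p + step_size * grad_logp(q)
    p = p - 0.5 * step_size * grad_logp(q)   # step back half so the final momentum update is a half step
    return q, p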

Slide 53

Slide 53 text

Hamiltonian MC**
** 2000 iterations, 1000 tuning

Slide 54

Slide 54 text

No U-Turn Sampler (NUTS)
Hoffman and Gelman (2014)
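NUTS is the default step method PyMC3 assigns to continuous variables, so no explicit step needs to be passed; a minimal sketch, again assuming the earlier unpooled_model and iteration counts:

with unpooled_model:
    trace = sample(2000, tune=1000)   # NUTS is selected automatically for continuous variables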

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

Variational Inference

Slide 59

Slide 59 text

Variational Inference
Variational inference approximates the posterior by minimizing the Kullback-Leibler divergence from an approximating distribution to the true posterior. But we can't calculate the true posterior distribution, so we maximize an equivalent objective instead: the evidence lower bound (ELBO).

Slide 60

Slide 60 text

Evidence Lower Bound (ELBO)
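Written out (notation assumed, with x for data, z for latent variables, and q for the approximating distribution):

log p(x) ≥ E_q[log p(x, z)] − E_q[log q(z)] = ELBO(q)

Maximizing the ELBO over q is equivalent to minimizing KL(q(z) ‖ p(z | x)), without ever evaluating the intractable posterior.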

Slide 61

Slide 61 text

ADVI*
* Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., & Blei, D. M. (2016). Automatic Differentiation Variational Inference. arXiv.org.

Slide 62

Slide 62 text

Maximizing the ELBO

Slide 63

Slide 63 text

Estimating Beta(147, 255) posterior
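One way this posterior arises is a Beta(1, 1) prior updated with 146 successes in 400 Bernoulli trials (1 + 146 = 147, 1 + 254 = 255); the data and the ADVI call below are assumptions chosen to reproduce that posterior:

import numpy as np
import pymc3 as pm

data = np.repeat([1, 0], [146, 254])           # 146 successes, 254 failures

with pm.Model() as beta_model:
    p = pm.Beta('p', 1, 1)                     # conjugate prior; exact posterior is Beta(147, 255)
    y = pm.Bernoulli('y', p=p, observed=data)
    approx = pm.fit(n=10000, method='advi')    # ADVI approximation to compare against the exact result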

Slide 64

Slide 64 text

with partial_pooling:
    approx = fit(n=100000)

Average Loss = 1,115.5: 100%|██████████| 100000/100000 [00:13<00:00, 7690.51it/s]
Finished [100%]: Average Loss = 1,115.5

Slide 65

Slide 65 text

with partial_pooling:
    approx = fit(n=100000)

Average Loss = 1,115.5: 100%|██████████| 100000/100000 [00:13<00:00, 7690.51it/s]
Finished [100%]: Average Loss = 1,115.5

>>> approx

Slide 66

Slide 66 text

with partial_pooling:
    approx_sample = approx.sample(1000)
    traceplot(approx_sample, varnames=['mu_a', 'σ_a'])

Slide 67

Slide 67 text

No content

Slide 68

Slide 68 text

New PyMC3 Features
☞ Gaussian processes
☞ Elliptical slice sampling
☞ Sequential Monte Carlo methods
☞ Time series models
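For example, a Gaussian process regression can now be written in a few lines; a minimal sketch assuming the pm.gp.Marginal API and illustrative data X, y generated here for the example:

import numpy as np
import pymc3 as pm

X = np.linspace(0, 1, 20)[:, None]                  # assumed 1-D inputs
y = np.random.normal(np.sin(6 * X.ravel()), 0.1)    # assumed noisy observations

with pm.Model() as gp_model:
    ℓ = pm.HalfCauchy('ℓ', 5)                       # length scale
    η = pm.HalfCauchy('η', 5)                       # signal amplitude
    cov = η**2 * pm.gp.cov.ExpQuad(1, ℓ)            # squared-exponential covariance
    gp = pm.gp.Marginal(cov_func=cov)
    σ = pm.HalfCauchy('σ', 5)                       # observation noise
    y_ = gp.marginal_likelihood('y', X=X, y=y, noise=σ)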

Slide 69

Slide 69 text

Convolutional variational autoencoder♌

import keras

class Decoder:
    ...
    def decode(self, zs):
        keras.backend.theano_backend._LEARNING_PHASE.set_value(np.uint8(0))
        return self._get_dec_func()(zs)

♌ Taku Yoshioka (c) 2016

Slide 70

Slide 70 text

with pm.Model() as model:
    # Hidden variables
    zs = pm.Normal('zs', mu=0, sd=1, shape=(minibatch_size, dim_hidden), dtype='float32')
    # Decoder and its parameters
    dec = Decoder(zs, net=cnn_dec)
    # Observation model
    xs_ = pm.Normal('xs_', mu=dec.out.ravel(), sd=0.1, observed=xs_t.ravel(), dtype='float32')

Slide 71

Slide 71 text

Bayesian Deep Learning in PyMC3✧
✧ Thomas Wiecki 2016

Slide 72

Slide 72 text

The Future
☞ Operator Variational Inference
☞ Normalizing Flows
☞ Riemannian Manifold HMC
☞ Stein variational gradient descent
☞ ODE solvers

Slide 73

Slide 73 text

Jupyter Notebook Gallery

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

The PyMC3 Team
☞ Colin Carroll
☞ Peadar Coyle
☞ Bill Engels
☞ Maxim Kochurov
☞ Junpeng Lao
☞ Osvaldo Martin
☞ Kyle Meyer
☞ Austin Rochford
☞ John Salvatier
☞ Adrian Seyboldt
☞ Thomas Wiecki
☞ Taku Yoshioka