Slide 1

Slide 1 text

Data analysis with Markov chain Monte Carlo
Dan Foreman-Mackey (CCPP@NYU)

Slide 2

Slide 2 text

Dan Foreman-Mackey · dan.iel.fm · dfm · @exoplaneteer

Slide 3

Slide 3 text

I'm a grad student in Camp Hogg at NYU. (David W. Hogg)

Slide 4

Slide 4 text

I'm an engineer in Camp Hogg at NYU. (David W. Hogg)

Slide 5

Slide 5 text

David W. Hogg. I build tools for data analysis in astronomy (mostly).

Slide 6

Slide 6 text

introducing emcee: the MCMC Hammer
arxiv.org/abs/1202.3665 · dan.iel.fm/emcee
it's hammer time!

Slide 7

Slide 7 text

I work on: the Local Group, exoplanets, variable stars, image modeling, calibration

Slide 8

Slide 8 text

I work on: the Local Group, exoplanets, variable stars, image modeling, calibration... not writing papers

Slide 9

Slide 9 text

I work on: the Local Group, exoplanets, variable stars, image modeling, calibration... not writing papers

Slide 10

Slide 10 text

a sketch of the graphical model of my research: Physics → Data

Slide 11

Slide 11 text

a sketch of the graphical model of my research: Physics → Data. the generative view of data analysis

Slide 12

Slide 12 text

for example, a line:
synthetic data: $\hat{y}_n = m\,x_n + b + \epsilon_n$
noise model: $\epsilon_n \sim \mathcal{N}(0, \sigma_n^2)$
$p(\{y_n, x_n, \sigma_n^2\} \mid m, b) \propto \exp\left(-\frac{\chi^2}{2}\right), \qquad \chi^2 = \sum_{n=1}^{N} \frac{\left[y_n - (m\,x_n + b)\right]^2}{\sigma_n^2}$
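a minimal numpy sketch of drawing synthetic data from this model (the "true" m and b and the error-bar sizes are made-up values, just for illustration):

import numpy as np

np.random.seed(42)

# made-up "true" parameters and observation points
m_true, b_true = 0.5, 3.0
x = np.sort(10 * np.random.rand(20))
yerr = 0.3 + 0.4 * np.random.rand(20)        # per-point sigma_n

# y_n = m x_n + b + eps_n, with eps_n ~ N(0, sigma_n^2)
y = m_true * x + b_true + yerr * np.random.randn(20)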

Slide 13

Slide 13 text

a sketch of the graphical model of my research: Physics → Data, with inference pointing back from Data to Physics

Slide 14

Slide 14 text

$p(\mathrm{data} \mid \mathrm{physics})$: likelihood function / generative model
$p(\mathrm{physics} \mid \mathrm{data}) \propto p(\mathrm{physics})\, p(\mathrm{data} \mid \mathrm{physics})$: posterior probability

Slide 15

Slide 15 text

good/bad news: physics is everything, including things we don't care about!

Slide 16

Slide 16 text

a more realistic model: $p(D \mid \theta, \alpha)$, where $D$ is the data, $\theta$ is the cool physics, and $\alpha$ is the garbage

Slide 17

Slide 17 text

what if we underestimated our error bars (by some additive variance)?
$p(\{y_n, x_n, \sigma_n^2\} \mid m, b, \sigma^2) = \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi(\sigma_n^2 + \sigma^2)}} \exp\left[-\frac{1}{2} \frac{[y_n - (m\,x_n + b)]^2}{\sigma_n^2 + \sigma^2}\right]$
$m, b$: "physics"; $\sigma^2$: "jitter" (not physics)

Slide 18

Slide 18 text

marginalize. Do The Right Thing™: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$
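when the nuisance parameter is one-dimensional you can even do this integral on a grid; a toy sketch (the Gaussian shape of p(D | theta, alpha) here is made up):

import numpy as np

# uniform prior p(alpha) on [0, 1] and a toy likelihood p(D | theta, alpha)
alpha = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(alpha)
like = np.exp(-0.5 * ((alpha - 0.3) / 0.1) ** 2)

# p(D | theta) is proportional to the integral of prior * likelihood over alpha
print(np.trapz(prior * like, alpha))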

Slide 19

Slide 19 text

in our example: $p(\{y_n, x_n, \sigma_n\} \mid m, b) = \int \mathrm{d}\sigma^2\, p(\sigma^2) \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi(\sigma_n^2 + \sigma^2)}} \exp\left[-\frac{1}{2} \frac{[y_n - (m\,x_n + b)]^2}{\sigma_n^2 + \sigma^2}\right]$ BOOM!

Slide 20

Slide 20 text

What do you really want?

Slide 21

Slide 21 text

What do you really want? $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$

Slide 22

Slide 22 text

What do you really want? probably: $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$

Slide 23

Slide 23 text

expectation: $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$
marginalization: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$

Slide 24

Slide 24 text

expectation: $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$
marginalization: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$
over a large number of dimensions

Slide 25

Slide 25 text

expectation: $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$
marginalization: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$
over a large number of dimensions. whoa!

Slide 26

Slide 26 text

expectation: $\mathrm{E}_{p}[f(\theta)] = \frac{1}{Z} \int f(\theta)\, p(\theta)\, p(D \mid \theta)\, \mathrm{d}\theta$
marginalization: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$
over a large number of dimensions. whoa! This is HARD. (in general)

Slide 27

Slide 27 text

OK now that we agree…

Slide 28

Slide 28 text

as you learned in middle school:
$\int f(x)\, p(x)\, \mathrm{d}x \approx \frac{1}{N} \sum_{n=1}^{N} f(x_n)$ where $x_n \sim p(x)$
error: $\propto \frac{1}{\sqrt{N'}}$, with $N'$ the number of independent samples

Slide 29

Slide 29 text

as you learned in middle school:
$\int f(x)\, p(x)\, \mathrm{d}x \approx \frac{1}{N} \sum_{n=1}^{N} f(x_n)$ where $x_n \sim p(x)$
error: $\propto \frac{1}{\sqrt{N'}}$, with $N'$ the number of independent samples
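a sketch of this estimator in numpy, for a case where the answer is known (E[x^2] = 1 for a unit Gaussian):

import numpy as np

np.random.seed(42)

# x_n ~ p(x), here a unit Gaussian we can sample directly
N = 100000
x = np.random.randn(N)

# Monte Carlo estimate of E[f(x)] with f(x) = x^2; the exact answer is 1
print(np.mean(x ** 2))        # off by roughly 1/sqrt(N)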

Slide 30

Slide 30 text

MCMC draws samples from a probability function, and all you need to be able to do is evaluate the function (up to a constant)

Slide 31

Slide 31 text

Metropolis–Hastings

Slide 32

Slide 32 text

Metropolis–Hastings in an ideal world

Slide 33

Slide 33 text

Metropolis–Hastings in an ideal world: start here, perhaps

Slide 34

Slide 34 text

Metropolis–Hastings in an ideal world: propose a new position, $x' \sim q(x'; x)$

Slide 35

Slide 35 text

Metropolis–Hastings in an ideal world: propose a new position, $x' \sim q(x'; x)$. accept? $p(\mathrm{accept}) = \min\left(1, \frac{p(x')}{p(x)} \frac{q(x; x')}{q(x'; x)}\right)$

Slide 36

Slide 36 text

Metropolis–Hastings in an ideal world: propose a new position, $x' \sim q(x'; x)$. accept? $p(\mathrm{accept}) = \min\left(1, \frac{p(x')}{p(x)} \frac{q(x; x')}{q(x'; x)}\right)$ definitely.

Slide 37

Slide 37 text

Metropolis–Hastings in an ideal world: propose a new position, $x' \sim q(x'; x)$. accept? $p(\mathrm{accept}) = \min\left(1, \frac{p(x')}{p(x)} \frac{q(x; x')}{q(x'; x)}\right)$ definitely. (only relative probabilities)

Slide 38

Slide 38 text

Metropolis–Hastings in an ideal world: $x' \sim q(x'; x)$

Slide 39

Slide 39 text

Metropolis–Hastings in an ideal world: $x' \sim q(x'; x)$

Slide 40

Slide 40 text

Metropolis–Hastings in an ideal world: $x' \sim q(x'; x)$. accept? $p(\mathrm{accept}) = \min\left(1, \frac{p(x')}{p(x)} \frac{q(x; x')}{q(x'; x)}\right)$

Slide 41

Slide 41 text

Metropolis–Hastings in an ideal world: $x' \sim q(x'; x)$. accept? $p(\mathrm{accept}) = \min\left(1, \frac{p(x')}{p(x)} \frac{q(x; x')}{q(x'; x)}\right)$ not this time.

Slide 42

Slide 42 text

Metropolis–Hastings in an ideal world

Slide 43

Slide 43 text

Metropolis–Hastings in an ideal world: double count!

Slide 44

Slide 44 text

Metropolis–Hastings in an ideal world

Slide 45

Slide 45 text

Metropolis–Hastings in an ideal world

Slide 46

Slide 46 text

Metropolis–Hastings in the real world

Slide 47

Slide 47 text

Metropolis–Hastings in the real world

Slide 48

Slide 48 text

Metropolis–Hastings in the real world

Slide 49

Slide 49 text

Metropolis–Hastings in the real world

Slide 50

Slide 50 text

Metropolis–Hastings in the real world

Slide 51

Slide 51 text

Metropolis–Hastings in the real world: the Small Acceptance Fraction problem

Slide 52

Slide 52 text

Metropolis–Hastings in the real world

Slide 53

Slide 53 text

Metropolis–Hastings in the real world

Slide 54

Slide 54 text

Metropolis–Hastings in the real world: the Huge Acceptance Fraction problem

Slide 55

Slide 55 text

a brief aside.

Slide 56

Slide 56 text

YOUR LIKELIHOOD FUNCTION IS EXPENSIVE

Slide 57

Slide 57 text

Metropolis–Hastings in the real world

Slide 58

Slide 58 text

YOUR LIKELIHOOD FUNCTION IS EXPENSIVE. REMEMBER?

Slide 59

Slide 59 text

Metropolis–Hastings in the real world

Slide 60

Slide 60 text

Metropolis–Hastings in the real world: that's D(D+1)/2 tuning parameters

Slide 61

Slide 61 text

Metropolis–Hastings in the real world: that's D(D+1)/2 tuning parameters (D is the dimension of your problem)

Slide 62

Slide 62 text

Metropolis–Hastings in the real world: that's D(D+1)/2 tuning parameters (D is the dimension of your problem). [plot: Scientific Awesomeness vs. how hard is MCMC (~number of parameters), comparing Metropolis–Hastings against how things Should be]

Slide 63

Slide 63 text

that being said, go code up your own Metropolis–Hastings code right now. (well, not right right now)
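for when you do: a minimal sketch (a symmetric Gaussian proposal with a made-up step size, so the q terms cancel in the acceptance ratio; lnprob is any function returning the log-probability up to a constant):

import numpy as np

def metropolis_hastings(lnprob, x0, nsteps, step=0.1):
    x = np.array(x0, dtype=float)
    lp = lnprob(x)
    chain = np.empty((nsteps, len(x)))
    for i in range(nsteps):
        x_new = x + step * np.random.randn(len(x))   # propose x' ~ q(x'; x)
        lp_new = lnprob(x_new)
        # accept with probability min(1, p(x') / p(x))
        if np.log(np.random.rand()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain[i] = x     # on rejection, the old point is double counted
    return chain

# example: sample a unit Gaussian
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x ** 2), [0.0], 10000)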

Slide 64

Slide 64 text

"Ensemble samplers with affine invariance", Jonathan Goodman & Jonathan Weare (dfm.io/mcmc-gw10)

Slide 65

Slide 65 text

Ensemble Samplers in the real world

Slide 66

Slide 66 text

Ensemble Samplers in the real world

Slide 67

Slide 67 text

Ensemble samplers with affine invariance

Slide 68

Slide 68 text

Affine transformation $y = A\,x + b$: turns a hard problem into an easy one

Slide 69

Slide 69 text

Affine transformation $y = A\,x + b$: turns a hard problem into an easy one, and to an affine-invariant sampler they're THE SAME

Slide 70

Slide 70 text

Ensemble Samplers in the real world

Slide 71

Slide 71 text

Ensemble Samplers in the real world

Slide 72

Slide 72 text

Ensemble Samplers in the real world: choose a helper

Slide 73

Slide 73 text

Ensemble Samplers in the real world: choose a helper

Slide 74

Slide 74 text

Ensemble Samplers in the real world: choose a helper, propose a new position

Slide 75

Slide 75 text

Ensemble Samplers in the real world: choose a helper, propose a new position. accept? $p(\mathrm{accept}) = \min\left(1, z^{D-1}\, \frac{p(x')}{p(x)}\right)$
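a sketch of one sweep of the stretch move (following the Goodman & Weare construction; g(z) proportional to 1/sqrt(z) on [1/a, a] with a = 2 is their standard choice, and this serial loop is a simplification of what emcee actually does):

import numpy as np

def stretch_move(walkers, lnprob, a=2.0):
    # update each walker in turn, using the others as helpers
    K, D = walkers.shape
    lp = np.array([lnprob(w) for w in walkers])
    for k in range(K):
        j = (k + np.random.randint(1, K)) % K               # helper != k
        z = ((a - 1.0) * np.random.rand() + 1.0) ** 2 / a   # z ~ g(z)
        x_new = walkers[j] + z * (walkers[k] - walkers[j])  # propose
        lp_new = lnprob(x_new)
        # accept with probability min(1, z^(D-1) p(x') / p(x))
        if np.log(np.random.rand()) < (D - 1) * np.log(z) + lp_new - lp[k]:
            walkers[k], lp[k] = x_new, lp_new
    return walkers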

Slide 76

Slide 76 text

Ensemble Samplers in the real world

Slide 77

Slide 77 text

Ensemble Samplers in the real world

Slide 78

Slide 78 text

Ensemble Samplers in the real world: choose a helper

Slide 79

Slide 79 text

Ensemble Samplers in the real world

Slide 80

Slide 80 text

go code up your own Ensemble Sampler code right now?

Slide 81

Slide 81 text

introducing emcee: the MCMC Hammer
arxiv.org/abs/1202.3665 · dan.iel.fm/emcee
it's hammer time!

Slide 82

Slide 82 text

introducing emcee: the MCMC Hammer
arxiv.org/abs/1202.3665 · dan.iel.fm/emcee
it's hammer time! to install: pip install emcee

Slide 83

Slide 83 text

using emcee is easy. let me show you... it's hammer time!

Slide 84

Slide 84 text

$p(\{y_n, x_n, \sigma_n^2\} \mid m, b, \sigma^2) = \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi(\sigma_n^2 + \sigma^2)}} \exp\left[-\frac{1}{2} \frac{[y_n - (m\,x_n + b)]^2}{\sigma_n^2 + \sigma^2}\right]$

Slide 85

Slide 85 text

import numpy as np
import emcee

def lnprobfn(p, x, y, yerr):
    m, b, d2 = p
    if not 0 <= d2 <= 1:
        return -np.inf
    ivar = 1.0 / (yerr ** 2 + d2)
    chi2 = np.sum((y - m * x - b) ** 2 * ivar)
    return -0.5 * (chi2 - np.sum(np.log(ivar)))

# Load data.
# x, y, yerr = ...

# Set up sampler and initialize the walkers.
nwalkers, ndim = 100, 3
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprobfn, args=(x, y, yerr))
p0 = [[np.random.rand(), 10 * np.random.rand(), np.random.rand()]
      for k in range(nwalkers)]

# Go.
sampler.run_mcmc(p0, 1000)
m, b, d2 = sampler.flatchain.T

Slide 86

Slide 86 text

import numpy as np

def lnprobfn(p, x, y, yerr):
    m, b, d2 = p
    if not 0 <= d2 <= 1:
        return -np.inf
    ivar = 1.0 / (yerr ** 2 + d2)
    chi2 = np.sum((y - m * x - b) ** 2 * ivar)
    return -0.5 * (chi2 - np.sum(np.log(ivar)))

$p(\sigma^2) = \begin{cases} 1, & \text{if } 0 \le \sigma^2 \le 1 \\ 0, & \text{otherwise} \end{cases}$
$\ln p(\{x_n, y_n, \sigma_n\} \mid m, b, \sigma^2) = -\frac{\chi^2}{2} - \frac{1}{2} \sum_{n=1}^{N} \ln\left(\sigma_n^2 + \sigma^2\right), \qquad \chi^2 = \sum_{n=1}^{N} \frac{(y_n - m\,x_n - b)^2}{\sigma_n^2 + \sigma^2}$

Slide 87

Slide 87 text

import numpy as np
import emcee

# initialize the sampler
nwalkers, ndim = 100, 3
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprobfn, args=(x, y, yerr))

# initialize each walker: m_k ~ U(0, 1), b_k ~ U(0, 10), sigma2_k ~ U(0, 1)
p0 = [[np.random.rand(), 10 * np.random.rand(), np.random.rand()]
      for k in range(nwalkers)]

# run MCMC for 1000 steps
sampler.run_mcmc(p0, 1000)

# get the samples
m, b, d2 = sampler.flatchain.T

Slide 88

Slide 88 text

burn-in
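one common pattern against the emcee 2.x API from the example above (the 200-step burn-in length is a made-up number; check your chains):

# burn in: run for a while, then throw those samples away
pos, lnp, state = sampler.run_mcmc(p0, 200)
sampler.reset()

# production run, restarted from the burned-in walker positions
sampler.run_mcmc(pos, 1000)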

Slide 89

Slide 89 text

No content

Slide 90

Slide 90 text

No content

Slide 91

Slide 91 text

convergence

Slide 92

Slide 92 text

acceptance fraction?

Slide 93

Slide 93 text

acceptance fraction? whoa!

Slide 94

Slide 94 text

acceptance fraction? whoa! that's a lot of significant digits...
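emcee tracks one acceptance fraction per walker; the mean is the number to look at (and the oft-quoted ~0.2 to 0.5 target is a rule of thumb, not a theorem):

import numpy as np

# mean acceptance fraction across walkers; ignore the extra digits
print(np.mean(sampler.acceptance_fraction))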

Slide 95

Slide 95 text

dfm/acor 
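a sketch of checking the integrated autocorrelation time (in emcee 2.x the sampler.acor property wraps the dfm/acor package); the chain length divided by tau is roughly your number of independent samples:

# integrated autocorrelation time, one estimate per dimension
tau = sampler.acor
print(tau)

# effective number of independent samples out of 1000 steps
print(1000 / tau.max())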

Slide 96

Slide 96 text

displaying results

Slide 97

Slide 97 text

No content

Slide 98

Slide 98 text

No content

Slide 99

Slide 99 text

dfm/triangle.py 
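a sketch with the triangle.py package (since renamed corner.py); the labels are just the parameter names from the example above:

import triangle

# scatterplot matrix of all 1-D and 2-D marginal distributions
fig = triangle.corner(sampler.flatchain, labels=["m", "b", "s2"])
fig.savefig("triangle.png")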

Slide 100

Slide 100 text

sharing results: you need a number for your abstract?!?

Slide 101

Slide 101 text

1. sort the samples
2. compute moments/quantiles (0.16, 0.5, 0.84)

Slide 102

Slide 102 text

1. sort the samples
2. compute moments/quantiles (0.16, 0.5, 0.84): $X = \bar{X}^{+\sigma_{+}}_{-\sigma_{-}}$ ... but what does it MEAN?

Slide 103

Slide 103 text

1. sort the samples
2. compute moments/quantiles (0.16, 0.5, 0.84): $X = \bar{X}^{+\sigma_{+}}_{-\sigma_{-}}$ ... but what does it MEAN?
3. machine-readable samplings (including prior values)

Slide 104

Slide 104 text

1. sort the samples
2. compute moments/quantiles (0.16, 0.5, 0.84): $X = \bar{X}^{+\sigma_{+}}_{-\sigma_{-}}$ ... but what does it MEAN?
3. machine-readable samplings (including prior values): my pipe dream
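for step 2, a sketch using numpy percentiles on the flattened chain from the example above (column 0 holds the m samples):

import numpy as np

# 16th, 50th, and 84th percentiles of the marginalized m samples
lo, med, hi = np.percentile(sampler.flatchain[:, 0], [16, 50, 84])

# quote the median with +/- quantile-based error bars
print("m = {0:.2f} +{1:.2f} -{2:.2f}".format(med, hi - med, med - lo))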

Slide 105

Slide 105 text

Brendon Brewer's words of wisdom...

Slide 106

Slide 106 text

Brendon Brewer's words of wisdom... Remember: emcee isn't always The Right Choice™

Slide 107

Slide 107 text

what about multimodal densities? [plot: $p(\theta \mid D)$ vs. $\theta$]

Slide 108

Slide 108 text

what about multimodal densities? [plot: $p(\theta \mid D)$ vs. $\theta$]

Slide 109

Slide 109 text

what about multimodal densities? [plot: $p(\theta \mid D)$ vs. $\theta$]

Slide 110

Slide 110 text

what about multimodal densities? [plot: $p(\theta \mid D)$ vs. $\theta$, with $p(\mathrm{accept})$ marked]

Slide 111

Slide 111 text

what about multimodal densities? [plot: $p(\theta \mid D)$ vs. $\theta$, with $p(\mathrm{accept})$ marked]

Slide 112

Slide 112 text

what if we want to compare models?
$p(\theta \mid D, M) = \frac{1}{Z(M)}\, p(\theta \mid M)\, p(D \mid \theta, M)$
$Z = p(D \mid M) = \int p(D, \theta \mid M)\, \mathrm{d}\theta$

Slide 113

Slide 113 text

what if we want to compare models?
$p(\theta \mid D, M) = \frac{1}{Z(M)}\, p(\theta \mid M)\, p(D \mid \theta, M)$
$Z = p(D \mid M) = \int p(D, \theta \mid M)\, \mathrm{d}\theta$

Slide 114

Slide 114 text

what to do?

Slide 115

Slide 115 text

what to do? 1. burn-in & priors [plot: $p(\theta \mid D)$ vs. $\theta$]

Slide 116

Slide 116 text

what to do? 1. burn-in & priors. 2. use a different algorithm (gasp!) when you need $Z = p(D \mid M) = \int p(D, \theta \mid M)\, \mathrm{d}\theta$

Slide 117

Slide 117 text

what to do? 1. burn-in & priors. 2. use a different algorithm (gasp!) when you need $Z = p(D \mid M) = \int p(D, \theta \mid M)\, \mathrm{d}\theta$: github.com/eggplantbren/DNest3

Slide 118

Slide 118 text

sometimes it is useful

Slide 119

Slide 119 text

emcee is community supported.

Slide 120

Slide 120 text

emcee is community supported.

Slide 121

Slide 121 text

emcee has good documentation.

Slide 122

Slide 122 text

emcee has a live support team. [email protected]

Slide 123

Slide 123 text

Alex Conley (UC Boulder) Jason Davies (Jason Davies Ltd.) Will Meierjurgen Farr (Northwestern) David W. Hogg (NYU) Dustin Lang (CMU) Phil Marshall (Oxford) Ilya Pashchenko (ASC LPI, Moscow) Adrian Price-Whelan (Columbia) Jeremy Sanders (Cambridge) Joe Zuntz (Oxford) Eric Agol (UW) Jo Bovy (IAS) Jacqueline Chen (MIT) John Gizis (Delaware) Jonathan Goodman (NYU) Marius Millea (UC Davis) Jennifer Piscionere (Vanderbilt) contributors

Slide 124

Slide 124 text

take home. your model: $p(D \mid \theta, \alpha)$

Slide 125

Slide 125 text

take home. your model: $p(D \mid \theta, \alpha)$. marginalize: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$. it's hammer time!

Slide 126

Slide 126 text

take home. your model: $p(D \mid \theta, \alpha)$. marginalize: $p(D \mid \theta) \propto \int p(\alpha)\, p(D \mid \theta, \alpha)\, \mathrm{d}\alpha$. $\theta =$ ... it's hammer time!

Slide 127

Slide 127 text

Alex Conley (UC Boulder) Jason Davies (Jason Davies Ltd.) Will Meierjurgen Farr (Northwestern) David W. Hogg (NYU) Dustin Lang (CMU) Phil Marshall (Oxford) Ilya Pashchenko (ASC LPI, Moscow) Adrian Price-Whelan (Columbia) Jeremy Sanders (Cambridge) Joe Zuntz (Oxford) Eric Agol (UW) Jo Bovy (IAS) Jacqueline Chen (MIT) John Gizis (Delaware) Jonathan Goodman (NYU) Marius Millea (UC Davis) Jennifer Piscionere (Vanderbilt) thanks!