Slide 1

Slide 1 text

AN ASTRONOMER’S INTRODUCTION TO GAUSSIAN PROCESSES
Dan Foreman-Mackey
CCPP@NYU // github.com/dfm // @exoplaneteer // dfm.io

Slide 2

Slide 2 text

github.com/dfm/gp-tutorial

Slide 3

Slide 3 text

I write code for good & astrophysics.

Slide 4

Slide 4 text

I (probably) do data science.

Slide 5

Slide 5 text

Photo credit: James Silvester (silvesterphoto.tumblr.com). Not data science.

Slide 6

Slide 6 text

CC-BY Flickr user Marcin Wichary. Data science.

Slide 7

Slide 7 text

Data Science: MODEL ↔ DATA

Slide 8

Slide 8 text

Data Science: MODEL ↔ DATA, p(data | model)

Slide 9

Slide 9 text

I work with Kepler data.

Slide 10

Slide 10 text

I get really passionate about NOISE.

Slide 11

Slide 11 text

NOISE: astrophysical & instrumental.

Slide 12

Slide 12 text

NOISE
- astrophysical: star spots, stellar variability, transiting exoplanets
- instrumental: temperature variations, pointing shifts, cosmic rays

Slide 13

Slide 13 text

[Figure: relative flux vs. time [KBJD]] Ruth’s favourite object: KIC 3223000

Slide 14

Slide 14 text

NOISE
- astrophysical: star spots, stellar variability, transiting exoplanets
- instrumental: temperature variations, pointing shifts, cosmic rays

Slide 15

Slide 15 text

Kepler 32

Slide 16

Slide 16 text

Kepler 32

Slide 17

Slide 17 text

NOISE
- astrophysical: star spots, stellar variability, transiting exoplanets
- instrumental: temperature variations, pointing shifts, cosmic rays

Slide 18

Slide 18 text

Why not model all the things?

Slide 19

Slide 19 text

1. The power of correlated noise.

Slide 20

Slide 20 text

[Figure: simulated noisy data drawn about the line y = m x + b]

Slide 21

Slide 21 text

[Figure: matrix plot] The true covariance of the observations.

Slide 22

Slide 22 text

Let’s assume that the noise is… independent Gaussian with known variance:

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2} \sum_n \left[ \frac{(y_n - m\,x_n - b)^2}{\sigma_n^2} + \log 2\pi\sigma_n^2 \right]$$
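
This sum is only a few lines of numpy; a minimal sketch (the function name and the assumption that x, y, yerr are arrays are mine, not from the deck):

import numpy as np

def independent_lnlike(x, y, yerr, m, b):
    # Independent Gaussian log-likelihood with known variances yerr**2.
    r = y - (m * x + b)
    return -0.5 * np.sum(r**2 / yerr**2 + np.log(2 * np.pi * yerr**2))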

Slide 23

Slide 23 text

Or equivalently…

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

Slide 24

Slide 24 text

Or equivalently…

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

(C is diagonal.)

Slide 25

Slide 25 text

Or equivalently…

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

(C is diagonal; N = Ndata.)

Slide 26

Slide 26 text

Or equivalently…

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

(C is diagonal; N = Ndata; r is the residual vector)

$$\mathbf{r} = \begin{bmatrix} y_1 - (m\,x_1 + b) & \cdots & y_n - (m\,x_n + b) \end{bmatrix}^\mathrm{T}$$

Slide 27

Slide 27 text

Linear least-squares.

$$A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$

$$\begin{bmatrix} m \\ b \end{bmatrix} = S\,A^\mathrm{T} C^{-1} \mathbf{y}, \qquad S = \left[ A^\mathrm{T} C^{-1} A \right]^{-1}$$

Here [m b]^T is the maximum-likelihood solution and (in this case only) the mean of the posterior; S is the posterior covariance.

Slide 28

Slide 28 text

Linear least-squares.

$$A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$

$$\begin{bmatrix} m \\ b \end{bmatrix} = S\,A^\mathrm{T} C^{-1} \mathbf{y}, \qquad S = \left[ A^\mathrm{T} C^{-1} A \right]^{-1}$$

Here [m b]^T is the maximum-likelihood solution and (in this case only) the mean of the posterior, assuming uniform priors; S is the posterior covariance.
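
In code, the solution above is a few lines of linear algebra; a sketch (the names are mine), which also works when C is non-diagonal:

import numpy as np

def linear_least_squares(x, y, C):
    # [m, b]^T = S A^T C^{-1} y with S = [A^T C^{-1} A]^{-1}.
    A = np.vander(x, 2)                 # design matrix with columns [x, 1]
    Cinv_A = np.linalg.solve(C, A)      # C^{-1} A without forming C^{-1} explicitly
    S = np.linalg.inv(A.T @ Cinv_A)     # posterior covariance (uniform priors)
    w = S @ (Cinv_A.T @ y)              # posterior mean: [m, b]
    return w, S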

Slide 29

Slide 29 text

[Figure: the data with the true line labeled “truth”]

Slide 30

Slide 30 text

[Figure: truth and posterior constraint?]

Slide 31

Slide 31 text

[Figure: truth and posterior constraint?]

Slide 32

Slide 32 text

[Figure: truth and posterior constraint?]

Slide 33

Slide 33 text

[Figure: matrix plot] But we know the true covariance matrix.

Slide 34

Slide 34 text

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

Slide 35

Slide 35 text

$$\log p(\mathbf{y} \,|\, m, b) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

[Figure: the true covariance matrix]

Slide 36

Slide 36 text

Linear least-squares.

$$A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$

$$\begin{bmatrix} m \\ b \end{bmatrix} = S\,A^\mathrm{T} C^{-1} \mathbf{y}, \qquad S = \left[ A^\mathrm{T} C^{-1} A \right]^{-1}$$

Here [m b]^T is the maximum-likelihood solution and (in this case only) the mean of the posterior; S is the posterior covariance.

[Figure: the true covariance matrix]

Slide 37

Slide 37 text

[Figure: posterior constraint] Before.

Slide 38

Slide 38 text

[Figure: posterior constraint] After.

Slide 39

Slide 39 text

CC-BY-ND Flickr user MyFWCmedia. The responsible scientist.

Slide 40

Slide 40 text

So… we’re finished, right?

Slide 41

Slide 41 text

In The Real World™, we never know the noise.

Slide 42

Slide 42 text

Just gotta model it!

Slide 43

Slide 43 text

$$\log p(\mathbf{y} \,|\, m, b, a, s) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} C^{-1} \mathbf{r} - \frac{1}{2} \log\det C - \frac{N}{2} \log 2\pi$$

where C is now a function of the model parameters, built from a kernel (or covariance function); for example

$$C_{ij} = \sigma_i^2\,\delta_{ij} + a \exp\left( -\frac{(x_i - x_j)^2}{2\,s} \right)$$

Slide 44

Slide 44 text

$$\log p(m, b, a, s \,|\, \mathbf{y}) = \log p(\mathbf{y} \,|\, m, b, a, s) + \log p(m, b, a, s) + \text{constant}$$

It’s hammer time! emcee: the MCMC Hammer // arxiv.org/abs/1202.3665 // dan.iel.fm/emcee
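
A sketch of what "hammer time" might look like. The priors, toy data, and starting point here are mine, not from the deck; the GP log-likelihood mirrors the implementation that appears on a later slide:

import numpy as np
from scipy.linalg import cho_factor, cho_solve
import emcee

def log_posterior(params, x, y, yerr):
    m, b, ln_a, ln_s = params
    if np.any(np.abs(params) > 10.0):
        return -np.inf                      # flat priors inside a broad box (my choice)
    a, s = np.exp(ln_a), np.exp(ln_s)
    r = y - (m * x + b)                     # residuals about the line
    dx = x[:, None] - x[None, :]
    C = np.diag(yerr**2) + a * np.exp(-0.5 * dx**2 / (s * s))
    factor = cho_factor(C)
    logdet = 2.0 * np.sum(np.log(np.diag(factor[0])))
    return -0.5 * (r @ cho_solve(factor, r) + logdet + len(x) * np.log(2 * np.pi))

# Toy data so the example runs end to end.
np.random.seed(1234)
x = np.sort(np.random.uniform(-5, 5, 50))
yerr = 0.1 * np.ones_like(x)
y = 0.5 * x + 1.0 + yerr * np.random.randn(len(x))

ndim, nwalkers = 4, 32
p0 = np.array([0.5, 1.0, 0.0, 0.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(x, y, yerr))
sampler.run_mcmc(p0, 2000)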

Slide 45

Slide 45 text

[Figure: corner plot of posterior samples in m, b, ln a, and ln s]

Slide 46

Slide 46 text

[Figure: posterior predictions overplotted on the data]

Slide 47

Slide 47 text

[Figure: posterior predictions overplotted on the data]

Slide 48

Slide 48 text

2. The formal Gaussian process.

Slide 49

Slide 49 text

The model.

$$\log p(\mathbf{y} \,|\, m, b, \theta) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} K_\theta^{-1} \mathbf{r} - \frac{1}{2} \log\det K_\theta - \frac{N}{2} \log 2\pi$$

where

$$[K_\theta]_{ij} = \sigma_i^2\,\delta_{ij} + k_\theta(x_i, x_j)$$

A drop-in replacement for your current log-likelihood function!

Slide 50

Slide 50 text

The data are drawn from one HUGE* Gaussian. (* the covariance matrix scales as the square of the number of data points.)

Slide 51

Slide 51 text

“Prior” samples.

$$k_\theta(r) = \exp\left( -\frac{r^2}{2\,l^2} \right)$$

[Figure: samples for the exponential squared kernel (l = 0.5, 1, 2) and the quasi-periodic kernel (l = 2, P = 3; l = 3, P = 3; l = 3, P = 1)]

Slide 52

Slide 52 text

“Prior” samples: exponential squared.

$$k_\theta(r) = \exp\left( -\frac{r^2}{2\,l^2} \right)$$

[Figure: samples for l = 0.5, 1, 2]

Slide 53

Slide 53 text

“Prior” samples.

$$k_\theta(r) = \left[ 1 + \frac{\sqrt{3}\,r}{l} \right] \exp\left( -\frac{\sqrt{3}\,r}{l} \right) \cos\left( \frac{2\pi r}{P} \right)$$

[Figure: samples for l = 2, P = 3; l = 3, P = 3; l = 3, P = 1]

Slide 54

Slide 54 text

“Prior” samples: quasi-periodic.

$$k_\theta(r) = \left[ 1 + \frac{\sqrt{3}\,r}{l} \right] \exp\left( -\frac{\sqrt{3}\,r}{l} \right) \cos\left( \frac{2\pi r}{P} \right)$$

[Figure: samples for l = 2, P = 3; l = 3, P = 3; l = 3, P = 1]
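
Drawing these "prior" samples is just sampling from a big multivariate Gaussian; a minimal sketch for the exponential-squared kernel (the function name, grid, and jitter are mine):

import numpy as np

def prior_samples(t, l, n_samples=3):
    # Exponential-squared kernel matrix on the grid t.
    r = t[:, None] - t[None, :]
    K = np.exp(-0.5 * r**2 / l**2)
    K[np.diag_indices_from(K)] += 1e-10   # tiny jitter for numerical stability
    return np.random.multivariate_normal(np.zeros(len(t)), K, size=n_samples)

t = np.linspace(0, 10, 200)
samples = prior_samples(t, l=1.0)         # one sampled curve per row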

Slide 55

Slide 55 text

$$\log p(\mathbf{y} \,|\, m, b, \theta) = -\frac{1}{2}\,\mathbf{r}^\mathrm{T} K_\theta^{-1} \mathbf{r} - \frac{1}{2} \log\det K_\theta - \frac{N}{2} \log 2\pi$$

Computational complexity. Naïvely O(N³): compute LU decomposition // evaluate log-det // apply inverse.

Slide 56

Slide 56 text

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def simple_gp_lnlike(x, y, yerr, a, s):
    r = x[:, None] - x[None, :]
    C = np.diag(yerr**2) + a * np.exp(-0.5 * r**2 / (s * s))
    factor, flag = cho_factor(C)
    logdet = np.sum(2 * np.log(np.diag(factor)))
    return -0.5 * (np.dot(y, cho_solve((factor, flag), y))
                   + logdet + len(x) * np.log(2 * np.pi))
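
Two details worth noticing: the Cholesky factor does double duty (cho_solve applies C⁻¹ and the log-determinant falls out of the factor's diagonal), and in the line-fitting example you would pass the residuals as y. A hypothetical call, with purely illustrative values for a and s:

r = y - (m * x + b)                       # residuals about the line model
lnlike = simple_gp_lnlike(x, r, yerr, a=1.0, s=2.0)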

Slide 57

Slide 57 text

[Figure: log10(runtime/seconds) vs. log10(N)]

Slide 58

Slide 58 text

[Figure: exponential squared and quasi-periodic covariance matrices]

Slide 59

Slide 59 text

[Figure: exponential squared and quasi-periodic covariance matrices, thresholded]

Slide 60

Slide 60 text

[Figure: exponential squared and quasi-periodic covariance matrices, thresholded] Sparse, but still roughly O(N²).

Slide 61

Slide 61 text

FFT?

Slide 62

Slide 62 text

“Aren’t kernel matrices Hierarchical Off-Diagonal Low-Rank?” — no astronomer, ever

Slide 63

Slide 63 text

$$K^{(3)} = K_3 \times K_2 \times K_1 \times K_0$$

[Figure legend: full rank; low-rank; identity matrix; zero matrix]

Ambikasaran, DFM, et al. (in prep)

Slide 64

Slide 64 text

github.com/dfm/george

Slide 65

Slide 65 text

import numpy as np
from george import GaussianProcess, kernels

def george_lnlike(x, y, yerr, a, s):
    kernel = a * kernels.RBFKernel(s)
    gp = GaussianProcess(kernel)
    gp.compute(x, yerr)
    return gp.lnlikelihood(y)
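
The signature deliberately matches simple_gp_lnlike from before, so the two are drop-in replacements for each other in the timing comparison that follows. A hypothetical call with illustrative values (note: this is the george interface shown on the slide; later versions of the library may differ):

lnlike = george_lnlike(x, y - (m * x + b), yerr, a=1.0, s=2.0)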

Slide 66

Slide 66 text

[Figure: log10(runtime/seconds) vs. log10(N)]

Slide 67

Slide 67 text

[Figure: log10(runtime/seconds) vs. log10(N)]

Slide 68

Slide 68 text

And short cadence data? One month of data in 4 seconds.

Slide 69

Slide 69 text

3. Applications to Kepler data.

Slide 70

Slide 70 text

Parameter Recovery (with Sivaram Ambikasaran, et al.)

Slides 71–80

[Figures only; no recoverable text content]