Dan Foreman-Mackey
January 12, 2023

# Open Software for Astrophysics, AAS241

Slides for my plenary talk at the 241st American Astronomical Society meeting.


## Transcript

3. ### AAS 225 / 2015 / Seattle; AAS 231 / 2018 / National Harbor
4. ### Figure: raw [ppt] and de-trended [ppt] light curves vs. time [days], N = 1000 (reference: DFM+ 2017)
5. ### Figure: raw [ppt] and de-trended [ppt] light curves vs. time [days], N = 1000 (reference: DFM+ 2017)

8. ### Figure: ignoring correlated noise vs. accounting for correlated noise (reference: Aigrain & DFM 2022)

10. ### a Gaussian Process is a drop-in replacement for chi-squared
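To see why the "drop-in replacement" claim holds: with uncorrelated errors the kernel matrix is diagonal, and the GP goodness-of-fit term reduces exactly to chi-squared. A minimal sketch (my illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)
yerr = 0.1 * np.ones(50)          # per-point uncertainties
r = rng.normal(0.0, yerr)         # residuals: data minus model

# chi-squared with independent errors
chi2 = np.sum((r / yerr) ** 2)

# GP log-likelihood with a diagonal kernel K = diag(yerr**2):
# the goodness-of-fit term is identical to chi2
K = np.diag(yerr**2)
gof = r.T @ np.linalg.solve(K, r)
norm = np.linalg.slogdet(K)[1]
log_like = -0.5 * (gof + norm)

assert np.allclose(gof, chi2)     # identical when the noise is uncorrelated
```

A non-diagonal K is then the generalization: same likelihood expression, but with correlated noise built into the matrix.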

16. ```python
    import numpy as np

    def log_likelihood(params, x, diag, r):
        K = build_kernel_matrix(params, x, diag)
        gof = r.T @ np.linalg.solve(K, r)
        norm = np.linalg.slogdet(K)[1]
        return -0.5 * (gof + norm)
    ```
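The `build_kernel_matrix` helper on these slides isn't shown. A minimal sketch of what it might look like, assuming a squared-exponential kernel plus per-point noise variances on the diagonal (a hypothetical stand-in, not the talk's actual implementation):

```python
import numpy as np

def build_kernel_matrix(params, x, diag):
    # Hypothetical stand-in: squared-exponential kernel plus
    # per-point variances on the diagonal
    amp, scale = params
    dx = x[:, None] - x[None, :]
    K = amp**2 * np.exp(-0.5 * (dx / scale) ** 2)
    return K + np.diag(diag)

def log_likelihood(params, x, diag, r):
    K = build_kernel_matrix(params, x, diag)
    gof = r.T @ np.linalg.solve(K, r)   # goodness-of-fit term
    norm = np.linalg.slogdet(K)[1]      # log-determinant term
    return -0.5 * (gof + norm)          # constant N*log(2*pi) omitted, as on the slide

x = np.linspace(0.0, 10.0, 25)
r = np.sin(x)                           # toy residual vector
ll = log_likelihood((1.0, 1.5), x, 0.01 * np.ones_like(x), r)
```

The diagonal term keeps the matrix well conditioned, which is why the slide's signature carries `diag` separately from the kernel parameters.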

19. ```python
    from george.kernels import *

    k1 = 1.5 * ExpSquaredKernel(2.3)
    k2 = 5.5 * Matern32Kernel(0.1)
    kernel = 0.5 * (k1 + k2)
    ```
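The kernel algebra on that slide, where sums and scalar multiples of kernels are again valid kernels, can be sketched in plain NumPy. These helper functions are illustrative stand-ins for the idea, not george's internals:

```python
import numpy as np

def exp_squared(ell):
    # k(x1, x2) = exp(-0.5 * (x1 - x2)**2 / ell**2)
    return lambda x1, x2: np.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def matern32(ell):
    # k(x1, x2) = (1 + sqrt(3)*d/ell) * exp(-sqrt(3)*d/ell), d = |x1 - x2|
    def k(x1, x2):
        arg = np.sqrt(3.0) * np.abs(x1 - x2) / ell
        return (1.0 + arg) * np.exp(-arg)
    return k

# Mirrors: kernel = 0.5 * (1.5 * ExpSquaredKernel(2.3) + 5.5 * Matern32Kernel(0.1))
k1 = lambda x1, x2: 1.5 * exp_squared(2.3)(x1, x2)
k2 = lambda x1, x2: 5.5 * matern32(0.1)(x1, x2)
kernel = lambda x1, x2: 0.5 * (k1(x1, x2) + k2(x1, x2))

value = kernel(0.0, 0.0)  # at zero lag both base kernels are 1: 0.5 * (1.5 + 5.5) = 3.5
```

Composing kernels as ordinary algebraic expressions is the design choice that makes this API feel like writing down the model itself.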

21. ```python
    from george import GP

    gp = GP(kernel)
    gp.compute(x, yerr)
    gp.log_likelihood(y)
    gp.fit(y)  # ???
    ```
24. ### faster: celerite2* (* yes, that truly is how you pronounce it…)
25. ```python
    import numpy as np

    def log_likelihood(params, x, diag, r):
        K = build_kernel_matrix(params, x, diag)
        gof = r.T @ np.linalg.solve(K, r)
        norm = np.linalg.slogdet(K)[1]
        return -0.5 * (gof + norm)
    ```

28. ### Figure: computational cost [seconds] vs. number of data points [N], comparing the direct solver with O(N) scaling (reference: DFM, Agol, Ambikasaran, Angus 2017)
29. ### Figure: computational cost [seconds] vs. number of data points [N], showing O(N) scaling (reference: DFM, Agol, Ambikasaran, Angus 2017)

33. ### restrictions: [1] 1(ish)-dimensional input; [2] specific type of kernel

39. ```python
    import numpy as np

    def linear_least_squares(x, y):
        A = np.vander(x, 2)
        return np.linalg.lstsq(A, y)[0]
    ```
40. ```python
    import jax.numpy as jnp

    def linear_least_squares(x, y):
        A = jnp.vander(x, 2)
        return jnp.linalg.lstsq(A, y)[0]
    ```
41. ```python
    import jax
    import jax.numpy as jnp

    @jax.jit
    def linear_least_squares(x, y):
        A = jnp.vander(x, 2)
        return jnp.linalg.lstsq(A, y)[0]
    ```
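A quick sanity check of the NumPy version of `linear_least_squares` (the JAX versions return the same coefficients): fitting points on the line y = 2x + 1 recovers the slope and intercept.

```python
import numpy as np

def linear_least_squares(x, y):
    # Same as the NumPy slide, with rcond=None added to silence
    # NumPy's deprecation warning
    A = np.vander(x, 2)  # columns are [x, 1], so the result is [slope, intercept]
    return np.linalg.lstsq(A, y, rcond=None)[0]

x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 1.0                 # noise-free line
coeff = linear_least_squares(x, y)
# coeff is approximately [2.0, 1.0]
```

The point of the three-slide progression is that the function body barely changes: swapping `np` for `jnp` and adding `@jax.jit` gets compilation (and differentiability) essentially for free.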