
# Open Software for Astrophysics, AAS241

Slides for my plenary talk at the 241st American Astronomical Society (AAS) meeting, January 12, 2023.

## Transcript

4. ### AAS 225 / 2015 / Seattle; AAS 231 / 2018 / National Harbor
5. ### [Figure: raw [ppt] and de-trended [ppt] light curve vs. time [days]; N = 1000; reference: DFM+ (2017)]

6. ### [Figure: same light curve as the previous slide; reference: DFM+ (2017)]

9. ### ignoring correlated noise vs. accounting for correlated noise; reference: Aigrain & DFM (2022)

11. ### a Gaussian Process is a drop-in replacement for chi-squared
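This claim can be checked numerically: when the covariance matrix is diagonal, the GP log-likelihood reduces exactly to the usual chi-squared statistic (plus its log-determinant normalization). A minimal numpy sketch with made-up residuals, not from the slides:

```python
import numpy as np

# Made-up residuals r and per-point uncertainties sigma
rng = np.random.default_rng(42)
r = rng.normal(size=5)
sigma = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# GP-style evaluation with a diagonal "kernel" matrix
K = np.diag(sigma**2)
gof = r @ np.linalg.solve(K, r)
_, norm = np.linalg.slogdet(K)
gp_ll = -0.5 * (gof + norm)

# Standard chi-squared evaluation
chi2 = np.sum(r**2 / sigma**2)
chi2_ll = -0.5 * (chi2 + np.sum(np.log(sigma**2)))

print(np.allclose(gp_ll, chi2_ll))  # True: the two agree
```

A non-diagonal K is what makes the GP a strict generalization: the same two terms, but with off-diagonal covariance describing the correlated noise.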

18. ###

    ```python
    import numpy as np

    def log_likelihood(params, x, diag, r):
        K = build_kernel_matrix(params, x, diag)
        gof = r.T @ np.linalg.solve(K, r)
        _, norm = np.linalg.slogdet(K)  # slogdet returns (sign, log|det|)
        return -0.5 * (gof + norm)
    ```

19. ### (same code as slide 18)
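`build_kernel_matrix` is not shown on the slides; a plausible sketch, assuming a squared-exponential kernel with amplitude and length-scale parameters (the parameterization here is my guess, not the talk's):

```python
import numpy as np

def build_kernel_matrix(params, x, diag):
    """Squared-exponential kernel matrix, with per-point variance on the diagonal."""
    amp, scale = params
    dx = x[:, None] - x[None, :]
    K = amp**2 * np.exp(-0.5 * (dx / scale) ** 2)
    return K + np.diag(diag)

# Smoke test on made-up inputs
x = np.linspace(0.0, 1.0, 4)
K = build_kernel_matrix((1.0, 0.5), x, np.full(4, 0.1))
print(np.allclose(K, K.T))                 # symmetric
print(np.all(np.linalg.eigvalsh(K) > 0))   # positive definite
```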

21. ###

    ```python
    from george.kernels import *

    k1 = 1.5 * ExpSquaredKernel(2.3)
    k2 = 5.5 * Matern32Kernel(0.1)
    kernel = 0.5 * (k1 + k2)
    ```
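This kernel algebra works because sums and positive scalings of valid kernels are themselves valid kernels. The same composition can be written out by hand in plain numpy (hand-rolled analogues of the two kernel functions; the exact length-scale parameterization george uses may differ):

```python
import numpy as np

def exp_squared(dx, scale):   # ExpSquaredKernel analogue
    return np.exp(-0.5 * dx**2 / scale)

def matern32(dx, scale):      # Matern32Kernel analogue
    arg = np.sqrt(3.0 * dx**2 / scale)
    return (1.0 + arg) * np.exp(-arg)

def kernel(dx):
    k1 = 1.5 * exp_squared(dx, 2.3)
    k2 = 5.5 * matern32(dx, 0.1)
    return 0.5 * (k1 + k2)

dx = np.array([0.0])
print(kernel(dx))  # variance at zero lag: 0.5 * (1.5 + 5.5) = 3.5
```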

23. ###

    ```python
    from george import GP

    gp = GP(kernel)
    gp.compute(x, yerr)
    gp.log_likelihood(y)
    gp.fit(y)  # ???
    ```

wheel
27. ### faster: celerite2 (*yes, that truly is how you pronounce it…)
28. ### (same log-likelihood code as slide 18)

29. ### (same log-likelihood code as slide 18)

32. ### [Figure: computational cost [seconds] vs. number of data points [N], direct method vs. O(N) solver; reference: DFM, Agol, Ambikasaran, Angus (2017)]

33. ### [Figure: same scaling comparison as the previous slide; reference: DFM, Agol, Ambikasaran, Angus (2017)]

38. ### restrictions: 1(ish)-dimensional input; specific type of kernel

44. ###

    ```python
    import numpy as np

    def linear_least_squares(x, y):
        A = np.vander(x, 2)
        return np.linalg.lstsq(A, y)
    ```

45. ###

    ```python
    import jax.numpy as jnp

    def linear_least_squares(x, y):
        A = jnp.vander(x, 2)
        return jnp.linalg.lstsq(A, y)
    ```

46. ###

    ```python
    import jax
    import jax.numpy as jnp

    @jax.jit
    def linear_least_squares(x, y):
        A = jnp.vander(x, 2)
        return jnp.linalg.lstsq(A, y)
    ```
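As a quick check that the numpy and jit-compiled jax versions above agree, one can fit an exact line with both (a sketch assuming jax is installed; made-up data):

```python
import numpy as np
import jax
import jax.numpy as jnp

def lstsq_np(x, y):
    A = np.vander(x, 2)
    return np.linalg.lstsq(A, y, rcond=None)[0]

@jax.jit
def lstsq_jax(x, y):
    A = jnp.vander(x, 2)
    return jnp.linalg.lstsq(A, y)[0]

x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 1.0  # exact line: slope 2, intercept 1

print(np.allclose(lstsq_np(x, y), [2.0, 1.0]))
# jax defaults to float32, so compare with a looser tolerance
print(np.allclose(np.asarray(lstsq_jax(jnp.asarray(x), jnp.asarray(y))),
                  [2.0, 1.0], atol=1e-3))
```

Both solve the same normal equations; the jitted version is compiled once on first call and reused for subsequent calls with the same shapes.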