Slide 1

Slide 1 text

Quasi-Monte Carlo Methods: What, Why, and How?
Fred J. Hickernell, Illinois Institute of Technology
August 18, 2024
Thanks to
• The organizers
• The US National Science Foundation #2316011

Slide 2

Slide 2 text

My aim for the next 75 minutes: you will

Slide 3

Slide 3 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are

Slide 4

Slide 4 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo

Slide 5

Slide 5 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo
• Use qMC properly

Slide 6

Slide 6 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo
• Use qMC properly
• Feel free to interrupt and ask questions

Slide 7

Slide 7 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo
• Use qMC properly
• Feel free to interrupt and ask questions
• Join our friendly qMC research community

Slide 8

Slide 8 text

My aim for the next 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo
• Use qMC properly
• Feel free to interrupt and ask questions
• Join our friendly qMC research community
Try the computations at https://tinyurl.com/QMCTutorialNotebook

Slide 9

Slide 9 text

Overview
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 10

Slide 10 text

Overview
• Where in practice
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 11

Slide 11 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 12

Slide 12 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 13

Slide 13 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 14

Slide 14 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 15

Slide 15 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 16

Slide 16 text

Where does this arise in practice?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
f(X) = option payoff, underground water pressure with random rock porosity, or pixel intensity from a random ray
μ = option price, average water pressure, or average pixel intensity

Slide 17

Slide 17 text

Where does this arise in practice?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
f(X) = option payoff, underground water pressure with random rock porosity, or pixel intensity from a random ray
μ = option price, average water pressure, or average pixel intensity
d may be dozens or hundreds

Slide 18

Slide 18 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 19

Slide 19 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 20

Slide 20 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 21

Slide 21 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 22

Slide 22 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 23

Slide 23 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
Grids do not fill space well; hard to extend

Slide 24

Slide 24 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
IID points fill space better than grids
Figure: 64 Independent and Identically Distributed (IID) points (d = 6)

Slide 25

Slide 25 text

How to choose x_0, x_1, …?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
LD points fill space even better!
Figure: 64 Low Discrepancy (LD) points (d = 6)

Slide 26

Slide 26 text

Quasi-Monte Carlo (qMC) methods use low discrepancy (LD) or evenly spread sequences instead of grids or IID sequences to solve problems more efficiently
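A minimal sketch of trying LD in place of IID points, using NumPy for the IID sample and SciPy's scipy.stats.qmc module for a scrambled Sobol' sequence; the integrand and sample size are illustrative choices, not taken from the talk.

```python
import numpy as np
from scipy.stats import qmc

d, n = 6, 2**10
f = lambda x: np.prod(x + 0.5, axis=1)   # test integrand with exact mean 1

rng = np.random.default_rng(7)
x_iid = rng.random((n, d))                               # IID uniform points
x_ld = qmc.Sobol(d=d, scramble=True, seed=7).random(n)   # scrambled Sobol' (LD) points

print("IID error:", abs(f(x_iid).mean() - 1.0))
print("LD  error:", abs(f(x_ld).mean() - 1.0))
```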

Slide 27

Slide 27 text

Integration lattices [DKP 2022]
Figure: Lattice x_i = i(1, 11)/16 (mod 1), i = 0, …, 15

Slide 28

Slide 28 text

Integration lattices [DKP 2022]
In general: x_i = i h/n (mod 1), i = 0, …, n − 1
Group structure: x_i + x_j (mod 1) = x_{(i+j) mod n}
Figure: Lattice x_i = i(1, 11)/16 (mod 1), i = 0, …, 15

Slide 29

Slide 29 text

Integration lattices [DKP 2022]
In general: x_i = i h/n (mod 1), i = 0, …, n − 1
Group structure: x_i + x_j (mod 1) = x_{(i+j) mod n}
Good h chosen by computer search
Figure: Lattice x_i = i(1, 11)/16 (mod 1), i = 0, …, 15

Slide 30

Slide 30 text

Shifted integration lattices
Figure: Lattice x_i = i(1, 11)/16 + Δ (mod 1), i = 0, …, 15, with x_0 = Δ

Slide 31

Slide 31 text

Shifted integration lattices
In general: x_i = i h/n + Δ (mod 1), i = 0, …, n − 1, Δ ∼ 𝒰[0,1]^d
Shifted lattice is a coset
Figure: Lattice x_i = i(1, 11)/16 + Δ (mod 1), i = 0, …, 15, with x_0 = Δ

Slide 32

Slide 32 text

Shifted integration lattices
In general: x_i = i h/n + Δ (mod 1), i = 0, …, n − 1, Δ ∼ 𝒰[0,1]^d
Shifted lattice is a coset
Random shifts make μ̂_n unbiased and shift points away from the boundary
Figure: Lattice x_i = i(1, 11)/16 + Δ (mod 1), i = 0, …, 15, with x_0 = Δ
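A minimal NumPy sketch of a randomly shifted rank-1 lattice; the generating vector below is the toy two-dimensional example from the slides, (1, 11) with n = 16, not one recommended for production use.

```python
import numpy as np

def shifted_lattice(n, h, shift=None, rng=None):
    """Rank-1 lattice x_i = i*h/n + shift (mod 1), i = 0, ..., n-1."""
    h = np.asarray(h, dtype=float)
    if shift is None:
        rng = rng or np.random.default_rng()
        shift = rng.random(h.size)            # Delta ~ U[0,1)^d
    i = np.arange(n)[:, None]
    return (i * h / n + shift) % 1.0

x = shifted_lattice(16, h=[1, 11], rng=np.random.default_rng(42))
print(x[:4])
```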

Slide 33

Slide 33 text

Extensible shifted lattice sequences
van der Corput sequence: i = (⋯i_2 i_1 i_0)_2 = i_0 + 2 i_1 + 4 i_2 + ⋯ + 2^m i_m + ⋯ ∈ ℕ_0,
x_i = (0.i_0 i_1 i_2 ⋯)_2 = i_0/2 + i_1/4 + i_2/8 + ⋯ + i_m/2^{m+1} + ⋯ ∈ [0,1)
Figure: 16 points of a van der Corput sequence

Slide 34

Slide 34 text

Extensible shifted lattice sequences
van der Corput sequence: i = (⋯i_2 i_1 i_0)_2 = i_0 + 2 i_1 + 4 i_2 + ⋯ + 2^m i_m + ⋯ ∈ ℕ_0,
x_i = (0.i_0 i_1 i_2 ⋯)_2 = i_0/2 + i_1/4 + i_2/8 + ⋯ + i_m/2^{m+1} + ⋯ ∈ [0,1)
Lattice reordered: x_i = (i_0/2 + i_1/4 + i_2/8 + ⋯) h + Δ (mod 1), i = 0, …, 2^m − 1, m ∈ ℕ_0
Figure: 16 points of a van der Corput sequence
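A NumPy sketch of the base-2 van der Corput radical inverse and of using it to reorder the shifted lattice into an extensible sequence; the generating vector and shift are again illustrative.

```python
import numpy as np

def van_der_corput(i):
    """Base-2 radical inverse: i = (... i2 i1 i0)_2  ->  (0.i0 i1 i2 ...)_2."""
    i = np.array(i, dtype=np.int64)     # copy so the caller's array is not modified
    x = np.zeros(i.shape, dtype=float)
    base = 0.5
    while np.any(i > 0):
        x += (i % 2) * base
        i //= 2
        base /= 2
    return x

def extensible_lattice(m, h, shift):
    """First 2^m points of the van der Corput-reordered shifted lattice."""
    phi = van_der_corput(np.arange(2**m))    # plays the role of i/n, in extensible order
    return (phi[:, None] * np.asarray(h, dtype=float) + shift) % 1.0

x16 = extensible_lattice(4, h=[1, 11], shift=np.array([0.3, 0.7]))
print(x16[:4])
```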

Slide 35

Slide 35 text

Digital nets and sequences [DP 2010]
Digital net with shift: x_i = i_0 z_0 ⊕ i_1 z_1 ⊕ i_2 z_2 ⊕ ⋯ ⊕ Δ, i = 0, 1, …, 2^m − 1, m ∈ ℕ_0
⊕ = bitwise addition, e.g., (0.011)_2 ⊕ (0.101)_2 = (0.110)_2; z_m ∈ [0,1)^d chosen by number theory or computer
van der Corput sequence: i = (⋯i_2 i_1 i_0)_2 = i_0 + 2 i_1 + 4 i_2 + ⋯ ∈ ℕ_0, x_i = (0.i_0 i_1 i_2 ⋯)_2 = i_0/2 + i_1/4 + i_2/8 + ⋯ ∈ [0,1)

Slide 36

Slide 36 text

Digital nets and sequences [DP 2010]
Digital net with shift: x_i = i_0 z_0 ⊕ i_1 z_1 ⊕ i_2 z_2 ⊕ ⋯ ⊕ Δ, i = 0, 1, …, 2^m − 1, m ∈ ℕ_0
⊕ = bitwise addition, e.g., (0.011)_2 ⊕ (0.101)_2 = (0.110)_2; z_m ∈ [0,1)^d
van der Corput sequence: i = (⋯i_2 i_1 i_0)_2 = i_0 + 2 i_1 + 4 i_2 + ⋯ ∈ ℕ_0, x_i = (0.i_0 i_1 i_2 ⋯)_2 = i_0/2 + i_1/4 + i_2/8 + ⋯ ∈ [0,1)
Every tile has the same # of points; {x_i}_{i=0}^{2^m−1} is a coset for m ∈ ℕ_0

Slide 37

Slide 37 text

Digital nets and sequences
Digital net with shift: x_i = i_0 z_0 ⊕ i_1 z_1 ⊕ i_2 z_2 ⊕ ⋯ ⊕ Δ, i = 0, …, 2^m − 1, m ∈ ℕ_0
⊕ = bitwise addition; z_m ∈ [0,1)^d
van der Corput sequence: i = (⋯i_2 i_1 i_0)_2 = i_0 + 2 i_1 + 4 i_2 + ⋯ ∈ ℕ_0, x_i = (0.i_0 i_1 i_2 ⋯)_2 = i_0/2 + i_1/4 + i_2/8 + ⋯ ∈ [0,1)
Digital nets are extensible as sequences

Slide 38

Slide 38 text

Scrambled, shifted digital sequences
Digital sequence with shift: x_i = i_0 z_0 ⊕ i_1 z_1 ⊕ i_2 z_2 ⊕ ⋯ ⊕ Δ, i = 0, 1, …, 2^m − 1, m ∈ ℕ_0
⊕ = bitwise addition; z_m ∈ [0,1)^d, Δ ∼ 𝒰[0,1]^d, z_m random
Every tile has the same # of points; how small a tile can be depends on d

Slide 39

Slide 39 text

Scrambled, shifted digital sequences
Digital sequence with shift: x_i = i_0 z_0 ⊕ i_1 z_1 ⊕ i_2 z_2 ⊕ ⋯ ⊕ Δ, i = 0, 1, …, 2^m − 1, m ∈ ℕ_0
⊕ = bitwise addition; z_m ∈ [0,1)^d, Δ ∼ 𝒰[0,1]^d, z_m random
Every tile has the same # of points; how small a tile can be depends on d
Random scrambles & shifts make μ̂_n unbiased and move the first point away from the origin
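In code, scrambled digital sequences are readily available; the sketch below uses SciPy's Sobol' generator as one common scrambled digital sequence in base 2 (it is built from direction numbers rather than the z_m notation above, but plays the same role).

```python
import numpy as np
from scipy.stats import qmc

d, m = 6, 6                                        # 2^6 = 64 points in 6 dimensions
sobol = qmc.Sobol(d=d, scramble=True, seed=2024)   # scrambled digital (Sobol') sequence
x = sobol.random_base2(m=m)                        # draw n = 2^m points at a time

# The first point is no longer the origin, and the sample mean below is unbiased
# over the random scrambling.
f = lambda x: np.prod(x + 0.5, axis=1)
print(x[0], f(x).mean())
```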

Slide 40

Slide 40 text

• Use n = 2^m, preferred for lattices and nets
• Do not drop or skip points
• Use randomized LD sequences to avoid points on the boundaries and to make your answers unbiased; you may also gain in convergence rate

Slide 41

Slide 41 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 42

Slide 42 text

Discrepancy measures the quality of x_0, x_1, … [H00]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)   (a tight bound: the discrepancy is the norm of the error functional, the variation a semi-norm)

Slide 43

Slide 43 text

Discrepancy measures the quality of x_0, x_1, … [H00]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)   (a tight bound: the discrepancy is the norm of the error functional, the variation a semi-norm)
If f is in a Hilbert space ℋ with reproducing kernel K, then
discrepancy²({x_i}_{i=0}^{n−1}) = ∫_{[0,1]^d × [0,1]^d} K(t, x) dt dx − (2/n) ∑_{i=0}^{n−1} ∫_[0,1]^d K(t, x_i) dt + (1/n²) ∑_{i,j=0}^{n−1} K(x_i, x_j)
variation(f) = inf_{c ∈ ℝ} ‖f − c‖_ℋ

Slide 44

Slide 44 text

Discrepancy measures the quality of x_0, x_1, … [H00]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)   (a tight bound: the discrepancy is the norm of the error functional, the variation a semi-norm)
If f is in a Hilbert space ℋ with reproducing kernel K, then
discrepancy²({x_i}_{i=0}^{n−1}) = ∫_{[0,1]^d × [0,1]^d} K(t, x) dt dx − (2/n) ∑_{i=0}^{n−1} ∫_[0,1]^d K(t, x_i) dt + (1/n²) ∑_{i,j=0}^{n−1} K(x_i, x_j)
variation(f) = inf_{c ∈ ℝ} ‖f − c‖_ℋ
Quasi-Monte Carlo (qMC) methods use low discrepancy (LD) points

Slide 45

Slide 45 text

Ex., centered discrepancy
K(t, x) = ∏_{ℓ=1}^d [1 + ½(|t_ℓ − 1/2| + |x_ℓ − 1/2| − |t_ℓ − x_ℓ|)]
Any submatrix is positive definite
Figure: K(t, x) for d = 1

Slide 46

Slide 46 text

Ex., centered discrepancy
K(t, x) = ∏_{ℓ=1}^d [1 + ½(|t_ℓ − 1/2| + |x_ℓ − 1/2| − |t_ℓ − x_ℓ|)]
discrepancy²({x_i}_{i=0}^{n−1}) = (13/12)^d − (2/n) ∑_{i=0}^{n−1} ∏_{ℓ=1}^d [1 + ½(|x_{iℓ} − 1/2| − |x_{iℓ} − 1/2|²)] + (1/n²) ∑_{i,j=0}^{n−1} ∏_{ℓ=1}^d [1 + ½(|x_{iℓ} − 1/2| + |x_{jℓ} − 1/2| − |x_{iℓ} − x_{jℓ}|)]
Closed form, 𝒪(dn²) computations
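A direct 𝒪(dn²) NumPy implementation of the closed form above; SciPy's qmc.discrepancy with method="CD" computes the same centered discrepancy (as a squared quantity, in my reading of the SciPy convention) and can serve as a cross-check.

```python
import numpy as np
from scipy.stats import qmc

def centered_discrepancy_sq(x):
    """Squared centered L2 discrepancy via the O(d n^2) closed form above."""
    n, d = x.shape
    a = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = -2.0 / n * np.prod(1 + 0.5 * (a - a**2), axis=1).sum()
    diff = np.abs(x[:, None, :] - x[None, :, :])
    term3 = np.prod(1 + 0.5 * (a[:, None, :] + a[None, :, :] - diff), axis=2).sum() / n**2
    return term1 + term2 + term3

x = qmc.Sobol(d=4, scramble=True, seed=1).random(64)
print(centered_discrepancy_sq(x), qmc.discrepancy(x, method="CD"))
```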

Slide 47

Slide 47 text

Ex., centered discrepancy
K(t, x) = ∏_{ℓ=1}^d [1 + ½(|t_ℓ − 1/2| + |x_ℓ − 1/2| − |t_ℓ − x_ℓ|)]
discrepancy²({x_i}_{i=0}^{n−1}) = (13/12)^d − (2/n) ∑_{i=0}^{n−1} ∏_{ℓ=1}^d [1 + ½(|x_{iℓ} − 1/2| − |x_{iℓ} − 1/2|²)] + (1/n²) ∑_{i,j=0}^{n−1} ∏_{ℓ=1}^d [1 + ½(|x_{iℓ} − 1/2| + |x_{jℓ} − 1/2| − |x_{iℓ} − x_{jℓ}|)]
variation²(f) = ∫_[0,1] |∂f(x_1, 1/2)/∂x_1|² dx_1 + ⋯ + ∫_{[0,1]²} |∂²f(x_1, x_2, 1/2)/(∂x_1 ∂x_2)|² dx_1 dx_2 + ⋯ + ∫_{[0,1]^d} |∂^d f(x)/(∂x_1 ⋯ ∂x_d)|² dx
Discrepancy: closed form, 𝒪(dn²) computations; variation: hard to compute

Slide 48

Slide 48 text

Centered discrepancy for digital sequences
• 𝒪(n^{−1+δ}) decay in theory ∀d
• 𝒪(n^{−1+δ}) decay practically for small d
• 𝒪(n^{−1/2}) decay practically for large d
• Discrepancy increases with d
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)

Slide 49

Slide 49 text

Why discrepancy increases with d
• Integration becomes harder as d increases
discrepancy(∅) = sup_{‖f‖_ℋ ≤ 1} ∫_[0,1]^d f(x) dx = √(∫_{[0,1]^{2d}} K(t, x) dt dx) = (13/12)^{d/2}

Slide 50

Slide 50 text

Why discrepancy increases with d
• Integration becomes harder as d increases
• Dividing the discrepancy by discrepancy(∅) helps a bit
discrepancy(∅) = sup_{‖f‖_ℋ ≤ 1} ∫_[0,1]^d f(x) dx = √(∫_{[0,1]^{2d}} K(t, x) dt dx) = (13/12)^{d/2}

Slide 51

Slide 51 text

Why discrepancy increases with d
discrepancy(∅) = sup_{‖f‖_ℋ ≤ 1} ∫_[0,1]^d f(x) dx = √(∫_{[0,1]^{2d}} K(t, x) dt dx) = (13/12)^{d/2}

Slide 52

Slide 52 text

Why discrepancy increases with d
discrepancy(∅) = sup_{‖f‖_ℋ ≤ 1} ∫_[0,1]^d f(x) dx = √(∫_{[0,1]^{2d}} K(t, x) dt dx) = (13/12)^{d/2}
• Dividing the discrepancy by discrepancy(∅) helps a bit
• LD still (usually) beats IID, although not in convergence order

Slide 53

Slide 53 text

A little data can be worse than no data
sup_{‖f‖_ℋ ≤ 1} |μ|² = discrepancy²(∅) = (13/12)^d
may be < (13/12)^d − 2 ∏_{ℓ=1}^d [1 + ½(|x_{0ℓ} − 1/2| − |x_{0ℓ} − 1/2|²)] + ∏_{ℓ=1}^d [1 + |x_{0ℓ} − 1/2|] = discrepancy²({x_0}) = sup_{‖f‖_ℋ ≤ 1} |μ − μ̂_1|²

Slide 54

Slide 54 text

Shrinkage estimators?
sup_{‖f‖_ℋ ≤ 1} |μ|² = discrepancy²(∅) = (13/12)^d
must be > (13/12)^d − 2α ∏_{ℓ=1}^d [1 + ½(|x_{0ℓ} − 1/2| − |x_{0ℓ} − 1/2|²)] + α² ∏_{ℓ=1}^d [1 + |x_{0ℓ} − 1/2|] = sup_{‖f‖_ℋ ≤ 1} |μ − α μ̂_1|² for optimal α

Slide 55

Slide 55 text

Shrinkage estimators?
sup_{‖f‖_ℋ ≤ 1} |μ|² = discrepancy²(∅) = (13/12)^d
must be > (13/12)^d − 2α ∏_{ℓ=1}^d [1 + ½(|x_{0ℓ} − 1/2| − |x_{0ℓ} − 1/2|²)] + α² ∏_{ℓ=1}^d [1 + |x_{0ℓ} − 1/2|] = sup_{‖f‖_ℋ ≤ 1} |μ − α μ̂_1|² for optimal α
Is this the right K and corresponding discrepancy?

Slide 56

Slide 56 text

Centered discrepancy w/ coordinate weights, γ_1 ≥ γ_2 ≥ ⋯ > 0
K(t, x) = ∏_{ℓ=1}^d [1 + (γ_ℓ²/2)(|t_ℓ − 1/2| + |x_ℓ − 1/2| − |t_ℓ − x_ℓ|)]
discrepancy²({x_i}_{i=0}^{n−1}) = ∏_{ℓ=1}^d (1 + γ_ℓ²/12) − (2/n) ∑_{i=0}^{n−1} ∏_{ℓ=1}^d [1 + (γ_ℓ²/2)(|x_{iℓ} − 1/2| − |x_{iℓ} − 1/2|²)] + (1/n²) ∑_{i,j=0}^{n−1} ∏_{ℓ=1}^d [1 + (γ_ℓ²/2)(|x_{iℓ} − 1/2| + |x_{jℓ} − 1/2| − |x_{iℓ} − x_{jℓ}|)]
variation²(f) = ∫_[0,1] |∂f(x_1, 1/2)/(γ_1 ∂x_1)|² dx_1 + ⋯ + ∫_{[0,1]²} |∂²f(x_1, x_2, 1/2)/(γ_1 γ_2 ∂x_1 ∂x_2)|² dx_1 dx_2 + ⋯ + ∫_{[0,1]^d} |∂^d f(x)/(γ_1 ⋯ γ_d ∂x_1 ⋯ ∂x_d)|² dx

Slide 57

Slide 57 text

Decaying coordinate weights make integration tractable [NW10]
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)

Slide 58

Slide 58 text

Decaying coordinate weights make integration tractable [NW10]
• Problems are tractable if the work required to solve them to error ≤ ε grows more slowly than exponentially in ε^{−1}
• Tractability requires shrinking {f : variation(f) ≤ 1}
• Hopefully, the f encountered in practice have moderate variation(f) even with decaying coordinate weights
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f)

Slide 59

Slide 59 text

Decaying coordinate weights may overcome the curse of dimensionality, but you may need to formulate your problem with most of the variation in the lower coordinates

Slide 60

Slide 60 text

How to find low discrepancy sequences

Slide 61

Slide 61 text

How to find low discrepancy sequences
• Computing discrepancy({x_i}_{i=0}^{n−1}) requires 𝒪(dn²) operations, but …

Slide 62

Slide 62 text

How to find low discrepancy sequences
• Computing discrepancy({x_i}_{i=0}^{n−1}) requires 𝒪(dn²) operations, but …
• Computing 𝔼[discrepancy²({x_i^shift}_{i=0}^{n−1})] for shifted lattices and nets requires only 𝒪(dn) operations, because
𝔼[discrepancy²({x_i^shift}_{i=0}^{n−1}; K)] = discrepancy²({x_i}_{i=0}^{n−1}; K̃) = (1/n) ∑_{i=0}^{n−1} K̃(x_i, x_0) − ∫_[0,1]^d K̃(x, x_0) dx
where K̃ is a (digital) shift-invariant version of K, e.g., for lattices K̃(t, x) = ∫_[0,1]^d K(t + Δ mod 1, x + Δ mod 1) dΔ

Slide 63

Slide 63 text

When working with lattices and digital sequences, if possible use shift-invariant kernels for greater computational efficiency
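As a concrete sketch, assume the standard shift-invariant kernel K̃(t, x) = ∏_ℓ [1 + γ_ℓ² B_2({t_ℓ − x_ℓ})] with B_2(u) = u² − u + 1/6 (an assumed common choice, not the centered kernel above). Since ∫_0^1 B_2({x − x_0}) dx = 0, the integral term equals 1 and the shift-averaged squared discrepancy of a rank-1 lattice reduces to a single 𝒪(dn) sum over the nodes.

```python
import numpy as np

def b2(u):
    """Bernoulli polynomial B_2 on [0, 1)."""
    return u * u - u + 1.0 / 6.0

def mean_sq_discrepancy_lattice(n, h, gamma):
    """O(d n) shift-averaged squared discrepancy of the rank-1 lattice with
    generating vector h, for the assumed shift-invariant kernel
    K~(t, x) = prod_l [1 + gamma_l^2 * B_2({t_l - x_l})]."""
    h = np.asarray(h, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    x = (np.outer(np.arange(n), h) / n) % 1.0   # unshifted lattice nodes, x_0 = 0
    return np.prod(1 + gamma**2 * b2(x), axis=1).mean() - 1.0

print(mean_sq_discrepancy_lattice(16, h=[1, 11], gamma=[1.0, 0.5]))
```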

Slide 64

Slide 64 text

Is worst case error analysis too pessimistic?

Slide 65

Slide 65 text

Is worst case error analysis too pessimistic?
• Spaces of smoother f with correspondingly smoother reproducing kernels K may allow faster decay of the error—see lattices for periodic f with higher order smoothness and higher order nets for f with higher order smoothness

Slide 66

Slide 66 text

Is worst case error analysis too pessimistic?
• Spaces of smoother f with correspondingly smoother reproducing kernels K may allow faster decay of the error—see lattices for periodic f with higher order smoothness and higher order nets for f with higher order smoothness
• For randomized {x_i}_{i=0}^{n−1},
𝔼_{{x_i}} sup_{variation(f) ≤ 1} |μ − μ̂_n|² = 𝔼_{{x_i}} discrepancy²({x_i}_{i=0}^{n−1}) ≥ sup_{variation(f) ≤ 1} 𝔼_{{x_i}} |μ − μ̂_n|²
Swapping the order may lead to an extra 𝒪(n^{−1/2})

Slide 67

Slide 67 text

Is worst case error analysis too pessimistic?
• Spaces of smoother f with correspondingly smoother reproducing kernels K may allow faster decay of the error—see lattices for periodic f with higher order smoothness and higher order nets for f with higher order smoothness
• For randomized {x_i}_{i=0}^{n−1},
𝔼_{{x_i}} sup_{variation(f) ≤ 1} |μ − μ̂_n|² = 𝔼_{{x_i}} discrepancy²({x_i}_{i=0}^{n−1}) ≥ sup_{variation(f) ≤ 1} 𝔼_{{x_i}} |μ − μ̂_n|²
Swapping the order may lead to an extra 𝒪(n^{−1/2})
• [H18] surveys worst case, randomized, and Bayesian error analyses

Slide 68

Slide 68 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 69

Slide 69 text

How to choose n to get the desired accuracy?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f), want |μ − μ̂_n| ≤ ε

Slide 70

Slide 70 text

How to choose n to get the desired accuracy?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f), want |μ − μ̂_n| ≤ ε
• discrepancy({x_i}_{i=0}^{n−1}) is 𝒪(n^{−1}) for low discrepancy points versus 𝒪(n^{−1/2}) for IID

Slide 71

Slide 71 text

How to choose n to get the desired accuracy?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f), want |μ − μ̂_n| ≤ ε
• discrepancy({x_i}_{i=0}^{n−1}) is 𝒪(n^{−1}) for low discrepancy points versus 𝒪(n^{−1/2}) for IID
• There is an explicit formula for the discrepancy, but the variation is hard to compute in practice

Slide 72

Slide 72 text

How to choose n to get the desired accuracy?
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
|μ − μ̂_n| ≤ discrepancy({x_i}_{i=0}^{n−1}) · variation(f), want |μ − μ̂_n| ≤ ε
• discrepancy({x_i}_{i=0}^{n−1}) is 𝒪(n^{−1}) for low discrepancy points versus 𝒪(n^{−1/2}) for IID
• There is an explicit formula for the discrepancy, but the variation is hard to compute in practice
• One method is random replications plus the Student's t confidence interval [LENOT24]
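A sketch of the replication idea of [LENOT24]: R independent randomizations (here, independently scrambled Sobol' sequences) give R replicate means, from which a Student's t interval is formed; the integrand, R, and tolerance are illustrative choices.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

def replicated_qmc_ci(f, d, m=10, R=15, alpha=0.05, seed=0):
    """Student-t confidence interval from R independently randomized LD replicates."""
    rng = np.random.default_rng(seed)
    means = np.empty(R)
    for r in range(R):
        x = qmc.Sobol(d=d, scramble=True, seed=rng.integers(2**31)).random_base2(m=m)
        means[r] = f(x).mean()
    center = means.mean()
    half = stats.t.ppf(1 - alpha / 2, df=R - 1) * means.std(ddof=1) / np.sqrt(R)
    return center - half, center + half

f = lambda x: np.prod(x + 0.5, axis=1)     # test integrand with exact mean 1
print(replicated_qmc_ci(f, d=6))
```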

Slide 73

Slide 73 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε

Slide 74

Slide 74 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• For lattices, f̂(k) := ∫_[0,1]^d f(x) e^{−2π√−1 k′x} dx, f(x) = ∑_{k ∈ ℤ^d} f̂(k) e^{2π√−1 k′x}, μ = f̂(0)

Slide 75

Slide 75 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• For lattices, f̂(k) := ∫_[0,1]^d f(x) e^{−2π√−1 k′x} dx, f(x) = ∑_{k ∈ ℤ^d} f̂(k) e^{2π√−1 k′x}, μ = f̂(0)
• μ − μ̂_n is due to aliasing (e^{2π√−1 k′x_i} constant for all i)

Slide 76

Slide 76 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• For lattices, f̂(k) := ∫_[0,1]^d f(x) e^{−2π√−1 k′x} dx, f(x) = ∑_{k ∈ ℤ^d} f̂(k) e^{2π√−1 k′x}, μ = f̂(0)
• μ − μ̂_n is due to aliasing (e^{2π√−1 k′x_i} constant for all i)
• Approximate the f̂(k) by a one-dimensional FFT of {f(x_i)}_{i=0}^{n−1}

Slide 77

Slide 77 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• For lattices, f̂(k) := ∫_[0,1]^d f(x) e^{−2π√−1 k′x} dx, f(x) = ∑_{k ∈ ℤ^d} f̂(k) e^{2π√−1 k′x}, μ = f̂(0)
• μ − μ̂_n is due to aliasing (e^{2π√−1 k′x_i} constant for all i)
• Approximate the f̂(k) by a one-dimensional FFT of {f(x_i)}_{i=0}^{n−1}
• If the f̂(k) decay in a reasonable manner, their FFT approximations can be used to provide a rigorous data-driven bound on μ − μ̂_n

Slide 78

Slide 78 text

QMCPy has deterministic stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• For lattices, f̂(k) := ∫_[0,1]^d f(x) e^{−2π√−1 k′x} dx, f(x) = ∑_{k ∈ ℤ^d} f̂(k) e^{2π√−1 k′x}, μ = f̂(0)
• μ − μ̂_n is due to aliasing (e^{2π√−1 k′x_i} constant for all i)
• Approximate the f̂(k) by a one-dimensional FFT of {f(x_i)}_{i=0}^{n−1}
• If the f̂(k) decay in a reasonable manner, their FFT approximations can be used to provide a rigorous data-driven bound on μ − μ̂_n
• Similarly for digital nets

Slide 79

Slide 79 text

QMCPy has Bayesian stopping rules
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n, want |μ − μ̂_n| ≤ ε
• If f is assumed to be a Gaussian stochastic process with covariance kernel K, where the hyper-parameters are properly tuned, then one may construct a Bayesian credible interval for the error
• If K is chosen to be shift invariant and lattice/digital sequences are used, then the 𝒪(n³) computation normally required is reduced to 𝒪(n log n)
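A usage sketch, assuming the QMCPy 1.x interface described in [QMCPy]; the class and argument names below are my reading of that interface and may differ slightly between versions.

```python
import qmcpy as qp

# Built-in test integrand on a randomly shifted lattice (QMCPy ships several).
integrand = qp.Keister(qp.Lattice(dimension=3, seed=7))

# Deterministic (Fourier-coefficient based) stopping rule: samples until the
# data-driven error bound is below abs_tol.
solution, data = qp.CubQMCLatticeG(integrand, abs_tol=1e-4).integrate()
print(solution)
print(data)

# Bayesian analogues (credible-interval based) are qp.CubBayesLatticeG and
# qp.CubBayesNetG, invoked the same way.
```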

Slide 80

Slide 80 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 81

Slide 81 text

Variable Transformations
μ := ∫_Ω g(z) λ(z) dz = (something wonderful) ⋯ = 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 82

Slide 82 text

Variable Transformations
μ := ∫_Ω g(z) λ(z) dz = (something wonderful) ⋯ = 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
If Z = Ψ(X) for X ∼ 𝒰[0,1]^d, then f(x) = g(Ψ(x)) λ(Ψ(x)) |∂Ψ/∂x|
• Ψ is not unique
• Want variation(f) to be as small as possible (an art)
• Often choose λ(Ψ(x)) |∂Ψ/∂x| = 1, but not necessary
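For example, applying Φ⁻¹ componentwise maps 𝒰[0,1]^d points to 𝒩(0, I_d) points with λ(Ψ(x)) |∂Ψ/∂x| = 1; a sketch with a toy Gaussian expectation whose exact value is 3^{−d/2} (the integrand is an illustrative choice).

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

d, m = 4, 12
g = lambda z: np.exp(-np.sum(z**2, axis=1))    # E[g(Z)] = 3**(-d/2) for Z ~ N(0, I_d)

x = qmc.Sobol(d=d, scramble=True, seed=11).random_base2(m=m)   # LD points in (0,1)^d
z = stats.norm.ppf(x)                                          # Psi = Phi^{-1}, componentwise
mu_hat = g(z).mean()                           # here lambda(Psi(x)) |dPsi/dx| = 1
print(mu_hat, 3.0 ** (-d / 2))
```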

Slide 83

Slide 83 text

Ex., option pricing [G04]
μ := 𝔼[payoff(Brownian motion(t_1, …, t_d))] = ∫_{ℝ^d} payoff(z) exp(−zᵀΣ⁻¹z/2)/√((2π)^d |Σ|) dz
Z = 𝖠 (Φ⁻¹(X_1), …, Φ⁻¹(X_d))ᵀ = Ψ(X), 𝖠𝖠ᵀ = Σ
Often, choosing 𝖠 by principal component analysis (PCA) gives faster convergence than by Cholesky
Figure: error tolerance ε vs. execution time (s) for IID with PCA, Sobol' with Cholesky, and Sobol' with PCA
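A sketch of the two constructions of 𝖠 for the Brownian-path covariance Σ_{jk} = min(t_j, t_k): Cholesky takes the lower-triangular factor, PCA takes the eigendecomposition with eigenvalues in decreasing order. The arithmetic-mean Asian call payoff and its parameters are made-up illustrations, not the example computed in the talk.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

d, T, S0, K, r, sigma = 16, 1.0, 100.0, 100.0, 0.05, 0.2
t = np.linspace(T / d, T, d)
Sigma = np.minimum.outer(t, t)                      # Cov of Brownian motion at t_1..t_d

A_chol = np.linalg.cholesky(Sigma)                  # A A^T = Sigma (Cholesky)
eigval, eigvec = np.linalg.eigh(Sigma)
order = np.argsort(eigval)[::-1]                    # largest variance first
A_pca = eigvec[:, order] * np.sqrt(eigval[order])   # A A^T = Sigma (PCA)

def asian_call(A, x):
    w = stats.norm.ppf(x) @ A.T                     # Brownian paths W(t_1), ..., W(t_d)
    S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * w)
    return np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)

x = qmc.Sobol(d=d, scramble=True, seed=3).random_base2(m=12)
print(asian_call(A_chol, x).mean(), asian_call(A_pca, x).mean())
```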

Slide 84

Slide 84 text

Variation Reduction
μ := ∫_Ω g(z) λ(z) dz = (something wonderful) ⋯ = 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
• Variable transforms are akin to importance sampling
• May be difficult for Bayesian posterior mean problems
• Control variates may also be used
• Acceptance-rejection sampling does not work well

Slide 85

Slide 85 text

Multilevel methods reduce computational cost [G12]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 86

Slide 86 text

Multilevel methods reduce computational cost [G12]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
• The cost to evaluate f(x_i) is typically 𝒪(d), so the cost to obtain |μ − μ̂_n| ≤ ε is typically 𝒪(dε^{−1−δ})

Slide 87

Slide 87 text

Multilevel methods reduce computational cost [G12]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
• The cost to evaluate f(x_i) is typically 𝒪(d), so the cost to obtain |μ − μ̂_n| ≤ ε is typically 𝒪(dε^{−1−δ})
• If one can approximate f by lower dimensional approximations, f_s : [0,1]^s → ℝ, then
μ = 𝔼[f_{s_1}(X_{1:s_1})] (=: μ^{(1)}) + 𝔼[f_{s_2}(X_{1:s_2}) − f_{s_1}(X_{1:s_1})] (=: μ^{(2)}) + ⋯ + 𝔼[f(X_{1:d}) − f_{s_{L−1}}(X_{1:s_{L−1}})] (=: μ^{(L)})

Slide 88

Slide 88 text

Multilevel methods reduce computational cost [G12]
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n
• The cost to evaluate f(x_i) is typically 𝒪(d), so the cost to obtain |μ − μ̂_n| ≤ ε is typically 𝒪(dε^{−1−δ})
• If one can approximate f by lower dimensional approximations, f_s : [0,1]^s → ℝ, then
μ = 𝔼[f_{s_1}(X_{1:s_1})] (=: μ^{(1)}) + 𝔼[f_{s_2}(X_{1:s_2}) − f_{s_1}(X_{1:s_1})] (=: μ^{(2)}) + ⋯ + 𝔼[f(X_{1:d}) − f_{s_{L−1}}(X_{1:s_{L−1}})] (=: μ^{(L)})
• Balance the 𝒪(n_l s_l) cost to approximate each μ^{(l)} well, and the total cost to obtain |μ − μ̂_n| ≤ ε may be as small as 𝒪(ε^{−1−δ}) as d, ε^{−1} → ∞
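A sketch of the telescoping decomposition with two coarse levels; the dimension-truncated approximations f_s (here, the integrand with trailing coordinates frozen at 1/2) and the per-level sample sizes are illustrative assumptions, not a tuned multilevel method.

```python
import numpy as np
from scipy.stats import qmc

d = 32
def f(x):                                  # full d-dimensional integrand (illustrative), exact mean 1
    return np.prod(1 + (x - 0.5) / np.arange(1, x.shape[1] + 1), axis=1)

def f_s(x, s):                             # s-dimensional approximation: freeze x_{s+1:d} at 1/2
    x_frozen = np.full((x.shape[0], d), 0.5)
    x_frozen[:, :s] = x[:, :s]
    return f(x_frozen)

levels = [(4, 14), (16, 12), (d, 10)]      # (s_l, m_l): coarser levels get more points
mu_hat, s_prev = 0.0, None
for l, (s, m) in enumerate(levels):
    x = qmc.Sobol(d=s, scramble=True, seed=100 + l).random_base2(m=m)
    fine = f_s(x, s)                       # equals f itself on the final level, s = d
    coarse = 0.0 if s_prev is None else f_s(x, s_prev)
    mu_hat += (fine - coarse).mean()       # estimate of mu^(l)
    s_prev = s
print(mu_hat)                              # exact value is 1 for this integrand
```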

Slide 89

Slide 89 text

Gaussian Process Regression for PDEs with random fields [SPOMHHH24]

Slide 90

Slide 90 text

Conditional Monte Carlo for Density Estimation [LEPBA22, GKS23]
Estimate the probability density, ϱ, of Y = f(X), where X ∼ 𝒰[0,1]^d

Slide 91

Slide 91 text

Conditional Monte Carlo for Density Estimation [LEPBA22, GKS23]
Estimate the probability density, ϱ, of Y = f(X), where X ∼ 𝒰[0,1]^d
If one can identify g such that y = f(x) ⟺ x_1 = g(y; x_{2:d}), …

Slide 92

Slide 92 text

Conditional Monte Carlo for Density Estimation [LEPBA22, GKS23]
Estimate the probability density, ϱ, of Y = f(X), where X ∼ 𝒰[0,1]^d
If one can identify g such that y = f(x) ⟺ x_1 = g(y; x_{2:d}),
then ϱ(y) = ∫_{[0,1]^{d−1}} |g′(y; x_{2:d})| dx_{2:d}
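A sketch for the toy case Y = X_1 + X_2 with d = 2: solving y = f(x) for x_1 gives g(y; x_2) = y − x_2 with g′ = 1; the indicator that the solution stays in [0,1] is a detail I have added for this bounded example, and the result recovers the triangular density of a sum of two uniforms.

```python
import numpy as np
from scipy.stats import qmc

# Y = X1 + X2, X ~ U[0,1]^2.  Solving y = x1 + x2 for x1: g(y; x2) = y - x2, |g'| = 1.
x2 = qmc.Sobol(d=1, scramble=True, seed=5).random_base2(m=12).ravel()

def density(y):
    x1 = y - x2                                      # g(y; x2)
    return np.mean(1.0 * ((x1 >= 0.0) & (x1 <= 1.0)))  # |g'| = 1 times indicator

for y in (0.25, 1.0, 1.5):
    print(y, density(y), min(y, 2 - y))              # exact triangular density for comparison
```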

Slide 93

Slide 93 text

Overview
• Where in practice
• Constructing low discrepancy (LD) x_0, x_1, …
• Discrepancy (quality) measures for x_0, x_1, …
• Choosing n so that |μ − μ̂_n| ≤ ε
• Making our original problem look like the above
• Ongoing research
μ := 𝔼[f(X)] (expectation, X ∼ 𝒰([0,1]^d)) = ∫_[0,1]^d f(x) dx (integral) ≈ (1/n) ∑_{i=0}^{n−1} f(x_i) (sample mean) =: μ̂_n

Slide 94

Slide 94 text

Ongoing research
• Construction of LD sequences — Takahashi Goda
• Error estimation — Art Owen — and stopping rules, especially for multilevel methods
• Multi-fidelity models and Bayesian approaches — Chris Oates
• Connections with machine learning — Frances Kuo
• Applications beyond integration
• Good software — QMCPy and others

Slide 95

Slide 95 text

My aim for the past 75 minutes: you will
• Understand what quasi-Monte Carlo (qMC) methods are
• Try qMC in place of simple (or IID) Monte Carlo
• Use qMC properly
• Feel free to interrupt and ask questions
• Join our friendly qMC research community

Slide 96

Slide 96 text

MCM2025Chicago.org, July 28 – August 1
Plenary Speakers:
Nicholas Chopin, ENSAE
Peter Glynn, Stanford U
Roshan Joseph, Georgia Tech
Christiane Lemieux, U Waterloo
Matt Pharr, NVIDIA
Veronika Rockova, U Chicago
Uros Seljak, U California, Berkeley
Michaela Szölgyenyi, U Klagenfurt

Slide 97

Slide 97 text

References
[QMCPy] S.-C. T. Choi, F. J. Hickernell, R. Jagadeeswaran, M. McCourt, and A. Sorokin, QMCPy: A quasi-Monte Carlo Python library (versions 1–1.5), 2024.
[DKP22] J. Dick, P. Kritzer, and F. Pillichshammer, Lattice rules: Numerical integration, approximation, and discrepancy, Springer Series in Computational Mathematics, Springer Cham, 2022.
[DP10] J. Dick and F. Pillichshammer, Digital nets and sequences: Discrepancy theory and quasi-Monte Carlo integration, Cambridge University Press, Cambridge, 2010.
[GKS23] A. D. Gilbert, F. Y. Kuo, and I. H. Sloan, Analysis of preintegration followed by quasi-Monte Carlo integration for distribution functions and densities, SIAM J. Numer. Anal. 61 (2023), 135–166.
[G12] M. Giles, Multilevel Monte Carlo methods, Monte Carlo and Quasi-Monte Carlo Methods 2012 (J. Dick, F. Y. Kuo, G. W. Peters, and I. H. Sloan, eds.), Springer Proceedings in Mathematics and Statistics, vol. 65, Springer-Verlag, Berlin, 2013.
[G04] P. Glasserman, Monte Carlo methods in financial engineering, Applications of Mathematics, vol. 53, Springer-Verlag, New York, 2004.

Slide 98

Slide 98 text

References
[H00] F. J. Hickernell, What affects the accuracy of quasi-Monte Carlo quadrature?, Monte Carlo and Quasi-Monte Carlo Methods 1998 (H. Niederreiter and J. Spanier, eds.), Springer-Verlag, Berlin, 2000, pp. 16–55.
[H18] F. J. Hickernell, The trio identity for quasi-Monte Carlo error analysis, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Stanford, USA, August 2016 (P. Glynn and A. Owen, eds.), Springer Proceedings in Mathematics and Statistics, Springer-Verlag, Berlin, 2018, pp. 3–27.
[LEPBA22] P. L'Ecuyer, F. Puchhammer, and A. Ben Abdellah, Monte Carlo and quasi-Monte Carlo density estimation via conditioning, INFORMS J. Comput. 34 (2022), no. 3, 1729–1748.
[NW10] E. Novak and H. Woźniakowski, Tractability of multivariate problems Volume II: Standard information for functionals, EMS Tracts in Mathematics, no. 12, European Mathematical Society, Zürich, 2010.
[LENOT24] P. L'Ecuyer, M. K. Nakayama, A. B. Owen, and B. Tuffin, Confidence intervals for randomized quasi-Monte Carlo estimators, WSC '23: Proceedings of the Winter Simulation Conference, 2024, pp. 445–456.

Slide 99

Slide 99 text

References
[SPOMHHH24] A. G. Sorokin, A. Pachalieva, D. O'Malley, J. M. Hyman, F. J. Hickernell, and N. W. Hengartner, Computationally efficient and error aware surrogate construction for numerical solutions of subsurface flow through porous media, 2024+, arXiv:2310.13765.