Relevant statistics for (approximate) Bayesian model choice
Christian P. Robert, Université Paris-Dauphine, Paris & University of Warwick, Coventry
[email protected]

a meeting of potential interest?! MCMSki IV, to be held in Chamonix Mont Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014 (and BNP workshop on Jan. 9). All aspects of MCMC++ theory and methodology and applications and more! All posters welcome! Website: http://www.pages.drexel.edu/~mwl25/mcmski/

Outline
- Intractable likelihoods
- ABC methods
- ABC as an inference machine
- ABC for model choice
- Model choice consistency
- Conclusion and perspectives

Intractable likelihood
Case of a well-defined statistical model where the likelihood function $\ell(\theta|y) = f(y_1, \ldots, y_n|\theta)$
- is (really!) not available in closed form
- cannot (easily!) be either completed or demarginalised
- cannot be (at all!) estimated by an unbiased estimator
© Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings

Getting approximative
Case of a well-defined statistical model where the likelihood function $\ell(\theta|y) = f(y_1, \ldots, y_n|\theta)$ is out of reach. Empirical approximations to the original B problem:
- degrading the data precision down to a tolerance ε
- replacing the likelihood with a non-parametric approximation
- summarising/replacing the data with insufficient statistics

Different [levels of] worries about abc
Impact on B inference:
- a mere computational issue (that will eventually be solved by more powerful computers, etc., even if too costly in the short term, as for regular Monte Carlo methods) Not!
- an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with a legitimacy differing from the classical B approach)
- a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical B inference?)

Approximate Bayesian computation
- Intractable likelihoods
- ABC methods: genesis of ABC, ABC basics, advances and interpretations, ABC as knn
- ABC as an inference machine
- ABC for model choice
- Model choice consistency
- Conclusion and perspectives

Genetic background of ABC
ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ). This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavaré & al., 1999]

Demo-genetic inference
Each model is characterized by a set of parameters θ that cover historical (divergence time, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors. The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time. Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium
Observations: leaves of the tree ($\hat\theta$ = ?)
Kingman's genealogy: when the time axis is normalized, $T(k) \sim \mathcal{E}\mathrm{xp}(k(k-1)/2)$
Mutations according to the Simple stepwise Mutation Model (SMM):
- dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
- MRCA = 100
- independent mutations: ±1 with probability 1/2

Much more interesting models...
- several independent loci: independent gene genealogies and mutations
- different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, selection pressure, etc.
- larger sample size: usually between 50 and 100 genes
[A typical evolutionary scenario: tree from the MRCA down to populations POP 0, POP 1, POP 2, with divergence times τ1 and τ2]

Instance of ecological questions
- How did the Asian ladybird beetle arrive in Europe?
- Why do they swarm right now?
- What are the routes of invasion?
- How to get rid of them?
- Why did the chicken cross the road?
[Lombaert & al., 2010, PLoS ONE]

Worldwide invasion routes of Harmonia axyridis
For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets. [Estoup et al., 2012, Molecular Ecology Res.]

© Intractable likelihood
Missing (too much missing!) data structure:
$$f(y|\theta) = \int_{\mathcal{G}} f(y|G, \theta)\, f(G|\theta)\, dG$$
cannot be computed in a manageable way... [Stephens & Donnelly, 2000] The genealogies are considered as nuisance parameters. This modelling clearly differs from the phylogenetic perspective, where the tree is the parameter of interest.

A?B?C?
- A stands for approximate [wrong likelihood / pic?]
- B stands for Bayesian
- C stands for computation [producing a parameter sample]

How Bayesian is aBc?
Could we turn the resolution into a Bayesian answer?
- ideally so (not meaningful: requires an ∞-ly powerful computer)
- approximation error unknown (w/o costly simulation)
- true B for wrong model (formal and artificial)
- true B for noisy model (much more convincing)
- true B for estimated likelihood (back to econometrics?)
- illuminating the tension between information and precision

ABC methodology
Bayesian setting: target is π(θ)f(x|θ). When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:
Foundation: For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating
$$\theta' \sim \pi(\theta), \quad z \sim f(z|\theta'),$$
until the auxiliary variable z is equal to the observed value, z = y, then the selected $\theta' \sim \pi(\theta|y)$. [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]

A as A...pproximative
When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone $\rho(y, z) \le \epsilon$, where ρ is a distance. Output distributed from
$$\pi(\theta)\, P_\theta\{\rho(y, z) < \epsilon\} \overset{\text{def}}{\propto} \pi(\theta \mid \rho(y, z) < \epsilon)$$
[Pritchard et al., 1999]

ABC algorithm
In most implementations, a further degree of A...pproximation:
Algorithm 1 Likelihood-free rejection sampler
  for i = 1 to N do
    repeat
      generate θ' from the prior distribution π(·)
      generate z from the likelihood f(·|θ')
    until ρ{η(z), η(y)} ≤ ε
    set θi = θ'
  end for
where η(y) defines a (not necessarily sufficient) statistic
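A minimal Python sketch of Algorithm 1, under an assumed Gaussian toy setup (the prior, data model, sample size, and tolerance below are illustrative choices, not from the slides):

```python
# Likelihood-free rejection sampler (Algorithm 1), toy Gaussian version:
# prior theta ~ N(0, 10^2), data y ~ N(theta, 1), summary eta = sample mean.
import numpy as np

rng = np.random.default_rng(0)
n, eps, N = 50, 0.05, 1000
y_obs = rng.normal(1.5, 1.0, size=n)          # pretend-observed data
eta_obs = y_obs.mean()                        # eta(y)

samples = []
for _ in range(N):
    while True:
        theta = rng.normal(0.0, 10.0)         # theta' ~ pi(.)
        z = rng.normal(theta, 1.0, size=n)    # z ~ f(.|theta')
        if abs(z.mean() - eta_obs) <= eps:    # rho{eta(z), eta(y)} <= eps
            samples.append(theta)
            break

print(np.mean(samples), np.std(samples))      # ABC posterior summaries
```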

Output
The likelihood-free algorithm samples from the marginal in z of
$$\pi_\epsilon(\theta, z|y) = \frac{\pi(\theta)\, f(z|\theta)\, \mathbb{I}_{A_{\epsilon,y}}(z)}{\int_{A_{\epsilon,y}\times\Theta} \pi(\theta)\, f(z|\theta)\, dz\, d\theta},$$
where $A_{\epsilon,y} = \{z \in \mathcal{D} \mid \rho(\eta(z), \eta(y)) < \epsilon\}$.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
$$\pi_\epsilon(\theta|y) = \int \pi_\epsilon(\theta, z|y)\, dz \approx \pi(\theta|y).$$
...does it?!

Output
In fact the summary statistics coupled with a small tolerance provide an approximation of the restricted posterior distribution:
$$\pi_\epsilon(\theta|y) = \int \pi_\epsilon(\theta, z|y)\, dz \approx \pi(\theta|\eta(y)).$$
Not so good..!

Comments
- the role of the distance is paramount (because ε ≠ 0)
- scaling of the components of η(y) is also determinant
- ε matters little if ε is "small enough"
- representative of the "curse of dimensionality"
- small is beautiful!
- the data as a whole may be paradoxically weakly informative for ABC

curse of dimensionality in summaries
Consider that the simulated summary statistics η(y1), ..., η(yN) and the observed summary statistic η(yobs) are iid with uniform distribution on [0, 1]^d. Take
$$\delta_\infty(d, N) = \mathbb{E}\Big[\min_{i=1,\ldots,N} \|\eta(y_{\mathrm{obs}}) - \eta(y_i)\|_\infty\Big]$$

                 N = 100   N = 1,000   N = 10,000   N = 100,000
  δ∞(1, N)       0.0025    0.00025     0.000025     0.0000025
  δ∞(2, N)       0.033     0.01        0.0033       0.001
  δ∞(10, N)      0.28      0.22        0.18         0.14
  δ∞(200, N)     0.48      0.48        0.47         0.46
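A quick Monte Carlo sketch reproducing the flavour of this table (the replication count is an arbitrary choice; the largest d and N are trimmed to keep the run cheap):

```python
# Estimate delta_inf(d, N) = E[min_i ||eta(y_obs) - eta(y_i)||_inf]
# for iid U[0,1]^d summaries, as in the curse-of-dimensionality table.
import numpy as np

rng = np.random.default_rng(1)

def delta_inf(d, N, reps=200):
    vals = []
    for _ in range(reps):
        eta_obs = rng.random(d)               # observed summary
        etas = rng.random((N, d))             # N simulated summaries
        vals.append(np.abs(etas - eta_obs).max(axis=1).min())
    return float(np.mean(vals))

for d in (1, 2, 10):
    print(d, [round(delta_inf(d, N), 5) for N in (100, 1000, 10000)])
```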

ABC (simul') advances
Simulating from the prior is often poor in efficiency. Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or view the problem as conditional density estimation and develop techniques to allow for a larger ε [Beaumont et al., 2002] ...or even include ε in the inferential framework [ABCµ] [Ratmann et al., 2009]

ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine the tolerance as a quantile on observed distances, say a 10% or 1% quantile:
$$\epsilon = \epsilon_N = q_\alpha(d_1, \ldots, d_N)$$
- interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice [Blum & François, 2010]
- ABC is a k-nearest neighbour (knn) method with $k_N = N\epsilon_N$ [Loftsgaarden & Quesenberry, 1965]
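In code, the quantile calibration of ε amounts to keeping the k_N nearest simulations; a sketch on the same assumed Gaussian toy model as above:

```python
# Tolerance as a quantile of simulated distances: ABC as k-nearest neighbours.
import numpy as np

rng = np.random.default_rng(2)
N, alpha, n = 100_000, 0.01, 50
eta_obs = 1.5                                     # observed summary (assumed)

thetas = rng.normal(0.0, 10.0, size=N)            # prior draws
etas = rng.normal(thetas, 1.0 / np.sqrt(n))       # sample means of pseudo-data
dists = np.abs(etas - eta_obs)

eps = np.quantile(dists, alpha)                   # eps_N = q_alpha(d_1, ..., d_N)
kept = thetas[dists <= eps]                       # the k_N = N * alpha neighbours
print(eps, len(kept), kept.mean())
```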

ABC consistency
Provided $k_N/\log\log N \to \infty$ and $k_N/N \to 0$ as $N \to \infty$, for almost all $s_0$ (with respect to the distribution of S), with probability 1,
$$\frac{1}{k_N} \sum_{j=1}^{k_N} \varphi(\theta_j) \longrightarrow \mathbb{E}[\varphi(\theta_j) \mid S = s_0]$$
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints $k_N \to \infty$, $k_N/N \to 0$, $h_N \to 0$ and $h_N^p\, k_N \to \infty$.

Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
- when m = 1, 2, 3, $k_N \approx N^{(p+4)/(p+8)}$ and rate $N^{-4/(p+8)}$
- when m = 4, $k_N \approx N^{(p+4)/(p+8)}$ and rate $N^{-4/(p+8)} \log N$
- when m > 4, $k_N \approx N^{(p+4)/(m+p+4)}$ and rate $N^{-4/(m+p+4)}$
[Biau et al., 2013]
Drag: only applies to sufficient summary statistics

ABC inference machine
- Intractable likelihoods
- ABC methods
- ABC as an inference machine: error inc., exact BC and approximate targets, summary statistic
- ABC for model choice
- Model choice consistency
- Conclusion and perspectives

How Bayesian is aBc..?
- may be a convergent method of inference (meaningful? sufficient? foreign?)
- approximation error unknown (w/o massive simulation)
- pragmatic/empirical B (there is no other solution!)
- many calibration issues (tolerance, distance, statistics)
- the NP side should be incorporated into the whole B picture
- the approximation error should also be part of the B inference

ABCµ
Idea: infer about the error as well as about the parameter. Use of a joint density
$$f(\theta, \epsilon \mid y) \propto \xi(\epsilon \mid y, \theta) \times \pi_\theta(\theta) \times \pi_\epsilon(\epsilon)$$
where y is the data, and $\xi(\epsilon \mid y, \theta)$ is the prior predictive density of $\rho(\eta(z), \eta(y))$ given θ and y when z ∼ f(z|θ).
Warning! Replacement of $\xi(\epsilon \mid y, \theta)$ with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Wilkinson's exact BC (not exactly!)
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, a convolution of the true posterior with a kernel function:
$$\pi_\epsilon(\theta, z|y) = \frac{\pi(\theta)\, f(z|\theta)\, K_\epsilon(y - z)}{\int \pi(\theta)\, f(z|\theta)\, K_\epsilon(y - z)\, dz\, d\theta},$$
with $K_\epsilon$ a kernel parameterised by the bandwidth ε. [Wilkinson, 2013]
Theorem: The ABC algorithm based on the production of a randomised observation $y = \tilde{y} + \xi$, $\xi \sim K_\epsilon$, and an acceptance probability of $K_\epsilon(y - z)/M$ gives draws from the posterior distribution π(θ|y).
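A sketch of the kernel-acceptance step in Wilkinson's result, with a Gaussian K_ε on a one-dimensional summary (model and prior are again assumed, for illustration):

```python
# Exact BC view: accept (theta, z) with probability K_eps(y - z)/M, where M
# bounds the kernel; draws then target the posterior given a noisy observation.
import numpy as np

rng = np.random.default_rng(3)
eps = 0.2
M = 1.0 / (eps * np.sqrt(2 * np.pi))          # max of the Gaussian kernel
y = 1.3                                       # observed 1-D summary (assumed)

def K_eps(u):
    return np.exp(-0.5 * (u / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

samples = []
while len(samples) < 1000:
    theta = rng.normal(0.0, 10.0)             # theta ~ prior
    z = rng.normal(theta, 1.0)                # pseudo-summary z ~ f(.|theta)
    if rng.random() < K_eps(y - z) / M:       # kernel acceptance probability
        samples.append(theta)

print(np.mean(samples))
```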

How exact a BC?
Pros
- pseudo-data from true model vs. observed data from noisy model
- outcome is completely controlled
- link with ABCµ when assuming y is observed with measurement error with density $K_\epsilon$
- relates to the theory of model approximation [Kennedy & O'Hagan, 2001]
- leads to "noisy ABC": just do it!, i.e. perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012]
Cons
- requires $K_\epsilon$ to be bounded by M
- true approximation error never assessed

Which summary?
Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic:
- loss of statistical information balanced against gain in data roughening
- approximation error and information loss remain unknown
- choice of statistics induces choice of distance function towards standardisation
- may be imposed for external/practical reasons (e.g., DIYABC)
- may gather several non-B point estimates
- can learn about efficient combination

Semi-automatic ABC
Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal:
- ABC considered as an inferential method and calibrated as such
- randomised (or "noisy") version of the summary statistics, $\tilde\eta(y) = \eta(y) + \tau\xi$
- derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [weak property]

Summary [of F&P / statistics]
- optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! [requires an iterative process]
- use of the standard quadratic loss function $(\theta - \theta_0)^{\mathrm{T}} A (\theta - \theta_0)$
- recent extension to model choice, optimality of the Bayes factor $B_{12}(y)$ [F&P, ISBA 2012, Kyoto]
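One common reading of the F&P recipe is a pilot regression estimating E[θ|y], whose fitted value is then used as the summary statistic; a sketch under an assumed Gaussian toy model (the candidate summaries and prior are illustrative choices):

```python
# Semi-automatic-ABC-style construction: regress pilot parameters on candidate
# summaries, then use the fitted E[theta | y] as the one-dimensional summary.
import numpy as np

rng = np.random.default_rng(4)
n, N_pilot = 50, 5000
thetas = rng.normal(0.0, 2.0, size=N_pilot)                 # pilot prior draws
Y = rng.normal(thetas[:, None], 1.0, size=(N_pilot, n))     # pilot datasets

# candidate summaries: mean, median, variance
cand = np.column_stack([Y.mean(axis=1), np.median(Y, axis=1), Y.var(axis=1)])
A = np.column_stack([np.ones(N_pilot), cand])
beta, *_ = np.linalg.lstsq(A, thetas, rcond=None)           # linear fit of theta

def eta(y):                                                 # fitted E[theta | y]
    return np.array([1.0, y.mean(), np.median(y), y.var()]) @ beta

print(eta(rng.normal(1.0, 1.0, size=n)))
```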

LDA summaries for model choice
In parallel to F&P's semi-automatic ABC, selection of the most discriminant subvector out of a collection of summary statistics, based on Linear Discriminant Analysis (LDA) [Estoup & al., 2012, Mol. Ecol. Res.] Solution now implemented in DIYABC.2 [Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]

Implementation
Step 1: take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including $10^6$–$10^9$ simulations for each of the M compared scenarios/models. Selection based on the normalized Euclidean distance computed between observed and simulated raw summaries.
Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables. When computing the LDA functions, weight the simulated data with an Epanechnikov kernel.
Step 3: estimate the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables [Cornuet & al., 2008, Bioinformatics]
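A sketch of Step 2 with scikit-learn on synthetic summaries (note that sklearn's plain LDA does not include the Epanechnikov weighting of the slides; names and sizes here are illustrative):

```python
# Project raw summaries of the retained simulations onto M-1 LDA axes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
M, n_kept, n_summ = 3, 600, 20                   # models, retained sims, summaries
models = rng.integers(0, M, size=n_kept)         # model index of each simulation
X = rng.normal(models[:, None], 1.0, size=(n_kept, n_summ))  # raw summaries

lda = LinearDiscriminantAnalysis(n_components=M - 1)
X_lda = lda.fit_transform(X, models)             # the M-1 discriminant variables
print(X_lda.shape)                               # (600, 2)
```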

LDA advantages
- much faster computation of scenario probabilities via polychotomous regression
- a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance ε and avoids extra costs in constructing the reference table
- allows for a large collection of initial summaries
- ability to evaluate Type I and Type II errors on more complex models
- LDA reduces correlation among explanatory variables
When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated.

ABC for model choice
- Intractable likelihoods
- ABC methods
- ABC as an inference machine
- ABC for model choice: BMC principle, Gibbs random fields (counterexample), generic ABC model choice
- Model choice consistency
- Conclusion and perspectives

Bayesian model choice
BMC Principle: several models M1, M2, ... are considered simultaneously for dataset y, and the model index M is central to inference. Use of prior π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm).

Goal is to derive the posterior distribution of M, π(M = m | data), a challenging computational target when models are complex.

Generic ABC for model choice
Algorithm 2 Likelihood-free model choice sampler (ABC-MC)
  for t = 1 to T do
    repeat
      generate m from the prior π(M = m)
      generate θm from the prior πm(θm)
      generate z from the model fm(z|θm)
    until ρ{η(z), η(y)} < ε
    set m(t) = m and θ(t) = θm
  end for
[Grelaud & al., 2009; Toni & al., 2009]
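A sketch of Algorithm 2 on the Poisson-versus-geometric toy comparison used later, with S = Σ xᵢ as summary (the priors below are assumptions for illustration):

```python
# Likelihood-free model choice sampler (ABC-MC), Poisson vs geometric.
import numpy as np

rng = np.random.default_rng(6)
n, eps, T = 30, 2, 2000
x_obs = rng.poisson(2.0, size=n)              # pretend data (here from model 1)
s_obs = x_obs.sum()                           # eta(y) = sum of the sample

kept = []
for _ in range(T):
    while True:
        m = int(rng.integers(1, 3))           # m ~ uniform prior on {1, 2}
        if m == 1:
            z = rng.poisson(rng.uniform(0, 10), size=n)           # assumed prior
        else:
            z = rng.geometric(rng.uniform(0.01, 1.0), size=n) - 1  # support {0,1,...}
        if abs(z.sum() - s_obs) <= eps:       # rho{eta(z), eta(y)} <= eps
            kept.append(m)
            break

print(np.mean(np.array(kept) == 1))           # frequency estimate of pi(M=1|y)
```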

ABC estimates
Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m,
$$\frac{1}{T} \sum_{t=1}^{T} \mathbb{I}_{m^{(t)} = m}.$$
Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights [Cornuet et al., DIYABC, 2009]

Potts model
Distribution with an energy function of the form
$$\theta S(y) = \theta \sum_{l \sim i} \delta_{y_l = y_i}$$
where l ∼ i denotes a neighbourhood structure. In most realistic settings, the summation
$$Z_\theta = \sum_{x \in \mathcal{X}} \exp\{\theta S(x)\}$$
involves too many terms to be manageable, and numerical approximations cannot always be trusted.
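For intuition, the sufficient statistic S(x) itself is cheap to compute even though Z_θ is not; a small sketch for a regular lattice with 4-neighbour structure (an assumed concrete instance of l ∼ i):

```python
# Potts sufficient statistic: number of agreeing neighbour pairs on a 2-D grid
# (horizontal and vertical neighbours).
import numpy as np

def potts_stat(x):
    return int(np.sum(x[:, 1:] == x[:, :-1]) + np.sum(x[1:, :] == x[:-1, :]))

rng = np.random.default_rng(7)
x = rng.integers(0, 3, size=(8, 8))       # a 3-colour configuration
print(potts_stat(x))                      # S(x); Z_theta itself stays intractable
```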

Neighbourhood relations
Setup: choice to be made between M neighbourhood relations $i \overset{m}{\sim} i'$ ($0 \le m \le M - 1$) with
$$S_m(x) = \sum_{i \overset{m}{\sim} i'} \mathbb{I}_{\{x_i = x_{i'}\}}$$
driven by the posterior probabilities of the models.

Model index
Computational target:
$$P(M = m \mid x) \propto \int_{\Theta_m} f_m(x|\theta_m)\, \pi_m(\theta_m)\, d\theta_m\ \pi(M = m)$$
If S(x) is a sufficient statistic for the joint parameters $(M, \theta_0, \ldots, \theta_{M-1})$, then $P(M = m|x) = P(M = m|S(x))$.

Sufficient statistics in Gibbs random fields
Each model m has its own sufficient statistic $S_m(\cdot)$ and $S(\cdot) = (S_0(\cdot), \ldots, S_{M-1}(\cdot))$ is also (model-)sufficient.
Explanation: For Gibbs random fields,
$$x \mid M = m \sim f_m(x|\theta_m) = f^1_m(x|S(x))\, f^2_m(S(x)|\theta_m) = \frac{1}{n(S(x))}\, f^2_m(S(x)|\theta_m)$$
where $n(S(x)) = \sharp\{\tilde{x} \in \mathcal{X} : S(\tilde{x}) = S(x)\}$
© S(x) is sufficient for the joint parameters

Toy example
iid Bernoulli model versus two-state first-order Markov chain, i.e.
$$f_0(x|\theta_0) = \exp\Big(\theta_0 \sum_{i=1}^{n} \mathbb{I}_{\{x_i = 1\}}\Big) \Big/ \{1 + \exp(\theta_0)\}^n,$$
versus
$$f_1(x|\theta_1) = \frac{1}{2} \exp\Big(\theta_1 \sum_{i=2}^{n} \mathbb{I}_{\{x_i = x_{i-1}\}}\Big) \Big/ \{1 + \exp(\theta_1)\}^{n-1},$$
with priors $\theta_0 \sim \mathcal{U}(-5, 5)$ and $\theta_1 \sim \mathcal{U}(0, 6)$ (inspired by "phase transition" boundaries).
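Both toy likelihoods are cheap to evaluate from their sufficient statistics; a sketch (on the log scale for stability):

```python
# Log-likelihoods of the Bernoulli and two-state Markov toy models.
import numpy as np

def log_f0(x, th0):                        # iid Bernoulli, logit parameter th0
    n, s0 = len(x), int(np.sum(x == 1))
    return th0 * s0 - n * np.log1p(np.exp(th0))

def log_f1(x, th1):                        # first-order two-state Markov chain
    n, s1 = len(x), int(np.sum(x[1:] == x[:-1]))
    return np.log(0.5) + th1 * s1 - (n - 1) * np.log1p(np.exp(th1))

x = np.array([1, 1, 0, 1, 0, 0, 1])
print(log_f0(x, 0.3), log_f1(x, 1.2))
```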

About sufficiency
If η1(x) is a sufficient statistic for model m = 1 and parameter θ1, and η2(x) is a sufficient statistic for model m = 2 and parameter θ2, then (η1(x), η2(x)) is not always sufficient for (m, θm).
© Potential loss of information at the testing level

Poisson/geometric example
Sample $x = (x_1, \ldots, x_n)$ from either a Poisson $\mathcal{P}(\lambda)$ or from a geometric $\mathcal{G}(p)$. The sum $S = \sum_{i=1}^{n} x_i = \eta(x)$ is a sufficient statistic for either model, but not simultaneously.

Limiting behaviour of B12 (T → ∞)
ABC approximation
$$B_{12}(y) = \frac{\sum_{t=1}^{T} \mathbb{I}_{m_t = 1}\, \mathbb{I}_{\rho\{\eta(z_t), \eta(y)\} \le \epsilon}}{\sum_{t=1}^{T} \mathbb{I}_{m_t = 2}\, \mathbb{I}_{\rho\{\eta(z_t), \eta(y)\} \le \epsilon}},$$
where the $(m_t, z_t)$'s are simulated from the (joint) prior. As T goes to infinity, the limit is
$$B^\epsilon_{12}(y) = \frac{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \epsilon}\, \pi_1(\theta_1) f_1(z|\theta_1)\, dz\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \epsilon}\, \pi_2(\theta_2) f_2(z|\theta_2)\, dz\, d\theta_2} = \frac{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \epsilon}\, \pi_1(\theta_1) f^\eta_1(\eta|\theta_1)\, d\eta\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \epsilon}\, \pi_2(\theta_2) f^\eta_2(\eta|\theta_2)\, d\eta\, d\theta_2},$$
where $f^\eta_1(\eta|\theta_1)$ and $f^\eta_2(\eta|\theta_2)$ are the distributions of η(z).

Limiting behaviour of B12 (ε → 0)
When ε goes to zero,
$$B^\eta_{12}(y) = \frac{\int \pi_1(\theta_1)\, f^\eta_1(\eta(y)|\theta_1)\, d\theta_1}{\int \pi_2(\theta_2)\, f^\eta_2(\eta(y)|\theta_2)\, d\theta_2}$$
© Bayes factor based on the sole observation of η(y)

Limiting behaviour of B12 (under sufficiency)
If η(y) is a sufficient statistic in both models, $f_i(y|\theta_i) = g_i(y)\, f^\eta_i(\eta(y)|\theta_i)$. Thus
$$B_{12}(y) = \frac{\int_{\Theta_1} \pi(\theta_1) g_1(y) f^\eta_1(\eta(y)|\theta_1)\, d\theta_1}{\int_{\Theta_2} \pi(\theta_2) g_2(y) f^\eta_2(\eta(y)|\theta_2)\, d\theta_2} = \frac{g_1(y) \int \pi_1(\theta_1) f^\eta_1(\eta(y)|\theta_1)\, d\theta_1}{g_2(y) \int \pi_2(\theta_2) f^\eta_2(\eta(y)|\theta_2)\, d\theta_2} = \frac{g_1(y)}{g_2(y)}\, B^\eta_{12}(y).$$
[Didelot, Everitt, Johansen & Lawson, 2011]
© No discrepancy only when cross-model sufficiency

Poisson/geometric example (back)
Sample $x = (x_1, \ldots, x_n)$ from either a Poisson $\mathcal{P}(\lambda)$ or from a geometric $\mathcal{G}(p)$. Discrepancy ratio:
$$\frac{g_1(x)}{g_2(x)} = \frac{S!\, n^{-S} \big/ \prod_i x_i!}{1 \big/ \binom{n+S-1}{S}}$$
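The ratio is easy to evaluate on the log scale via gamma functions; a sketch:

```python
# log of g1(x)/g2(x) for the Poisson/geometric pair, from S = sum(x).
import numpy as np
from scipy.special import gammaln

def log_discrepancy_ratio(x):
    x = np.asarray(x, dtype=float)
    n, S = len(x), x.sum()
    log_g1 = gammaln(S + 1) - S * np.log(n) - np.sum(gammaln(x + 1))
    log_g2 = -(gammaln(n + S) - gammaln(S + 1) - gammaln(n))   # -log C(n+S-1, S)
    return log_g1 - log_g2

print(log_discrepancy_ratio([2, 0, 3, 1, 1]))
```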

Poisson/geometric discrepancy
Range of $B_{12}(x)$ versus $B^\eta_{12}(x)$: the values produced have nothing in common.

Formal recovery
Creating an encompassing exponential family
$$f(x|\theta_1, \theta_2, \alpha_1, \alpha_2) \propto \exp\{\theta_1^{\mathrm{T}} \eta_1(x) + \theta_2^{\mathrm{T}} \eta_2(x) + \alpha_1 t_1(x) + \alpha_2 t_2(x)\}$$
leads to a sufficient statistic $(\eta_1(x), \eta_2(x), t_1(x), t_2(x))$ [Didelot, Everitt, Johansen & Lawson, 2011]
In the Poisson/geometric case, if $\prod_i x_i!$ is added to S, there is no discrepancy. But this only applies in genuine sufficiency settings...
© Inability to evaluate information loss due to summary statistics

Meaning of the ABC-Bayes factor
The ABC approximation to the Bayes factor is based solely on the summary statistics... In the Poisson/geometric case, if $\mathbb{E}[y_i] = \theta_0 > 0$,
$$\lim_{n\to\infty} B^\eta_{12}(y) = \frac{(\theta_0 + 1)^2}{\theta_0}\, e^{-\theta_0}$$

ABC model choice consistency
- Intractable likelihoods
- ABC methods
- ABC as an inference machine
- ABC for model choice
- Model choice consistency: formalised framework, consistency results, summary statistics
- Conclusion and perspectives

The starting point
Central question to the validation of ABC for model choice:
When is a Bayes factor based on an insufficient statistic T(y) consistent?
Note/warning: the conclusion drawn from T(y) through $B^T_{12}(y)$ necessarily differs from the conclusion drawn from y through $B_{12}(y)$ [Marin, Pillai, X, & Rousseau, JRSS B, 2013]

A benchmark, if toy, example
Comparison suggested by a referee of the PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011]
Model M1: $y \sim \mathcal{N}(\theta_1, 1)$ opposed to model M2: $y \sim \mathcal{L}(\theta_2, 1/\sqrt{2})$, Laplace distribution with mean θ2 and scale parameter $1/\sqrt{2}$ (variance one).
Four possible statistics:
1. sample mean $\bar{y}$ (sufficient for M1 if not M2)
2. sample median med(y) (insufficient)
3. sample variance var(y) (ancillary)
4. median absolute deviation mad(y) = med(|y − med(y)|)
[Boxplots of the posterior probabilities under each statistic, n = 100]
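A sketch of this benchmark, comparing the ABC posterior probability of the Gaussian model under two of the four statistics (the common N(0, 2²) prior on the location is an assumption for illustration):

```python
# Gauss vs Laplace model choice by ABC, with mean vs mad as summary.
import numpy as np

rng = np.random.default_rng(8)
n, T, alpha = 100, 20_000, 0.01
y = rng.normal(0.0, 1.0, size=n)                 # data actually from M1

def summary(x, kind):
    return x.mean() if kind == "mean" else np.median(np.abs(x - np.median(x)))

for kind in ("mean", "mad"):
    s_obs = summary(y, kind)
    ms = rng.integers(1, 3, size=T)              # model indices
    ths = rng.normal(0.0, 2.0, size=T)           # assumed common location prior
    ds = np.empty(T)
    for t in range(T):
        z = (rng.normal(ths[t], 1.0, n) if ms[t] == 1
             else rng.laplace(ths[t], 1 / np.sqrt(2), n))
        ds[t] = abs(summary(z, kind) - s_obs)
    kept = ms[ds <= np.quantile(ds, alpha)]      # 1% nearest simulations
    print(kind, (kept == 1).mean())              # ABC estimate of P(M=1 | s_obs)
```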

Framework
Starting from the sample $y = (y_1, \ldots, y_n)$, the observed sample, not necessarily iid, with true distribution $y \sim \mathbf{P}^n$. Summary statistics $T(y) = T^n = (T_1(y), T_2(y), \cdots, T_d(y)) \in \mathbb{R}^d$ with true distribution $T^n \sim G_n$.

Framework
© Comparison of
- under M1, $y \sim F_{1,n}(\cdot|\theta_1)$ where $\theta_1 \in \Theta_1 \subset \mathbb{R}^{p_1}$
- under M2, $y \sim F_{2,n}(\cdot|\theta_2)$ where $\theta_2 \in \Theta_2 \subset \mathbb{R}^{p_2}$
turned into
- under M1, $T(y) \sim G_{1,n}(\cdot|\theta_1)$, and $\theta_1|T(y) \sim \pi_1(\cdot|T^n)$
- under M2, $T(y) \sim G_{2,n}(\cdot|\theta_2)$, and $\theta_2|T(y) \sim \pi_2(\cdot|T^n)$

Assumptions
A collection of asymptotic "standard" assumptions: [Think CLT!]
- [A1] is a standard central limit theorem under the true model
- [A2] controls the large deviations of the estimator $T^n$ from the estimand µ(θ)
- [A3] is the standard prior mass condition found in Bayesian asymptotics ($d_i$ effective dimension of the parameter)
- [A4] restricts the behaviour of the model density against the true density

Assumptions
[A1] There exist a sequence $\{v_n\}$ converging to $+\infty$, a distribution Q, a symmetric $d \times d$ positive definite matrix $V_0$ and a vector $\mu_0 \in \mathbb{R}^d$ such that
$$v_n V_0^{-1/2}(T^n - \mu_0) \overset{n \to \infty}{\leadsto} Q, \quad \text{under } G_n$$

Assumptions
[A2] For i = 1, 2, there exist sets $F_{n,i} \subset \Theta_i$, functions $\mu_i(\theta_i)$ and constants $\epsilon_i, \tau_i, \alpha_i > 0$ such that for all τ > 0,
$$\sup_{\theta_i \in F_{n,i}} G_{i,n}\Big[\,\big|T^n - \mu_i(\theta_i)\big| > \tau\,\big|\mu_i(\theta_i) - \mu_0\big| \wedge \epsilon_i \,\Big|\, \theta_i\Big] \lesssim \big(|\tau\mu_i(\theta_i) - \mu_0| \wedge \epsilon_i\big)^{-\alpha_i}\, v_n^{-\alpha_i}$$
with $\pi_i(F_{n,i}^c) = o(v_n^{-\tau_i})$.

Assumptions
[A3] If $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} = 0$, defining (for u > 0)
$$S_{n,i}(u) = \big\{\theta_i \in F_{n,i};\ |\mu_i(\theta_i) - \mu_0| \le u\, v_n^{-1}\big\},$$
there exist constants $d_i < \tau_i \wedge \alpha_i - 1$ such that
$$\pi_i(S_{n,i}(u)) \sim u^{d_i} v_n^{-d_i}, \quad \forall u \lesssim v_n$$

Assumptions
[A4] If $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} = 0$, for any $\epsilon > 0$, there exist $U, \delta > 0$ and $(E_n)_n$ such that, if $\theta_i \in S_{n,i}(U)$,
$$E_n \subset \{t;\ g_i(t|\theta_i) < \delta g_n(t)\} \quad \text{and} \quad G_n(E_n^c) < \epsilon.$$

Effective dimension
Understanding $d_i$ in [A3]: defined only when $\mu_0 \in \{\mu_i(\theta_i),\ \theta_i \in \Theta_i\}$,
$$\pi_i\big(\theta_i : |\mu_i(\theta_i) - \mu_0| < n^{-1/2}\big) = O(n^{-d_i/2})$$
is the effective dimension of the model $\Theta_i$ around $\mu_0$.

Effective dimension
Understanding $d_i$ in [A3]: when $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} = 0$,
$$\frac{m_i(T^n)}{g_n(T^n)} \sim v_n^{-d_i},$$
thus $\log(m_i(T^n)/g_n(T^n)) \sim -d_i \log v_n$, and $v_n^{-d_i}$ is the penalization factor resulting from integrating $\theta_i$ out (see the effective number of parameters in DIC).

Effective dimension
Understanding $d_i$ in [A3]: in regular models, $d_i$ is the dimension of $T(\Theta_i)$, leading to BIC; in non-regular models, $d_i$ can be smaller.

Asymptotic marginals
Asymptotically, under [A1]–[A4],
$$m_i(t) = \int_{\Theta_i} g_i(t|\theta_i)\, \pi_i(\theta_i)\, d\theta_i$$
is such that
(i) if $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} = 0$,
$$C_l\, v_n^{d - d_i} \le m_i(T^n) \le C_u\, v_n^{d - d_i}$$
and (ii) if $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} > 0$,
$$m_i(T^n) = o_{\mathbf{P}^n}\big[v_n^{d - \tau_i} + v_n^{d - \alpha_i}\big].$$

Within-model consistency
Under the same assumptions, if $\inf\{|\mu_i(\theta_i) - \mu_0|;\ \theta_i \in \Theta_i\} = 0$, the posterior distribution of $\mu_i(\theta_i)$ given $T^n$ is consistent at rate $1/v_n$ provided $\alpha_i \wedge \tau_i > d_i$.

Between-model consistency
Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of $T^n$ under both models. And only by this mean value!

Indeed, if
$$\inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} = \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0$$
then
$$C_l\, v_n^{-(d_1 - d_2)} \le \frac{m_1(T^n)}{m_2(T^n)} \le C_u\, v_n^{-(d_1 - d_2)},$$
where $C_l, C_u = O_{\mathbf{P}^n}(1)$, irrespective of the true model.
© Only depends on the difference $d_1 - d_2$: no consistency

Else, if
$$\inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} > \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0$$
then
$$\frac{m_1(T^n)}{m_2(T^n)} \ge C_u \min\big(v_n^{-(d_1 - \alpha_2)},\ v_n^{-(d_1 - \tau_2)}\big)$$

Consistency theorem
If
$$\inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} = \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0,$$
the Bayes factor $B^T_{12} = O(v_n^{-(d_1 - d_2)})$, irrespective of the true model. It is inconsistent since it always picks the model with the smallest dimension.

Consistency theorem
If $\mathbf{P}^n$ belongs to one of the two models and if $\mu_0$ cannot be attained by the other one, i.e.
$$0 = \min\big(\inf\{|\mu_0 - \mu_i(\theta_i)|;\ \theta_i \in \Theta_i\},\ i = 1, 2\big) < \max\big(\inf\{|\mu_0 - \mu_i(\theta_i)|;\ \theta_i \in \Theta_i\},\ i = 1, 2\big),$$
then the Bayes factor $B^T_{12}$ is consistent.

Consequences on summary statistics
The Bayes factor is driven by the means $\mu_i(\theta_i)$ and the relative position of $\mu_0$ with respect to both sets $\{\mu_i(\theta_i);\ \theta_i \in \Theta_i\}$, i = 1, 2. For ABC, this implies the simplest summary statistics $T^n$ are ancillary statistics with different mean values under both models. Else, if $T^n$ asymptotically depends on some of the parameters of the models, it is possible that there exists $\theta_i \in \Theta_i$ such that $\mu_i(\theta_i) = \mu_0$ even though model M1 is misspecified.

Toy example: Laplace versus Gauss [1]
If
$$T^n = n^{-1} \sum_{i=1}^{n} X_i^4, \quad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \quad \mu_2(\theta) = 6 + \cdots$$
and the true distribution is Laplace with mean $\theta_0 = 1$, under the Gaussian model the value $\theta^* = 2\sqrt{3} - 3$ also leads to $\mu_0 = \mu(\theta^*)$ [here $d_1 = d_2 = d = 1$]
© A Bayes factor associated with such a statistic is inconsistent
[Boxplots of the fourth-moment statistic under M1 and M2, n = 1000]

Toy example: Laplace versus Gauss [0]
When $T(y) = \big(\bar{y}^{(4)}_n, \bar{y}^{(6)}_n\big)$ and the true distribution is Laplace with mean $\theta_0 = 0$, then $\mu_0 = 6$, $\mu_1(\theta^*_1) = 6$ with $\theta^*_1 = 2\sqrt{3} - 3$ [$d_1 = 1$ and $d_2 = 1/2$], thus
$$B_{12} \sim n^{-1/4} \to 0\ :\ \text{consistent}$$
Under the Gaussian model, $\mu_0 = 3$, $\mu_2(\theta_2) \ge 6 > 3$ $\forall\theta_2$, and
$$B_{12} \to +\infty\ :\ \text{consistent}$$

Toy example: Laplace versus Gauss [0]
When $T(y) = \big(\bar{y}^{(4)}_n, \bar{y}^{(6)}_n\big)$: [boxplots of the posterior probabilities for Gauss vs. Laplace at n = 100, 1000, 10000, based on the fourth and sixth moments]

Checking for adequate statistics
Run a practical check of the relevance (or non-relevance) of $T^n$: the null hypothesis that both models are compatible with the statistic $T^n$,
$$H_0 : \inf\{|\mu_2(\theta_2) - \mu_0|;\ \theta_2 \in \Theta_2\} = 0,$$
against
$$H_1 : \inf\{|\mu_2(\theta_2) - \mu_0|;\ \theta_2 \in \Theta_2\} > 0.$$
The testing procedure provides estimates of the mean of $T^n$ under each model and checks for equality.

Checking in practice
Under each model $M_i$, generate an ABC sample $\theta_{i,l}$, $l = 1, \cdots, L$. For each $\theta_{i,l}$, generate $y_{i,l} \sim F_{i,n}(\cdot|\theta_{i,l})$, derive $T^n(y_{i,l})$ and compute
$$\hat\mu_i = \frac{1}{L} \sum_{l=1}^{L} T^n(y_{i,l}), \quad i = 1, 2.$$
Conditionally on $T^n(y)$,
$$\sqrt{L}\big(\hat\mu_i - \mathbb{E}^\pi[\mu_i(\theta_i)|T^n(y)]\big) \leadsto \mathcal{N}(0, V_i).$$
Test for a common mean,
$$H_0 : \hat\mu_1 \sim \mathcal{N}(\mu_0, V_1),\ \hat\mu_2 \sim \mathcal{N}(\mu_0, V_2),$$
against the alternative of different means,
$$H_1 : \hat\mu_i \sim \mathcal{N}(\mu_i, V_i), \quad \text{with } \mu_1 \ne \mu_2.$$
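A simplified one-dimensional sketch of this check, a two-sample test for a common posterior-predictive mean of T^n (the ABC outputs are mocked up here):

```python
# Test for a common mean of T_n across the two models, chi^2 with 1 df.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
L = 500
T1 = rng.normal(3.0, 1.0, size=L)        # mock T_n(y_{1,l}) draws, model 1
T2 = rng.normal(3.2, 1.0, size=L)        # mock T_n(y_{2,l}) draws, model 2

mu1, mu2 = T1.mean(), T2.mean()
v = T1.var(ddof=1) / L + T2.var(ddof=1) / L
chi2 = (mu1 - mu2) ** 2 / v              # standardised squared difference
print(chi2, stats.chi2.sf(chi2, df=1))   # statistic and p-value under H0
```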

Toy example: Laplace versus Gauss
[Boxplots of the normalised χ² statistics for Gauss and Laplace, without and with mad]

A population genetic illustration
Two populations (1 and 2) having diverged at a fixed known time in the past, and a third population (3) which diverged from one of those two populations (models 1 and 2, respectively). Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci.
[Schematics of models 1 and 2]

Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. Mutation rate µ common to all loci, set to 0.005 (single parameter) with uniform prior distribution $\mu \sim \mathcal{U}[0.0001, 0.01]$.

A population genetic illustration
Summary statistics associated to the $(\delta\mu)^2$ distance: $x_{l,i,j}$ is the repeated number of the allele at locus $l = 1, \ldots, L$ for individual $i = 1, \ldots, 100$ within population $j = 1, 2, 3$. Then
$$(\delta\mu)^2_{j_1,j_2} = \frac{1}{L} \sum_{l=1}^{L} \left( \frac{1}{100} \sum_{i_1=1}^{100} x_{l,i_1,j_1} - \frac{1}{100} \sum_{i_2=1}^{100} x_{l,i_2,j_2} \right)^2.$$
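The statistic itself is a one-liner given the allele-repeat array; a sketch on synthetic data:

```python
# (delta mu)^2 distance between populations j1 and j2 from x[l, i, j].
import numpy as np

rng = np.random.default_rng(10)
L_loci, n_ind, n_pop = 50, 100, 3
x = rng.integers(80, 120, size=(L_loci, n_ind, n_pop)).astype(float)

def delta_mu2(x, j1, j2):
    diff = x[:, :, j1].mean(axis=1) - x[:, :, j2].mean(axis=1)  # per-locus means
    return float(np.mean(diff ** 2))

print(delta_mu2(x, 0, 1), delta_mu2(x, 0, 2), delta_mu2(x, 1, 2))
```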

A population genetic illustration
For two copies of locus l with allele sizes $x_{l,i,j_1}$ and $x_{l,i',j_2}$, with most recent common ancestor at coalescence time $\tau_{j_1,j_2}$, the gene genealogy distance is $2\tau_{j_1,j_2}$, hence the number of mutations is Poisson with parameter $2\mu\tau_{j_1,j_2}$. Therefore,
$$\mathbb{E}\big[(x_{l,i,j_1} - x_{l,i',j_2})^2 \,\big|\, \tau_{j_1,j_2}\big] = 2\mu\tau_{j_1,j_2}$$
and
                        Model 1    Model 2
  E[(δµ)²_{1,2}]        2µ1t       2µ2t
  E[(δµ)²_{1,3}]        2µ1t       2µ2t
  E[(δµ)²_{2,3}]        2µ1t       2µ2t

A population genetic illustration
Thus,
- a Bayes factor based only on the distance $(\delta\mu)^2_{1,2}$ is not convergent: if µ1 = µ2, same expectation
- a Bayes factor based only on the distance $(\delta\mu)^2_{1,3}$ or $(\delta\mu)^2_{2,3}$ is not convergent: if µ1 = 2µ2 or 2µ1 = µ2, same expectation
- if two of the three distances are used, the Bayes factor converges: there is no (µ1, µ2) for which all expectations are equal

A population genetic illustration
[Boxplots of the posterior probabilities that the data is from model 1, for 5, 50 and 100 loci, using DM2(12), DM2(13), and DM2(13) & DM2(23)]

A population genetic illustration
[Boxplots of the p-values for DM2(12) and DM2(13) & DM2(23)]

Embedded models
When M1 is a submodel of M2, and if the true distribution belongs to the smaller model M1, the Bayes factor is of order $v_n^{-(d_1 - d_2)}$.
If the summary statistic is only informative on a parameter that is the same under both models, i.e. if $d_1 = d_2$, then
© the Bayes factor is not consistent
Else, $d_1 < d_2$ and the Bayes factor goes to ∞ under M1. If the true distribution is not in M1, then
© the Bayes factor is consistent only if $\mu_1 = \mu_2 = \mu_0$

Summary
- Model selection feasible with ABC
- Choice of summary statistics is paramount
- At best, ABC output converges to $\pi(\cdot \mid T(y))$, which concentrates around $\mu_0$
- For consistent estimation: $\{\theta;\ \mu(\theta) = \mu_0\} = \{\theta_0\}$
- For consistent testing: $\{\mu_1(\theta_1),\ \theta_1 \in \Theta_1\} \cap \{\mu_2(\theta_2),\ \theta_2 \in \Theta_2\} = \emptyset$

Conclusion/perspectives
- abc is part of a wider picture to handle complex/Big Data models, able to start from rudimentary machine-learning summaries
- many formats of empirical [likelihood] Bayes methods available
- lack of comparative tools and of an assessment of information loss