[Figure: ATLAS event-by-event flow distributions p(v2), p(v3), p(v4), Pb+Pb √s_NN = 2.76 TeV, L_int = 7 µb⁻¹, p_T > 0.5 GeV, |η| < 2.5, centralities 0-1% through 60-65%; and model P(v2) vs. ATLAS data for Glauber and KLN initial conditions, 20-25% centrality. Model inputs: initial conditions, τ0, η/s, . . .]
1 / 20
Vary one parameter at a time, other parameters fixed. Only several discrete values. Qualitative constraints lacking uncertainty.

New projects: event-by-event model. Vary all salient parameters: η/s, τ0, IC parameters, . . . Continuous parameter space → quantitative constraints including uncertainty (a design sketch follows below).

See also, e.g.:
J. Novak, K. Novak, S. Pratt, C. Coleman-Smith and R. Wolpert, Phys. Rev. C 89, 034917 (2014), arXiv:1303.5769 [nucl-th].
R. A. Soltz, I. Garishvili, M. Cheng, B. Abelev, A. Glenn, J. Newby, L. A. Linden Levy and S. Pratt, Phys. Rev. C 87, 044901 (2013), arXiv:1208.0897 [nucl-th].
3 / 20
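To make "continuous parameter space" concrete, here is a minimal design sketch assuming a Latin-hypercube layout (a common space-filling choice; the slide does not specify the design, and the parameter names and ranges below are invented for illustration):

```python
# Minimal sketch of a space-filling design over a continuous parameter
# space, assuming a Latin hypercube (the talk does not specify the design).
# Parameter names and ranges are illustrative only.
import numpy as np
from scipy.stats import qmc

# Hypothetical salient parameters: eta/s, tau_0 [fm/c], IC normalization.
l_bounds = [0.0, 0.2, 10.0]
u_bounds = [0.3, 1.0, 30.0]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_sample = sampler.random(n=50)                   # 50 points in [0, 1)^3
design = qmc.scale(unit_sample, l_bounds, u_bounds)  # map to physical ranges

print(design[:3])  # first few (eta/s, tau_0, norm) points to run the model at
```

Each design point is then run through the full event-by-event model, and the outputs are used to train the emulator described later.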
Model pipeline:
- Initial conditions: Y. Nara, Phys. Rev. C 74, 044905 (2006).
- Viscous 2+1D hydro: H. Song and U. Heinz, Phys. Rev. C 77, 064901 (2008).
- Cooper-Frye hypersurface sampler: Z. Qiu and C. Shen, arXiv:1308.2182 [nucl-th].
- UrQMD: S. Bass et al., Prog. Part. Nucl. Phys. 41, 255 (1998); M. Bleicher et al., J. Phys. G 25, 1859 (1999).
4 / 20
Flow harmonics $v_2$, $v_3$, $v_4$. Measure $q_n = (\langle \cos n\phi \rangle, \langle \sin n\phi \rangle)$ event-by-event; $v_n = |q_n|$ (a sketch follows below).

[Figure: ATLAS event-by-event distributions p(v2), p(v3), p(v4), Pb+Pb √s_NN = 2.76 TeV, L_int = 7 µb⁻¹, p_T > 0.5 GeV, |η| < 2.5, centralities 0-1% through 60-65%.]

ATLAS Collaboration, JHEP 1311, 183 (2013).
5 / 20
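A minimal sketch of the event-by-event measurement, a direct transcription of $q_n = (\langle\cos n\phi\rangle, \langle\sin n\phi\rangle)$ and $v_n = |q_n|$; the toy event generator is invented for illustration:

```python
# Per-event flow vector q_n and magnitude v_n from azimuthal angles phi,
# following q_n = (<cos n*phi>, <sin n*phi>), v_n = |q_n|.
import numpy as np

def vn_event(phi, n):
    """Observed v_n for one event from particle azimuthal angles (radians)."""
    qx = np.cos(n * phi).mean()
    qy = np.sin(n * phi).mean()
    return np.hypot(qx, qy)

# Toy event: M particles with a v_2-like modulation (illustrative only).
rng = np.random.default_rng(1)
M, v2_true = 500, 0.08
# Accept-reject sampling from p(phi) proportional to 1 + 2 v_2 cos(2 phi).
phi = rng.uniform(-np.pi, np.pi, 20 * M)
keep = rng.uniform(0, 1 + 2 * v2_true, phi.size) < 1 + 2 * v2_true * np.cos(2 * phi)
phi = phi[keep][:M]

print(vn_event(phi, 2))  # scatters around v2_true with ~1/sqrt(2M) smearing
```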
Predicts model output at arbitrary points in parameter space, with quantitative uncertainty (see the sketch below).

Gaussian Processes for Machine Learning, Rasmussen and Williams, 2006.

The emulator predicts 1000 hours worth of CPU time in about 1 millisecond.
10 / 20
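For concreteness, a sketch of such an emulator using scikit-learn's GP regressor (the talk cites Rasmussen and Williams; the library choice and the toy one-dimensional design here are assumptions):

```python
# A GP emulator as a fast surrogate: train on expensive model outputs at a
# few design points, then predict anywhere with uncertainty. Toy data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Pretend each row of X is a design point (e.g. eta/s) and y is an expensive
# model output (e.g. mean v_2) computed there; these values are invented.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.3, size=(50, 1))
y = 0.05 + 0.05 * np.sin(10 * X[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), normalize_y=True)
gp.fit(X, y)                                   # "training" on model outputs

# Predict at arbitrary new parameter points, with uncertainty, in ~ms.
X_new = np.linspace(0.0, 0.3, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
```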
Model exclusion: Glauber qualitatively describes the data; KLN does not.

Future work:
- Repeat with more advanced models, especially initial conditions.
- Rigorously calibrate the model to data → extract optimal parameters with uncertainty.
- Consider other observables, e.g. identified particle spectra, dN_ch/dy.
- Solve the finite-multiplicity problem.
16 / 20
$$P(v_n^{\text{obs}}) = \int P(v_n^{\text{obs}} \mid v_n)\, P(v_n)\, dv_n,$$
where $P(v_n^{\text{obs}} \mid v_n)$ is the response function. Pure statistical smearing → Bessel-Gaussian response:
$$P(v_n^{\text{obs}} \mid v_n) = \frac{v_n^{\text{obs}}}{\delta_{v_n}^2}\, \exp\!\left[ -\frac{(v_n^{\text{obs}})^2 + v_n^2}{2\delta_{v_n}^2} \right] I_0\!\left( \frac{v_n v_n^{\text{obs}}}{\delta_{v_n}^2} \right)$$
(a numerical check follows below).
17 / 20
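A quick numerical check of this formula: smear a fixed flow vector with an isotropic 2D Gaussian (the "pure statistical smearing" above) and compare the resulting $|q_n|$ histogram to the Bessel-Gaussian; all parameter values are invented:

```python
# Numerical check of the Bessel-Gaussian response: smear a fixed flow vector
# v_n with an isotropic 2D Gaussian of width delta and compare |q| to the
# closed form. Illustration only.
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(2)
vn, delta, N = 0.06, 0.02, 100_000

qx = vn + rng.normal(0, delta, N)   # flow vector plus statistical smearing
qy = rng.normal(0, delta, N)
v_obs = np.hypot(qx, qy)

# P(v) = v/delta^2 * exp(-(v^2 + vn^2)/(2 delta^2)) * I0(v*vn/delta^2)
v = np.linspace(0, 0.15, 300)
p = v / delta**2 * np.exp(-(v**2 + vn**2) / (2 * delta**2)) * i0(v * vn / delta**2)

hist, edges = np.histogram(v_obs, bins=60, range=(0, 0.15), density=True)
# hist should track p evaluated at the bin centers
```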
$$P(v_n) = \frac{v_n}{\delta_{v_n}^2}\, \exp\!\left[ -\frac{v_n^2 + (v_n^{\text{RP}})^2}{2\delta_{v_n}^2} \right] I_0\!\left( \frac{v_n^{\text{RP}} v_n}{\delta_{v_n}^2} \right).$$
The response function is also Bessel-Gaussian, with width determined by the multiplicity. To remove the statistical smearing, keep $v_n^{\text{RP}}$ and decrease the width: $\delta_{v_n}^2 \to \delta_{v_n}^2 - 1/2M$ (sketched below).
18 / 20
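A toy demonstration of the width correction, assuming the statistical smearing contributes 1/(2M) per flow-vector component in quadrature (all numbers invented):

```python
# Width correction delta^2 -> delta^2 - 1/(2M): finite multiplicity inflates
# the Bessel-Gaussian width in quadrature, so subtracting 1/(2M) recovers
# the underlying width. Toy numbers throughout.
import numpy as np

rng = np.random.default_rng(3)
M, vn_rp, delta_true = 400, 0.05, 0.015
n = 50_000

# Event-by-event "true" flow vector: Bessel-Gaussian about (vn_rp, 0).
vx = vn_rp + rng.normal(0, delta_true, n)
vy = rng.normal(0, delta_true, n)

# Finite-multiplicity smearing: 1/(2M) variance per component.
sigma_stat = np.sqrt(1 / (2 * M))
qx = vx + rng.normal(0, sigma_stat, n)
qy = vy + rng.normal(0, sigma_stat, n)

delta2_obs = qy.var()                   # width from the out-of-plane component
delta2_corr = delta2_obs - 1 / (2 * M)  # keep vn_rp, shrink the width
print(delta2_corr, delta_true**2)       # should agree (statistically)
```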
An observed flow distribution may have multiple possible origin distributions (within uncertainty).

[Figure: several candidate true distributions P_A(v_n^true) through P_D(v_n^true) which, after the response, give nearly indistinguishable observed distributions P_A(v_n^obs), P_B(v_n^obs), P_C(v_n^obs).]
19 / 20
… of parameter space; v4 is intrinsically small, so many points would be lost.

Alternatives:
- Use a different fitting distribution. The correction algorithm is more difficult, and it is still a poorly-defined inverse problem.
- Don't use any distribution: train an emulator (or other interpolator) to calculate true distribution moments given observed moments and multiplicity. Still a poorly-defined inverse problem.
- Bayesian unfolding, which is what ATLAS uses. Must bootstrap the observed distribution to obtain sufficient statistics (a generic sketch follows below).
- Oversample hydro. Needs many particles, since the smearing is ∼ 1/√M; significantly increases computation time and disk usage.
20 / 20
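Since the slide names Bayesian unfolding without detail, here is a generic sketch of the iterative (D'Agostini-style) scheme on binned data; this illustrates the technique only and is not the ATLAS implementation (the response matrix below is invented):

```python
# Iterative Bayesian (D'Agostini-style) unfolding on binned distributions.
# R[i, j] = P(observed bin i | true bin j); columns sum to 1 (no losses).
# Generic illustration, not the ATLAS implementation.
import numpy as np

def bayesian_unfold(observed, R, n_iter=10):
    """Estimate true bin contents from observed counts and response R."""
    truth = np.full(R.shape[1], observed.sum() / R.shape[1])  # flat prior
    for _ in range(n_iter):
        folded = np.clip(R @ truth, 1e-12, None)  # expected observed counts
        truth = truth * (R.T @ (observed / folded))
    return truth

# Example: 4 bins with 20% migration to neighboring bins.
R = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.2, 0.8]])
true = np.array([100.0, 300.0, 300.0, 100.0])
print(bayesian_unfold(R @ true, R, n_iter=50))  # converges toward `true`
```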
A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. Instead of drawing variables from a distribution, functions are drawn from a process.

It requires a covariance function, e.g. the squared exponential
$$\mathrm{cov}(x_1, x_2) \propto \exp\!\left[ -\frac{(x_1 - x_2)^2}{2\ell^2} \right]:$$
nearby points are correlated, distant points independent (see the sketch below).

Gaussian Processes for Machine Learning, Rasmussen and Williams, 2006.
2 / 7
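A direct transcription of this covariance into a prior-sampling sketch (the length scale ℓ = 1 is an arbitrary choice):

```python
# Draw entire functions from a GP prior with the squared-exponential
# covariance cov(x1, x2) = exp(-(x1 - x2)^2 / (2 l^2)).
import numpy as np

x = np.linspace(0, 10, 200)
l = 1.0
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * l**2))

rng = np.random.default_rng(0)
# Jitter on the diagonal keeps the covariance numerically positive definite.
draws = rng.multivariate_normal(np.zeros(x.size), K + 1e-10 * np.eye(x.size),
                                size=3)
# Each row of `draws` is one function: smooth, with nearby points correlated
# and points much farther apart than l effectively independent.
```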
Posterior: Gaussian process conditioned on model outputs.

[Figure: training points; draws from the GP prior; draws from the GP posterior.]

The emulator is a fast surrogate for the actual model: more certain near calculated points, less certain in gaps (the conditioning step is sketched below).
4 / 7
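A from-scratch sketch of that conditioning step, using the standard GP posterior mean and covariance formulas from Rasmussen and Williams (the training points and outputs below are toys):

```python
# Condition a GP on "model outputs" at a few design points and predict
# elsewhere: mean = K_s^T K^-1 y, cov = K_ss - K_s^T K^-1 K_s.
import numpy as np

def sq_exp(a, b, l=1.0):
    """Squared-exponential covariance between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * l**2))

x_train = np.array([1.0, 3.0, 5.5, 8.0])  # design points already computed
y_train = np.sin(x_train)                 # expensive model outputs (toy)
x_star = np.linspace(0, 10, 200)          # prediction points

K = sq_exp(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter
K_s = sq_exp(x_train, x_star)
K_ss = sq_exp(x_star, x_star)

mean = K_s.T @ np.linalg.solve(K, y_train)    # posterior mean
cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0, None))
# std is small near x_train (certain) and grows in the gaps (uncertain).
```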