and random effects. It turns out that different—in fact, incompatible—definitions
are used in different contexts. [See also Kreft and de Leeuw (1998), Section 1.3.3,
for a discussion of the multiplicity of definitions of fixed and random effects and
coefficients, and Robinson (1998) for a historical overview.] Here we outline five
definitions that we have seen:
1. Fixed effects are constant across individuals, and random effects vary. For
example, in a growth study, a model with random intercepts $\alpha_i$ and fixed
slope $\beta$ corresponds to parallel lines for different individuals $i$, or the model
$y_{it} = \alpha_i + \beta t$. Kreft and de Leeuw [(1998), page 12] thus distinguish between
fixed and random coefficients.
2. Effects are fixed if they are interesting in themselves or random if there is
interest in the underlying population. Searle, Casella and McCulloch [(1992),
Section 1.4] explore this distinction in depth.
3. “When a sample exhausts the population, the corresponding variable is fixed;
when the sample is a small (i.e., negligible) part of the population the
corresponding variable is random” [Green and Tukey (1960)].
4. “If an effect is assumed to be a realized value of a random variable, it is called
a random effect” [LaMotte (1983)].
5. Fixed effects are estimated using least squares (or, more generally, maximum
likelihood) and random effects are estimated with shrinkage [“linear unbiased
prediction” in the terminology of Robinson (1991)]. This definition is standard
in the multilevel modeling literature [see, e.g., Snijders and Bosker (1999),
Section 4.2] and in econometrics.
In the Bayesian framework, this definition implies that fixed effects $\beta_j^{(m)}$
are estimated conditional on $\sigma_m = \infty$ and random effects $\beta_j^{(m)}$ are estimated
conditional on $\sigma_m$ from the posterior distribution.
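Definition 5 can be illustrated numerically. The sketch below (an illustration, not the paper's own computation) simulates grouped data and contrasts the per-group least-squares estimates with classical precision-weighted shrinkage estimates; the variance components sigma_alpha and sigma_y are assumed known here for simplicity.

```python
# Definition 5 in miniature: "fixed" effects as per-group least-squares
# estimates (raw group means) vs. "random" effects as shrinkage estimates
# pulled toward the grand mean. Assumes sigma_alpha (between-group sd)
# and sigma_y (within-group sd) are known, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

sigma_alpha = 1.0   # sd of group effects alpha_j
sigma_y = 2.0       # sd of observations within a group
n_groups, n_per_group = 8, 5

# Simulate alpha_j ~ N(0, sigma_alpha^2) and y_ij ~ N(alpha_j, sigma_y^2).
alpha = rng.normal(0.0, sigma_alpha, n_groups)
y = rng.normal(alpha[:, None], sigma_y, (n_groups, n_per_group))

group_means = y.mean(axis=1)   # least-squares ("fixed") estimates
grand_mean = y.mean()

# Precision weight on the group mean: (n / sigma_y^2) against
# (1 / sigma_alpha^2) for the population mean. As sigma_alpha -> infinity
# the weight w -> 1 and the shrinkage estimate reduces to the raw group
# mean, matching the "conditional on sigma = infinity" reading above.
w = (n_per_group / sigma_y**2) / (n_per_group / sigma_y**2 + 1 / sigma_alpha**2)
shrunk = w * group_means + (1 - w) * grand_mean

print("least-squares estimates:", np.round(group_means, 2))
print("shrinkage estimates:   ", np.round(shrunk, 2))
```

The shrinkage estimates sit strictly between each raw group mean and the grand mean, and their spread is smaller by the factor w, which is the basic behavior of "linear unbiased prediction" in Robinson's (1991) sense.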
Of these definitions, the first clearly stands apart, but the other four definitions
differ also. Under the second definition, an effect can change from fixed to
The Annals of Statistics
2005, Vol. 33, No. 1, 1–53
DOI 10.1214/009053604000001048
© Institute of Mathematical Statistics, 2005
DISCUSSION PAPER
ANALYSIS OF VARIANCE—WHY IT IS MORE IMPORTANT
THAN EVER1
BY ANDREW GELMAN
Columbia University
Analysis of variance (ANOVA) is an extremely important method
in exploratory and confirmatory data analysis. Unfortunately, in complex
problems (e.g., split-plot designs), it is not always e