from the posterior p(θ | y), where θ is a vector of k parameters, θ_1, θ_2, ..., θ_k.

Algorithm 1: Gibbs Sampling
  Initialize θ^(0) := (θ_1^(0), θ_2^(0), ..., θ_k^(0))
  for t = 1 to T do
    for i = 1 to k do
      θ_i^(t+1) ∼ p(θ_i | θ_1^(t+1), ..., θ_{i−1}^(t+1), θ_{i+1}^(t), ..., θ_k^(t))
    end
  end

Rather than probabilistically picking the next state all at once, we make a separate probabilistic choice for each of the k dimensions, where each choice depends on the other k − 1 dimensions.
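To make the coordinate-wise updates concrete, here is a minimal sketch of a Gibbs sampler (not part of the original slides) for a toy target where both full conditionals are known in closed form: a zero-mean bivariate normal with correlation ρ, for which θ_1 | θ_2 ∼ N(ρθ_2, 1 − ρ²) and symmetrically for θ_2. The function name and parameters are illustrative assumptions.

import numpy as np

def gibbs_bivariate_normal(rho, T=5000, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal
    with correlation rho, using its closed-form full conditionals."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)            # theta^(0): arbitrary starting point
    samples = np.empty((T, 2))
    cond_sd = np.sqrt(1.0 - rho**2)
    for t in range(T):
        # Update each coordinate in turn, conditioning on the most recent
        # value of the other coordinate (already-updated components use the
        # new iterate, not-yet-updated components use the previous one).
        theta[0] = rng.normal(rho * theta[1], cond_sd)
        theta[1] = rng.normal(rho * theta[0], cond_sd)
        samples[t] = theta
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
print(samples.mean(axis=0))                 # roughly [0, 0]
print(np.corrcoef(samples.T)[0, 1])         # roughly 0.8

Each pass through the inner updates corresponds to one sweep of the inner for-loop in Algorithm 1, with k = 2 here.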