Slide 1

Slide 1 text

Dealing with Separation in Logistic Regression Models
Carlisle Rainey, Assistant Professor, Texas A&M University
[email protected]
Paper, data, and code at crain.co/research

Slide 2

Slide 2 text

The prior matters a lot, so choose a good one.

Slide 3

Slide 3 text

The prior matters a lot, so choose a good one.
1. in practice
2. in theory
3. concepts
4. software

Slide 4

Slide 4 text

The Prior Matters in Practice

Slide 5

Slide 5 text

[Figure: "politics" vs. "need"]

Slide 6

Slide 6 text

Variable               Coefficient   Confidence Interval
Democratic Governor    -26.35        [-126,979.03; 126,926.33]
% Uninsured (Std.)       0.92        [-3.46; 5.30]
% Favorable to ACA       0.01        [-0.17; 0.18]
GOP Legislature          2.43        [-0.47; 5.33]
Fiscal Health            0.00        [-0.02; 0.02]
Medicaid Multiplier     -0.32        [-2.45; 1.80]
% Non-white              0.05        [-0.12; 0.21]
% Metropolitan          -0.08        [-0.17; 0.02]
Constant                 2.58        [-7.02; 12.18]

Slide 7

Slide 7 text

Variable               Coefficient   Confidence Interval
Democratic Governor    -26.35        [-126,979.03; 126,926.33]
% Uninsured (Std.)       0.92        [-3.46; 5.30]
% Favorable to ACA       0.01        [-0.17; 0.18]
GOP Legislature          2.43        [-0.47; 5.33]
Fiscal Health            0.00        [-0.02; 0.02]
Medicaid Multiplier     -0.32        [-2.45; 1.80]
% Non-white              0.05        [-0.12; 0.21]
% Metropolitan          -0.08        [-0.17; 0.02]
Constant                 2.58        [-7.02; 12.18]

This is a failure of maximum likelihood.
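The exploding coefficient and confidence interval above can be reproduced on toy data. The sketch below (my own illustration, not the talk's code) builds a tiny separated data set in which the outcome is always 0 whenever the binary predictor is 1, and fits logistic regression by plain gradient ascent on the log-likelihood. The separated coefficient never converges; it just drifts toward minus infinity as the optimizer runs longer.

```python
import numpy as np

# Toy data with separation: whenever x = 1, y = 0
# (analogous to the Democratic Governor variable above).
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 1, 1, 1, 0, 0, 0, 0])
X = np.column_stack([np.ones_like(x), x])  # intercept + predictor

def fit(n_iter, lr=0.5):
    """Plain gradient ascent on the logistic log-likelihood."""
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p)
    return beta

b_short = fit(1_000)
b_long = fit(10_000)

# The intercept settles near log(3) (the MLE from the x = 0 group),
# but the slope keeps drifting toward -infinity with more iterations.
print(b_short, b_long)
```

The likelihood is monotonically increasing as the slope goes to minus infinity, so any hill-climbing routine keeps walking; packaged ML routines stop at an arbitrary point determined by their convergence tolerance, which is why the reported point estimate (-26.35) and its enormous standard error are essentially artifacts.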

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

Different default priors produce different results.
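To see how much default priors can disagree under separation, the sketch below (an illustration of mine, not the talk's software) compares posterior modes on the same separated toy data under two normal priors, using penalized gradient ascent; the normal scales 1 and 10 are stand-ins for two different "defaults."

```python
import numpy as np

# Same separated toy data: y = 0 whenever x = 1.
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 1, 1, 1, 0, 0, 0, 0])
X = np.column_stack([np.ones_like(x), x])

def map_fit(prior_sd, n_iter=50_000, lr=0.1):
    """Gradient ascent on the log-posterior under a normal(0, prior_sd) prior."""
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        # log-likelihood gradient plus the normal prior's shrinkage term
        beta += lr * (X.T @ (y - p) - beta / prior_sd**2)
    return beta

b_narrow = map_fit(prior_sd=1.0)
b_wide = map_fit(prior_sd=10.0)

# The two "default" priors give very different estimates of the
# separated coefficient, because the data barely constrain it.
print(b_narrow[1], b_wide[1])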

Slide 11

Slide 11 text

The Prior Matters in Theory

Slide 12

Slide 12 text

For
1. a monotonic likelihood p(y | β) decreasing in β_s,
2. a proper prior distribution p(β), and
3. a large, negative β_s,
the posterior distribution of β_s is proportional to the prior distribution for β_s, so that p(β_s | y) ∝ p(β_s).

Slide 13

Slide 13 text

For
1. a monotonic likelihood p(y | β) decreasing in β_s,
2. a proper prior distribution p(β), and
3. a large, negative β_s,
the posterior distribution of β_s is proportional to the prior distribution for β_s, so that p(β_s | y) ∝ p(β_s).

Slide 14

Slide 14 text

The prior determines crucial parts of the posterior.

Slide 15

Slide 15 text

Key Concepts for Choosing a Good Prior

Slide 16

Slide 16 text

Pr(y_i) = Λ(β_c + β_s s_i + β_1 x_i1 + ... + β_k x_ik), where Λ is the logistic (inverse logit) function.

Slide 17

Slide 17 text

Transforming the Prior Distribution
β̃ ∼ p(β)
π̃_new = p(y_new | β̃)
q̃_new = q(π̃_new)
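The three steps above can be simulated directly: draw coefficients from the prior, push each draw through the model to get an implied probability for a hypothetical new case, and then compute the quantity of interest. The sketch below is a minimal illustration; the normal(0, 10) prior, the new case with x_new = 1, and the choice of q as the probability itself are all my assumptions, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: draw simulated coefficients from the prior, beta~ ~ p(beta).
beta_sim = rng.normal(loc=0.0, scale=10.0, size=100_000)

# Step 2: push each draw through the model for a hypothetical new
# observation with x_new = 1: pi~_new = p(y_new | beta~).
x_new = 1.0
pi_sim = 1.0 / (1.0 + np.exp(-beta_sim * x_new))

# Step 3: compute the implied quantity of interest q~_new = q(pi~_new);
# here q is the probability itself. A "default" normal(0, 10) prior
# piles most of its mass on near-certain outcomes.
share_extreme = np.mean((pi_sim < 0.05) | (pi_sim > 0.95))
print(share_extreme)
```

Inspecting the implied distribution of π̃_new is what lets a researcher judge whether a candidate prior is substantively reasonable on the scale they actually care about, rather than on the uninterpretable coefficient scale.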

Slide 18

Slide 18 text

We Already Know a Few Things
β_1 ≈ β̂_1^mle
β_2 ≈ β̂_2^mle
...
β_k ≈ β̂_k^mle
β_s < 0

Slide 19

Slide 19 text

Partial Prior Distribution
p*(β | β_s < 0, β_{-s} = β̂_{-s}^mle), where β̂_s^mle = -∞

Slide 20

Slide 20 text

The Pacifying Effects of Nuclear Weapons

Slide 21

Slide 21 text

Software for Choosing a Good Prior

Slide 22

Slide 22 text

separation (on GitHub)

Slide 23

Slide 23 text

Conclusion

Slide 24

Slide 24 text

The prior matters a lot, so choose a good one.

Slide 25

Slide 25 text

What should you do?
1. Notice the problem and do something.
2. Recognize that the prior affects the inferences and choose a good one.
3. Assess the robustness of your conclusions to a range of prior distributions.
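Step 3 can be sketched concretely: refit the model under several prior scales and check whether the substantive conclusion survives. The example below (my illustration, on the same kind of separated toy data used in the talk's motivating table, with normal priors and a simple penalized gradient ascent standing in for a full Bayesian fit) checks whether the sign of the separated coefficient is robust.

```python
import numpy as np

# Separated toy data: y = 0 whenever x = 1.
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 1, 1, 1, 0, 0, 0, 0])
X = np.column_stack([np.ones_like(x), x])

def map_fit(prior_sd, n_iter=50_000, lr=0.1):
    """Posterior mode under a normal(0, prior_sd) prior on each coefficient."""
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * (X.T @ (y - p) - beta / prior_sd**2)
    return beta

# Refit across a range of prior scales.
slopes = [map_fit(s)[1] for s in (1.0, 2.0, 5.0, 10.0)]

# The direction of the effect is robust (every estimate is negative),
# even though the magnitude is highly sensitive to the prior scale.
print(slopes)
```

When the direction holds across the whole range but the magnitude does not, that distinction itself is the honest conclusion to report.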

Slide 26

Slide 26 text

No content