Slide 2
Slide 2 text
Regularization
• Use informative, conservative priors to reduce overfitting => model learns less from sample
• But if too informative, model learns too little
• Such priors are regularizing
[Figure: three regularizing priors, density against parameter value. Dashed: Normal(0, 1). Thin solid: Normal(0, 0.5). Thick solid: Normal(0, 0.2).]
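The kind of comparison the figure shows can be sketched in a few lines of Python. This is a minimal illustration; the plotting range and the use of scipy and matplotlib are my choices, not taken from the slide:

    import numpy as np
    from scipy.stats import norm
    import matplotlib.pyplot as plt

    # Evaluate each prior density over a range of parameter values
    theta = np.linspace(-3, 3, 400)

    # Three regularizing priors; a smaller standard deviation is more skeptical
    priors = [(1.0, "--", "N(0, 1)"),
              (0.5, "-",  "N(0, 0.5)"),
              (0.2, "-",  "N(0, 0.2)")]

    for sd, style, label in priors:
        plt.plot(theta, norm.pdf(theta, loc=0, scale=sd), style, label=label)

    plt.xlabel("parameter value")
    plt.ylabel("Density")
    plt.legend()
    plt.show()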
The problem is really one of tuning. But as you'll see, even mild skepticism can help a model do better, and doing better is all we can really hope for in the large world, where no model is optimal.
For example, consider this Gaussian model:

y_i ∼ Normal(µ_i, σ)
µ_i = α + β x_i
α ∼ Normal(0, 100)
β ∼ Normal(0, 1)
σ ∼ Uniform(0, 10)
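To see concretely how a tighter prior on β makes the model learn less from the sample, here is a minimal numpy sketch. It simplifies the model above by treating σ as fixed and handling the intercept by centering, and the simulated data are invented for illustration; under a Normal(0, s) prior the posterior mean of β then has a closed form that shrinks the least-squares slope toward zero:

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated data: standardized predictor, true slope 0.6, noise sd 1 (illustrative values)
    n, sigma = 20, 1.0
    x = rng.normal(size=n)
    x = (x - x.mean()) / x.std()           # standardize the predictor
    y = 0.6 * x + rng.normal(0, sigma, n)
    y = y - y.mean()                        # center y so the intercept drops out

    # With sigma fixed, the posterior mean of beta under a Normal(0, s) prior is
    #   E[beta | data] = x.y / (x.x + sigma^2 / s^2),
    # the least-squares slope shrunk toward zero; a smaller s shrinks more.
    ols = x @ y / (x @ x)
    print(f"least-squares slope: {ols:.3f}")
    for s in (1.0, 0.5, 0.2):
        post_mean = x @ y / (x @ x + sigma**2 / s**2)
        print(f"prior Normal(0, {s}): posterior mean of beta = {post_mean:.3f}")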
Assume, as is good practice, that the predictor x is standardized so that its standard deviation is one and its mean is zero. Then the prior on α is a nearly flat prior that has no practical effect.
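Standardizing the predictor is one line in Python; a tiny sketch with a made-up array:

    import numpy as np

    x = np.array([2.1, 3.4, 1.7, 4.0, 2.8])    # made-up raw predictor values
    x_std = (x - x.mean()) / x.std()            # standardized: mean 0, sd 1
    print(x_std.mean(), x_std.std())            # confirms mean ~0 and sd 1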
[Figure: Regularizing priors, weak and strong. Density against parameter value for the regularizing prior N(0, 1) and the stronger priors N(0, 0.5) and N(0, 0.2).]