Slide 20
Foundation of Continuous Depth ODE, Interpretation & Applications
Modifying ResNet
Exploiting ResNet structure
.. for continuous depth models
Precisely, three changes ..
1 Scale g(·) with a scalar ∆l
→ Quite harmless: g(·) can always adjust by learning to scale itself up
2 Share parameters across blocks
→ Reduces the number of parameters, hence modelling capacity
3 Many more blocks than usual
→ To compensate for the lost capacity, we want more blocks
x1 ← x0 + g(x0; Θ) · ∆l
x2 ← x1 + g(x1; Θ) · ∆l
x3 ← x2 + g(x2; Θ) · ∆l
⋮
xL ← xL−1 + g(xL−1; Θ) · ∆l
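To make the recurrence concrete, here is a minimal PyTorch sketch of these three changes: one block g(·; Θ) shared across L steps, each update scaled by the scalar ∆l. The class name SharedResBlockNet and the toy two-layer MLP standing in for g are illustrative assumptions, not the slide's exact architecture.

import torch
import torch.nn as nn

class SharedResBlockNet(nn.Module):
    # Sketch: a single residual block g(.; Theta), shared across all
    # L "layers", each step scaled by a scalar step size dl.
    def __init__(self, dim, L=100):
        super().__init__()
        # g(.; Theta): a toy MLP reused at every block (assumption)
        self.g = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.L = L
        self.dl = 1.0 / L  # scalar step size Delta-l

    def forward(self, x):
        # x_{l} <- x_{l-1} + g(x_{l-1}; Theta) * Delta-l, L times
        for _ in range(self.L):
            x = x + self.g(x) * self.dl
        return x

net = SharedResBlockNet(dim=16, L=100)
out = net(torch.randn(8, 16))

With many small shared steps, the unrolled loop is exactly a forward-Euler discretisation of dx/dl = g(x; Θ), which is the continuous-depth reading the slide is building towards.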