Proofs in learning theory typically draw on results from information theory or statistical learning to shed light on generalization or to provide convergence guarantees. While these are powerful tools, some properties of neural networks can be characterized with simpler methods. In Deep, Skinny Neural Networks are not Universal Approximators, Jesse Johnson uses standard set-theoretic topology to prove that feedforward nets whose maximum layer width is at most the input dimension cannot approximate any function having a level set with a bounded path component. Join us next Tuesday at 10:30am in the Mila Auditorium to discuss this paper!
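To see the obstruction concretely, consider f(x, y) = x² + y²: its level set f⁻¹(1) is the unit circle, a bounded path component, so the theorem applies. Below is a minimal sketch (in PyTorch, not from the paper; the ReLU activation, widths, depth, learning rate, and step count are all illustrative choices) that fits a deep width-2 network and a width-16 network to this target and compares their worst-case errors. Per the theorem, the skinny network's sup-norm error should stay bounded away from zero no matter how long it trains, while the wider network is free of the obstruction.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net(width, depth=4):
    """Fully connected ReLU net from R^2 to R with constant hidden width."""
    layers, d = [], 2
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

# Target f(x, y) = x^2 + y^2: its level set f^{-1}(1) is the unit
# circle, a *bounded* path component, so the theorem applies.
def target(x):
    return (x ** 2).sum(dim=1, keepdim=True)

# Training points sampled from the square [-2, 2]^2.
x_train = (torch.rand(4096, 2) - 0.5) * 4
y_train = target(x_train)

def fit(net, steps=3000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()
    # Sup-norm error on a dense grid -- the quantity the theorem bounds below.
    g = torch.linspace(-2, 2, 200)
    grid = torch.cartesian_prod(g, g)
    with torch.no_grad():
        return (net(grid) - target(grid)).abs().max().item()

# Width 2 == input dimension: covered by the theorem.
# Width 16 > input dimension: the obstruction disappears.
print("skinny (width 2)  sup error:", fit(make_net(2)))
print("wide   (width 16) sup error:", fit(make_net(16)))

Intuitively, a generic level set of the skinny network is an unbounded curve through the plane, so it can never close up around the circle f⁻¹(1) the way the target's level sets do.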
Presented on March 19th, 2019.
Slides for the talk are here: https://slides.com/breandan/skinny-nns
ICLR/OpenReview discussion: https://openreview.net/forum?id=ryGgSsAcFQ
The author has a great blog on low-dimensional topology, which I particularly enjoyed: https://ldtopology.wordpress.com/