Slide 5
Slide 5 text
Fully-connected layer on image is inefficient
Consider a two-layer neural network with:
Input: 40,000 dimension (an input image is 200 x 200 pixels)
Hidden layer: 20,000 dimension
Output: 1,000 (1,000 categories for objects)
The number of parameters is huge, ca. 0.82 billion (about 1.6 GB in float16); see the calculation sketch below
1st layer: 40,000 x 20,000 = 800,000,000
2nd layer: 20,000 x 1,000 = 20,000,000
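A minimal Python sketch to check the figures above (layer sizes taken from the slide; biases are ignored):

# Parameter count for the two-layer fully-connected network on a 200 x 200 image.
input_dim = 200 * 200        # 40,000 pixels
hidden_dim = 20_000
output_dim = 1_000

params_layer1 = input_dim * hidden_dim    # 800,000,000 weights
params_layer2 = hidden_dim * output_dim   #  20,000,000 weights
total_params = params_layer1 + params_layer2

bytes_float16 = total_params * 2          # 2 bytes per float16 value
print(f"total parameters: {total_params:,}")              # 820,000,000
print(f"memory (float16): {bytes_float16 / 1e9:.2f} GB")  # ~1.64 GB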
The number of parameters depends on the size of the input images
This treatment also ignores the stationarity of images (illustrated in the sketch after this list)
The same patterns appear at different positions
Positional shifts
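A toy 1-D sketch (a hypothetical example, not from the slide) of why a fully-connected layer cannot exploit stationarity: a detector tuned to a pattern at one position does not respond when the identical pattern appears at a shifted position, so a dense layer needs a separate copy of the weights for every position:

# Stationarity problem with a fully-connected layer (1-D toy example).
import numpy as np

pattern = np.array([1.0, -1.0, 1.0])

x_a = np.zeros(10)
x_a[0:3] = pattern    # pattern at position 0
x_b = np.zeros(10)
x_b[5:8] = pattern    # same pattern shifted to position 5

# A fully-connected "detector" tuned to position 0 only.
w = np.zeros(10)
w[0:3] = pattern

print(w @ x_a)   # 3.0 -> responds to the pattern at position 0
print(w @ x_b)   # 0.0 -> misses the identical pattern at position 5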