Deep Convolutional Neural Networks and future

Talk given at Bay Area Women in Machine Learning and Data Science (https://www.meetup.com/Bay-Area-Women-in-Machine-Learning-and-Data-Science/events/274038207/)

Convolutional Neural Networks work on the paradigm of weight sharing, creating kernels that filter information from the data. This simple idea, together with open-source frameworks, has led to much astounding work in the deep learning community and has made its way into many business products that we use daily. We will look at CNNs at a basic level and brainstorm what the future holds for this architectural model, given new models and methods for training neural networks.

Tanisha Bhayani

October 25, 2020

Transcript

  1. The Convolution Operation: Intuitively, the convolution of two functions represents the amount of overlap between the two functions. The function g is the input and f is the kernel of the convolution (the standard formula is sketched after this transcript).
  2. Convolutional Neural Networks: A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data. CNNs are powerful image-processing, artificial-intelligence (AI) models that use deep learning to perform both generative and descriptive tasks, often in machine-vision settings that include image and video recognition, as well as in recommender systems and natural language processing (NLP); a minimal model sketch follows this transcript.
  3. Convolutional Neural Network:
     • Depthwise convolution is the channel-wise n×n spatial convolution: if, as in the slide's figure, we have 5 channels, then we apply 5 separate n×n spatial convolutions, one per channel.
     • Pointwise convolution is the 1×1 convolution used to change the channel dimension.
     (A code sketch of a depthwise separable convolution follows this transcript.)
  4. “When we’re learning to see, nobody’s telling us what the right answers are — we just look. Every so often, your mother says “that’s a dog”, but that’s very little information. You’d be lucky if you got a few bits of information — even one bit per second — that way. The brain’s visual system has 10¹⁴ neural connections. And you only live for 10⁹ seconds. So it’s no use learning one bit per second. You need more like 10⁵ bits per second. And there’s only one place you can get that much information: from the input itself.” — Geoffrey Hinton, 1996
  5. Recent Research in DCNN:
     1. Vernacular OCR
     2. Person Re-identification using Siamese Networks
     3. Animal Facial Recognition using CNN
     4. Accurate Recommendations Engine
     5. 4D CNN for the medical domain (Brain Imaging)
     6. Image Generation and de-noising using CNN
     7. Clustering Environmental Images using CNN, and more...
  6. Recent Research in DCNN:
     1. Attention-based models
     2. Capsule Networks
     3. Reinforcement Learning
     4. GANs
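
The convolution referenced in slide 1 can be written out explicitly. Below is a short LaTeX sketch of the textbook continuous and discrete definitions, with g as the input and f as the kernel to match the slide's naming; it is a standard formula, not taken verbatim from the talk.

    % Convolution of input g with kernel f: continuous and discrete forms
    (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
    \qquad
    (f * g)[n] = \sum_{m} f[m]\, g[n - m]

In practice, CNN layers compute a finite version of this sum over small spatial windows (technically cross-correlation, since the kernel is not flipped).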
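As a companion to the CNN description in slide 2, here is a minimal sketch of a small image classifier; the use of PyTorch and the specific layer sizes are assumptions for illustration, not details from the talk.

    # Minimal CNN sketch (assumed framework: PyTorch; sizes are illustrative)
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # shared 3x3 kernels
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    # A batch of four 32x32 RGB images -> per-class scores
    logits = SmallCNN()(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 10])

The same small kernels slide over every spatial position of the image, which is the weight-sharing idea mentioned in the talk abstract.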
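Slide 3's depthwise and pointwise convolutions can be sketched the same way; again, PyTorch and the channel counts (5 in, 8 out) are assumptions chosen to mirror the slide's 5-channel example.

    # Depthwise separable convolution sketch (assumed framework: PyTorch)
    import torch
    import torch.nn as nn

    in_channels, out_channels, n = 5, 8, 3   # 5 channels, as in the slide example

    # Depthwise: one n x n spatial filter per input channel (groups == channels)
    depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=n,
                          padding=n // 2, groups=in_channels)

    # Pointwise: 1x1 convolution that mixes channels and changes their number
    pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    x = torch.randn(1, in_channels, 32, 32)
    y = pointwise(depthwise(x))
    print(y.shape)  # torch.Size([1, 8, 32, 32])

Splitting a standard convolution into these two steps reduces the number of parameters and multiplications compared with a full n×n convolution applied across all channels at once.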