Deep Learning

Introductory talk on deep learning

Abhinav Tushar

September 10, 2015

Transcript

  1. models: AE / SAE, RBM / DBN, CNN, RNN / LSTM, Memnet / NTM. agenda: What? Why? How? Next? Questions.
  2. what why how next. What? An AI technique for learning multiple levels of abstraction directly from raw information.
  3. what why how next. Classical machine learning: learning from custom features. Pipeline: Input → Hand-Crafted Features → Learning System → Output.
  4. what why how next. Deep Learning based AI: learn everything. Pipeline: Input → Learned Features (Lower Level) → Learned Features (Higher Level) → Learning System → Output.
  5. “With the capacity to represent the world in signs and symbols, comes the capacity to change it.” Elizabeth Kolbert (The Sixth Extinction)
  6. HUGE SUCCESS: speech and text understanding, robotics / computer vision, business / big data, Artificial General Intelligence (AGI).
  7. what why how next. Shallow Network: $h = f(x, W_0)$, $y = f'(h, W_1)$, $E = L(y, t)$; minimize $E$. (A minimal numpy sketch follows the transcript.)
  8. what why how next. Deep Network: more abstract features, stellar performance; but vanishing gradients and overfitting. (A toy vanishing-gradient demo follows the transcript.)
  9. what why how next. Stacked Autoencoder. Y. Bengio et al.; Greedy Layer-Wise Training of Deep Networks.
  10. what why how next. Stacked Autoencoder: 1. unsupervised, layer-by-layer pretraining; 2. supervised fine-tuning. (A pretraining sketch follows the transcript.)
  11. what why how next. Deep Belief Network, the 2006 breakthrough: stacking Restricted Boltzmann Machines (RBMs). Hinton, G. E., Osindero, S. and Teh, Y.; A fast learning algorithm for deep belief nets. (An RBM training sketch follows the transcript.)
  12. what why how next. The Starry Night, Vincent van Gogh. Leon A. Gatys, Alexander S. Ecker and Matthias Bethge; A Neural Algorithm of Artistic Style.
  13. what why how next. Scene description: CNN + RNN. Oriol Vinyals et al.; Show and Tell: A Neural Image Caption Generator.
  14. what why how next. Recurrent Neural Network, simple Elman version: $h_t = f(x_t, h_{t-1}, W_0, W_1)$, $y_t = f'(h_t, W_2)$. (A step-function sketch follows the transcript.)
  15. what why how next. Long Short-Term Memory (LSTM): add memory cells, learn the access mechanism. Sepp Hochreiter and Jürgen Schmidhuber; Long short-term memory. (An LSTM cell sketch follows the transcript.)
  16. what why how next. Fooling Deep Networks. Anh Nguyen, Jason Yosinski, Jeff Clune; Deep Neural Networks are Easily Fooled.
  17. what why how next. Attention & Memory: NTMs, Memory Networks, Stack RNNs . . . NLP: translation, description.
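
A minimal numpy sketch of the shallow network on slide 7, under the reading $h = f(x, W_0)$, $y = f'(h, W_1)$, $E = L(y, t)$. The choice of tanh, a linear output, and squared error are illustrative assumptions, not the talk's code.

```python
import numpy as np

def forward(x, W0, W1):
    """Shallow network: hidden h = f(x, W0), output y = f'(h, W1)."""
    h = np.tanh(W0 @ x)   # f: one nonlinear hidden layer (tanh assumed)
    y = W1 @ h            # f': linear output layer (assumed)
    return y

def loss(y, t):
    """E = L(y, t): squared error against target t (assumed loss)."""
    return 0.5 * np.sum((y - t) ** 2)

rng = np.random.default_rng(0)
x, t = rng.normal(size=3), rng.normal(size=2)
W0, W1 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
print(loss(forward(x, W0, W1), t))  # the quantity gradient descent minimizes
```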
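Slide 8 names the vanishing gradient. A toy illustration, assuming sigmoid activations: backprop multiplies in one sigmoid derivative (at most 0.25) per layer, so the signal reaching early layers shrinks roughly geometrically with depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
grad = 1.0
for depth in range(1, 31):
    z = rng.normal()
    grad *= sigmoid(z) * (1.0 - sigmoid(z))  # one layer's local derivative, <= 0.25
    if depth % 10 == 0:
        print(f"after {depth} layers: gradient factor ~ {grad:.2e}")
```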
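A compressed sketch of slide 10's two-stage recipe, assuming plain tied-weight autoencoders trained with squared error and gradient descent; the layer sizes, learning rate, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    """Unsupervised stage: fit W, b so that decode(encode(X)) ~ X."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = H @ W.T + c               # decode (tied weights, linear output)
        err = R - X                   # reconstruction error
        dH = (err @ W) * H * (1 - H)  # backprop through the encoder
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

# 1. Unsupervised, layer-by-layer pretraining on unlabeled data
X = rng.normal(size=(256, 20))
codes, params = X, []
for n_hidden in (16, 8):              # two stacked layers (sizes assumed)
    W, b = train_autoencoder(codes, n_hidden)
    params.append((W, b))
    codes = sigmoid(codes @ W + b)    # feed the codes to the next layer

# 2. Supervised fine-tuning would now train a classifier on `codes`
# and backprop through the pretrained weights; omitted for brevity.
print("top-level code shape:", codes.shape)
```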
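Slide 11's DBN stacks RBMs trained one at a time. A minimal sketch of a single contrastive-divergence (CD-1) update for one binary RBM, with sizes and learning rate invented; stacking means feeding each trained RBM's hidden probabilities to the next RBM as its data.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(V, W, a, b, lr=0.05):
    """One CD-1 update for a binary RBM: visibles V, weights W, biases a, b."""
    ph = sigmoid(V @ W + b)                        # P(h=1 | v), positive phase
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden states
    pv = sigmoid(h @ W.T + a)                      # P(v=1 | h): reconstruction
    ph2 = sigmoid(pv @ W + b)                      # hidden probs on reconstruction
    # positive-phase minus negative-phase statistics, averaged over the batch
    W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
    a += lr * (V - pv).mean(axis=0)
    b += lr * (ph - ph2).mean(axis=0)
    return W, a, b

V = (rng.random((128, 12)) < 0.3).astype(float)    # toy binary data
W = rng.normal(scale=0.1, size=(12, 6))
a, b = np.zeros(12), np.zeros(6)
for _ in range(100):
    W, a, b = cd1_step(V, W, a, b)
```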
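A direct transcription of slide 14's Elman recurrence, $h_t = f(x_t, h_{t-1}, W_0, W_1)$ and $y_t = f'(h_t, W_2)$, again with tanh and a linear readout assumed.

```python
import numpy as np

def elman_step(x_t, h_prev, W0, W1, W2):
    """One Elman RNN step: new state from current input and previous state."""
    h_t = np.tanh(W0 @ x_t + W1 @ h_prev)  # h_t = f(x_t, h_{t-1}, W0, W1)
    y_t = W2 @ h_t                         # y_t = f'(h_t, W2), linear (assumed)
    return h_t, y_t

rng = np.random.default_rng(0)
W0, W1, W2 = (rng.normal(scale=0.5, size=s) for s in [(5, 3), (5, 5), (2, 5)])
h = np.zeros(5)
for x in rng.normal(size=(4, 3)):          # a toy sequence of 4 inputs
    h, y = elman_step(x, h, W0, W1, W2)
print(y)
```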
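Slide 15 says LSTMs add memory cells and learn the access mechanism. A minimal sketch of one LSTM cell step in the standard gated form (Hochreiter and Schmidhuber's cell plus the now-usual forget gate), with shapes invented; the gates decide what the memory cell forgets, stores, and emits.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gated read/write access to the memory cell c."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    c = f * c_prev + i * np.tanh(g)               # memory cell update
    h = o * np.tanh(c)                            # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(scale=0.3, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):              # a toy sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```

The additive cell update $c_t = f \odot c_{t-1} + i \odot \tanh(g)$ is what lets gradients flow over long spans, in contrast to the repeated squashing shown in the vanishing-gradient demo above.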