
Intro to Deep Learning


My presentation introducing deep learning (Google Developer Group, Gandhinagar).

Kashyap Raval

October 30, 2017

Transcript

  1. Hype or Reality? “Machine learning is a core transformative way by which we are rethinking everything we are doing.” – Sundar Pichai (CEO, Google) Source
  2. Hype or Reality? Quotes: “I have worked all my life in Machine Learning, and I’ve never seen one algorithm knock over benchmarks like Deep Learning.” – Andrew Ng (Stanford & Baidu) “Deep Learning is an algorithm which has no theoretical limitations of what it can learn; the more data you give and the more computational time you provide, the better it is.” – Geoffrey Hinton (Google) “Human-level artificial intelligence has the potential to help humanity thrive more than any invention that has come before it.” – Dileep George (Co-Founder, Vicarious) “For a very long time it will be a complementary tool that human scientists and human experts can use to help them with the things that humans are not naturally good at.” – Demis Hassabis (Co-Founder, DeepMind)
  3. The Big Players: Superstar Researchers. Geoffrey Hinton: University of Toronto & Google; Yann LeCun: New York University & Facebook; Andrew Ng: Stanford & Baidu; Yoshua Bengio: University of Montreal; Jürgen Schmidhuber: Swiss AI Lab & NNAISENSE
  4. What is Artificial Intelligence? Artificial Narrow Intelligence (ANI): Machine intelligence

    that equals or exceeds human intelligence or efficiency at a specific task. Artificial General Intelligence (AGI): A machine with the ability to apply intelligence to any problem, rather than just one specific problem (human-level intelligence). Artificial Superintelligence (ASI): An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
  5. Machine Learning - Basics: Introduction. Machine Learning is a type of Artificial Intelligence that provides computers with the ability to learn without being explicitly programmed. It provides various techniques that can learn from and make predictions on data. (Diagram: labeled data feeds a machine learning algorithm during training, which produces a learned model; the learned model then makes predictions on new data.)
  6. Machine Learning - Basics: Learning Approaches. Supervised Learning: learning with a labeled training set. Example: an email spam detector with a training set of already labeled emails. Unsupervised Learning: discovering patterns in unlabeled data. Example: cluster similar documents based on their text content. Reinforcement Learning: learning based on feedback or reward. Example: learn to play chess by winning or losing.
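To make the supervised-learning setup concrete, here is a minimal sketch of the spam-detector example, assuming scikit-learn is available; the tiny labeled dataset is invented purely for illustration.

```python
# Minimal supervised-learning sketch: a toy spam detector.
# Assumes scikit-learn; the tiny labeled dataset is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = ["win a free prize now", "meeting at 10 tomorrow",
          "cheap pills, click here", "lunch with the team today"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()        # turn text into word-count vectors
X = vectorizer.fit_transform(emails)  # learn the vocabulary from the training set

model = LogisticRegression()
model.fit(X, labels)                  # training: learn from labeled examples

test = vectorizer.transform(["free prize, click now"])
print(model.predict(test))            # prediction on unseen data, likely [1]
```
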
  7. Machine Learning - Basics: Problem Types. Regression (supervised – predictive), Classification (supervised – predictive), Anomaly Detection (unsupervised – descriptive), Clustering (unsupervised – descriptive).
  8. What is Deep Learning? Part of the machine learning field of learning representations of data. Exceptionally effective at learning patterns. Utilizes learning algorithms that derive meaning out of data by using a hierarchy of multiple layers that mimic the neural networks of our brain. If you provide the system with tons of information, it begins to understand it and respond in useful ways.
  9. Inspired by the Brain. The first hierarchy of neurons that receive information in the visual cortex is sensitive to specific edges, while brain regions further down the visual pipeline are sensitive to more complex structures such as faces. Our brain has lots of neurons connected together, and the strength of the connections between neurons represents long-term knowledge.
  10. Deep Learning Today. Advancement in speech recognition in the last 2 years: a few long-standing performance records were broken with deep learning methods, and Microsoft and Google have both deployed DL-based speech recognition systems in their products. Advancement in computer vision: feature engineering is the bread-and-butter of a large portion of the CV community, which creates some resistance to feature learning, but the record holders on ImageNet and semantic segmentation are convolutional nets. Advancement in natural language processing: fine-grained sentiment analysis, syntactic parsing, language models, machine translation, question answering.
  11. Deep Learning - Basics: Architecture. A deep neural network consists of a hierarchy of layers, whereby each layer transforms the input data into more abstract representations (e.g. edge -> nose -> face). The output layer combines those features to make predictions.
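As an illustration of such a layer hierarchy, here is a minimal Keras sketch (Keras is listed in the resources at the end); the layer sizes, input shape and class count are arbitrary placeholders, not taken from the slides.

```python
# A small fully-connected network: each hidden layer builds a more
# abstract representation; the output layer turns it into class probabilities.
# All sizes are illustrative only.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(256, activation='relu', input_shape=(784,)),  # low-level features
    Dense(128, activation='relu'),                      # mid-level features
    Dense(64,  activation='relu'),                      # high-level features
    Dense(10,  activation='softmax'),                   # prediction layer
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```
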
  12. Deep Learning - Basics: No more feature engineering. Traditional approach: input data -> feature engineering (costs lots of time) -> learning algorithm. Deep learning: input data -> deep learning algorithm.
  13. A brief History. A long time ago… 1958: Perceptron. 1969: Perceptron criticized. 1974: Backpropagation. Awkward silence (AI Winter). 1995: SVM reigns. 1998: Convolutional Neural Networks for handwritten recognition. 2006: Restricted Boltzmann Machine. 2012: Google Brain Project on 16k cores. 2012: AlexNet wins ImageNet.
  14. A brief History The Big Bang aka “One net to

    rule them all” ImageNet: The “computer vision World Cup” Source
  15. A brief History The Big Bang aka “One net to

    rule them all” Human Performance ImageNet: The “computer vision World Cup” Source
  16. Limitations of Neural Networks. Random initialization + densely connected networks lead to: High cost: each neuron in the neural network can be considered a logistic regression, and training the entire neural network amounts to training all the interconnected logistic regressions. Difficult to train as the number of hidden layers increases: recall that logistic regression is trained by gradient descent; in backpropagation the gradient progressively becomes more dilute, so below the top layers the correction signal is minimal. Getting stuck in local optima.
  17. Deep Learning - Basics: Artificial Neural Networks. Consists of one input layer, one output layer and multiple fully-connected hidden layers in between. Each layer is represented as a series of neurons and progressively extracts higher and higher-level features of the input until the final layer essentially makes a decision about what the input shows.
  18. (Diagram: a Deep Neural Network (DNN) with an input layer i, hidden layers j and k, and an output layer n.)
  19. Deep Learning - Basics: The Neuron. An artificial neuron contains a nonlinear activation function and has several incoming and outgoing weighted connections. Neurons are trained to filter and detect specific features or patterns (e.g. edge, nose) by receiving weighted input, transforming it with the activation function and passing it to the outgoing connections.
  20. Deep Learning - Basics: Non-linear Activation Function. Non-linearity is needed to learn complex (non-linear) representations of data; otherwise the NN would be just a linear function. Most deep networks nowadays use ReLU, max(0, x), for hidden layers, since it trains much faster, is more expressive than the logistic function and mitigates the vanishing gradient problem.
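A single artificial neuron as described in the previous two slides can be sketched in a few lines of NumPy; the weights, bias and input values below are made up for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)          # ReLU: max(0, x)

def neuron(x, w, b, activation=relu):
    """Weighted sum of the inputs plus a bias, passed through an activation."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])         # incoming signals (illustrative)
w = np.array([0.8,  0.1, -0.4])        # learned connection weights
b = 0.2                                # bias
print(neuron(x, w, b))                 # 0.0 here, since the weighted sum is negative
```
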
  21. Deep Learning - Basics: The Training Process. Sample labeled data, forward it through the network to get predictions, backpropagate the errors, update the connection weights. The network learns by generating an error signal that measures the difference between its predictions and the desired values, and then using this error signal to change the weights (or parameters) so that predictions get more accurate.
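The sketch below illustrates that cycle with plain NumPy on an invented toy problem (learning the OR function): forward pass, error signal, backpropagated gradient, weight update. The data, network size and learning rate are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy labeled data: learn the OR function (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 1))            # connection weights
b = np.zeros((1,))
lr = 0.5                               # learning rate

for step in range(1000):
    pred = sigmoid(X @ W + b)          # forward pass: predictions
    error = pred - y                   # error signal vs. the desired values
    grad_W = X.T @ (error * pred * (1 - pred))   # backpropagate the error
    grad_b = np.sum(error * pred * (1 - pred))
    W -= lr * grad_W                   # update the connection weights
    b -= lr * grad_b

print(sigmoid(X @ W + b).round(2))     # predictions move close to y
```
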
  22. Deep Learning - Basics: Gradient Descent. Gradient descent finds the (local) minimum of the cost function (used to calculate the output error) and is used to adjust the weights.
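In its simplest form the update rule is just "take a step against the gradient of the cost". A minimal sketch on an invented one-dimensional cost function C(w) = (w - 3)^2:

```python
# Gradient descent on a toy cost function C(w) = (w - 3)^2,
# whose minimum is at w = 3 (cost function chosen purely for illustration).
def cost_gradient(w):
    return 2.0 * (w - 3.0)      # dC/dw

w = 0.0                         # initial weight
lr = 0.1                        # learning rate (step size)
for _ in range(100):
    w -= lr * cost_gradient(w)  # move against the gradient
print(w)                        # converges towards 3.0
```
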
  23. Deep Learning - Basics: Convolutional Neural Nets (CNN). Convolutional Neural Networks learn a complex representation of visual data using vast amounts of data. They are inspired by the human visual system and learn multiple layers of transformations, which are applied on top of each other to extract a progressively more sophisticated representation of the input. Every layer of a CNN takes a 3D volume of numbers and outputs a 3D volume of numbers. E.g. an image is a 224×224×3 (RGB) volume and will be transformed into a 1×1000 vector of class probabilities.
  24. Deep Learning - Basics: Convolutional Neural Nets (CNN). A convolution layer is a feature detector that automatically learns to filter out unneeded information from an input by using convolution kernels. Pooling layers compute the max or average value of a particular feature over a region of the input data (downsizing the input images). This also helps to detect features regardless of where they appear in the input and reduces memory size.
  25. Deep CNN for Image Classification Try out a live demo

    at http://demo.caffe.berkeleyvision.org/
  26. CNN

  27. Convolutional Neural Networks (ConvNets or CNNs) are a category of neural networks that have proven very effective in areas such as image recognition and classification. ConvNets have been successful in identifying faces, objects and traffic signs, apart from powering vision in robots and self-driving cars.
  28. There are four main operations in a ConvNet: • Convolution • Non-Linearity (ReLU) • Pooling • Classification (Fully Connected Layer)
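Those four operations map one-to-one onto layers in a small Keras model; the sketch below is illustrative only, with made-up input size, filter counts and class count.

```python
# Convolution -> ReLU -> Pooling -> fully connected classification,
# as a minimal Keras model (all sizes are illustrative).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu',   # convolution + non-linearity
           input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=(2, 2)),         # pooling
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),          # fully connected layer
    Dense(10, activation='softmax'),        # classification
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```
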
  29. A pooling layer is frequently used in convolutional neural networks with the purpose of progressively reducing the spatial size of the representation, which reduces the number of features and the computational complexity of the network. The most commonly used pooling layer is the MAXPOOL layer, which is provided by all deep learning libraries. Basically, a 2 x 2 maxpool causes a 2 by 2 filter to traverse the entire matrix and pick the largest element from each window to be included in the next representation map.
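What "pick the largest element from each 2 x 2 window" means can be shown directly in NumPy; the input feature map below is made up.

```python
import numpy as np

def maxpool_2x2(feature_map):
    """Take the maximum of each non-overlapping 2x2 window."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]   # drop odd rows/cols if any
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 3, 2],
                 [2, 6, 1, 1]])
print(maxpool_2x2(fmap))   # [[4 5]
                           #  [6 3]]
```
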

  31. Why do we need RNNs? The limitations of (convolutional) neural networks: they rely on the assumption of independence among the (training and test) examples, so after each data point is processed the entire state of the network is lost, and they rely on examples being vectors of fixed length. We need to model data with temporal or sequential structure and varying lengths of inputs and outputs: frames from video, snippets of audio, words pulled from sentences.
  32. Deep Learning - Basics Recurrent Neural Nets (RNN) RNNs are

    general computers which can learn algorithms to map input sequences to output sequences (flexible-sized vectors). The output vector’s contents are influenced by the entire history of inputs. State-of-the-art results in time series prediction, adaptive robotics, handwriting recognition, image classification, speech recognition, stock market prediction, and other sequence learning problems. Everything can be processed sequentially.
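The defining computation of a plain (vanilla) RNN is a single recurrence that mixes the current input with the previous hidden state; the NumPy sketch below uses invented sizes and random weights, and the final state depends on the entire input history.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8          # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrence: the new state depends on the input and the previous state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)                    # initial state
sequence = rng.normal(size=(5, input_size))  # a sequence of 5 input vectors
for x_t in sequence:
    h = rnn_step(x_t, h)                     # the state accumulates the history
print(h.shape)                               # (8,)
```
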
  33. (1) Vanilla mode of processing without an RNN, from fixed-sized input to fixed-sized output (e.g. image classification). (2) Sequence output (e.g. image captioning takes an image and outputs a sentence of words). (3) Sequence input (e.g. sentiment analysis where a given sentence is classified as expressing positive or negative sentiment). (4) Sequence input and sequence output (e.g. machine translation: an RNN reads a sentence in English and then outputs a sentence in French). (5) Synced sequence input and output (e.g. video classification where we wish to label each frame of the video). Notice that in every case there are no pre-specified constraints on the lengths of the sequences, because the recurrent transformation (green) is fixed and can be applied as many times as we like.
  34. This chain-like nature reveals that recurrent neural networks are intimately related to sequences and lists. They are the natural neural network architecture to use for such data. In the last few years, there has been incredible success applying RNNs to a variety of problems: speech recognition, language modeling, translation, image captioning.
  35. How to solve the vanishing gradient problem? Proper initialization of the weight matrices can reduce the effect of vanishing gradients. Use ReLU instead of tanh or sigmoid activation functions: the ReLU derivative is a constant of either 0 or 1, so it is not likely to suffer from vanishing gradients. Use Long Short-Term Memory or Gated Recurrent Unit architectures (LSTM will be introduced later).
  36. Deep Learning - Basics: Long Short-Term Memory RNN (LSTM). A Long Short-Term Memory (LSTM) network is a particular type of recurrent network that works slightly better in practice, owing to its more powerful update equation and some appealing backpropagation dynamics. The LSTM units give the network memory cells with read, write and reset operations. During training, the network can learn when it should remember data and when it should throw it away. Well-suited to learn from experience to classify, process and predict time series when there are very long time lags of unknown size between important events. Source
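As a rough sketch of how such memory cells are used in practice, here is a minimal Keras LSTM classifier for token sequences (e.g. sentiment); the vocabulary size, embedding dimension and unit count are placeholders, not values from the slides.

```python
# Minimal LSTM sequence classifier in Keras (all sizes are placeholders).
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(input_dim=10000, output_dim=64),  # token ids -> dense vectors
    LSTM(128),                                  # memory cells read the sequence
    Dense(1, activation='sigmoid'),             # e.g. positive vs. negative
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```
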
  37. Machine Translation. In machine translation, the input is a sequence of words in the source language, and the output is a sequence of words in the target language. Encoder-decoder architecture for machine translation: the encoder is an RNN that encodes the input sentence into a hidden state (feature); the decoder is an RNN that takes the hidden state of the source-language sentence as input and outputs the translated sentence.
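A compact sketch of that encoder-decoder wiring, using the Keras functional API; the vocabulary sizes and dimensions are invented, and data preparation and inference-time decoding are omitted.

```python
# Encoder-decoder (seq2seq) sketch for translation; sizes are illustrative.
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

src_vocab, tgt_vocab, dim = 8000, 8000, 256

# Encoder: read the source sentence and keep only its final hidden state.
enc_inputs = Input(shape=(None,))
enc_emb = Embedding(src_vocab, dim)(enc_inputs)
_, state_h, state_c = LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate the target sentence, initialised with the encoder state.
dec_inputs = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, dim)(dec_inputs)
dec_seq, _, _ = LSTM(dim, return_sequences=True,
                     return_state=True)(dec_emb,
                                        initial_state=[state_h, state_c])
dec_outputs = Dense(tgt_vocab, activation='softmax')(dec_seq)

model = Model([enc_inputs, dec_inputs], dec_outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```
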
  38. http://visualqa.csail.mit.edu/ VQA: Given an image and a natural language question

    about the image, the task is to provide an accurate natural language answer Visual Question Answering (VQA) Picture from (Antol et al., 2015)
  39. Deep Learning - Basics: Natural Language Processing – Embeddings. Embeddings are used to turn textual data (words, sentences, paragraphs) into high-dimensional vector representations and group it together with semantically similar data in a vector space. Thereby, computers can detect similarities mathematically. Source
  40. Visual Question Answering. The output is conditioned on both image and textual inputs. A CNN is used to encode the image and an RNN is used to encode the sentence.
  41. Deep Learning - Basics: Natural Language Processing – Word2Vec. Word2Vec is an unsupervised learning algorithm for obtaining vector representations of words. These vectors are trained for a specific domain on a very large textual data set. GloVe is a better-performing alternative. It detects similarities mathematically by grouping the vectors of similar words together. All it needs is word co-occurrence in the given corpus. Source
  42. Deep Learning - Basics: Natural Language Processing – Word2Vec. Woman – Man ≈ Aunt – Uncle; King – Male + Female ≈ Queen; Human – Animal ≈ Ethics. Source
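With trained word embeddings, such analogies reduce to vector arithmetic. A hedged sketch using the gensim library (not listed in the deck's resources); the toy corpus is far too small to produce meaningful vectors, so in practice you would train on a large corpus or load pretrained embeddings.

```python
# Word-vector analogy sketch with gensim (illustrative only: real analogies
# require vectors trained on a large corpus, or pretrained embeddings).
from gensim.models import Word2Vec

sentences = [["the", "king", "rules", "the", "kingdom"],
             ["the", "queen", "rules", "the", "kingdom"],
             ["a", "man", "and", "a", "woman"]]
model = Word2Vec(sentences, min_count=1)

# king - man + woman ~ queen (only meaningful with a large training corpus)
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))
```
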
  43. Deep Learning - Basics: Usage Requirements. A large data set of good quality (input-output mappings); measurable and describable goals (define the cost); enough computing power (e.g. an AWS GPU instance). Excels in tasks where the basic unit (pixel, word) has very little meaning in itself, but the combination of such units has a useful meaning.
  44. Things we want to do with data: label images (Images), recognize speech (Audio), search the web (Text).
  45. How is computer perception done? Images: low-level vision features -> recognition and object detection. Audio: low-level audio features -> speaker identification and audio classification. Helicopter state: low-level state features -> action (helicopter control).
  46. Deep Learning for Games: DeepMind AlphaGo. History is made: Google's AlphaGo wins the match against Go champion Lee Sedol. Source
  47. Deep Learning for Games: DeepMind AlphaGo. AlphaGo could learn the game by examining thousands of human Go moves, and then it could master the game by playing itself over and over and over again. The result is a system of unprecedented beauty. Source
  48. Deep Learning in Computer Vision: Image Captioning. The Neural Image Caption Generator generates fitting natural-language captions based only on the pixels, by combining a vision CNN and a language-generating RNN. Examples: “A man flying through the air while riding a skateboard”, “Two pizzas sitting on top of a stove top oven”, “A close up of a child holding a stuffed animal”.
  49. Deep Learning in Computer Vision: Image Compression. Image compression with a Residual Gated Recurrent Unit (Residual GRU). Left: original (1419 KB PNG), center: JPEG (33 KB), right: Residual GRU (24 KB), i.e. about 25% smaller for comparable image quality. Source
  50. Deep Learning in Computer Vision Image Localization PlaNet is able

    to determine the location of almost any image with superhuman ability.
  51. Deep3D can automatically convert images/video from 2D to 3D with Convolutional Neural Networks. It learns to infer 3D representations of the world based on a training set of 3D movies. Source
  52. Deep Learning in Computer Vision: Image Sharpening. A DCGAN architecture to upscale and sharpen an image with features that are plausible based on the dataset that was used to train the neural net. Source
  53. Deep Learning in Computer Vision Image Completion Image completion with

    deep convolutional generative adversarial networks (DCGAN). The centers of these images are being automatically generated. Source
  54. Deep Learning in Computer Vision: Image Transformation – Adding Features. Performs high-level semantic transformations on images like “make older/younger”, “make bespectacled”, “add smile”. Source
  55. Deep Learning in Computer Vision Image Colorization Given a grayscale

    photograph as input, this Convolutional Neural Network tackles the problem of hallucinating a plausible color version of the photograph. Source
  56. Deep Learning in Computer Vision: Image-to-Image Translation. Image-to-Image Translation with Conditional Adversarial Nets. This approach can be applied as a generic solution to any image-to-image translation problem, such as synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images. Demo Source
  57. Deep Learning in Computer Vision: Video Sequence Prediction. PredNet is a deep convolutional recurrent neural network that predicts the future frames in a video sequence. These networks are able to robustly learn to predict the movement of synthetic (rendered) objects. Source
  58. Deep Learning in Computer Vision Image Generation – Bedrooms These

    neural networks are learning what the visual world looks like! Source
  59. Deep Learning in Computer Vision LipNet - Sentence-level Lipreading LipNet

    achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.
  60. Deep Learning in Computer Vision: Image Generation – From Descriptions. A deep architecture and GAN formulation to translate visual concepts from characters to pixels, demonstrating the capability of the model to generate plausible images of birds and flowers from detailed text descriptions. Source
  61. Deep Learning in Computer Vision: Image Generation – Handwriting. This LSTM recurrent neural network is able to generate highly realistic cursive handwriting in a wide variety of styles, simply by predicting one data point at a time. Source
  62. Deep Learning in NLP: Syntax Parsing. SyntaxNet (Parsey McParseface) tags each word with a part-of-speech tag and determines the syntactic relationships between words in the sentence, with 94% accuracy compared to human performance at 96%. Source
  63. Deep Learning in NLP: Generating Text. To train the RNN, insert characters sequentially and predict the probabilities of the next letter. Backpropagate the error and update the RNN's weights to increase the confidence of the correct letter (green) and decrease the confidence of all other letters (red). Trained on structured Wikipedia markdown, the network learns to spell English words completely from scratch and to copy general syntactic structures. Source
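A compact sketch of such a character-level setup in Keras, on an invented toy corpus; a real experiment would use a large text corpus, longer training and a sampling loop for generation.

```python
# Character-level next-letter prediction in Keras (toy corpus, illustrative only).
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

text = "hello deep learning " * 50            # toy corpus
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 10

# Build (sequence of one-hot characters) -> (next character) training pairs.
X = np.zeros((len(text) - seq_len, seq_len, len(chars)))
y = np.zeros((len(text) - seq_len, len(chars)))
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, idx[c]] = 1
    y[i, idx[text[i + seq_len]]] = 1

model = Sequential([
    LSTM(64, input_shape=(seq_len, len(chars))),
    Dense(len(chars), activation='softmax'),  # probabilities of the next letter
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, y, epochs=5, verbose=0)          # backpropagate the prediction error
```
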
  64. Deep Learning in NLP: Neural Machine Translation. Google's Neural Machine Translation system, based on a deep LSTM seq2seq architecture, reduces translation errors by 55% to 85% compared to traditional approaches. Source
  65. Deep Learning in NLP Neural Conversational Model Neural Chatbot: Predicts

    the next sentence given the previous sentences in a conversation. The LSTM remembers facts, understands contexts and performs common sense reasoning in the trained domain. Human: what is the purpose of life ? Machine: to serve the greater good . Human:what is the purpose of living ? Machine: to live forever Human:what is the purpose of existence? Machine:to find out what happens when we get to the planet earth Human:where are you now ? . Machine:i ’m in the middle of nowhere . Human: what is the purpose of dying ? Machine:to have a life Human:what is the purpose of being intelligent ? Machine:to find out what it is. Human:what is the purpose of emotions ? Machine:I don ’t know . Philosophical Q&A – Network trained with OpenSubtitles (movie subtitles) Source
  66. Deep Learning in Medicine: Skin Cancer Diagnoses. The CNN performed just as well as almost two dozen veteran dermatologists in deciding whether a lesion needed further medical attention. Source
  67. Deep Learning in Medicine: Detection of Diabetic Eye Disease. Their deep learning algorithm performed better than the median board-certified ophthalmologist in assessing signs of diabetic retinopathy. Source
  68. Deep Learning in Science: Mapping Poverty. Combining satellite imagery and machine learning to predict poverty: a deep-learning algorithm that can recognize signs of poverty in satellite images, such as the condition of roads, by sorting through a million images to accurately identify economic conditions in five African countries. Source
  69. Deep Learning in Google Products. RankBrain (Search): within a few months, RankBrain became the third-most important signal contributing to search results. Speech Recognition (Google Assistant): 30% reduction in word error rate for English, the biggest single improvement in 20 years of speech research. Photo Search (Google Photos): an error rate of just 5%, which is as good as humans performing the same task; also, superhuman performance in face recognition.
  70. Outlook: Goal-based AI. Traditional programming: data + program go into the computer, which produces output. Machine learning: data + output go into the computer, which produces a program. Goal-based AI: a goal goes into the computer, which produces a program.
  71. Outlook: The Enabler. Many of the biggest problems facing humanity today, like curing diseases or addressing climate change, would be vastly easier with the help of AI. The Big Bang for self-driving cars (10-15 years): fully autonomous taxi systems will change the paradigm of needing to own a car. AI will fuel a medical revolution (5-10 years) by enabling far more efficient drug discovery, diagnoses and research.
  72. Takeaways. Machines that learn to represent the world from experience. Deep Learning is no magic! Just statistics (matrix multiplications) in a black box, but exceptionally effective at learning patterns. Transitioning from research to consumer products: it will make the tools you use every day work better, faster and smarter.
  73. they work!* - *for some problems - *when there is

    a lot of labeled data - *and the underlying function is complex but not noisy
  74. “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”
  75. Resources. References: http://cs231n.github.io • http://cs224d.stanford.edu • http://colah.github.io • https://www.reddit.com/r/MachineLearning/ • http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/ Software libraries: Keras (good place to start): http://keras.io • Lasagne: https://github.com/Lasagne/Lasagne • TensorFlow: https://www.tensorflow.org • Theano: http://deeplearning.net/software/theano/