ML Session n°5

Adrien Couque

April 19, 2017

Transcript

  1. ML: advanced neural networks (April 2017)

  2. Recap

  3. Artificial neuron: Perceptron

  4. Artificial neuron: activation function
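
    As an illustrative sketch (not from the slides): a perceptron-style neuron computes a weighted sum of its inputs and passes it through an activation function, here a sigmoid.

    import numpy as np

    def sigmoid(z):
        # squashes the weighted sum into (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(inputs, weights, bias):
        # weighted sum of the inputs, then the activation function
        return sigmoid(np.dot(weights, inputs) + bias)

    # e.g. neuron(np.array([1.0, 0.5]), np.array([0.4, -0.6]), 0.1)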

  5. Artificial neuron -> Artificial neural network

  6. Teaching a neural net
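
    A minimal sketch of the training loop behind "teaching": run an example forward, measure the error, and move the weights a small step down the gradient. The example values and learning rate are made up for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # one made-up training example
    x = np.array([0.5, -1.0])
    target = 1.0

    w = np.zeros(2)
    b = 0.0
    learning_rate = 0.1

    for _ in range(100):
        y = sigmoid(np.dot(w, x) + b)        # forward pass
        grad_z = (y - target) * y * (1 - y)  # chain rule: squared loss through the sigmoid
        w -= learning_rate * grad_z * x      # step the weights down the gradient
        b -= learning_rate * grad_z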

  7. Demo: Tensorflow Playground

  8. Demo: layered network, still the hard way (97%)

  9. Dropout

  10. Overfitting

  11. Dropout

  12. Dropout Result
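
    A minimal sketch of (inverted) dropout, assuming a NumPy array of activations: during training each unit is kept with probability keep_prob and the survivors are rescaled; at test time the layer is a no-op.

    import numpy as np

    def dropout(activations, keep_prob=0.5, training=True):
        # at test time the layer passes activations through unchanged
        if not training:
            return activations
        # during training, zero each unit with probability 1 - keep_prob,
        # and rescale the survivors so the expected activation stays the same
        mask = np.random.rand(*activations.shape) < keep_prob
        return activations * mask / keep_prob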

  13. Convolutional Neural Networks

  14. Fully connected vs Convolution

  15. Convolutional layer: different input sizes

  16. Inside a convolutional neuron: kernels
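
    A minimal sketch of the kernel computation (illustrative, not from the slides): the same small weight matrix is slid over the image, producing one output value per position, which is why the layer works for different input sizes.

    import numpy as np

    def convolve2d(image, kernel):
        # slide the kernel over every position where it fully fits ("valid" mode)
        kh, kw = kernel.shape
        out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # e.g. a classic 3x3 edge-detection kernel
    edges = convolve2d(np.random.rand(28, 28),
                       np.array([[-1, -1, -1],
                                 [-1,  8, -1],
                                 [-1, -1, -1]]))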

  17. Stacked computational layers

  18. “Deep” layers

  19. 2D and multiple layers

  20. Pooling

  21. Max pooling
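
    A minimal sketch of 2x2 max pooling, assuming a NumPy feature map: each non-overlapping block is reduced to its strongest response, halving the spatial resolution.

    import numpy as np

    def max_pool(feature_map, size=2):
        # keep only the maximum in each non-overlapping size x size block
        h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = feature_map[i * size:(i + 1) * size,
                                        j * size:(j + 1) * size].max()
        return out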

  22. Max pooling in CNN

  23. Pooling

  24. Multiple layers

  25. Convolutional classification (2012)

  26. Convolutional architecture

  27. Convolutional network -> fully connected

  28. Convolutional success (2012)

  29. (untitled)

  30. Detecting the input

  31. (untitled)

  32. Sliding a window

  33. Sliding window with different scales

  34. Sliding window: result
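
    A minimal sketch of the sliding-window idea (the window size and stride here are illustrative): crop every position, run the classifier on each crop, and repeat on resized copies of the image to cover the different scales.

    def sliding_windows(image, window=224, stride=32):
        # yield the top-left corner and the pixels of every crop to classify;
        # calling this on resized copies of the image gives the other scales
        height, width = image.shape[:2]
        for top in range(0, height - window + 1, stride):
            for left in range(0, width - window + 1, stride):
                yield top, left, image[top:top + window, left:left + window]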

  35. Content to detect in the window for text

  36. Sliding window on text

  37. Detecting individual characters

  38. Cheating

  39. Image classification through caffe

    image = '/tmp/kitten.png'
    # preprocess the kitten and resize it to 224x224 pixels
    net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(image))
    # make a prediction from the kitten pixels
    out = net.forward()
    # extract the most likely prediction
    print("Predicted class is #{}.".format(out['prob'][0].argmax()))
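
    The snippet above assumes a network and preprocessor that were set up earlier; a sketch of that setup with Caffe's Python API (the model file names are placeholders):

    import caffe

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))     # HxWxC image -> CxHxW blob
    transformer.set_raw_scale('data', 255)           # [0, 1] floats -> [0, 255]
    transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR, as Caffe models expect
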
  40. Mistakes

  41. Finding the gradient

    def compute_gradient(image, intended_outcome):
        # Put the image into the network and make the prediction
        predict(image)
        # Get an empty set of probabilities
        probs = np.zeros_like(net.blobs['prob'].data)
        # Set the probability for our intended outcome to 1
        probs[0][intended_outcome] = 1
        # Do backpropagation to calculate the gradient for that outcome
        # and the image we put in
        gradient = net.backward(prob=probs)
        return gradient['data'].copy()

  42. Using the gradient to tweak the image
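
    A sketch of one way to use that gradient (the step size, step count, and array shapes are assumptions, not from the slides): repeatedly nudge the pixels in the direction that raises the network's probability for the intended class, using compute_gradient from the previous slide.

    import numpy as np

    def tweak_image(image, intended_outcome, step=1.0, n_steps=10):
        # push the pixels, a small step at a time, toward the intended class
        adversarial = image.copy()
        for _ in range(n_steps):
            gradient = compute_gradient(adversarial, intended_outcome)
            # follow the sign of the gradient at each pixel
            adversarial += step * np.sign(gradient[0])
        return adversarial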

  43. Adding more gradients

  44. Transforming a panda into a vulture

  45. Panda gradient

  46. Next: Recurrent Neural Networks, Unsupervised Learning, Reinforcement Learning (April 2017)