Over Mt. Stupid to Deep Learning

My slides from my talk at Code.Talks 2018 about Machine Learning, Deep Learning, and my journey through these topics. A video of the talk may follow later.


Carsten Sandtner

October 18, 2018

Transcript

  1. OVER MT. STUPID TO DEEP LEARNING… Carsten Sandtner \\ @casarock

    Photo by Archie Binamira from Pexels https://www.pexels.com/photo/man-wearing-white-shirt-brown-shorts-and-green-backpack-standing-on-hill-672358/
  2. ABOUT://ME My name is Carsten and I’m Technical Director at

    mediaman GmbH in Mayence (Mainz). I’m a moz://a Techspeaker and I love the open web and everything about open standards for the web! I’m tweeting as @casarock 
 
 … and I asked myself: What is this AI thing?
  3. None
  4. „The robots are going to come and kill us all but not before they take over all of our jobs“ – Amy Webb, SXSW 2018
  5. „The artificial intelligence ecosystem — flooded with capital, hungry for commercial applications, and yet polluted with widespread, misplaced optimism and fear — will continue to swell“ – Amy Webb, SXSW 2018
  6. „From mobile first to AI first“ – Google CEO Sundar Pichai, Google I/O 2017
  7. [Chart: the Dunning-Kruger curve, Confidence over Wisdom: Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]
  8. Photo by Markus Spiske temporausch.com from Pexels https://www.pexels.com/photo/aerial-photography-of-white-mountains-987573/

  9. BASICS!

  10. None
  11. SOME HISTORY!

  12. HISTORY 1950: Neural Networks! ~1980: Machine Learning. Today: Deep Learning

    Photo by Dick Thomas Johnson https://www.flickr.com/photos/31029865@N06/14810867549/
  13. [Diagram: nested circles, Deep Learning inside Machine Learning inside AI]

  14. MACHINE LEARNING

  15. MACHINE LEARNING Statistics! Correlation, regression. a…z => weights, Const => offset/bias. Y = Const + a*X1 + b*X2 + c*X3 + ... + z*Xn
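
     A minimal sketch of this weighted-sum model in Python; the weights, bias, and inputs below are made-up illustration values:

        import numpy as np

        # Made-up example: three inputs with weights a, b, c and a bias (Const)
        weights = np.array([0.4, -1.2, 3.0])
        bias = 0.5
        x = np.array([1.0, 2.0, 0.5])

        # Y = Const + a*X1 + b*X2 + c*X3
        y = bias + np.dot(weights, x)
        print(y)
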
  16. [Diagram: a single neuron] Input x0, weight w0, offset b, activation function f, output y0: z = ∑ wi*xi + b, y = f(z)
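
     The whole neuron fits into a few lines of code; a sketch, with sigmoid picked here as the activation function:

        import math

        def neuron(x, w, b):
            # z = weighted inputs plus offset, y = activation of z
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

        print(neuron([1.0, 2.0], [0.5, -0.3], 0.1))
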
  17. ACTIVATION FUNCTIONS [Plots of both functions] Sigmoid: sig(z) = 1/(1 + e^(-z)). ReLU (Rectified Linear Unit): R(z) = max(0, z)
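
     Both functions are one-liners in code; a quick sketch:

        import numpy as np

        def sigmoid(z):
            # squashes any input into the range (0, 1)
            return 1.0 / (1.0 + np.exp(-z))

        def relu(z):
            # keeps positive values, clips negatives to 0
            return np.maximum(0, z)

        print(sigmoid(0.0), relu(-3.0), relu(3.0))  # 0.5 0.0 3.0
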
  18. EXAMPLE: PREDICTIVE MAINTENANCE

  19. SIMPLE EXAMPLE Machine sensors x1, x2 and x3; weights a, b and c. Y = Const + a*x1 + b*x2 + c*x3. Const: the value of Y when x1, x2 and x3 are 0. Training data: 100,000 records. Keep 25,000 for validation, train with 75,000 -> vary the weights until the result (Y) is good, then verify your model with the 25,000 validation records.
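
     A sketch of that split-and-train workflow with scikit-learn; the data set here is randomly generated stand-in data, not real sensor readings:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        # Stand-in data: 100,000 rows of three sensor readings plus a target
        X = np.random.rand(100_000, 3)
        y = 0.5 + X @ np.array([2.0, -1.0, 3.0]) + np.random.normal(0, 0.1, 100_000)

        # Keep 25,000 rows for validation, train with the remaining 75,000
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25)

        model = LinearRegression().fit(X_train, y_train)
        print(model.score(X_val, y_val))  # verify on data the model never saw
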
  20. None
  21. HOW MACHINES LEARN

  22. HOW MACHINES LEARN Supervised Learning Unsupervised Learning Reinforcement Learning

  23. SUPERVISED Useful for prediction and classification. Popular use case: image recognition. Needs classified (labelled) training sets.
  24. UNSUPERVISED Useful for segmentation and clustering. Clustered data needs revision by a human. Good for dimensionality reduction.
  25. REINFORCEMENT Has no predefined result for the training data. Uses rewards for good results - if a result isn't good, never do it again, bad boy! Example: learning how to play a game just by analysing every pixel. Popular example: AlphaGo
  26. MY JOURNEY! [The Dunning-Kruger curve again: Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]
  27. Photo by Paula May on Unsplash https://unsplash.com/photos/AJqeO_-ifx0

  28. DEEP LEARNING (DL)

  29. NEURAL NETWORKS

  30. ME. [The Dunning-Kruger curve again]
  31. Photo by Mario Álvarez on Unsplash https://unsplash.com/photos/M1YdS0g8SRA

  32. NEURON / LAYER [Diagram: a layer of neurons; inputs x0…xn with weights w0…wn, offset b, outputs y0…yn] Per neuron: z = ∑ wi*xi + b, y = f(z)
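
     A whole layer is the same computation done with a weight matrix; a minimal sketch (the sizes are arbitrary):

        import numpy as np

        def layer(x, W, b):
            # each row of W holds the weights of one neuron in the layer
            z = W @ x + b             # z_i = sum_j W_ij * x_j + b_i
            return np.maximum(0, z)   # ReLU as the activation function

        x = np.array([1.0, 2.0])      # two inputs
        W = np.random.randn(3, 2)     # three neurons, two weights each
        b = np.zeros(3)               # offsets
        print(layer(x, W, b))         # three outputs
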
  33. [Diagram: a network built from layers: input layer, hidden layers, output layer]

  34. [The same network diagram] Forward pass: init with random weights and offsets.
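
     A forward pass through such a stack, as a sketch; the layer sizes are illustration values:

        import numpy as np

        rng = np.random.default_rng(0)
        sizes = [3, 4, 4, 2]  # input layer, two hidden layers, output layer

        # Init with random weights and zero offsets
        weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
        biases = [np.zeros(m) for m in sizes[1:]]

        def forward(x):
            # push the input through every layer in turn
            for W, b in zip(weights, biases):
                x = np.maximum(0, W @ x + b)  # ReLU activation
            return x

        print(forward(np.array([1.0, 0.5, -0.2])))
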
  35. DETERMINE THE ERROR

  36. ERROR FUNCTION f(x) = mx + b

  37. ERROR FUNCTION f(x) = mx + b. Mean Squared Error = Avg(Error²)
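
     MSE in code, a one-line sketch:

        import numpy as np

        def mse(predicted, actual):
            # the average of the squared errors
            return np.mean((predicted - actual) ** 2)

        print(mse(np.array([2.5, 0.0]), np.array([3.0, -0.5])))  # 0.25
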
  38. [Network diagram] Back propagation: propagate the error back into every neuron.
  39. BACKPROPAGATION For every neuron, from output to input: check the neuron's share in the error (using its weights) and fine-tune the weights in very small steps (e.g. 0.001). This step size is a hyperparameter: the learning rate! It applies to the whole network. Every weight adjusted? Start a new run - this is called a new epoch (the next hyperparameter!).
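
     The core of this loop, sketched for a single weight on toy data; learning rate and epochs are the two hyperparameters from the slide:

        # Toy data: the true relationship is y = 2*x
        xs = [1.0, 2.0, 3.0]
        ys = [2.0, 4.0, 6.0]

        learning_rate = 0.01  # hyperparameter: size of each tuning step
        w = 0.0               # init with an arbitrary weight

        for epoch in range(100):  # each full pass over the data is one epoch
            # gradient of the mean squared error with respect to w
            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w -= learning_rate * grad  # fine-tune in a very small step

        print(w)  # converges towards 2.0
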
  40. GRADIENT DESCENT

  41. OUR GOAL: CONVERGENCE

  42. CONVERGENCE Always keep an eye on your error. Is it minimal? Stop learning! Validate your model with other data (not the training data!) and try to avoid overfitting!
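
     One way to act on that advice is an early-stopping check: watch the error on the held-out validation data and stop once it no longer shrinks. A sketch on generated stand-in data:

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.random((1000, 3))
        y = X @ np.array([2.0, -1.0, 3.0]) + rng.normal(0, 0.1, 1000)

        X_train, y_train = X[:750], y[:750]  # learning data
        X_val, y_val = X[750:], y[750:]      # validation data

        w = np.zeros(3)
        best = float("inf")
        for epoch in range(10_000):
            grad = 2 * X_train.T @ (X_train @ w - y_train) / len(X_train)
            w -= 0.1 * grad
            val_err = np.mean((X_val @ w - y_val) ** 2)
            if val_err >= best:  # validation error stopped improving: stop
                break
            best = val_err

        print(epoch, best)
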
  43. ME. [The Dunning-Kruger curve again]
  44. Photo by Will Langenberg on Unsplash https://unsplash.com/photos/S9uZOfeYi1U

  45. CONVOLUTIONAL NEURAL NETWORKS

  46. CONVOLUTIONS

  47. CONVOLUTIONS

  48. CONVOLUTIONS We train this little thing!
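
     What a single convolution does, sketched by hand; in a CNN the 3x3 kernel below is exactly the "little thing" that gets trained:

        import numpy as np

        def convolve2d(image, kernel):
            # slide the kernel over the image: one weighted sum per position
            kh, kw = kernel.shape
            h = image.shape[0] - kh + 1
            w = image.shape[1] - kw + 1
            out = np.zeros((h, w))
            for i in range(h):
                for j in range(w):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        image = np.random.rand(5, 5)      # tiny grayscale "image"
        kernel = np.random.randn(3, 3)    # these 9 weights would be learned
        print(convolve2d(image, kernel))  # a 3x3 feature map
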

  49. CONVOLUTIONAL NEURAL NETWORK [Diagram: repeated Conv + ReLU blocks with Pool layers in between, ending in a fully connected (FC) layer with outputs such as Dog, Cat, Mouse]
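
     A sketch of such a stack in Keras; the layer sizes and the three example classes are illustrative choices, not from the talk:

        from tensorflow import keras
        from tensorflow.keras import layers

        # Conv -> ReLU -> Pool blocks, then a fully connected (FC) classifier
        model = keras.Sequential([
            layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),    # FC layer
            layers.Dense(3, activation="softmax"),  # e.g. dog, cat, mouse
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy")
        model.summary()
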
  50. Photo by Will Langenberg on Unsplash https://unsplash.com/photos/S9uZOfeYi1U

  51. ME. [The Dunning-Kruger curve again]
  52. Photo by Natalia_Kollegova https://pixabay.com/de/fata-morgana-das-gebirge-berge-2738131/

  53. THOUGHT ABOUT MY GOALS [The Dunning-Kruger curve again]
  54. WHAT IS IT SUITABLE FOR?

  55. None
  56. None
  57. None
  58. SOME BETTER EXAMPLES Categorising products in an online shop by their images. Using cognitive services for Natural Language Processing (NLP), voice- or text-based. Using a cloud-based AI service!
  59. AZURE COGNITIVE SERVICES https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
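
     A sketch of calling that service over REST; the region, API version, and key below are placeholders, so check the linked docs for the current endpoint:

        import requests

        # Placeholders: substitute your own region and subscription key
        endpoint = "https://westeurope.api.cognitive.microsoft.com/vision/v2.0/analyze"
        headers = {
            "Ocp-Apim-Subscription-Key": "<your-key>",
            "Content-Type": "application/json",
        }
        params = {"visualFeatures": "Categories,Description"}
        body = {"url": "https://example.com/product.jpg"}

        response = requests.post(endpoint, headers=headers, params=params, json=body)
        print(response.json())  # categories plus a generated caption
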

  60. „No matter what you do, you need sh*tloads of data!“
  61. EVERYTHING IS GREAT!

  62. DEPENDS…

  63. [Network diagram: input layer, hidden layers, output layer]

  64. WHO IS RESPONSIBLE?

  65. Photo by Katie Salerno https://www.pexels.com/photo/love-people-romance-engagement-18396/

  66. Photo by Cade Roberts on Unsplash https://unsplash.com/photos/H0Aud5lhupc

  67. Photo by Martin Shreder on Unsplash https://unsplash.com/photos/5Xwaj9gaR0g

  68. AI (ML/DL) IS AWESOME. USE IT WISELY. Thank you! @casarock