Slide 1

Slide 1 text

OVER MT. STUPID TO DEEP LEARNING… Carsten Sandtner \\ @casarock Photo by Archie Binamira from Pexels https://www.pexels.com/photo/man-wearing-white-shirt-brown-shorts-and-green-backpack-standing-on-hill-672358/

Slide 2

Slide 2 text

ABOUT://ME My name is Carsten and I’m Technical Director at mediaman GmbH in Mayence (Mainz). I’m a moz://a Techspeaker and I love the open web and everything about open standards for the web! I’m tweeting as @casarock 
 
 … and I asked myself: What is this AI thing?

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

„The robots are going to come and kill us all, but not before they take over all of our jobs.“ –Amy Webb, SXSW 2018

Slide 5

Slide 5 text

„The artificial intelligence ecosystem — flooded with capital, hungry for commercial applications, and yet polluted with widespread, misplaced optimism and fear — will continue to swell.“ –Amy Webb, SXSW 2018

Slide 6

Slide 6 text

„From mobile first to AI first“ –Google CEO Sundar Pichai, Google I/O 2017

Slide 7

Slide 7 text

[Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 8

Slide 8 text

Photo by Markus Spiske temporausch.com from Pexels https://www.pexels.com/photo/aerial-photography-of-white-mountains-987573/

Slide 9

Slide 9 text

BASICS!

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

SOME HISTORY!

Slide 12

Slide 12 text

HISTORY 1950: Neural networks! ~1980: Machine learning. Today: Deep learning. Photo by Dick Thomas Johnson https://www.flickr.com/photos/31029865@N06/14810867549/

Slide 13

Slide 13 text

AI Machine Learning Deep Learning

Slide 14

Slide 14 text

MACHINE LEARNING

Slide 15

Slide 15 text

MACHINE LEARNING Statistics! Correlation and regression.
Y = Const + aX1 + bX2 + cX3 + ... + zXn
a…z => weights, Const => offset/bias
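The weighted-sum formula above can be sketched in a few lines of Python (the weights and inputs below are made-up values, purely for illustration):

```python
# Linear model: Y = Const + a*X1 + b*X2 + ... + z*Xn (weighted sum plus offset/bias)
def predict(inputs, weights, const):
    """Return Const plus the sum of weight * input over all features."""
    return const + sum(w * x for w, x in zip(weights, inputs))

# Made-up example: three inputs, three weights, one offset
y = predict([1.0, 2.0, 3.0], weights=[0.5, -0.2, 0.1], const=4.0)
# 4.0 + 0.5 - 0.4 + 0.3 = 4.4
```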

Slide 16

Slide 16 text

[Diagram of a single neuron] Input x0, weight w0, offset b → z = w0·x0 + b → activation function y = f(z) → output y0
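The single neuron on this slide maps directly to code. A minimal sketch (the identity activation and the numbers are illustrative choices):

```python
def neuron(inputs, weights, bias, activation):
    """One neuron: z = sum(w_i * x_i) + b, then y = f(z)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# Identity activation just passes z through
y = neuron([1.0, 2.0], weights=[0.5, 0.25], bias=0.1, activation=lambda z: z)
# 0.5 + 0.5 + 0.1 = 1.1
```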

Slide 17

Slide 17 text

ACTIVATION FUNCTIONS Sigmoid: sig(z) = 1/(1+e^-z). ReLU (Rectified Linear Unit): R(z) = max(0, z). [Plots of both functions]
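Both activation functions from the slide, written out directly:

```python
import math

def sigmoid(z):
    """sig(z) = 1 / (1 + e^-z); squashes any z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """R(z) = max(0, z); passes positive values, zeroes out negatives."""
    return max(0.0, z)

print(sigmoid(0))   # 0.5
print(relu(-3.0))   # 0.0
print(relu(2.5))    # 2.5
```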

Slide 18

Slide 18 text

EXAMPLE: PREDICTIVE MAINTENANCE

Slide 19

Slide 19 text

SIMPLE EXAMPLE Machine with sensors x1, x2 and x3
Weights a, b and c: Y = Const + ax1 + bx2 + cx3 (Const: value when x1, x2 and x3 are 0)
Training data: 100,000 samples. Keep 25,000 for validation.
Train with 75,000 -> vary the weights until the result (Y) is ok.
Verify your model with the 25,000 validation samples.
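This recipe can be sketched end to end. Everything below is invented for illustration: a hypothetical "machine" with a known sensor formula, a scaled-down dataset (1,000 samples instead of 100,000, same 75/25 split), and plain stochastic gradient descent standing in for "vary the weights":

```python
import random

random.seed(42)

# Hypothetical machine: the true relation the model should recover
def true_machine(x1, x2, x3):
    return 5.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x3

# Generate a (scaled-down) dataset and split it: keep 25% for validation
data = []
for _ in range(1000):
    x = [random.uniform(0, 10) for _ in range(3)]
    data.append((x, true_machine(*x)))
train, validation = data[:750], data[750:]

# Vary the weights (simple gradient descent) until the result Y is ok
const, w = 0.0, [0.0, 0.0, 0.0]
lr = 0.001
for epoch in range(200):
    for x, y_true in train:
        y_pred = const + sum(wi * xi for wi, xi in zip(w, x))
        err = y_pred - y_true
        const -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

# Verify the model on the held-out validation samples
val_error = sum((const + sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
                for x, y in validation) / len(validation)
print(val_error)
```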

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

HOW MACHINES LEARN

Slide 22

Slide 22 text

HOW MACHINES LEARN Supervised Learning Unsupervised Learning Reinforcement Learning

Slide 23

Slide 23 text

SUPERVISED Useful for predictions and classifications. Popular use case: image recognition. Needs classified (labeled) training sets.

Slide 24

Slide 24 text

UNSUPERVISED Useful for segmentation and clustering. Clustered data needs revision by a human. Good for dimensionality reduction.

Slide 25

Slide 25 text

REINFORCEMENT Has no predefined result for the training data. Uses rewards for good results - if an action was bad, never do it again, bad boy! Example: learning how to play a game just by analysing every pixel. Popular example: AlphaGo

Slide 26

Slide 26 text

MY JOURNEY! [Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 27

Slide 27 text

Photo by Paula May on Unsplash https://unsplash.com/photos/AJqeO_-ifx0

Slide 28

Slide 28 text

DEEP LEARNING (DL)

Slide 29

Slide 29 text

NEURAL NETWORKS

Slide 30

Slide 30 text

ME. [Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 31

Slide 31 text

Photo by Mario Álvarez on Unsplash https://unsplash.com/photos/M1YdS0g8SRA

Slide 32

Slide 32 text

NEURON [Diagram: several neurons forming a LAYER] Inputs x0…xn with weights w0…wn and offset b: z = ∑ wixi + b, activation function y = f(z), outputs y0…yn

Slide 33

Slide 33 text

[Network diagram: Input Layer -> Hidden Layers -> Output Layer]

Slide 34

Slide 34 text

[Network diagram: Input Layer -> Hidden Layers -> Output Layer] Forward pass: init with random weights and offsets
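The forward pass with random initialization can be sketched like this. The layer sizes (2 inputs, 3 hidden neurons, 1 output) are an arbitrary illustrative choice:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def init_layer(n_inputs, n_neurons):
    """Init with random weights and offsets (biases), one pair per neuron."""
    return [([random.uniform(-1, 1) for _ in range(n_inputs)],
             random.uniform(-1, 1)) for _ in range(n_neurons)]

def forward(layer, inputs):
    """Forward pass through one layer: y = f(sum(w*x) + b) per neuron."""
    return [sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            for weights, bias in layer]

# 2 inputs -> 3 hidden neurons -> 1 output neuron
hidden = init_layer(2, 3)
output = init_layer(3, 1)
y = forward(output, forward(hidden, [0.5, -0.2]))
print(y)
```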

Slide 35

Slide 35 text

DETERMINE THE ERROR

Slide 36

Slide 36 text

ERROR FUNCTION f(x) = mx + b

Slide 37

Slide 37 text

ERROR FUNCTION f(x) = mx + b Mean Squared Error = Avg(Error²)
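The Mean Squared Error from the slide, as code (example values are made up):

```python
def mean_squared_error(predictions, targets):
    """MSE = average of the squared errors."""
    errors = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    return sum(errors) / len(errors)

print(mean_squared_error([2.0, 3.0], [1.0, 5.0]))  # (1 + 4) / 2 = 2.5
```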

Slide 38

Slide 38 text

[Network diagram with an error term at every neuron] Back propagation: propagate the error into every neuron

Slide 39

Slide 39 text

BACKPROPAGATION For every neuron, from output to input: check the neuron's share in the error (using its weights). Fine-tune the weights in very small steps (e.g. 0.001) - this step size is a hyperparameter: the learning rate! It applies to the whole network. Every weight adjusted? Start a new run - this is called a new epoch (the next hyperparameter!)
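The procedure can be hand-rolled for a tiny, hypothetical network: 1 input, 2 sigmoid hidden neurons, 1 linear output, trained on a single made-up sample. All sizes, values and the learning rate are illustrative, not from the slides:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny net: 1 input -> 2 hidden (sigmoid) -> 1 output (linear)
w1 = [random.uniform(-1, 1) for _ in range(2)]  # input -> hidden weights
b1 = [0.0, 0.0]                                 # hidden offsets
w2 = [random.uniform(-1, 1) for _ in range(2)]  # hidden -> output weights
b2 = 0.0                                        # output offset

lr = 0.1  # hyperparameter: the learning rate (a small step size)
x, target = 1.0, 0.25

errors = []
for epoch in range(500):  # each full pass is one epoch
    # forward pass
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(2)]
    y = sum(w2[i] * h[i] for i in range(2)) + b2
    err = y - target
    errors.append(err ** 2)
    # backpropagation: push the error back through every neuron
    for i in range(2):
        grad_w2 = err * h[i]                      # output weight's share
        grad_h = err * w2[i] * h[i] * (1 - h[i])  # hidden share, via weight and sigmoid slope
        w2[i] -= lr * grad_w2
        w1[i] -= lr * grad_h * x
        b1[i] -= lr * grad_h
    b2 -= lr * err

print(errors[0], errors[-1])
```

After 500 epochs the squared error has shrunk by orders of magnitude, which is exactly the "fine tune in very small steps, run after run" loop described above.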

Slide 40

Slide 40 text

GRADIENT DESCENT

Slide 41

Slide 41 text

OUR GOAL: CONVERGENCE

Slide 42

Slide 42 text

CONVERGENCE Always keep an eye on your error. Is it minimal? Stop learning! Validate your model with other data (not the training data!). Try to avoid overfitting!
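These checks amount to early stopping: track the validation error over the epochs and remember where it was minimal. A toy sketch with synthetic error curves (both curves are made up; in practice they come from training):

```python
# Synthetic curves: training error keeps shrinking,
# validation error is minimal at epoch 30, then overfitting sets in
train_err = [1.0 / (e + 1) for e in range(50)]
val_err = [(e - 30) ** 2 / 900 + 0.1 for e in range(50)]

best_epoch, best_val = 0, float("inf")
for epoch, v in enumerate(val_err):
    if v < best_val:
        best_epoch, best_val = epoch, v

print(best_epoch)  # 30 - stop here, even though train_err still falls
```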

Slide 43

Slide 43 text

ME. [Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 44

Slide 44 text

Photo by Will Langenberg on Unsplash https://unsplash.com/photos/S9uZOfeYi1U

Slide 45

Slide 45 text

CONVOLUTIONAL NEURAL NETWORKS

Slide 46

Slide 46 text

CONVOLUTIONS

Slide 47

Slide 47 text

CONVOLUTIONS

Slide 48

Slide 48 text

CONVOLUTIONS We train this little thing!
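The "little thing" we train is the kernel. A dependency-free sketch of a 2D convolution (strictly speaking cross-correlation, which is what most deep-learning frameworks compute); the image and the vertical-edge kernel are made up for illustration:

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image; each output is the sum of element-wise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = sum(image[i + di][j + dj] * kernel[di][dj]
                        for di in range(kh) for dj in range(kw))
            row.append(total)
        out.append(row)
    return out

# A vertical-edge kernel on a tiny image containing a vertical edge
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(convolve2d(image, kernel))  # the edge lights up everywhere: [[3, 3], [3, 3]]
```

In a CNN the kernel values are not hand-picked like this; they are the weights that training adjusts.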

Slide 49

Slide 49 text

CONVOLUTIONAL NEURAL NETWORK [Pipeline: repeated Conv -> ReLU -> Pool blocks, then a fully connected (FC) layer -> Dog / Cat / Mouse]

Slide 50

Slide 50 text

Photo by Will Langenberg on Unsplash https://unsplash.com/photos/S9uZOfeYi1U

Slide 51

Slide 51 text

ME. [Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 52

Slide 52 text

Photo by Natalia_Kollegova https://pixabay.com/de/fata-morgana-das-gebirge-berge-2738131/

Slide 53

Slide 53 text

THOUGHT ABOUT MY GOALS [Chart: Confidence vs. Wisdom - Mt. Stupid, Valley of Despair, Slope of Enlightenment, Plateau of Sustainability]

Slide 54

Slide 54 text

WHAT IS IT SUITABLE FOR?

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

SOME BETTER EXAMPLES Categorization of products in an online shop using their images. Using cognitive services for Natural Language Processing (NLP), voice- or text-based. Using a cloud-based AI service!

Slide 59

Slide 59 text

AZURE COGNITIVE SERVICES https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/

Slide 60

Slide 60 text

„ “ No matter what you do, you need sh*tloads of data!

Slide 61

Slide 61 text

EVERYTHING IS GREAT!

Slide 62

Slide 62 text

DEPENDS…

Slide 63

Slide 63 text

. . . . Input Layer Output Layer Hidden Layers

Slide 64

Slide 64 text

WHO IS RESPONSIBLE?

Slide 65

Slide 65 text

Photo by Katie Salerno https://www.pexels.com/photo/love-people-romance-engagement-18396/

Slide 66

Slide 66 text

Photo by Cade Roberts on Unsplash https://unsplash.com/photos/H0Aud5lhupc

Slide 67

Slide 67 text

Photo by Martin Shreder on Unsplash https://unsplash.com/photos/5Xwaj9gaR0g

Slide 68

Slide 68 text

AI (ML/DL) IS AWESOME. USE IT WISELY. Thank you! @casarock