Slide 1

Slide 1 text

AI & ML July 2016

Slide 2

Slide 2 text

History of AI

Slide 3

Slide 3 text

Minsky’s Multiplicity (1960)
Crucial parts for problem solving:
● Induction
● Planning
● Search, knowledge representation
● Pattern recognition
● Learning
Components needed to get to human-level AI

Slide 4

Slide 4 text

1960s: golden years
● SAINT (Symbolic Automatic Integrator): able to calculate integrals
● General Problem Solver: elementary logic and algebra (e.g. Tower of Hanoi)
● STUDENT: high school algebra word problems
● ELIZA: psychiatry conversation
● Micro-worlds
Minsky, 1970: "In from three to eight years we will have a machine with the general intelligence of an average human being."

Slide 5

Slide 5 text

1970s: first AI winter
1973: Lighthill report (UK): "utter failure of AI to achieve its grandiose objectives"
Example: "The spirit is willing, but the flesh is weak" -> Russian -> English: "The whisky is strong, but the meat is rotten"
Funding killed for a decade

Slide 6

Slide 6 text

1980s: AI revival
Rise of expert systems. Goal: mimic expert behavior
Doctors, scientists, accountants, …
Japan: Fifth Generation Computer project
Budget: $850 million

Slide 7

Slide 7 text

90s-00s: second AI winter
Expert systems:
- hard to develop
- harder to maintain
- can’t replicate across fields
Still can’t interact with the real world:
- impossible to acquire data
- all brain but no body ("Elephants Don't Play Chess")
Funding stopped again

Slide 8

Slide 8 text

2010s: emergence of Machine Learning
1999: neural networks used for handwritten digit recognition

Slide 9

Slide 9 text

2010s: emergence of Machine Learning
1999: neural networks used for handwritten digit recognition
2012: @Google, 16k processors, 10M YouTube videos, 1 week

Slide 10

Slide 10 text

2010s: emergence of Machine Learning
1999: neural networks used for handwritten digit recognition
2012: @Google, 16k processors, 10M YouTube videos, 1 week

Slide 11

Slide 11 text

2010s: emergence of Machine Learning
1999: neural networks used for handwritten digit recognition
2012: @Google, 16k processors, 10M YouTube videos, 1 week
Able to find cats

Slide 12

Slide 12 text

2010s: emergence of Machine Learning
● Self-driving cars
● Human interaction:
○ Handwriting
○ Speech
○ Natural language
● OCR
● Image recognition
● Information retrieval
● Artificial personal assistants
● Recommendation systems
● Drones
● Game playing
● ...

Slide 13

Slide 13 text

Classic Hype Cycle

Slide 14

Slide 14 text

Classic Hype Cycle

Slide 15

Slide 15 text

Hype Cycle 2015

Slide 16

Slide 16 text

Kondratiev Waves

Slide 17

Slide 17 text

Testing AI: Chess = "the pinnacle of human intelligence"
Deep Blue vs Kasparov:
1996: 1W, 2D, 3L
1997: 2W, 3D, 1L
This switched the canonical example of a game where humans outmatched machines to the ancient Chinese game of Go: simple rules but far more possible moves than chess, requiring more intuition and less susceptible to brute force.

Slide 18

Slide 18 text

Testing AI: Go
AlphaGo vs Lee Sedol, March 2016: 4-1
"All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."
Lee Sedol: "I misjudged the capabilities of AlphaGo and felt powerless."

Slide 19

Slide 19 text

Testing AI: the Turing Test
Can an AI pass as human through text messaging?
Judges interact with humans and robots. They must find out which is which.
Success in June 2014: an AI passed as a 13-year-old Ukrainian boy, "Eugene Goostman"

Slide 20

Slide 20 text

Testing AI: the Turing Test
Can an AI pass as human through text messaging?
Judges interact with humans and robots. They must find out which is which.
Success in June 2014: an AI passed as a 13-year-old Ukrainian boy, "Eugene Goostman"

Slide 21

Slide 21 text

Minsky’s Multiplicity (1960)
Crucial parts for problem solving:
● Induction
● Planning
● Search, knowledge representation
● Pattern recognition
● Learning
Components needed to get to human-level AI

Slide 22

Slide 22 text

Minsky’s Multiplicity (1960)
Crucial parts for problem solving:
● Induction -> 1960s
● Planning -> 1960s
● Search, knowledge representation -> 1980s-2010s
● Pattern recognition -> 2010s
● Learning -> 2010s
Components needed to get to human-level AI

Slide 23

Slide 23 text

Machine Learning

Slide 24

Slide 24 text

Explaining Machine Learning Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.
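A minimal sketch of this idea: the same generic algorithm (here a toy 1-nearest-neighbour classifier, chosen for brevity; the datasets and labels are made up) learns two unrelated problems purely from the data it is fed, with no problem-specific code.

```python
# A generic algorithm: 1-nearest-neighbour classification.
# No problem-specific code -- the "logic" comes entirely from the data.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (features, label) pairs; `query` is a feature tuple.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# The same function "learns" spam detection...
spam_data = [((8, 1), "spam"), ((7, 2), "spam"), ((1, 9), "ham")]
print(nearest_neighbour(spam_data, (6, 2)))     # spam

# ...or fruit classification, just by swapping the data.
fruit_data = [((150, 0), "apple"), ((120, 1), "orange")]
print(nearest_neighbour(fruit_data, (140, 0)))  # apple
```

Swapping the dataset changes what the system "knows"; the code never does.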

Slide 25

Slide 25 text

Explaining Machine Learning
Different categories:
● Supervised learning
○ continuous answer: regression (e.g. estimate a price, a volume, …)
○ discrete answer: classification (e.g. spam detection, cancerous tumors)
● Unsupervised learning
● Reinforcement learning: continuous environment and constant feedback, learns by itself
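The regression/classification distinction can be shown on one toy dataset (the housing numbers below are invented for illustration): regression predicts a continuous value, classification turns it into a discrete label.

```python
# Supervised learning: same training data, two kinds of answers.
# Regression -> continuous value; classification -> discrete label.

# (size_m2, price) pairs for a toy housing dataset (made-up numbers).
homes = [(50, 100_000), (80, 160_000), (100, 200_000)]

def predict_price(size):
    """Regression: continuous answer, here a simple price-per-m2 average."""
    rate = sum(p / s for s, p in homes) / len(homes)
    return rate * size

def is_expensive(size, threshold=150_000):
    """Classification: discrete answer derived from the same prediction."""
    return "expensive" if predict_price(size) > threshold else "cheap"

print(predict_price(60))   # 120000.0
print(is_expensive(90))    # expensive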

Slide 26

Slide 26 text

Unsupervised learning

Slide 27

Slide 27 text

Intro to Machine Learning
http://www.r2d3.us/visual-intro-to-machine-learning-part-1/

Slide 28

Slide 28 text

Linear regression

Slide 29

Slide 29 text

Intuitions from linear regression
● the algorithm is generic; the results depend on the data
● the system is both the algorithm and the data
● starts with a hypothesis about how to represent the data (for linear regression: a straight line)
● only as good as your data
● can deal poorly with outliers
● lots of calculation to learn, but very fast to apply (can run on mobile)
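These intuitions fit in a few lines of ordinary least squares (a standard closed-form fit; the data below is invented): the hypothesis is the line, training tunes its two numbers, applying it afterwards is one multiply and one add, and a single outlier drags the slope far from the truth.

```python
# Ordinary least squares for a straight line y = a*x + b.
# The "hypothesis" is the line; the algorithm only tunes a and b to the data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training does all the arithmetic; applying the model is just a*x + b.
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]          # data hiding y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                                   # 2.0 1.0

# One outlier can drag the whole line -- one of the weaknesses above.
a2, b2 = fit_line(xs + [5], ys + [100])
print(round(a2, 2))                           # slope jumps far from 2
```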

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Overfitting
Need to split the data into:
- training set (60%)
- cross-validation set (20%)
- evaluation set (20%)
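The 60/20/20 split above can be done in a few lines of standard-library Python (a sketch; in practice a library helper such as scikit-learn's `train_test_split` is typical). Shuffling first avoids ordering bias in the original data.

```python
# Split a dataset 60/20/20 into training, cross-validation and
# evaluation sets, shuffling first to avoid ordering bias.
import random

def split_dataset(data, seed=0):
    data = data[:]                      # don't mutate the caller's list
    random.Random(seed).shuffle(data)   # seeded => reproducible split
    n = len(data)
    train = data[: int(n * 0.6)]
    cv = data[int(n * 0.6): int(n * 0.8)]
    test = data[int(n * 0.8):]
    return train, cv, test

train, cv, test = split_dataset(list(range(100)))
print(len(train), len(cv), len(test))   # 60 20 20
```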

Slide 32

Slide 32 text

Artificial Neural Networks (ANN)
Each node computes a linear combination of the previous layer's nodes
More nodes can handle more complexity
Input must be normalized (for example, all images resized to 20x20)
Training alternates two passes:
- left to right (forward pass) to evaluate the training set
- right to left (backpropagation) to propagate errors
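The forward (left-to-right) pass is just the slide's rule applied layer by layer: each node takes a linear combination of the previous layer, then a squashing function. The weights below are arbitrary placeholders, not trained values; the right-to-left error propagation (backpropagation) that would learn them is omitted for brevity.

```python
# Minimal two-layer network: each node is a linear combination of the
# previous layer's outputs, squashed by a sigmoid non-linearity.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: a linear combination per node, then the non-linearity."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, net):
    """Left-to-right pass through every layer (the evaluation step)."""
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x

# Arbitrary untrained weights, just to show the shape of the computation.
net = [
    ([[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0]),   # hidden layer: 2 nodes
    ([[1.0, -1.0]], [0.0]),                     # output layer: 1 node
]
out = forward([1.0, 0.0], net)
print(out)
```

Training would run this forward pass, compare the output to the target, then sweep right to left adjusting each weight against its error contribution.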

Slide 33

Slide 33 text

Tensorflow Playground

Slide 34

Slide 34 text

Deep learning

Slide 35

Slide 35 text

Deep learning

Slide 36

Slide 36 text

Hierarchy

Slide 37

Slide 37 text

State of the art: IBM

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

IBM Watson
● Healthcare:
○ Diagnostics
○ Test suggestions
○ Prescription recommendations
● Legal:
○ Hired as a lawyer ("Ross")
● Teaching:
○ Used as a TA ("Jill Watson")

Slide 40

Slide 40 text

IBM Watson
● Healthcare:
○ Diagnostics
○ Test suggestions
○ Prescription recommendations
● Legal:
○ Hired as a lawyer ("Ross")
● Teaching:
○ Used as a TA ("Jill Watson")
● Cooking:
○ Published a recipe book
○ New combinations
○ Able to avoid allergies

Slide 41

Slide 41 text

No content

Slide 42

Slide 42 text

State of the art: Google

Slide 43

Slide 43 text

Google auto-captioning (2014)
"Two pizzas sitting on top of a stove top oven"

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

Cloud Vision API (2016)
Available on Google Cloud
Automatic labelling
Sentiment analysis
Text extraction
Landmark detection
Logo detection
Explicit content detection
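These features are requested through the API's REST `images:annotate` endpoint as feature types on a single request. The sketch below only builds the JSON request body (no network call); treat the exact field names as an assumption to check against the current documentation, and the image bytes are a fake placeholder.

```python
# Build a request body for the Cloud Vision `images:annotate` REST
# endpoint: base64-encoded image plus a list of requested feature types.
import base64
import json

def vision_request(image_bytes,
                   features=("LABEL_DETECTION", "TEXT_DETECTION")):
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": f, "maxResults": 5} for f in features],
        }]
    }

body = vision_request(b"\x89PNG...fake image bytes")
print(json.dumps(body)[:60])
```

The body would then be POSTed with an API key to the Google Cloud endpoint.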

Slide 46

Slide 46 text

Cloud Vision API
Available on Google Cloud
Automatic labelling
Sentiment analysis
Text extraction
Landmark detection
Logo detection
Explicit content detection

Slide 47

Slide 47 text

TensorFlow
ML framework
Open-sourced in November 2015
Becoming the new standard
Cloud computing: TPUs

Slide 48

Slide 48 text

SyntaxNet and Parsey McParseface

Slide 49

Slide 49 text

SyntaxNet and Parsey McParseface

Slide 50

Slide 50 text

SyntaxNet and Parsey McParseface
Parsey McParseface can correctly read:
● The old man the boat.
● While the man hunted the deer ran into the woods.
● While Anna dressed the baby played in the crib.
● Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.
It makes mistakes on:
● I convinced her children are noisy.
● The coach smiled at the player tossed the frisbee.
● The cotton clothes are made up of grows in Mississippi.
● James while John had had had had had had had had had had had a better effect on the teacher.

Slide 51

Slide 51 text

NeuralArt (TensorFlow)
Content + Style -> Output

Slide 52

Slide 52 text

State of the art: Facebook

Slide 53

Slide 53 text

Facebook: M
Chatbot for Facebook Messenger
The AI answers only when it is confident; otherwise, humans answer.
The AI then learns from the answers humans provided when it was not confident.
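The slide describes a confidence-gated fallback loop, which can be sketched in a few lines. Everything here (function names, the 0.8 threshold, the toy model) is illustrative, not Facebook's implementation: the model answers above a confidence threshold, and below it a human answers and the exchange is logged as new training data.

```python
# Confidence-gated answering: the model replies only when confident;
# otherwise a human replies and the pair is logged for future training.

def ask_human(question):
    return "a human-written reply"            # stand-in for a human operator

def answer(question, model, training_log, threshold=0.8):
    reply, confidence = model(question)
    if confidence >= threshold:
        return reply, "ai"
    human_reply = ask_human(question)         # hand off to a person
    training_log.append((question, human_reply))  # learn from the human
    return human_reply, "human"

def toy_model(question):
    # Pretend the model is confident only about greetings.
    return ("Hi!", 0.95) if "hello" in question.lower() else ("?", 0.2)

log = []
print(answer("Hello there", toy_model, log))  # ('Hi!', 'ai')
print(answer("Book me a table", toy_model, log))
print(len(log))                               # 1
```

Over time the logged pairs shrink the set of questions the model is unsure about.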

Slide 54

Slide 54 text

Facebook: DeepText
Can understand 1,000 posts/s in 20 languages
Able to extract context
Able to understand slang
Currently used to suggest actions ("request a ride", "create an ad to sell your item", …)

Slide 55

Slide 55 text

State of the art: Others

Slide 56

Slide 56 text

Face recognition (April 2014)

Slide 57

Slide 57 text

Facial manipulation (Feb 2015)

Slide 58

Slide 58 text

Finding similarities in art (Aug 2014)

Slide 59

Slide 59 text

Audio prediction on video (April 2016)

Slide 60

Slide 60 text

Extract instructions from Youtube videos (Nov 2015)

Slide 61

Slide 61 text

AI fighter pilot (June 2016)

Slide 62

Slide 62 text

Applications of NLP at Quora
- automatic grammar correction
- question quality
- duplicate question detection
- related question suggestion
- topic biography quality (= qualifications of the writer)
- topic labeling (from broad topics like "science" to narrow ones like "Tennis Courts in Mountain View")
- search
- answer summaries
- automatic answers wiki
- hate speech/harassment detection
- spam detection
- question edit quality

Slide 63

Slide 63 text

Questions? July 2016