
Intro to AI & ML

Adrien Couque

July 20, 2016

Transcript

  1. Minsky’s Multiplicity (1960) Crucial parts for problem solving : •

    Induction • Planning • Search, knowledge representation • Pattern recognition • Learning Components needed to get to human-level AI
  2. 1960s : golden years • SAINT (Symbolic Automatic Integrator): able

    to calculate integrals • General Problem Solver : elementary logic and algebra (ex : Tower of Hanoi) • STUDENT : high school algebra word problems • ELIZA : psychiatry conversation • Micro-worlds Minsky, 1970 : "In from three to eight years we will have a machine with the general intelligence of an average human being."
  3. 1970s : first AI winter 1973 : Lighthill report (UK)

    : “utter failure of AI to achieve its grandiose objectives” Example : “The spirit is willing, but the flesh is weak” -> Russian -> English : “The whisky is strong, but the meat is rotten” Funding killed for a decade
  4. 1980s : AI revival Rise of expert systems. Goal :

    mimic expert behaviors Doctors, scientists, accountants, … Japan : Fifth Generation Computer Budget : $850 million
  5. 90s-00s : second AI winter Expert systems : - hard

    to develop - harder to maintain - can’t replicate across fields Still can’t interact with the real world : - impossible to acquire data - all brain but no body ("Elephants Don't Play Chess") Funding stopped again
  6. 2010s : emergence of Machine Learning 1999 : neural networks

    used for handwritten digits recognition
  7. 2010s : emergence of Machine Learning 1999 : neural networks

    used for handwritten digits recognition 2012 : @Google, 16k processors, 10M Youtube videos, 1 week
  9. 2010s : emergence of Machine Learning 1999 : neural networks

    used for handwritten digits recognition 2012 : @Google, 16k processors, 10M Youtube videos, 1 week Able to find cats
  10. 2010s : emergence of Machine Learning • Self-driving cars •

    Human interaction : ◦ Handwriting ◦ Speech ◦ Natural language • OCR • Image recognition • Information retrieval • Artificial personal assistants • Recommendation systems • Drones • Game playing • ...
  11. Testing AI : Chess = “the pinnacle of human intelligence”

    Deep Blue vs Kasparov : 1996 : 1W, 2D, 3L 1997 : 2W, 3D, 1L This shifted the canonical example of a game where humans still outmatched machines to Go, an ancient Chinese game with simple rules but far more possible moves than chess: it rewards intuition and resists brute-force search.
  12. Testing AI : Go AlphaGo vs Lee Sedol : March

    2016 : 4-1 "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself." Lee Sedol : "I misjudged the capabilities of AlphaGo and felt powerless."
  13. Testing AI : Turing Test Can an AI pass as

    human through text messaging ? Judges interact with humans and robots. They must find out which is which. Success in June 2014 AI passed as a 13-year-old Ukrainian boy, Eugene Goostman
  16. Minsky’s Multiplicity (1960) Crucial parts for problem solving : •

    Induction -> 1960s • Planning -> 1960s • Search, knowledge representation -> 1980s - 2010s • Pattern recognition -> 2010s • Learning -> 2010s Components needed to get to human-level AI
  17. Explaining Machine Learning Machine learning is the idea that there

    are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.
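The slide's point can be sketched in a few lines of Python (a minimal illustration, not from the deck: the datasets and the nearest-neighbor rule are invented for the example). The same generic algorithm handles two unrelated problems; only the data changes, never the code.

```python
def nearest_neighbor(train, label_of, x):
    """Classify x by copying the label of the closest training point."""
    closest = min(train, key=lambda t: abs(t - x))
    return label_of[closest]

# Problem 1: spam score -> spam/ham
spam = {0.1: "ham", 0.9: "spam"}
print(nearest_neighbor(list(spam), spam, 0.8))   # "spam"

# Problem 2: tumor size -> benign/malignant
tumor = {1.0: "benign", 5.0: "malignant"}
print(nearest_neighbor(list(tumor), tumor, 4.2)) # "malignant"
```

No line of `nearest_neighbor` mentions spam or tumors: the logic is built from the data it is fed.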
  18. Explaining Machine Learning Different categories : • Supervised learning ◦

    continuous answer : regression ex : estimate a price, a volume, ... ◦ discrete answer : classification ex : spam detection, cancerous tumors • Unsupervised learning • Reinforcement learning continuous environment and constant feedback, learns by itself
  19. Intuitions from linear regression • algorithm is generic, results depend

    on data • system is both the algorithm and the data • starts with a hypothesis about how we can represent the data (for linear regression : a straight line) • only as good as your data • can deal poorly with outliers • lots of calculation to learn, but very fast to apply (can run on mobile)
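The slide's "straight line" hypothesis can be made concrete with ordinary least squares (a sketch with invented price data, not figures from the talk). Learning takes all the arithmetic; applying the model is a single multiply-add, which is why inference is cheap enough for mobile.

```python
xs = [1.0, 2.0, 3.0, 4.0]          # e.g. surface area (invented numbers)
ys = [110.0, 205.0, 298.0, 402.0]  # e.g. price in k$ (invented numbers)

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
# slope and intercept minimizing the squared error (closed form)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    # applying the learned model is just one multiply-add
    return slope * x + intercept

print(predict(2.5))
```

With an outlier added to `ys`, the fitted line shifts noticeably: the system really is only as good as its data.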
  20. Overfitting Need to split the data into : - training

    set (60%) - cross-validation set (20%) - evaluation set (20%)
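The 60/20/20 split from the slide, as a small Python sketch (the shuffle-first step is an assumption the slide does not state, but it matters so that all three sets see the same distribution):

```python
import random

def split(data, seed=0):
    data = data[:]                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)
    n = len(data)
    a, b = int(0.6 * n), int(0.8 * n)
    return data[:a], data[a:b], data[b:]  # training, cross-validation, evaluation

train, cv, evaluation = split(list(range(10)))
print(len(train), len(cv), len(evaluation))  # 6 2 2
```

The cross-validation set tunes the model; the evaluation set is touched only once, at the end, so the reported score is not an overfit artifact.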
  21. Artificial Neural Networks (ANN) Each node does a linear combination

    of previous nodes More nodes can handle more complexity Input must be normalized For example, all images -> 20x20 Training in multiple steps - left to right to evaluate the training set - right to left to propagate errors
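The two passes described on the slide can be sketched on a toy one-hidden-layer network (all sizes, data, and the learning rate are invented for the example; real inputs would be e.g. a 20x20 image flattened to 400 normalized values):

```python
import math, random

random.seed(0)
x = [0.2, 0.8]   # normalized input
target = 1.0

# each node = linear combination of the previous layer, then a squashing function
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(3)]  # input -> hidden
w2 = [random.uniform(-1, 1) for _ in range(3)]               # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for step in range(200):
    # left to right: evaluate the input
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)))
    # right to left: propagate the error back through the weights
    d_out = (out - target) * out * (1 - out)
    for j, h in enumerate(hidden):
        d_h = d_out * w2[j] * h * (1 - h)   # uses the pre-update weight
        w2[j] -= 0.5 * d_out * h
        for i, xi in enumerate(x):
            w1[j][i] -= 0.5 * d_h * xi

print(out)  # should approach the target as training repeats
```

Each training step is exactly the slide's loop: one left-to-right evaluation, one right-to-left error propagation.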
  22. IBM Watson • Healthcare : ◦ Diagnostics ◦ Tests suggestion

    ◦ Prescription recommendations • Legal : ◦ Hired as a lawyer (“Ross”) • Teaching : ◦ Used as a TA (“Jill Watson”)
  23. IBM Watson • Healthcare : ◦ Diagnostics ◦ Tests suggestion

    ◦ Prescription recommendations • Legal : ◦ Hired as a lawyer (“Ross”) • Teaching : ◦ Used as a TA (“Jill Watson”) • Cooking : ◦ published a recipe book ◦ new combinations ◦ able to avoid allergies
  24. Cloud Vision API (2016) Available on Google Cloud Automatic labelling

    Sentiment analysis Text extraction Landmark detection Logo detection Explicit content detection
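The features on the slide map onto the Cloud Vision `images:annotate` REST request roughly as follows (a sketch: the feature-type names reflect the public API, but check the current documentation; no request is actually sent here, and the image bytes are a placeholder):

```python
import base64, json

def vision_request(image_bytes):
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION"},        # automatic labelling
                {"type": "FACE_DETECTION"},         # sentiment (joy, sorrow, ...)
                {"type": "TEXT_DETECTION"},         # text extraction
                {"type": "LANDMARK_DETECTION"},
                {"type": "LOGO_DETECTION"},
                {"type": "SAFE_SEARCH_DETECTION"},  # explicit content
            ],
        }]
    }

body = vision_request(b"\x89PNG...")  # placeholder bytes, not a real image
print(json.dumps(body)[:60])
```

The same request body works whether the image is inline (base64, as here) or referenced from Cloud Storage.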
  26. SyntaxNet and Parsey McParseface Parsey McParseface can correctly read: •

    The old man the boat. • While the man hunted the deer ran into the woods. • While Anna dressed the baby played in the crib. • Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. It makes mistakes on: • I convinced her children are noisy. • The coach smiled at the player tossed the frisbee. • The cotton clothes are made up of grows in Mississippi. • James while John had had had had had had had had had had had a better effect on the teacher
  27. Facebook: M Chatbot for Facebook Messenger AI only answers when

    confident. Otherwise, humans answer. But the AI learns from the answers humans provide when it is not confident.
  28. Facebook: DeepText Can understand 1000 posts / s in 20

    languages Able to extract context Able to understand slang Currently used to suggest actions (“request a ride”, “create an ad to sell your item”, …)
  29. Applications of NLP at Quora - automatic grammar correction -

    question quality - duplicate question detection - related question suggestion - topic biography quality (= qualifications of writer) - topic labeler (from “science” to narrow topics like “tennis Courts in Mountain View”) - search - answer summaries - automatic answers wiki - hate speech/harassment detection - spam detection - question edit quality