Casidhe Lee
August 09, 2011

# Artificial Intelligence in One Hour

A Tech Talk I gave to engineers at LinkedIn. Many of them hadn't encountered concepts in A.I. before, so I gave a soft introduction.


## Transcript

1. ### Artificial Intelligence in One Hour

Casidhe Lee. Content adapted from Prof. Klein’s CS 188 course at UC Berkeley.

6. ### Rational Agents

The best sequence of actions toward a goal, independent of the thought process. Goals are expressed using utility functions.

8. ### The Bellman Equations

The definition of “optimal utility” leads to a simple one-step lookahead relationship amongst optimal utility values: optimal rewards = maximize over the first action, then follow the optimal policy. Formally:

V\*(s) = max over a of Σ over s’ of T(s, a, s’) [ R(s, a, s’) + γ V\*(s’) ]
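The one-step lookahead above is the update applied repeatedly in value iteration. A minimal sketch on a made-up MDP (the “cool”/“warm”/“overheated” states, their transitions, rewards, and the discount factor are all hypothetical):

```python
# Value iteration: repeatedly apply the one-step lookahead (Bellman update).
# transitions[state][action] -> list of (next_state, probability, reward)
transitions = {
    "cool": {
        "slow": [("cool", 1.0, 1.0)],
        "fast": [("cool", 0.5, 2.0), ("warm", 0.5, 2.0)],
    },
    "warm": {
        "slow": [("cool", 0.5, 1.0), ("warm", 0.5, 1.0)],
        "fast": [("overheated", 1.0, -10.0)],
    },
    "overheated": {},  # terminal: no actions available
}

gamma = 0.9  # discount factor

def value_iteration(transitions, gamma, iterations=100):
    V = {s: 0.0 for s in transitions}
    for _ in range(iterations):
        new_V = {}
        for s, actions in transitions.items():
            if not actions:          # terminal state keeps utility 0
                new_V[s] = 0.0
                continue
            # Bellman update: maximize over the first action, then follow
            # the (current estimate of the) optimal policy afterwards.
            new_V[s] = max(
                sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
                for outcomes in actions.values()
            )
        V = new_V
    return V

values = value_iteration(transitions, gamma)
```

Each sweep replaces V(s) with the best one-step lookahead value; with a discount below 1, the values converge to the optimal utilities.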
9. ### What You’ll Get From This Talk

A high-level summary of the field. No code, but a bit of math. How to approach certain problems using AI techniques.

13. ### Off-the-shelf search algorithms take too long on very large graphs.

AI approach: estimate the remaining cost to the goal beforehand.
14. ### A* Search

Assign a path cost p(n) to node n. Assign an estimated cost h(n) to go from node n to the goal node; this is called a heuristic. Let f(n) = p(n) + h(n) be the estimated total cost of reaching the goal through n. Perform a search by expanding, at each step, the frontier node n with the smallest f(n). Return when we hit the goal.
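A minimal sketch of the procedure, keeping the slide’s p(n)/h(n)/f(n) notation; the graph, edge costs, and heuristic values below are invented for illustration:

```python
import heapq

graph = {   # node -> [(neighbor, edge_cost), ...]
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1)],
    "C": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 2, "C": 2, "G": 0}  # heuristic estimates to G

def a_star(graph, h, start, goal):
    # Frontier entries are (f, p, node, path); heapq pops smallest f first.
    frontier = [(h[start], 0, start, [start])]
    best_p = {start: 0}   # cheapest path cost found so far per node
    while frontier:
        f, p, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, p
        for neighbor, cost in graph[node]:
            new_p = p + cost
            if new_p < best_p.get(neighbor, float("inf")):
                best_p[neighbor] = new_p
                heapq.heappush(
                    frontier,
                    (new_p + h[neighbor], new_p, neighbor, path + [neighbor]),
                )
    return None, float("inf")

path, cost = a_star(graph, h, "S", "G")
```

With these (admissible) heuristic values, the search finds the cheapest path S → A → B → C → G.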
16. ### (Figure: side-by-side comparison of DFS, BFS, and A* search)
17. ### Why does it work?

Admissibility: h(n) ≤ distance(n, goal). Consistency: h(x) ≤ distance(x, y) + h(y). (Warning: the 3e book has a more complex, but also correct, variant.)

(Figure: “A* Graph Search Gone Wrong?” example with nodes S, A, B, C, G and per-node heuristic values, showing the state space graph alongside the search tree.)

19. ### Search is also useful for games

Minimax. Alpha-beta pruning.
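A minimal sketch of minimax with alpha-beta pruning; the game tree is a hypothetical nested list whose leaves are payoffs for the maximizing player:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):   # leaf: return its payoff
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # prune: min player will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:        # prune: max player already has better
                break
        return value

# A max node over three min nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
root_value = alphabeta(tree, True)
```

Pruning skips branches that cannot change the final decision, so the root value matches plain minimax while examining fewer leaves.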

21. ### “The science of getting computers to learn, without being explicitly programmed” (Prof. Andrew Ng)
23. ### Machine learning is often coupled with classification

Classifiers are implemented with linear or probabilistic methods, amongst others. The challenge is to train these classifiers.
24. ### Classifiers rely on features to reach conclusions

A feature is any signal of a property of the input.
25. ### Naive Bayes

“Given observations of features F1 ... Fn, what’s the probability we see event Y?”
26. ### But how do you train it to do that?

With this model, whenever we see a feature, we can conclude a possible Y. This is a probabilistic classifier.
27. ### How to train Naive Bayes

One method: find the probabilities using counting. Another method: Maximum Likelihood Estimation, L(Y) = P(F1, F2, ... Fn | Y). Give it a test data set and tune parameters until we satisfy a success metric.
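The counting method can be sketched like this; the toy spam/ham dataset, the feature words, and the Laplace smoothing constant are all hypothetical:

```python
from collections import Counter, defaultdict

# Train Naive Bayes "by counting": estimate P(Y) and P(F|Y) from
# frequencies, then score a new input with P(Y) * product of P(Fi | Y).
data = [
    (["free", "money", "now"], "spam"),
    (["free", "offer"], "spam"),
    (["meeting", "tomorrow"], "ham"),
    (["lunch", "tomorrow", "free"], "ham"),
]

label_counts = Counter(label for _, label in data)
feature_counts = defaultdict(Counter)
for features, label in data:
    feature_counts[label].update(features)

vocab = {f for features, _ in data for f in features}

def p_label(y):
    return label_counts[y] / sum(label_counts.values())

def p_feature(f, y, smoothing=1.0):
    # Laplace smoothing so an unseen feature doesn't zero out the product
    total = sum(feature_counts[y].values())
    return (feature_counts[y][f] + smoothing) / (total + smoothing * len(vocab))

def score(features, y):
    s = p_label(y)
    for f in features:
        s *= p_feature(f, y)
    return s

def classify(features):
    return max(label_counts, key=lambda y: score(features, y))
```

The counts give the conditional probabilities directly; classification just picks the Y with the highest score.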

30. ### Other Topics to Cover

Constraint Satisfaction Problems. Reinforcement Learning. Markov Decision Processes. Decision Trees. NLP, Computer Vision, and Robotics. Hidden Markov Models. SVM, Perceptron, MIRA, and Linear Classification Models.
31. ### Question: How can I learn more about A.I.?

Answer: Brush up on probability and linear algebra. Go ask your local expert. Think about these techniques in your daily work.