Slide 1

Intro to Reinforcement Learning + Deep Q-Networks: Part 1

Robin Ranjit Singh Chauhan
[email protected]
Pathway Intelligence Inc

Slide 2

Props

● Aanchan Mohan
  ○ for suggesting I do this, and organizing
● Bruce Sharpe
  ○ Video!
● Reviewers
  ○ Valuable comments
● Other meetup organizers
● Hive
● Sutton+Barto, Berkeley, UCL, DeepMind, OpenAI
  ○ for publishing openly

“Many unpaid hours were sacrificed to bring us this information”

Slide 3

Why

● Aim for today
  ○ Deliver you hard-won insight on a silver platter
  ○ Things I wish I had known
  ○ Curated best existing content + original content
● Exchange
  ○ There is a lot to know
  ○ I hope others present on RL topics
  ○ If you have a serious interest in RL, I would like to chat

“Many unpaid hours were sacrificed to bring us this information”

Slide 4

Me

● Head of Engineering @ AgFunder
  ○ SF-based VC focused on AgTech + FoodTech investing
  ○ Investments include companies doing Machine Learning in these spaces
    ■ ImpactVision, The Yield
● Pathway Intelligence
  ○ BC-based consulting company
● Past
  ○ Microsoft PM in Fintech Payment Fraud
  ○ Transportation
  ○ HPC for Environmental engineering
● Computer Engineering @ Waterloo

Slide 5

You

● Comfort levels
  ○ ML
  ○ RL
  ○ Experience?
● Interest areas
● Lots of slides!
  ○ Speed vs depth?

Slide 6

IRL

RL = trial and error + learning
trial and error = variation and selection, search (explore/exploit)
learning = association + memory

- Sutton + Barto

Slide 7

Types of ML

● Unsupervised
● Supervised
● Reinforcement Learning

Slide 8

| | Unsupervised Learning | Supervised Learning (+semi-supervised) | Reinforcement Learning |
|---|---|---|---|
| Training Data | Collect training data | Collect training data | Agent creates data through exploration |
| Labels | None | Explicit label per example | Sparse, delayed reward -> temporal credit assignment problem ** |
| Evaluation | Case-specific, can be subjective | Often accuracy / loss metrics per instance | Regret; total reward; inherent vs artificial reward |
| Training / Fitting | Training set | Training set | Behaviour policy |
| Testing | Test set | Test set | Target policy |
| Exploration | n/a | n/a | Exploration strategy ** (typically part of behaviour policy) |

Image credit: Robin Chauhan, Pathway Intelligence Inc.

Slide 9

Image credit: Complementary roles of basal ganglia and cerebellum in learning and motor control, Kenji Doya, 2000

Slide 10

Yann LeCun, January 2017, Asilomar, Future of Life Institute

Slide 11

Related Fields

Image credit: UCL MSc Course on RL, David Silver, University College London

Slide 12

When to consider RL

● Know what a “good / bad result” looks like
  ○ Don’t want to / cannot specify how to get to it
● When you need Tactics + Strategy
  ○ Action, not just prediction
● Cases
  ○ Games
  ○ Complex robot control
  ○ Dialog systems
  ○ Vehicle control **
  ○ More as RL and DL advance

Image credit: (internet)...

Slide 13

When to consider RL

● Simulatable
  ○ Else: training IRL usually infeasible **
● Vast state spaces require exploration
  ○ Else: enumerate + plan
● Dependencies across time
  ○ Delayed reward
  ○ Else: supervised
● Avoid RL unless needed
  ○ Immature
  ○ Complicated
  ○ Data-hungry

Image credit: (internet)...

Slide 14

Image credit: OpenAI https://blog.openai.com/ai-and-compute

Slide 15

Chart annotations: 1 day on the world’s fastest supercomputer (peak perf); 1 day on NVIDIA DGX-2 (16 Volta GPUs, $400k). HPC stats from top500.org.

Image credit: OpenAI https://blog.openai.com/ai-and-compute

Slide 16

4 of the 5 most data-hungry AI training runs are RL

Chart annotations: 1 day on the world’s fastest supercomputer (peak perf); 1 day on NVIDIA DGX-2 (16 Volta GPUs, $400k). HPC stats from top500.org.

Image credit: OpenAI https://blog.openai.com/ai-and-compute

Slide 17

Hype vs Reality

Hype:
● Behind many recent AI milestones
● Better-than-human performance
● “Scared of AI” == scared of RL
  ○ Jobs
  ○ Killing / Enslaving
  ○ Paperclips
  ○ AGI
  ○ Sentience

Reality:
● Few applications so far
● Slow learning
● Practical for robots?
● Progressing quickly

Slide 18

"RL + DL = general intelligence"

David Silver, Google DeepMind, ICML 2016

Slide 19

“I think reinforcement learning is one class of technology where the PR excitement is vastly disproportional relative to the ... actual deployments today”

Andrew Ng, Chief Scientist of Baidu, EmTech, Nov 2017

Slide 20

RL Trajectory Dependencies

● Methods
  ○ RL algorithms
● Approximators
  ○ Deep Learning models in general
  ○ RL-specific DL techniques
● Gear
  ○ GPU
  ○ TPU, other custom silicon
● Data
  ○ Sensors + sensor data
● All of these are on fire
  ○ Safe to expect non-linear advancement in RL

Image credit: (internet)

Slide 21

Who Does RL Research?

● DeepMind (Google)
● OpenAI
● UAlberta
● Google Brain
● Berkeley, CMU, Oxford
● Many more...

Slide 22

No content

Slide 23

RL+AI Ethics Dimensions

● Safety
● Animal rights
● Unemployment
● Civil Liberties
● Peace + Conflict
● Power Centralization

Consider donating to organizations dedicated to protecting values you cherish

Slide 24

No content

Slide 25

Reinforcement Learning

● learning to decide + act over time
● often online learning

Image credit: Reinforcement Learning: An Introduction, Sutton and Barto

Slide 26

(Stochastic Multi-armed) Bandit

● Sequential task
  ○ Pick one of K arms
  ○ Each has its own fixed, unknown, (stochastic?) reward distribution
● Goal
  ○ Maximize reward
● Challenge
  ○ Explore vs Exploit
  ○ Either alone is not optimal
  ○ Supervised learning alone cannot solve this: it does not explore

Image credit: Microsoft Research
Image credit: https://github.com/CamDavidsonPilon
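Beyond the slide: a minimal ε-greedy bandit sketch in Python. The Gaussian reward distributions, ε value, and horizon are illustrative assumptions, not from the deck.

```python
import random

K = 4                                                 # number of arms
true_means = [random.gauss(0, 1) for _ in range(K)]   # unknown to the agent
q = [0.0] * K                                         # estimated value per arm
n = [0] * K                                           # pull counts
epsilon = 0.1                                         # exploration rate (assumed)

for t in range(10_000):
    if random.random() < epsilon:
        a = random.randrange(K)                       # explore: random arm
    else:
        a = max(range(K), key=lambda i: q[i])         # exploit: best estimate
    r = random.gauss(true_means[a], 1.0)              # stochastic reward
    n[a] += 1
    q[a] += (r - q[a]) / n[a]                         # incremental mean update
```

Pure exploitation can lock onto a badly estimated arm; pure exploration never cashes in, which is why either alone is not optimal.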

Slide 27

Contextual Multi-armed Bandit

● Rewards depend on context
● Context is independent of action

Diagram: reward distributions F a1 ... F a4 at Edgewater Casino (context a); F b1 ... F b4 at Hard Rock Casino (context b)

Slide 28

Reinforcement Learning

● Context change depends on action
● Learn an MDP from experience only
● Game setting
  ○ Experiences the effects of the rules (win/loss/tie)
  ○ Does not “know” the rules

Image credit: CMU Graduate AI course slides

Slide 29

Markov Chains

● State fully summarizes history (Markov property)
● Transitions
  ○ Probability
  ○ Destination

Image credit: Wikipedia

Slide 30

Markov Decision Process (MDP)

● Markov Chains
  ○ States linked w/o history
● Actions
  ○ Choice
● Rewards
  ○ Motivation
● Variants
  ○ Bandit = MDP with a single state!
  ○ MC + Rewards = MRP (Markov Reward Process)
  ○ Partially observed (POMDP)
  ○ Semi-MDP

Image credit: Wikipedia
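Beyond the slide: one minimal way to write a small MDP down as plain data in Python. The state/action names, probabilities, and rewards below are invented for illustration.

```python
# transitions[state][action] -> list of (probability, next_state, reward) triples.
# All names and numbers here are hypothetical.
transitions = {
    "s0": {
        "left":  [(1.0, "s0", 0.0)],
        "right": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],  # stochastic outcome
    },
    "s1": {
        "left":  [(1.0, "s0", 0.0)],
        "right": [(1.0, "s1", 2.0)],
    },
}
```

With a single state, this collapses to per-action reward distributions, which is the bandit special case noted above.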

Slide 31

MDP and Friends

Image credit: Aaron Schumacher, planspace.org

Slide 32

Reward Signal

● Reward drives learning
  ○ Details of the reward signal are often critical
● Too sparse
  ○ Complete learning failure
● Too generous
  ○ Optimization is limited
● Problem-specific

Image credit: Wikipedia

Slide 33

Montezuma’s Actual Revenge

Chart credit: Schaul et al., Prioritized Experience Replay, DeepMind, Feb 2016

Slide 34

Broad Applicability

● Environment / Episodes
  ○ Finite length / Endless
● Action space
  ○ Discrete / Continuous
  ○ Few / Vast
● State space
  ○ Discrete / Continuous
  ○ Tree / Graph / Cyclic
  ○ Deterministic / Stochastic
  ○ Partially / Fully observed
● Reward signals
  ○ Deterministic / Stochastic
  ○ Continuous / Sparse
  ○ Immediate / Delayed

Image credit: Wikipedia

Slide 35

Types of RL

● Value-based
  ○ Construct the state-action value function Q*(s,a)
● Policy-based
  ○ Directly construct π*(s)
● Model-based
  ○ Learn a model of the environment
  ○ Plan using the model
● Hybrids

Image credit: UCL MSc Course on RL, David Silver, University College London

Slide 36

Reinforcement Learning vs Planning / Search

| | Planning / Search | Reinforcement Learning |
|---|---|---|
| Goal | Improved policy | Improved policy |
| Method | Computing on a known model | Interacting with an unknown environment |
| State space model | Known | Unknown |
| Algos | Heuristic state-space search; dynamic programming | Q-Learning; Monte Carlo rollouts |

● Planning and RL can be combined

Content paraphrased from: UCL MSc Course on RL, David Silver, University College London

Slide 37

RL Rollouts vs Planning Rollouts

Image credit: Aaron Schumacher, planspace.org

Slide 38

Elementary approaches

● Monte Carlo RL (MC)
  ○ Value = mean return over multiple runs
● Value Iteration + Policy Iteration
  ○ Both require enumerating all states
  ○ Both require knowing the transition model T(s)
● Dynamic Programming (DP)
  ○ Value = reward in this state + value of the next state
  ○ Iteration propagates reward from the terminal state back to the beginning

Images credit: Reinforcement Learning: An Introduction, Sutton and Barto

Slide 39

Elementary approaches: Value Iteration

Image credit: Pieter Abbeel, UC Berkeley EECS
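The update behind this slide is the standard Bellman optimality backup, $V(s) \leftarrow \max_a \sum_{s'} T(s,a,s')\,[R(s,a,s') + \gamma V(s')]$. A minimal sketch in Python, reusing the hypothetical `transitions` dict from the MDP slide above; the discount and threshold values are assumptions.

```python
gamma, theta = 0.9, 1e-6              # discount factor, convergence threshold (assumed)
V = {s: 0.0 for s in transitions}     # value table, initialized to zero

while True:
    delta = 0.0
    for s, actions in transitions.items():
        # Bellman optimality backup: best expected one-step return over actions
        best = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:                 # stop when no state changed much
        break
```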

Slide 40

Elementary approaches: Policy Iteration

Image credit: Pascal Poupart, CS886, University of Waterloo

Slide 41

RL Algo Zoo

● Discrete / Continuous
● Model-based / Model-free
● On- / Off-policy
● Derivative-based / not

Image credit: Aaron Schumacher and Berkeley Deep RL Bootcamp

Slide 42

RL Algo Zoo

● Discrete / Continuous
● Model-based / Model-free
● On- / Off-policy
● Derivative-based / not
● Memory, Imagination
● Imitation, Inverse
● Hierarchical
● Mastery / Generalization

Image credit: Aaron Schumacher and Berkeley Deep RL Bootcamp

Slide 43

RL Algo Zoo

● Discrete / Continuous
● Model-based / Model-free
● On- / Off-policy
● Derivative-based / not
● Memory, Imagination
● Imitation, Inverse
● Hierarchical
● Mastery / Generalization
● Scalability
● Sample efficiency?

Algorithms on the chart: Sarsa, Distro-DQN, TD Search, DDO, FRL, MAXQ, Options, UNREAL, HAM, OptCrit, hDQN, iLQR, MPC, Pri-Sweep, ReinfPlan, NAC, ACER, A0C, Rainbow, MERLIN, DQRN (POMDP), GAE, V-trace (Impala), Dyna-Q family, AlphaGo, AlphaZero, MPPI, MMC, PAL, HER, GPS+DDP

Image credit: Aaron Schumacher and Berkeley Deep RL Bootcamp, plus additions in red by Robin Chauhan

Slide 44

| Name | Notation | Intuition | Where Used |
|---|---|---|---|
| State value function | V(s) | How good is state s? | Value-based methods |
| State-action value function | Q(s,a) | In state s, how good is action a? | Q-Learning, DDPG |
| Policy | π(s) | What action do we take in state s? | Policy-based methods (but all RL methods have some kind of policy) |
| Advantage function | A(s,a) | In state s, how much better is action a than the “average” V(s)? | Dueling DQN, Advantage Actor-Critic, A3C |
| Transition prediction function | P(s′, r \| s, a) | In state s, if I take action a, what are the expected next state and reward? | Model-based RL |
| Reward prediction function | R(s,a) | In state s, if I take action a, what is the expected reward? | Model-based RL |
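Not on the slide, but the standard relations tying these functions together for a policy π:

$$V^{\pi}(s) = \mathbb{E}_{a \sim \pi}\!\left[Q^{\pi}(s,a)\right], \qquad A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s), \qquad \pi^{*}(s) = \arg\max_{a} Q^{*}(s,a)$$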

Slide 45

Image credit: Sergey Levine via Chelsea Finn and Berkeley Deep RL Bootcamp

Slide 46

OpenAI gym

● Wide task variety
  ○ Toy tasks
  ○ Continuous + Discrete
  ○ 2D, 3D, Text, Atari
● Common API for env + agent
  ○ Compare algos
● Similar
  ○ OpenAI’s Retro: Genesis, Atari arcade
  ○ DeepMind’s Lab: Quake-based 3D env
  ○ Microsoft’s Malmo: Minecraft
  ○ Facebook’s CommAI: Text comms
  ○ Poznan University, Poland: VizDoom

Image credit: OpenAI gym

Slide 47

OpenAI gym

import gym
env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        env.render()
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t+1))
            break

Sample code from https://gym.openai.com/docs/
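One caveat beyond the slide: in gym >= 0.26 and its successor gymnasium, reset() returns an (observation, info) pair and step() returns five values, so the sample above needs small changes on modern installs:

```python
import gymnasium as gym  # drop-in successor package to gym

env = gym.make("CartPole-v1")
observation, info = env.reset()
action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated       # the old single "done" flag, split in two
```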

Slide 48

No content

Slide 49

StarCraft II Learning Environment

Slide 50

No content

Slide 51

No content

Slide 52

No content

Slide 53

RL IRL

● Most results are in hermetic envs
  ○ Board games
  ○ Computer games
  ○ Simulatable robot controllers
● Sim != Reality
● Model-based: sample efficiency ++
  ○ But: model errors accumulate
● Techniques to account for model errors
● Theme: bridge Sim -> Reality

Slide 54

RL IRL

● Simple IRL manipulations are hard for present-day RL

Image credit: Chelsea Finn
Image credit: Sergey Levine
Image credit: Google Research

Slide 55

Q-Learning

Slide 56

Q(s,a)

Slide 57

Q-Learning

● From s, which a is best? Q(state, action) = E[Σr]
● Q implies a policy: π*(s) = argmax_a Q*(s, a)
● Use TD Learning to find Q for each s
  ○ Introduced by Watkins in 1989

Slide 58

Q-Learning

● Discrete, finite action spaces
  ○ Stochastic env
  ○ Changing env (unlike Go)
● Model-free RL
  ○ Naive about action effects
● TD(0)
  ○ Only draws reward 1 step into the past, each iteration

Slide 59

No content

Slide 60

Intuition: Q-function

Image credit: (internet)
Image credit: AlphaXos, Pathway Intelligence Inc.

Slide 61

Image credit: Vlad Mnih, DeepMind, at Deep RL Bootcamp, Berkeley

Slide 62

Image credit: AlphaXos, Pathway Intelligence Inc.

Slide 63

Temporal Difference (TD) Learning

● Predict future values
  ○ Incremental
● Many variants
  ○ TD(0) vs TD(1) vs TD(λ)
● Not new
  ○ From Witten 1977, Sutton and Barto 1981
● Here we use it to predict expected reward
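For reference, the TD(0) update for state values, with step size α and discount γ (standard form, not shown on the slide):

$$V(s_t) \leftarrow V(s_t) + \alpha \left[ r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \right]$$

The bracketed term is the TD error: the gap between the current estimate and the one-step bootstrapped target.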

Slide 64

Intuition: TD Learning

Image credit: (internet)...
Image credit: Author

Slide 65

Intuition: TD Learning and State Space

Image credit: (internet)...

Slide 66

Bellman Equation for Q-Learning

Image credit: Robin Chauhan, Pathway Intelligence Inc.
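The equation on this slide lives in the image; in standard notation, the Q-learning update is:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$$

The max over next actions is what makes Q-learning off-policy: the target assumes greedy behaviour regardless of how the action was actually chosen.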

Slide 67

TD(0) Updates / Backups / Bellman Updates

Image credit: Robin Chauhan, Pathway Intelligence Inc.

Slide 68

Q-Learning (non-deep)

Image credit: Reinforcement Learning: An Introduction, Sutton and Barto
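The pseudocode on this slide is an image; here is a minimal tabular Q-learning sketch in Python, using the older gym API from the sample code earlier. The environment choice and hyperparameters are assumptions for illustration.

```python
import random
from collections import defaultdict

import gym

env = gym.make('FrozenLake-v0')            # small discrete env (assumed choice)
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # step size, discount, exploration (assumed)
Q = defaultdict(float)                     # Q[(state, action)], default 0.0

for episode in range(5000):
    s = env.reset()
    done = False
    while not done:
        # epsilon-greedy behaviour policy
        if random.random() < epsilon:
            a = env.action_space.sample()
        else:
            a = max(range(env.action_space.n), key=lambda x: Q[(s, x)])
        s2, r, done, info = env.step(a)
        # off-policy TD(0) backup toward the greedy target
        target = r if done else r + gamma * max(Q[(s2, x)] for x in range(env.action_space.n))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
```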

Slide 69

Q-Learning Policies

● Policy: course of action
● Greedy policy
  ○ Pick the action w/ max Q
● Epsilon-greedy policy
  ○ Explore: random action, with probability ε
  ○ Exploit: action w/ max Q
● Alternatives
  ○ Sample over the action distro
  ○ Noise + Greedy

Image credit: (internet…)
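A common refinement not on the slide: anneal ε from near 1.0 toward a small floor, so the agent explores heavily early and exploits later. A sketch with illustrative constants:

```python
def epsilon_at(step, eps_start=1.0, eps_end=0.05, decay_steps=10_000):
    """Linearly anneal the exploration rate; all constants are assumptions."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)
```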

Slide 70

On-policy / Off-policy

● On-policy learning
  ○ “Learn on the job”
● Off-policy learning
  ○ “Look over someone’s shoulder”
● Q-Learning = off-policy

Paraphrased from: Reinforcement Learning: An Introduction, Sutton and Barto

Slide 71

Basic Methods

Images credit: Reinforcement Learning: An Introduction, Sutton and Barto

Slide 72

Resources

● Reinforcement Learning: An Introduction by Sutton and Barto: http://incompleteideas.net/book/bookdraft2018jan1.pdf
● David Silver’s RL course: https://www.youtube.com/watch?v=2pWv7GOvuf0
● Berkeley Deep RL Bootcamp: https://sites.google.com/view/deep-rl-bootcamp/lectures
● OpenAI gym: https://gym.openai.com/
● arxiv.org

Slide 73

DQN in depth

Part 2 of this talk coming soon

Slide 74

Thank you!

Robin Ranjit Singh Chauhan
[email protected]
https://github.com/pathway
https://ca.linkedin.com/in/robinc
https://pathway.com/aiml