ICML 2018 Reading Group: Policy and Value Transfer in Lifelong Reinforcement Learning
Yuu David Jinnai
July 28, 2018
Transcript
Policy and Value Transfer in Lifelong Reinforcement Learning
David Abel*, Yuu Jinnai*, George Konidaris, Michael Littman, Yue Guo (Brown University)

Motivation: Solving Multiple Tasks (Arumugam et al. 2017; Konidaris et al. 2017)
Markov Decision Processes: M = (S, A, T, R, γ)
S: set of states; A: set of actions; T: transition function; R: reward function; γ: discount factor.
Objective: find a policy π(a | s) that maximizes the expected total discounted reward.
[Figure: agent-environment interaction, trajectory s_t, a_t, r_{t+1}, s_{t+1}, ...]
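Written out, the objective on this slide is the standard discounted-return maximization:

```latex
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1}\right]
```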
Optimal Fixed Policy: given a distribution of tasks, what single policy maximizes the expected performance?
[Figure: tasks M1, M2, M3 all mapped to one policy]
Previous Work: Action Prior (Rosman & Ramamoorthy 2012). Pr(M1) = 0.5, Pr(M2) = 0.5.
The prior gives the probability of each action being the optimal action (here, 0.5 / 0.5).
Algorithm: Average MDP. Pr(M1) = 0.5, Pr(M2) = 0.5. [Figure: resulting action probabilities 0.0 / 1.0]
(Theorem) The Average MDP policy is an optimal fixed policy when only the reward function varies across tasks (i.e. S, A, T, γ are fixed) (Ramachandran & Amir 2007).
Results [plots comparing Average MDP with prior work]
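A minimal sketch of the Average MDP idea, assuming tabular tasks that share S, A, T, and γ and differ only in reward; all function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def average_mdp_policy(rewards, probs, T, gamma, n_iters=1000):
    """Average MDP sketch: replace the per-task rewards R_i[s, a] with their
    expectation under the task distribution, then solve the resulting single
    MDP with value iteration. T has shape (S, A, S); rewards is a list of
    (S, A) arrays; probs are the task probabilities."""
    R_avg = sum(p * R for p, R in zip(probs, rewards))  # E_{M~D}[R_M]
    S, A = R_avg.shape
    Q = np.zeros((S, A))
    for _ in range(n_iters):  # standard value iteration on the averaged MDP
        V = Q.max(axis=1)
        Q = R_avg + gamma * np.einsum('san,n->sa', T, V)
    return Q.argmax(axis=1)  # one fixed policy for the whole distribution
```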
Lifelong Reinforcement Learning. Repeat:
1. The agent samples an MDP from a distribution: M ← sample(D)
2. Solve it: π ← solve(M)
[Figure: sequence of tasks M1, M2, M3, ... drawn from D]
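The loop above, as a minimal sketch; `sample` and `solve` are placeholders standing in for the slide's sample(D) and solve(M):

```python
def lifelong_rl(sample, solve, n_tasks):
    """Lifelong RL protocol from the slide: repeatedly draw a task M from
    the distribution D and solve it. Value transfer (below) changes how
    each solve is initialized, not the loop itself."""
    policies = []
    for _ in range(n_tasks):
        M = sample()               # M <- sample(D)
        policies.append(solve(M))  # pi <- solve(M)
    return policies
```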
Optimistic Initialization (Kearns & Singh 2002): initialize the Q-values optimistically to encourage exploration.
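A sketch of the standard optimistic initialization, assuming a finite MDP with rewards bounded by r_max (the V_max formula is the usual one, not quoted from the slide):

```python
import numpy as np

def optimistic_q_init(n_states, n_actions, r_max, gamma):
    """Initialize every Q-value to V_max = r_max / (1 - gamma), the largest
    possible discounted return, so unvisited state-action pairs look at
    least as good as anything tried so far and get explored."""
    v_max = r_max / (1.0 - gamma)
    return np.full((n_states, n_actions), v_max)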
PAC-MDP (Strehl et al. 2009; Rao & Whiteson 2012; Mann & Choe 2013)
(Theorem) The sample complexity of a PAC-MDP algorithm is bounded in terms of how far the initial Q-values overestimate the optimal ones. [Bound omitted on the slides]
So: minimize the overestimate, subject to the condition that the algorithm stays PAC-MDP. The solution is sketched below.
Algorithm: MaxQInit. [Figure: tasks M1, M2, ..., Mm sampled so far]
(Theorem) For m sufficiently large, MaxQInit preserves the PAC-MDP property with high probability.
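A minimal sketch of the MaxQInit initialization implied by the theorem, assuming the optimal Q-tables of the m tasks seen so far are available (names illustrative):

```python
import numpy as np

def max_q_init(q_tables):
    """MaxQInit sketch: initialize the new task's Q-values to the elementwise
    maximum of the optimal Q-values of previously sampled tasks. For large
    enough m this remains optimistic with high probability, so the PAC-MDP
    guarantee is preserved while the overestimate shrinks."""
    return np.max(np.stack(q_tables, axis=0), axis=0)
```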
Results: Delayed Q-Learning [plots]
Results: R-Max (Brafman & Tennenholtz 2002) [plots]
Results: Q-Learning (Watkins & Dayan 1992) [plots]. Tradeoff in jumpstart performance vs. convergence time.
Conclusions
Average MDP: compute one fixed policy for the whole task distribution (Task M1, Task M2, Task M3 → Policy).
MaxQInit: in lifelong RL, solve each sampled task with transferred Q-value initialization (Task M1 → Policy 1, Task M2 → Policy 2, ...).