Slide 1

Slide 1 text

"*4VQFS.BSJP5VUPSJBM 8POTFPL+VOH 3FJOGPSDFNFOU-FBSOJOH

Slide 2

Slide 2 text

Wonseok Jung
City University of New York, Baruch College, Data Science Major
ConnexionAI Founder
Deep Learning College, Reinforcement Learning Researcher
Modulabs CTRL Leader
Reinforcement Learning, Object Detection, Chatbot
Github: https://github.com/wonseokjung
Facebook: https://www.facebook.com/wsjung
Blog: https://wonseokjung.github.io

Slide 3

Slide 3 text

SUPERMARIO TUTORIAL SERIES
1. Environment and DQN
2. Deep Reinforcement Learning with Double Q-learning
3. Dueling Network Architectures for Deep Reinforcement Learning
4. Prioritized Experience Replay
5. Noisy Networks for Exploration
6. A Distributional Perspective on Reinforcement Learning
7. Rainbow: Combining Improvements in Deep Reinforcement Learning
REINFORCEMENT LEARNING

Slide 4

Slide 4 text

SUPERMARIO TUTORIAL SERIES
1. Environment and DQN
2. Deep Reinforcement Learning with Double Q-learning
3. Dueling Network Architectures for Deep Reinforcement Learning
4. Prioritized Experience Replay
5. Noisy Networks for Exploration
6. A Distributional Perspective on Reinforcement Learning
7. Rainbow: Combining Improvements in Deep Reinforcement Learning
REINFORCEMENT LEARNING

Slide 5

Slide 5 text

Contents
1. Markov Decision Process
2. How to install the SuperMario environment
3. SuperMario environment
4. Training
5. DQN
6. Result
REINFORCEMENT LEARNING

Slide 6

Slide 6 text

1. MARKOV DECISION PROCESS

Slide 7

Slide 7 text

MARKOV DECISION PROCESS "DUJPO "HFOU &OWJSPONFOU 3FXBSE At Rt 4UBUF St Rt+1 St+1 REINFORCEMENT LEARNING

Slide 8

Slide 8 text

MARKOV DECISION PROCESS "DUJPO "HFOU &OWJSPONFOU 3FXBSE At Rt 4UBUF St Rt+1 St+1 SUPERMARIO WITH R.L 3FXBSE  1FOBMUZ

Slide 9

Slide 9 text

MARKOV DECISION PROCESS "DUJPO "HFOU &OWJSPONFOU 3FXBSE At Rt 4UBUF St Rt+1 St+1 SUPERMARIO WITH R.L 3FXBSE  1FOBMUZ

Slide 10

Slide 10 text

INSTALL AND IMPORT ENVIRONMENT SUPERMARIO WITH R.L
https://github.com/wonseokjung/gym-super-mario-bros

pip install gym-super-mario-bros

import gym_super_mario_bros
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env.reset()
env.render()
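As a quick check that the install and import above worked, a minimal random-agent loop can be run against the environment. This sketch is not from the slides; it only uses the calls shown above plus the standard Gym step/close API, and plays random actions from the raw button action space.

import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()           # random button combination
    state, reward, done, info = env.step(action)
    env.render()
env.close()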

Slide 11

Slide 11 text

WORLDS & LEVELS ( WORLD 1~4) SUPERMARIO WITH R.L 8PSME 8PSME 8PSME 8PSME env = gym_super_mario_bros.make('SuperMarioBros---v')

Slide 12

Slide 12 text

WORLDS & LEVELS ( WORLD 5~8) SUPERMARIO WITH R.L 8PSME 8PSME 8PSME 8PSME env = gym_super_mario_bros.make('SuperMarioBros---v')

Slide 13

Slide 13 text

ALL WORLDS AND LEVELS SUPERMARIO WITH R.L
env = gym_super_mario_bros.make('SuperMarioBros-<world>-<level>-v<version>')

Slide 14

Slide 14 text

ALL WORLDS AND LEVELS SUPERMARIO WITH R.L
env = gym_super_mario_bros.make('SuperMarioBros-<world>-<level>-v<version>')

Slide 15

Slide 15 text

WORLDS & LEVELS SUPERMARIO WITH R.L 7FSTJPO env = gym_super_mario_bros.make('SuperMarioBros---v') 7FSTJPO 7FSTJPO 7FSTJPO

Slide 16

Slide 16 text

GOAL SUPERMARIO WITH R.L

Slide 17

Slide 17 text

REWARD AND PENALTY SUPERMARIO WITH R.L
Reward: getting closer to the flag; reaching the goal.
Penalty: failing to reach the goal; time passing; moving away from the flag.

Slide 18

Slide 18 text

STATE, ACTION SUPERMARIO WITH R.L

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')

env.observation_space.shape
(240, 256, 3)  # [height, width, channel]

env.action_space.n
256

SIMPLE_MOVEMENT = [
    ['nop'],
    ['right'],
    ['right', 'A'],
    ['right', 'B'],
    ['right', 'A', 'B'],
    ['A'],
    ['left'],
]

env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)

Slide 19

Slide 19 text

OBSERVATION SPACE SUPERMARIO WITH R.L

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)

env.observation_space.shape
(240, 256, 3)  # [height, width, channel]

env.action_space.n
256

SIMPLE_MOVEMENT = [['nop'], ['right'], ['right', 'A'], ['right', 'B'], ['right', 'A', 'B'], ['A'], ['left']]

Slide 20

Slide 20 text

OBSERVATION SPACE SUPERMARIO WITH R.L

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)

env.observation_space.shape
(240, 256, 3)  # [height, width, channel]

env.action_space.n
256

SIMPLE_MOVEMENT = [['nop'], ['right'], ['right', 'A'], ['right', 'B'], ['right', 'A', 'B'], ['A'], ['left']]

Slide 21

Slide 21 text

ACTION SPACE SUPERMARIO WITH R.L

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)

env.observation_space.shape
(240, 256, 3)  # [height, width, channel]

env.action_space.n
256

SIMPLE_MOVEMENT = [['nop'], ['right'], ['right', 'A'], ['right', 'B'], ['right', 'A', 'B'], ['A'], ['left']]

Slide 22

Slide 22 text

ACTION AFTER WRAPPER SUPERMARIO WITH R.L

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros

env = gym_super_mario_bros.make('SuperMarioBros-v0')

env.observation_space.shape
(240, 256, 3)  # [height, width, channel]

env.action_space.n
256

SIMPLE_MOVEMENT = [['nop'], ['right'], ['right', 'A'], ['right', 'B'], ['right', 'A', 'B'], ['A'], ['left']]

env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)
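A minimal sketch of the wrapping step described on the last few slides, assuming the nes_py version used in the tutorial (newer releases renamed BinarySpaceToDiscreteSpaceEnv to JoypadSpace). It prints the action space before and after the wrapper; here SIMPLE_MOVEMENT is imported from gym_super_mario_bros.actions instead of being redefined.

from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT  # same 7-action list as on the slides

env = gym_super_mario_bros.make('SuperMarioBros-v0')
print(env.action_space.n)           # 256 raw button combinations
env = BinarySpaceToDiscreteSpaceEnv(env, SIMPLE_MOVEMENT)
print(env.action_space.n)           # 7 discrete actions after the wrapper
print(env.observation_space.shape)  # (240, 256, 3): height, width, RGB channels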

Slide 23

Slide 23 text

EXPLOITATION AND EXPLORATION SUPERMARIO WITH R.L

def epsilon_greedy(q_value, step):
    if np.random.rand() < epsilon:
        action = np.random.randint(output)  # Exploration
    else:
        action = np.argmax(output)          # Exploitation

next_state, reward, done, info = env.step(action)

Slide 24

Slide 24 text

EXPLORATION SUPERMARIO WITH R.L

def epsilon_greedy(q_value, step):
    if np.random.rand() < epsilon:
        action = np.random.randint(output)  # Exploration
    else:
        action = np.argmax(output)          # Exploitation

next_state, reward, done, info = env.step(action)

Slide 25

Slide 25 text

EXPLOITATION SUPERMARIO WITH R.L

def epsilon_greedy(q_value, step):
    if np.random.rand() < epsilon:
        action = np.random.randint(output)  # Exploration
    else:
        action = np.argmax(output)          # Exploitation

next_state, reward, done, info = env.step(action)

Slide 26

Slide 26 text

ENV.STEP( ) SUPERMARIO WITH R.L

def epsilon_greedy(q_value, step):
    if np.random.rand() < epsilon:
        action = np.random.randint(output)  # Exploration
    else:
        action = np.argmax(output)          # Exploitation

next_state, reward, done, info = env.step(action)
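The epsilon_greedy snippet on these slides leaves epsilon and output undefined. Below is a self-contained sketch of the same idea; the names n_actions and q_values, and the linear annealing of epsilon using the eps_max / eps_min / eps_decay_steps values that appear on the following slides, are assumptions rather than the author's exact code.

import numpy as np

eps_max = 1.0
eps_min = 0.1
eps_decay_steps = 200000

def epsilon_greedy(q_values, step, n_actions):
    # anneal epsilon linearly from eps_max to eps_min over eps_decay_steps steps
    epsilon = max(eps_min, eps_max - (eps_max - eps_min) * step / eps_decay_steps)
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)  # exploration: random action
    return int(np.argmax(q_values))          # exploitation: greedy action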

Slide 27

Slide 27 text

EXPLORATION RATE AND REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))  # store the transition (St, At, Rt+1, St+1)

Slide 28

Slide 28 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))

Slide 29

Slide 29 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))

Slide 30

Slide 30 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))

Slide 31

Slide 31 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))

Slide 32

Slide 32 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))

Slide 33

Slide 33 text

REPLAY MEMORY BUFFER SUPERMARIO WITH R.L

eps_max = 1
eps_min = 0.1
eps_decay_steps = 200000

next_state, reward, done, info = env.step(action)

memory = deque([], maxlen=1000000)
memory.append((state, action, reward, next_state))
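A small self-contained sketch of the replay buffer described on these slides. The transition is stored as a single tuple (deque.append takes one object), and random.sample draws a training minibatch; the batch size of 32 and the done flag are assumptions not shown on the slides.

import random
from collections import deque

memory = deque([], maxlen=1000000)  # replay memory capacity from the slide

def store(state, action, reward, next_state, done):
    # keep the whole transition (St, At, Rt+1, St+1, done) as one tuple
    memory.append((state, action, reward, next_state, done))

def sample(batch_size=32):
    batch = random.sample(memory, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    return states, actions, rewards, next_states, dones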

Slide 34

Slide 34 text

MINIMIZE LOSS SUPERMARIO WITH R.L

Loss: (R_{t+1} + γ_{t+1} max_{a′} q_θ(S_{t+1}, a′) − q_θ(S_t, A_t))^2, computed from the stored transition (S_t, A_t, R_{t+1}, S_{t+1})

import tensorflow as tf
loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

Slide 35

Slide 35 text

MINIMIZE LOSS SUPERMARIO WITH R.L

Loss: (R_{t+1} + γ_{t+1} max_{a′} q_θ(S_{t+1}, a′) − q_θ(S_t, A_t))^2, computed from the stored transition (S_t, A_t, R_{t+1}, S_{t+1})

import tensorflow as tf
loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

Slide 36

Slide 36 text

MINIMIZE LOSS SUPERMARIO WITH R.L

Loss: (R_{t+1} + γ_{t+1} max_{a′} q_θ(S_{t+1}, a′) − q_θ(S_t, A_t))^2, computed from the stored transition (S_t, A_t, R_{t+1}, S_{t+1})

import tensorflow as tf
loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

Slide 37

Slide 37 text

MINIMIZE LOSS SUPERMARIO WITH R.L

Loss: (R_{t+1} + γ_{t+1} max_{a′} q_θ(S_{t+1}, a′) − q_θ(S_t, A_t))^2, computed from the stored transition (S_t, A_t, R_{t+1}, S_{t+1})

import tensorflow as tf
loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

Slide 38

Slide 38 text

MINIMIZE LOSS SUPERMARIO WITH R.L

Loss: (R_{t+1} + γ_{t+1} max_{a′} q_θ(S_{t+1}, a′) − q_θ(S_t, A_t))^2, computed from the stored transition (S_t, A_t, R_{t+1}, S_{t+1})

import tensorflow as tf
loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
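A runnable sketch of the loss above in the TensorFlow 1.x graph API. The tiny fully connected network, the placeholder shapes, n_actions = 7, and learning_rate = 0.00025 are illustrative assumptions; only the loss and optimizer lines correspond to the slide.

import tensorflow as tf  # TensorFlow 1.x

n_actions = 7              # size of SIMPLE_MOVEMENT (assumption)
learning_rate = 0.00025    # illustrative value, not from the slides

state_in = tf.placeholder(tf.float32, [None, 240 * 256 * 3])  # flattened frame (assumption)
action   = tf.placeholder(tf.int32,   [None])                 # A_t from the replay buffer
y        = tf.placeholder(tf.float32, [None])                 # target R_{t+1} + γ max_a' q(S_{t+1}, a')

# stand-in Q-network so the graph has trainable variables
hidden   = tf.layers.dense(state_in, 256, activation=tf.nn.relu)
q_values = tf.layers.dense(hidden, n_actions)

# Q(S_t, A_t): value of the action that was actually taken
Q_action = tf.reduce_sum(q_values * tf.one_hot(action, n_actions), axis=1)

loss = tf.reduce_mean(tf.square(y - Q_action))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)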

Slide 39

Slide 39 text

APPROXIMATE ACTION-VALUE SUPERMARIO WITH R.L
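The extracted text for this slide contains only the title, so the network itself is not recoverable. As a stand-in, here is a sketch of the standard DQN convolutional Q-network (Mnih et al., 2015) in the same TensorFlow 1.x style; the 84x84x4 stacked-frame input and every layer size are assumptions, not the author's architecture.

import tensorflow as tf

n_actions = 7  # size of SIMPLE_MOVEMENT

# assumed preprocessing: 4 stacked 84x84 grayscale frames
state_in = tf.placeholder(tf.float32, [None, 84, 84, 4])

conv1  = tf.layers.conv2d(state_in, filters=32, kernel_size=8, strides=4, activation=tf.nn.relu)
conv2  = tf.layers.conv2d(conv1,    filters=64, kernel_size=4, strides=2, activation=tf.nn.relu)
conv3  = tf.layers.conv2d(conv2,    filters=64, kernel_size=3, strides=1, activation=tf.nn.relu)
flat   = tf.layers.flatten(conv3)
hidden = tf.layers.dense(flat, 512, activation=tf.nn.relu)
q_values = tf.layers.dense(hidden, n_actions)  # one approximate action value per action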

Slide 40

Slide 40 text

DOUBLE DQN SUPERMARIO WITH R.L (diagram): input state s goes into the Q-Network, which outputs action values Q(s, a); action a is sent to the Env, which returns reward r and next state s′; the transition (St, At, Rt+1, St+1) is stored in the Replay memory.
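The slide shows the data flow; the update rule itself is the Double DQN target, in which the online network selects the best next action and the target network evaluates it. A small NumPy sketch (gamma = 0.99 and the function signature are assumptions):

import numpy as np

gamma = 0.99  # discount factor (assumption)

def double_dqn_target(reward, q_online_next, q_target_next, done):
    # online network picks the action, target network evaluates it
    best_action = np.argmax(q_online_next)
    bootstrap = 0.0 if done else gamma * q_target_next[best_action]
    return reward + bootstrap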

Slide 41

Slide 41 text

1000 EPISODES, 3000 EPISODES, TRAINING SUPERMARIO WITH R.L

Slide 42

Slide 42 text

5000 EPISODES SUPERMARIO WITH R.L: 5000 episodes, days of training

Slide 43

Slide 43 text

REINFORCEMENT LEARNING

Slide 44

Slide 44 text

How about making your own A.I. SuperMario?

Slide 45

Slide 45 text

Github: https://github.com/wonseokjung
Facebook: https://www.facebook.com/wsjung
Blog: https://wonseokjung.github.io
Thank you

Slide 46

Slide 46 text

Questions?

Slide 47

Slide 47 text

*Reference: terms and symbols REINFORCEMENT LEARNING
Time step: t
Action: a
Transition function: P(s′, r ∣ s, a)
Reward: r
Set of states: S
Set of actions: A
Start state: S0
Discount factor: γ
Set of rewards: R
Policy: π
State: s

Slide 48

Slide 48 text

REFERENCES 1. SuperMario environment 
 https://github.com/Kautenja/gym-super-mario-bros