Slide 1

Slide 1 text

Multiple Timescale Continuous Time Recurrent Neural Networks
Leszek Rybicki

Slide 2

Slide 2 text

Contents
- What are the computations of the brain?
- Continuous Time Recurrent Neural Networks
- Experiment description
- Results

Slide 3

Slide 3 text

Doya, Kenji. "What are the computations of the cerebellum, the basal ganglia and the cerebral cortex?" Neural Networks 12.7 (1999): 961-974.
Cerebellum: supervised learning. Basal ganglia: reinforcement learning. Cerebral cortex: unsupervised learning.

Slide 4

Slide 4 text

Dynamics of spike firing (Hodgkin-Huxley model):

C du/dt = -g_Na m^3 h (u - E_Na) - g_K n^4 (u - E_K) - g_L (u - E_L) + I(t)   (1)

τ_n dn/dt = -[n - n_0(u)],  τ_m dm/dt = -[m - m_0(u)],  τ_h dh/dt = -[h - h_0(u)]

where E_Na, E_K and E_L are the corresponding equilibrium potentials, and the variables m, h and n describe the opening and closing of the voltage-dependent channels.

Fig. 7: Electrical model of a "spiking" neuron as defined by Hodgkin and Huxley. The model is able to produce realistic variations of the membrane potential and the dynamics of a spike firing, e.g. in response to an input current I(t) sent during a small time, at t < 0. Appropriately calibrated, the Hodgkin-Huxley model has been successfully compared to numerous data from biological experiments on the giant axon of the squid. More generally, it has been shown that the Hodgkin-Huxley neuron is able to model biophysically meaningful properties of the membrane potential, respecting the behaviour recordable from natural neurons: an abrupt, large increase at firing time, followed by a short period where the neuron is unable to spike again (the absolute refractory period).

Paugam-Moisy, H., Bohte, S.M.: "Computing with Spiking Neuron Networks." In: Kok, J., Heskes, T. (eds.) Handbook of Natural Computing. Springer, Heidelberg (2009)
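To make eq. (1) concrete, here is a minimal Python sketch that integrates the Hodgkin-Huxley equations with forward Euler. The gating dynamics use the equivalent alpha/beta rate form (with τ_x = 1/(α_x + β_x) and x_0(u) = α_x/(α_x + β_x)); the rate functions and constants are the standard squid-axon ones, and all names in the code are mine, not from the slides.

import numpy as np

# Standard squid-axon constants (units: mV, ms, uF/cm^2, mS/cm^2)
C = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(u): return 0.1 * (u + 40.0) / (1.0 - np.exp(-(u + 40.0) / 10.0))
def beta_m(u):  return 4.0 * np.exp(-(u + 65.0) / 18.0)
def alpha_h(u): return 0.07 * np.exp(-(u + 65.0) / 20.0)
def beta_h(u):  return 1.0 / (1.0 + np.exp(-(u + 35.0) / 10.0))
def alpha_n(u): return 0.01 * (u + 55.0) / (1.0 - np.exp(-(u + 55.0) / 10.0))
def beta_n(u):  return 0.125 * np.exp(-(u + 65.0) / 80.0)

def simulate(I, dt=0.01, T=50.0):
    """Forward-Euler integration of eq. (1); I(t) is the input current in uA/cm^2."""
    u = -65.0                   # resting membrane potential
    m, h, n = 0.05, 0.6, 0.32   # gating variables near their resting values
    trace = []
    for k in range(int(T / dt)):
        du = (-g_Na * m**3 * h * (u - E_Na)
              - g_K * n**4 * (u - E_K)
              - g_L * (u - E_L) + I(k * dt)) / C
        m += dt * (alpha_m(u) * (1.0 - m) - beta_m(u) * m)
        h += dt * (alpha_h(u) * (1.0 - h) - beta_h(u) * h)
        n += dt * (alpha_n(u) * (1.0 - n) - beta_n(u) * n)
        u += dt * du
        trace.append(u)
    return np.array(trace)

# A brief current pulse elicits a spike followed by a refractory period
v = simulate(lambda t: 10.0 if 5.0 <= t < 6.0 else 0.0)

Plotting v shows exactly the behaviour described above: an abrupt rise at firing time, then a short window in which a second pulse cannot trigger another spike.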

Slide 5

Slide 5 text

Feedforward architecture (diagram labels: input I, weights w, output y): input → output

Slide 6

Slide 6 text

Feedforward architecture: determinism (input → output). Recurrent architecture: chaos theory (trigger → sequence).

Slide 7

Slide 7 text

Continuous Time Neural Networks
Components: input, transfer function, synaptic weights, time constant, gain, bias.
Higher time constant: more stable. Lower time constant: faster reaction.
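These components combine in the standard CTRNN unit equation, τ_i du_i/dt = -u_i + Σ_j w_ij y_j + I_i, with output y_j = σ(gain · (u_j + bias)). Below is a minimal Euler-discretized sketch; the parameter names and values are mine, chosen only to show how the time constant sets the reaction speed.

import numpy as np

def ctrnn_step(u, W, I, tau, gain, bias, dt=0.1):
    """One Euler step of tau_i du_i/dt = -u_i + sum_j W_ij y_j + I_i,
    with transfer function y_j = sigmoid(gain * (u_j + bias))."""
    y = 1.0 / (1.0 + np.exp(-gain * (u + bias)))  # unit outputs
    du = (-u + W @ y + I) / tau                   # leaky integration, per-unit timescale
    return u + dt * du

# Three units with different time constants driven by the same constant input:
# the low-tau unit reacts fast, the high-tau unit changes slowly but stays stable.
rng = np.random.default_rng(0)
W = rng.normal(scale=1.0, size=(3, 3))            # synaptic weights
u = np.zeros(3)
tau = np.array([1.0, 10.0, 100.0])                # per-unit time constants
for _ in range(1000):
    u = ctrnn_step(u, W, I=np.array([0.5, 0.5, 0.5]), tau=tau, gain=1.0, bias=0.0)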

Slide 8

Slide 8 text

Error Backpropagation Through Time: because the activations of the inner context neurons depend on the network's past, the whole sequence has to be presented and trained in temporal order, with the error propagated backwards through the unrolled time steps.
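A minimal numpy sketch of BPTT for a plain tanh RNN shows why order matters: the forward pass must walk the sequence from start to end to build up the context states, and only then can the error flow backwards through the unrolled steps. The loss here is squared error between each hidden state and a target sequence, a simplification for brevity; all names are mine, and this is not the exact network from the slides.

import numpy as np

def bptt(W_x, W_h, xs, targets, h0):
    """BPTT for h_t = tanh(W_x @ x_t + W_h @ h_{t-1}),
    loss = sum_t 0.5 * ||h_t - target_t||^2.
    xs, targets: lists of vectors; returns gradients w.r.t. W_x and W_h."""
    hs = [h0]
    for x in xs:                                  # forward pass, in temporal order
        hs.append(np.tanh(W_x @ x + W_h @ hs[-1]))
    dW_x, dW_h = np.zeros_like(W_x), np.zeros_like(W_h)
    dh_next = np.zeros_like(h0)
    for t in reversed(range(len(xs))):            # backward pass through the unrolled net
        dh = (hs[t + 1] - targets[t]) + dh_next   # local error + error carried from the future
        dpre = dh * (1.0 - hs[t + 1] ** 2)        # through the tanh nonlinearity
        dW_x += np.outer(dpre, xs[t])
        dW_h += np.outer(dpre, hs[t])             # hs[t] is h_{t-1}
        dh_next = W_h.T @ dpre                    # error flowing to the previous time step
    return dW_x, dW_h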

Slide 9

Slide 9 text

Experiment Description

"Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment", Y. Yamashita, J. Tani, PLoS Computational Biology 4 (11), e1000220

Spatio-temporal patterns of behavior arise from the dynamics of neural activities through neural connectivity. The RNN is as such considered to emulate characteristic features of actual neural systems, and the current model is considered consistent at the level of the macro-level mechanisms of biological neural systems [24–26]. The network receives input from the current proprioceptive and visual sensory modalities, and generates forward predictions of task execution through real-time interactions between top-down prediction and bottom-up modulation processes. After an unpredictable switching of the object's position, prediction error temporarily increased, and this induced a change in the robot's intention state, resulting in the flexible switching of behavior in response to its environment. This modulation of intention through bottom-up processes can be seen as corresponding to recognition of a situation.

Figure 1: The behavioral task for the robot and system overview. (A) The task for the robot is to repeatedly produce two behaviors: (i) move the object up and down three times at position L, and (ii) move the object backward and forward three times. For each series of actions, the robot began from the home position and ended at the same home position. The robot repeatedly generated the same series of actions while the object remained at the same position; the object position was switched by an experimenter at unpredictable times. (B) System overview. doi:10.1371/journal.pone.0037843.g001

Slide 10

Slide 10 text

Learning experiment: the robot learns a series of actions (basic setup)

Slide 11

Slide 11 text

"Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment", Y. Yamashita, J. Tani, PLoS Computational Biology 4 (11), e1000220
Multiple Timescale Neural Network (a recurrent neural network): high τ (slow context units), low τ (fast context units)
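A sketch of the two-timescale idea: one pool of fast context units (low τ) and one pool of slow context units (high τ), both updated with the same leaky-integrator rule as the CTRNN above. This only illustrates the mechanism; the paper's actual connectivity, unit counts, and time constants differ, and every name and value here is mine.

import numpy as np

def mtrnn_step(u_fast, u_slow, W, x, dt=1.0, tau_fast=2.0, tau_slow=70.0):
    """One Euler step of a two-level multiple-timescale RNN. Fast units (low tau)
    track quick sensory-motor detail; slow units (high tau) carry the longer-range
    behavioral plan, producing the functional hierarchy the paper describes."""
    y = np.tanh(np.concatenate([u_fast, u_slow]))      # outputs of both levels
    pre = W @ np.concatenate([y, x])                   # recurrent + external input
    nf = len(u_fast)
    u_fast = u_fast + dt / tau_fast * (-u_fast + pre[:nf])
    u_slow = u_slow + dt / tau_slow * (-u_slow + pre[nf:])
    return u_fast, u_slow

# Tiny example: 10 fast units, 4 slow units, 3 external inputs
n_fast, n_slow, n_in = 10, 4, 3
W = np.random.default_rng(1).normal(scale=0.5,
                                    size=(n_fast + n_slow, n_fast + n_slow + n_in))
u_f, u_s = np.zeros(n_fast), np.zeros(n_slow)
for _ in range(100):
    u_f, u_s = mtrnn_step(u_f, u_s, W, x=np.ones(n_in))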

Slide 12

Slide 12 text

"Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment", Y. Yamashita, J. Tani, PLoS Computational Biology 4 (11), e1000220
Training Method

Slide 13

Slide 13 text

Case: the behavior is switched

Slide 14

Slide 14 text

Case: damage is inflicted

Slide 15

Slide 15 text

That's all.