Multiple Time Scale Continuous Recurrent Neural Networks

Leszek Rybicki

May 23, 2015

Transcript

  1. Contents: What are the computations of the brain? / Continuous Time
     Recurrent Neural Networks / Experiment description / Results
  2. Doya, Kenji. "What are the computations of the cerebellum, the
     basal ganglia and the cerebral cortex?" Neural Networks 12.7 (1999):
     961-974. Cerebellum: supervised learning. Basal ganglia: reinforcement
     learning. Cerebral cortex: unsupervised learning.
  3. Dynamics of spike firing. E_Na, E_K and E_L are the corresponding
     equilibrium potentials. The variables m, h and n describe the opening
     and closing of the voltage-dependent channels.

     C \frac{du}{dt} = -g_{Na} m^3 h (u - E_{Na}) - g_K n^4 (u - E_K) - g_L (u - E_L) + I(t)   (1)

     \tau_n \frac{dn}{dt} = -[n - n_0(u)], \quad
     \tau_m \frac{dm}{dt} = -[m - m_0(u)], \quad
     \tau_h \frac{dh}{dt} = -[h - h_0(u)]

     Fig. 7: Electrical model of a "spiking" neuron as defined by Hodgkin
     and Huxley. The model is able to produce realistic variations of the
     membrane potential and the dynamics of a spike firing, e.g. in response
     to an input current I(t) sent during a small time, at t < 0.
     Appropriately calibrated, the Hodgkin-Huxley model has been successfully
     compared to numerous data from biological experiments on the giant axon
     of the squid. More generally, it has been shown that the Hodgkin-Huxley
     neuron is able to model biophysically meaningful properties of the
     membrane potential, respecting the behaviour recordable from natural
     neurons: an abrupt, large increase at firing time, followed by a short
     period where the neuron is unable to spike again, the absolute
     refractory period. Paugam-Moisy, H., Bohte, S.M.: "Computing with
     Spiking Neuron Networks." In: Kok, J., Heskes, T. (eds.) Handbook of
     Natural Computing. Springer, Heidelberg (2009)
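The equations on the slide can be integrated numerically. Below is a minimal forward-Euler sketch of the Hodgkin-Huxley model; it uses the equivalent alpha/beta parameterization of the gating dynamics (related to the slide's form by tau_x = 1/(alpha_x + beta_x), x_0 = alpha_x / (alpha_x + beta_x)), and the parameter values are the standard squid-axon constants, not taken from the slide.

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon constants (assumed, not from the slide)
C = 1.0                                # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # equilibrium potentials (mV)

# Voltage-dependent channel rates (alpha/beta form of the gating equations)
def alpha_n(u): return 0.01 * (u + 55.0) / (1.0 - np.exp(-(u + 55.0) / 10.0))
def beta_n(u):  return 0.125 * np.exp(-(u + 65.0) / 80.0)
def alpha_m(u): return 0.1 * (u + 40.0) / (1.0 - np.exp(-(u + 40.0) / 10.0))
def beta_m(u):  return 4.0 * np.exp(-(u + 65.0) / 18.0)
def alpha_h(u): return 0.07 * np.exp(-(u + 65.0) / 20.0)
def beta_h(u):  return 1.0 / (1.0 + np.exp(-(u + 35.0) / 10.0))

def simulate(I=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of equation (1) and the gating dynamics."""
    u, n, m, h = -65.0, 0.317, 0.052, 0.596   # approximate resting values
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (u - E_Na)
                 + g_K * n**4 * (u - E_K)
                 + g_L * (u - E_L))
        u += dt * (-I_ion + I) / C
        n += dt * (alpha_n(u) * (1 - n) - beta_n(u) * n)
        m += dt * (alpha_m(u) * (1 - m) - beta_m(u) * m)
        h += dt * (alpha_h(u) * (1 - h) - beta_h(u) * h)
        trace.append(u)
    return np.array(trace)

spikes = simulate()
print(spikes.max())  # the membrane potential overshoots 0 mV during a spike
```

With a sustained input current the trace shows the abrupt rise at firing time and the refractory dip described above.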
  4. Continuous Time Neural Networks. Parameters: input, transfer function,
     synaptic weights, time constant, gain, bias. Higher time constant =
     more stable; lower time constant = faster reaction.
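The parameters listed on the slide fit one common CTRNN formulation; the sketch below uses it, with the caveat that the exact placement of gain and bias varies between papers, and the weights and time constants here are illustrative.

```python
import numpy as np

def ctrnn_step(u, W, tau, I, gain=1.0, bias=0.0, dt=0.1):
    """One forward-Euler step of a CTRNN (one common formulation, assumed):
       tau_i du_i/dt = -u_i + sum_j W_ij * sigma(gain * (u_j + bias)) + I_i
    """
    y = 1.0 / (1.0 + np.exp(-gain * (u + bias)))  # sigmoid transfer function
    du = (-u + W @ y + I) / tau                   # time constant scales the update
    return u + dt * du

rng = np.random.default_rng(0)
n = 5
u = np.zeros(n)                               # internal states
W = rng.normal(scale=0.5, size=(n, n))        # synaptic weights (illustrative)
tau = np.array([1.0, 1.0, 1.0, 10.0, 10.0])   # low tau reacts fast, high tau is stable
for _ in range(100):
    u = ctrnn_step(u, W, tau, I=np.ones(n))
```

The division by tau makes the slide's point concrete: a unit with tau = 10 moves a tenth as far per step as a unit with tau = 1, so it integrates over a longer history and is more stable.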
  5. Error Backpropagation Through Time. Because the inner context neurons
     change depending on the past, the whole sequence has to be trained in
     order.
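The order dependence can be seen in a minimal backpropagation-through-time example (a sketch for a one-unit linear RNN, not the network on the slides): the forward pass must run t = 1..T because each state depends on the past, and gradients are then accumulated in the reverse order t = T..1.

```python
def bptt(w, xs, targets):
    """Gradient of L = sum_t (h_t - target_t)^2 w.r.t. the recurrent weight w,
    for the scalar RNN h_t = w * h_{t-1} + x_t with h_0 = 0."""
    # forward pass, in sequence order
    hs = [0.0]
    for x in xs:
        hs.append(w * hs[-1] + x)
    # backward pass, in reverse order
    grad_w, grad_h = 0.0, 0.0
    for t in range(len(xs), 0, -1):
        grad_h += 2.0 * (hs[t] - targets[t - 1])  # dL/dh_t from the loss at step t
        grad_w += grad_h * hs[t - 1]              # direct use of w in h_t = w*h_{t-1} + x_t
        grad_h *= w                               # propagate gradient back to h_{t-1}
    return grad_w

xs = [1.0, 0.5, -0.2]
print(bptt(0.9, xs, targets=[1.0, 1.0, 1.0]))  # → roughly 1.076
```

Because `grad_h` carries error from all later time steps back through the recurrence, the steps cannot be processed independently, which is the point the slide makes.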
  6. [22–25]. Spatio-temporal patterns of behavior arise from dynamics of
     neural activities through neural connectivity. The RNN is as such
     considered to emulate characteristic features of actual neural systems,
     and the current model is considered consistent at the level of the
     macro-level mechanisms of biological neural systems [24–26]. The network
     receives input from current proprioception and vision sensory
     modalities, and generates forward predictions of task execution through
     real-time interactions between top-down prediction and bottom-up
     modulation processes. During unpredictable switching of the object's
     position, prediction error was temporarily increased and this induced
     modulation of the robot's intention state, resulting in the flexible
     switching of behavior in response to its environment. The modulation of
     intention through bottom-up modulation can be considered as
     corresponding to recognition of a situation.

     Figure 1. The behavioral task for the robot and system overview.
     (A) The task for the robot is to repeatedly produce two behaviors:
     (i) move the object up and down three times at the position L, and
     (ii) move the object backward and forward three times. For each series
     of actions, the robot began from the home position and ended at the
     same home position. The robot repeatedly generated the series of
     actions unless the object was located at the same position. The object
     position was switched by an experimenter at unpredictable times.
     (B) System overview. doi:10.1371/journal.pone.0037843.g001

     "Emergence of functional hierarchy in a multiple timescale neural
     network model: a humanoid robot experiment", Y. Yamashita, J. Tani,
     PLoS Computational Biology 4 (11), e1000220. Experiment Description
  7. Multiple Timescale Neural Network: recurrently connected layers with
     high τ and low τ. "Emergence of functional hierarchy in a multiple
     timescale neural network model: a humanoid robot experiment",
     Y. Yamashita, J. Tani, PLoS Computational Biology 4 (11), e1000220.
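The multiple-timescale idea can be sketched by giving groups of context units different time constants within the same CTRNN-style update, as in Yamashita and Tani's model; the layer sizes, τ values, and tanh transfer function below are illustrative choices, not the paper's.

```python
import numpy as np

def mtrnn_step(u, W, tau, dt=0.1):
    """One Euler step of tau_i du_i/dt = -u_i + sum_j W_ij * tanh(u_j)."""
    y = np.tanh(u)
    return u + dt * (-u + W @ y) / tau

rng = np.random.default_rng(1)
n_fast, n_slow = 8, 4                       # illustrative layer sizes
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),     # fast context: low tau
                      np.full(n_slow, 50.0)])   # slow context: high tau
W = rng.normal(scale=2.0 / np.sqrt(n), size=(n, n))  # full recurrent coupling

u = rng.normal(size=n)
deltas = []
for _ in range(200):
    u_next = mtrnn_step(u, W, tau)
    deltas.append(np.abs(u_next - u))
    u = u_next

mean_change = np.mean(deltas, axis=0)
# per-step change scales as dt/tau, so the slow units move far less per step
print(mean_change[:n_fast].mean(), mean_change[n_fast:].mean())
```

The fast (low-τ) units track rapid input changes while the slow (high-τ) units drift gradually, which is the mechanism behind the functional hierarchy the paper reports.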
  8. Training Method. "Emergence of functional hierarchy in a multiple
     timescale neural network model: a humanoid robot experiment",
     Y. Yamashita, J. Tani, PLoS Computational Biology 4 (11), e1000220.