Critical Spiking Neural Networks

Imperial ACM
November 08, 2013

Transcript

  1. Critical Spiking Neural Networks By Filipe Peliz Pinto Teixeira &

    Murray Shanahan {fp10, m.shanahan}@imperial.ac.uk Department of Computing, Imperial College London
  2. Outline 1. Background (1. Artificial Intelligence 2. Artificial Neural Networks 3. Neurons in Nature)

    2. Spiking Neuron Models 3. Applications (1. Synchronisation 2. Oscillations 3. Metastability 4. Agent System 5. Criticality) 4. Summary
  3. Background: Artificial Intelligence • Artificial intelligence • Bayesian Networks •

    Deep Learning Networks • Logic • Argumentation • Artificial Neural Networks • Agent Systems • Etc. • All with the purpose of reproducing “intelligence” • Human-like intelligence being the golden goal • Why not look at the biological level?
  4. Background: Artificial Neuron • Simple weighted computation • Learn weights,

    perform simple computation, predict output.
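The weighted computation on this slide can be sketched in a few lines of Python; the weights, bias, and threshold below are illustrative, not values from the talk:

```python
# A minimal artificial neuron: weighted sum of inputs passed through a
# step activation. All numeric values here are illustrative assumptions.

def artificial_neuron(inputs, weights, bias=0.0, threshold=0.0):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > threshold else 0

# Hand-picked weights make the unit behave like a logical AND.
print(artificial_neuron([1, 1], [0.6, 0.6], bias=-1.0))  # 1
print(artificial_neuron([1, 0], [0.6, 0.6], bias=-1.0))  # 0
```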
  5. Background: Artificial Neural Network • Feed forward architecture • Trained

    with sample set of data • Errors calculated • Weights adjusted via backward propagation • Network adjusts for more accurate predictions • Simple, yet biologically inaccurate
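The predict/measure-error/adjust-weights loop described above can be reduced to its simplest case, a single linear unit trained by gradient descent (all values below are illustrative, not from the talk):

```python
# One training step for a single linear unit: predict, compute the error,
# and nudge each weight against the error. Learning rate is an assumption.

def train_step(weights, inputs, target, lr=0.1):
    """Return weights adjusted one gradient step toward the target."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = prediction - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 2.0], target=5.0)
# weights now predict ~5.0 for the input [1.0, 2.0]
```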
  6. Background: Neurons in Nature - Design • Dendrites receive electrical

    charges from other neurons or external stimulation • Soma holds electrical charge or membrane potential • Axon carries electrical “spike” when charge in soma exceeds a threshold
  7. Background: Neurons in Nature - Connectivity • Neurons connect to

    each other via synapses. • A synapse is where the firing neuron's axon passes an electrical charge to the receiving neuron's dendrites. • Firing Neuron – Presynaptic neuron • Receiving Neuron – Postsynaptic neuron • Synapses are not direct electrical connections.
  8. Background: Neurons in Nature - Excitation Vs Inhibition • Neurons

    in nature fall into two primary subclasses: • Excitatory Neurons • Firing increases activity of neighbouring neurons • Inhibitory Neurons • Firing decreases activity of neighbouring neurons • Interplay between these neurons is responsible for many brain phenomena • Oscillations • Banding • Dynamic long range correlations
  9. Background: Neurons in Nature - Behaviour • With no stimulation

    membrane potential remains at resting potential. • Electrical current flows into neuron via dendrites. • When membrane potential in soma exceeds threshold a spike (action potential) is released. • Spike travels down axon and through synapses to neighbouring neurons promoting or demoting neighbouring neuron activity. • After spike, firing neuron hyperpolarises. This prevents spike from reverberating back to source. • Slowly returns to resting potential after spike.
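The behaviour on this slide (resting potential, integration of input, threshold, spike, hyperpolarisation) can be sketched with a leaky integrate-and-fire model. The slide names no particular model, and every parameter value here is an illustrative assumption:

```python
# Leaky integrate-and-fire sketch: the membrane potential leaks toward
# rest, input current pushes it up, and crossing the threshold produces
# a spike followed by a reset below rest (hyperpolarisation).

def simulate_lif(current, steps=200, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, tau=20.0):
    """Integrate the membrane potential; return spike times (in steps)."""
    v = v_rest
    spikes = []
    for t in range(steps):
        v += (-(v - v_rest) + current) / tau   # leak toward rest + input
        if v >= v_thresh:                      # threshold crossed: spike
            spikes.append(t)
            v = v_reset                        # hyperpolarise below rest
    return spikes

print(simulate_lif(20.0))  # regular spiking
print(simulate_lif(0.0))   # []: with no stimulation, v stays at rest
```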
  10. Background: From Artificial to Spiking Neurons • By understanding the electrical

    flow of neurons we can model their behaviour • The opening and closing of the sodium and potassium channels, together with a leakage current, results in the “spiky” behaviour of neurons
  11. Spiking Models: The Golden Model • This can be accurately

    modelled by the Hodgkin-Huxley model [Hodgkin & Huxley, 1952], defined:
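The equation images for this slide were not transcribed. For reference, the standard Hodgkin-Huxley formulation [Hodgkin & Huxley, 1952] couples a membrane current equation to first-order kinetics for the gating variables m, h, and n:

```latex
C_m \frac{dV}{dt} = I - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}})
                      - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}})
                      - \bar{g}_{\mathrm{L}} (V - E_{\mathrm{L}})

\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
\qquad x \in \{m, h, n\}
```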
  13. Spiking Models: The Popular Model • The Hodgkin-Huxley model is

    highly accurate but very expensive, so instead we use the Izhikevich model [Izhikevich, 2003]. • Membrane potential (v) and membrane recovery (u) are modelled by: dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u) • Neural spiking: if v ⩾ 30, then v ← c and u ← u + d • Where I is the dendrite current and a, b, c, and d are the model parameters. • The Izhikevich neuron is computationally efficient and produces neural spiking activity similar to that of real neurons.
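The update rule on this slide can be simulated directly. A minimal Python sketch, using the two 0.5 ms half-steps for v from Izhikevich's published reference code; the regular-spiking parameters (a=0.02, b=0.2, c=−65, d=8) and the input current are assumptions, not values from the talk:

```python
# Euler simulation of the Izhikevich neuron from the slide:
#   dv/dt = 0.04v^2 + 5v + 140 - u + I,  du/dt = a(bv - u),
#   with reset v <- c, u <- u + d when v >= 30.

def izhikevich(I, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one neuron for `steps` ms; return spike times in ms."""
    v, u = c, b * c          # start at the reset potential
    spikes = []
    for t in range(steps):
        if v >= 30.0:        # spike: reset v, bump the recovery variable
            spikes.append(t)
            v, u = c, u + d
        # two 0.5 ms half-steps for v improve numerical stability
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
    return spikes

print(len(izhikevich(I=10.0)))  # steady stream of spikes
print(len(izhikevich(I=0.0)))   # 0: no input current, no spikes
```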
  14. Spiking Models: Izhikevich Model Behaviour - Excitation

  15. Spiking Models: Izhikevich Model Behaviour - Inhibition

  16. Spiking Models: Izhikevich Model Behaviour - Bursting

  17. Spiking Models: Connect Them All Together . . . •

    Uses recurrent connectivity, not feed forward.
  18. Spiking Models: . . . and you get:

  19. Applications: Bridging Neuroscience and Artificial Intelligence • So why do

    these blue dots matter to us? • From these neural firings various encodings can be achieved: • Rate Encoding • Mean Firing Rate • Smoothed firing rate • Time Encoding • Timing between spikes • Grouping of spikes according to time periods • These encodings allow us to: • Better understand various neural phenomena • Synchronization • Oscillations • Criticality • Apply these techniques on artificial platforms • Robotics
  20. Applications: Neural Phenomena - Synchronization • This allows us to

    know to what degree neurons fire together. • Are all neurons locked into the same firing patterns? • Are all neurons firing independently? • This tells us if the system is in an ordered state, a chaotic state, or something in between. • This may also tell us how different parts of the brain talk to each other (to follow).
  21. Applications: Neural Phenomena - Synchronization • Single neural population Note:

    Synchronization is not clean for neural populations
  22. Applications: Neural Phenomena - Oscillations • Two neural populations connected

    together
  23. Applications: Neural Phenomena – Communication Through Coherence • Neural populations

    which are synchronised can communicate with each other [Fries, 2009]. • For example, take 3 neural populations connected together: • In neural systems communication is not locked. A could communicate with C at a later stage. We get a dynamic formation of coalitions throughout time. These coalitions are said to be in a metastable state [Shanahan, (2010)].
  24. Applications: Neural Phenomena – Metastability • A possible scenario of

    what this phenomenon may mean (a very abstract idea). *Credit to David Bhowmik for this explanation. • Figure panels: not internally or externally synchronised; internally synchronised but not externally; internally and externally synchronised.
  25. Applications: Agent and Robotic Systems • We are now seeing

    spiking neural networks being used in designing intelligent agents. iCub Robot controlled by spiking neuron system [Gamez, et al., 2012] Autonomous agent in game controlled by spiking neuron system [Fountas, et al., 2011]
  26. Criticality: Definition • Observed in systems made up of many

    interacting components • The Critical Point • Balance between order and disorder [Bak, P., Tang, C., & Wiesenfeld, K. (1988)] • Build-up of activity leading to a critical event • Systems which demonstrate criticality: • Forest Fires • Avalanches • Earthquakes • Neural Populations?
  27. Criticality: Identifying Criticality Images from Beggs, J. M., & Timme,

    N. (2012). • Two dimensional Ising Model demonstrates criticality [Cipra, B. A. (1987)]
  28. Criticality: Identifying Criticality (2) • Figure panels: Dynamic Correlation over Time; Phase

    Transition; Correlation over Distance. Images from Beggs, J. M., & Timme, N. (2012).
  29. Criticality: Identifying Criticality (3) • Sand pile model [Bak, P.,

    Tang, C., & Wiesenfeld, K. (1988)] Images from Bak, P. (1988). Avalanche Size Distribution
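The sandpile model cited above is easy to simulate: grains are dropped onto a grid, and any cell holding four or more grains topples, giving one grain to each neighbour (grains fall off the edge). The avalanche size is the number of topples triggered by one drop. A minimal sketch, with grid size and grain count chosen arbitrarily:

```python
# Bak-Tang-Wiesenfeld sandpile: drop a grain, relax all unstable cells,
# and record the avalanche size (number of toppling events).

import random

def drop_grain(grid, x, y):
    """Add one grain at (x, y), relax the grid, return the avalanche size."""
    grid[x][y] += 1
    topples = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:           # may already have been relaxed
            continue
        grid[i][j] -= 4
        topples += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < len(grid) and 0 <= nj < len(grid[0]):
                grid[ni][nj] += 1    # grains off the edge are lost
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topples

random.seed(0)
grid = [[0] * 10 for _ in range(10)]
sizes = [drop_grain(grid, random.randrange(10), random.randrange(10))
         for _ in range(5000)]
```

After enough drops the pile self-organises: most drops cause nothing, while occasional drops trigger large avalanches, giving the heavy-tailed size distribution shown on the slide.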
  30. Criticality: Self Organised Criticality • What is self organized criticality

    (SOC) ? • Sandpile model is SOC • Ising Model is NOT SOC • Critical behaviour regardless of control parameter • How to Identify SOC ? • Phase Transition ? • Power law ? • Time spent around critical point ? [Tagliazucchi, E., & Chialvo, D. R. (2012)]. • Overlaying avalanches [Beggs, J. M.(2012)] ? Phase Transition
  31. Criticality: The Search For The Critical Brain – Neural Avalanches

    • Taking a lesson from the sand pile model we look for criticality in the brain. • From avalanches of sand to avalanches of neurons. • Definition of Neuronal Avalanche [Beggs, J. M., & Plenz, D. (2003)]. • Divide neural firing into time frames of size Δt • Window is active if one or more neurons fire and is inactive otherwise • A neural avalanche is a series of active windows encapsulated by inactive windows Images from Beggs, J. M., & Plenz, D. (2003).
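The avalanche definition above translates directly to code: bin the spike times into windows of size Δt, then take each maximal run of active windows (bounded by inactive ones) as one avalanche:

```python
# Neural avalanches per the Beggs & Plenz definition: a window is active
# if one or more spikes fall in it; an avalanche is a maximal run of
# active windows.

def avalanches(spike_times, dt, t_end):
    """Return a list of (size, length): spikes and windows per avalanche."""
    n_bins = int(t_end / dt)
    counts = [0] * n_bins
    for t in spike_times:
        if t < t_end:
            counts[int(t / dt)] += 1
    result, size, length = [], 0, 0
    for c in counts:
        if c > 0:                    # active window: extend the avalanche
            size += c
            length += 1
        elif length:                 # inactive window closes an avalanche
            result.append((size, length))
            size, length = 0, 0
    if length:                       # flush an avalanche ending at t_end
        result.append((size, length))
    return result

# Spikes in windows 1-2 and window 5 (dt = 1 ms): two avalanches.
print(avalanches([1.2, 1.7, 2.3, 5.1], dt=1.0, t_end=8.0))
# -> [(3, 2), (1, 1)]
```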
  32. Criticality: The Search For The Critical Brain – Neural Avalanches

    (2) • Avalanches characterised by: • Size – Number of neurons which fired within active time frames • Length – Number of active windows within the avalanche • Avalanche size follows a power-law (like) distribution with exponent −1.5 • Avalanche length follows a power-law (like) distribution with exponent −2 • Found in vitro in rat somatosensory cortex using an 8 × 8 multielectrode array [Beggs, J. M., & Plenz, D. (2003)] • Later found in vivo in rat somatosensory cortex using an 8 × 4 multielectrode array [Gireesh, E. D., & Plenz, D. (2008)] • Figure: Avalanche Size Distribution. Images from Beggs, J. M., & Plenz, D. (2003).
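A rough way to check the power-law(-like) claim above is to fit a line to the log-log histogram of avalanche sizes; the slope estimates the exponent. This is only a sketch (maximum-likelihood estimators are more robust in practice), and the synthetic sizes below are constructed for illustration, not data from the talk:

```python
# Estimate a power-law exponent as the least-squares slope of
# log(count) versus log(size) over the avalanche-size histogram.

import math

def powerlaw_slope(sizes):
    """Return the slope of the log-log size histogram."""
    hist = {}
    for s in sizes:
        hist[s] = hist.get(s, 0) + 1
    xs = [math.log(s) for s in sorted(hist)]
    ys = [math.log(hist[s]) for s in sorted(hist)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic sizes whose counts fall off roughly as size^-1.5.
sizes = [1] * 1000 + [2] * 354 + [4] * 125 + [8] * 44
print(powerlaw_slope(sizes))  # close to -1.5
```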
  33. Criticality: Network Optimisation • A brain self-organizing via neuronal avalanche activity to

    operate at a near-critical point is said to optimise network behaviour. • The criticality hypothesis states that operating near the critical point optimises the following [Beggs, J. M. (2008)]: 1. Information Processing 2. Information Storage 3. Computational Power 4. Stability
  34. Criticality: Firings Post Learning

  35. Criticality: Power Law Markers • Both networks exhibit power-law-like

    behaviour, indicating an approach to criticality:
  36. Criticality: Criticality Vs Network Structure Watts-Strogatz Klemm-Eguíluz

  37. Criticality: Varying Topology – Sub-Critical Regime • Different network

    structures
  38. Criticality: Varying Topology – Super-Critical Regime

  39. Criticality: Varying Topology – Critical Regime

  40. Criticality: Varying Topology – Critical Regime • 21/500 Failures •

    54/500 Failures
  41. Conclusions on Criticality: • Summary: • Modelling Spiking Neurons •

    Synchronization, Oscillations, and Metastability • Agent Applications • Criticality • Questions?
  42. Acknowledgements • PhD Supervisor: Murray Shanahan • Andreas K. Fidjeland

    for NeMo – The simulation framework • Funded by Commonwealth Scholarship Thank you for your critical attention