Deep Learning Networks • Logic • Argumentation • Artificial Neural Networks • Agent Systems • Etc. • All with the purpose of reproducing “intelligence” • Human-like intelligence being the golden goal • Why not look at the biological level?
each other via synapses. • A synapse is where the firing neuron's axon passes an electrical charge to the receiving neuron's dendrites. • Firing Neuron – Presynaptic neuron • Receiving Neuron – Postsynaptic neuron • Synapses are not direct electrical connections.
in nature fall into two primary subclasses: • Excitatory Neurons • Firing increases activity of neighbouring neurons • Inhibitory Neurons • Firing decreases activity of neighbouring neurons • Interplay between these neuron types is responsible for many brain phenomena • Oscillations • Banding • Dynamic long-range correlations
membrane potential remains at resting potential. • Electrical current flows into the neuron via the dendrites. • When the membrane potential in the soma exceeds a threshold, a spike (action potential) is released. • The spike travels down the axon and through synapses to neighbouring neurons, exciting or inhibiting their activity. • After a spike, the firing neuron hyperpolarises; this prevents the spike from reverberating back to its source. • The neuron slowly returns to resting potential after the spike.
highly accurate but very expensive, so instead we use the Izhikevich model [Izhikevich, 2003]. • Membrane potential (v) and membrane recovery variable (u) modelled by:

dv/dt = 0.04v² + 5v + 140 − u + I
du/dt = a(bv − u)

• Neural spiking: if v ⩾ 30, then v ← c and u ← u + d
• Where I is the dendritic current and a, b, c, and d are the model parameters. • The Izhikevich neuron is computationally efficient and produces spiking activity similar to that of real neurons.
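The model above can be sketched in a few lines. This is a minimal single-neuron simulation using the regular-spiking parameters (a=0.02, b=0.2, c=−65, d=8) given in Izhikevich (2003); the constant input current I is an assumption for illustration, not a value from the slides.

```python
# Sketch of one Izhikevich neuron, forward-Euler integrated at 1 ms steps
# (two 0.5 ms half-steps for v, as in Izhikevich's reference code,
# improve numerical stability).
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
v, u = c, b * c                      # start at resting potential
I = 10.0                             # constant dendritic current (assumed)
spikes = []

for t in range(1000):                # simulate 1000 ms
    for _ in range(2):               # two 0.5 ms half-steps for v
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    if v >= 30:                      # spike: reset v, increment u
        spikes.append(t)
        v, u = c, u + d

print(f"{len(spikes)} spikes in 1 s")
```

With a sustained current the neuron fires regularly, then adapts slightly as u builds up, matching the qualitative behaviour described on the slide.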
these blue dots matter to us? • From these neural firings various encodings can be achieved: • Rate Encoding • Mean firing rate • Smoothed firing rate • Time Encoding • Timing between spikes • Grouping of spikes according to time periods • These encodings allow us to: • Better understand various neural phenomena • Synchronisation • Oscillations • Criticality • Apply these techniques on artificial platforms • Robotics
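The two encoding families above reduce to simple computations on spike times. A minimal sketch, using made-up spike times for a single neuron: rate encoding collapses the train to a mean firing rate, while time encoding keeps the inter-spike intervals.

```python
import numpy as np

# Hypothetical spike times (ms) for one neuron over a 250 ms window.
spike_times = np.array([12, 48, 90, 130, 178, 221])
window_ms = 250.0

# Rate encoding: mean firing rate over the window, in Hz.
mean_rate_hz = len(spike_times) / (window_ms / 1000.0)

# Time encoding: the intervals between successive spikes carry the signal.
isis = np.diff(spike_times)

print(mean_rate_hz, isis)  # 24.0 [36 42 40 48 43]
```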
know to what degree neurons fire together. • Are all neurons locked into the same firing patterns? • Are all neurons firing independently? • This tells us if the system is in an ordered state, a chaotic state, or something in between. • This may also tell us how different parts of the brain talk to each other (to follow).
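One standard way (not specified on the slides) to quantify this degree of co-firing is the Golomb–Rinzel synchrony measure χ: the variance of the population-averaged signal divided by the mean variance of the individual signals, so χ → 1 when all neurons are locked together and χ → 0 when they fire independently.

```python
import numpy as np

def synchrony(raster):
    """Golomb-Rinzel chi for a (neurons, time) activity array."""
    pop_mean = raster.mean(axis=0)                  # population-averaged signal
    return np.sqrt(pop_mean.var() / raster.var(axis=1).mean())

rng = np.random.default_rng(0)
signal = rng.random(200)
locked = np.tile(signal, (50, 1))       # 50 identical (fully locked) neurons
independent = rng.random((50, 200))     # 50 independent neurons

print(synchrony(locked), synchrony(independent))  # ~1.0 vs ~1/sqrt(50)
```

For independent neurons the population mean averages out, so χ shrinks roughly as 1/√N, which is why it discriminates ordered from disordered regimes.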
which are synchronised can communicate with each other [Fries, 2009]. • For example, take 3 neural populations connected together: • In neural systems communication is not locked: A could communicate with C at a later stage. We get a dynamic formation of coalitions over time. These coalitions are said to be in a metastable state [Shanahan, 2010].
what this phenomenon may mean (a very abstract idea): *Credit to David Bhowmik for this explanation (Figure: three cases – not internally or externally synchronised; internally synchronised but not externally; internally and externally synchronised)
spiking neural networks being used in designing intelligent agents. (Figures: iCub robot controlled by a spiking neuron system [Gamez et al., 2012]; autonomous game agent controlled by a spiking neuron system [Fountas et al., 2011])
interacting components • The Critical Point • Balance between order and disorder [Bak, Tang, & Wiesenfeld, 1988] • Build-up of activity leading to a critical event • Systems which demonstrate criticality: • Forest fires • Avalanches • Earthquakes • Neural populations?
(SOC)? • The sandpile model is SOC • The Ising model is NOT SOC • SOC systems reach critical behaviour without tuning a control parameter (the Ising model must be tuned to its critical temperature) • How to identify SOC? • Phase transition? • Power law? • Time spent around the critical point? [Tagliazucchi & Chialvo, 2012] • Overlaying avalanches [Beggs, 2012]? (Figure: phase transition)
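The sandpile model mentioned above can be sketched directly. This is a minimal Bak–Tang–Wiesenfeld sandpile: grains are dropped one at a time, any site holding ≥ 4 grains topples and passes one grain to each neighbour (grains fall off the edge), and the avalanche size is the number of topplings one grain triggers. No control parameter is tuned anywhere; the grid size and drop count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
grid = np.zeros((N, N), dtype=int)   # grain count at each site
sizes = []                           # avalanche size per dropped grain

for _ in range(5000):
    x, y = rng.integers(0, N, size=2)
    grid[x, y] += 1
    topples = 0
    while (grid >= 4).any():         # relax until no site is unstable
        for i, j in zip(*np.where(grid >= 4)):
            grid[i, j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni, nj] += 1    # edge grains are lost (dissipation)
    sizes.append(topples)

print(max(sizes))
```

After an initial build-up the pile hovers around the critical state on its own, producing many tiny avalanches and occasional huge ones: the signature heavy-tailed size distribution.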
• Taking a lesson from the sandpile model, we look for criticality in the brain. • From avalanches of sand to avalanches of neurons. • Definition of a neuronal avalanche [Beggs & Plenz, 2003]: • Divide neural firing into time frames of size Δt • A window is active if one or more neurons fire and is inactive otherwise • A neuronal avalanche is a series of active windows encapsulated by inactive windows Images from Beggs, J. M., & Plenz, D. (2003).
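The definition above translates directly into code. A minimal sketch: bin spikes into Δt windows, mark windows with at least one spike as active, and read off each run of active windows bounded by inactive ones (the spike counts below are made up).

```python
def avalanches(spike_counts):
    """spike_counts: spikes per time window. Returns (size, length) pairs."""
    out, size, length = [], 0, 0
    for n in spike_counts:
        if n > 0:                    # active window: extend current avalanche
            size, length = size + n, length + 1
        elif length:                 # inactive window closes the avalanche
            out.append((size, length))
            size, length = 0, 0
    if length:                       # avalanche still open at the end
        out.append((size, length))
    return out

counts = [0, 2, 1, 0, 0, 3, 0, 1, 1, 1, 0]
print(avalanches(counts))  # [(3, 2), (3, 1), (3, 3)]
```

Size here counts total spikes within the active windows and length counts the windows, matching the characterisation on the next slide.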
• Avalanches characterised by: • Size – number of neurons which fired within the active time frames • Length – number of active windows within the avalanche • Avalanche size follows a power-law(-like) distribution with exponent −1.5 • Avalanche length follows a power-law(-like) distribution with exponent −2 • Found in vitro in rat somatosensory cortex using an 8 × 8 multielectrode array [Beggs & Plenz, 2003] • Later found in vivo in rat somatosensory cortex using an 8 × 4 multielectrode array [Gireesh & Plenz, 2008] Images from Beggs, J. M., & Plenz, D. (2003). (Figure: avalanche size distribution)
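A quick way to check for a power-law-like distribution (a rough sketch, not the rigorous estimators used in the literature) is to histogram the sizes and fit a straight line in log–log space; the slope estimates the exponent. The sizes below are synthetic heavy-tailed samples, not real avalanche data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "avalanche sizes": Pareto tail with density ~ x^-1.5.
sizes = rng.pareto(0.5, 20000) + 1

# Histogram by integer size, then fit log(count) vs log(size).
values, counts = np.unique(np.round(sizes).astype(int), return_counts=True)
mask = counts > 5                    # drop noisy, sparsely sampled tail bins
slope, _ = np.polyfit(np.log(values[mask]), np.log(counts[mask]), 1)
print(f"estimated exponent: {slope:.2f}")
```

A naive log–log fit like this is biased by binning and tail noise, which is one reason the slides hedge with “power law (like)”.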
operate at a near-critical point are said to optimise network behaviour. • The criticality hypothesis states that near-critical operation is beneficial in the following areas [Beggs, 2008]: 1. Information processing 2. Information storage 3. Computational power 4. Stability Criticality: Network Optimisation