Neural Computing

Examination of the human central nervous system inspired the concept of artificial neural networks. In an artificial neural network, simple artificial nodes, known as "neurons", "neurodes", "processing elements" or "units", are connected to form a network that mimics a biological neural network.

Jehoshaphat Abu

June 27, 2016
Transcript

  1. Outline • What are Neural Networks? • Biological Neural Networks

    • What do neural systems look like? • ANN – The basics • Feed-forward nets • Training • Example – Voice recognition • Applications – Feed-forward nets • Recurrency • Elman nets • Demo – Drum Machine • Conclusion
  2. What are Neural Networks? • Models of the brain and

    nervous system • Highly parallel – Process information much more like the brain than a serial computer • Learning • Very simple principles • Very complex behaviours • Applications – As powerful problem solvers – As biological models
  3. What do neural systems look like? • Neuron: the fundamental signalling/computational

    units • Synapses: the connections between neurons • Layer: neurons are organized into layers • Extremely complex: around 10^11 neurons in the brain, each with ~10^4 connections
  4. Neural Network Techniques • Computers have to be explicitly programmed

    – Analyze the problem to be solved. – Write the code in a programming language. • Neural networks learn from examples – No requirement for an explicit description of the problem. – No need for a programmer. – The neural computer adapts itself during a training period, based on examples of similar problems, even without a desired solution for each problem. After sufficient training the neural computer can relate problem data to solutions (inputs to outputs) and can then offer a viable solution to a brand-new problem. – Able to generalize and to handle incomplete data.
  5. Biological Neural Nets • Pigeons as art experts (Watanabe et

    al. 1995) – Experiment: • Pigeon in Skinner box • Present paintings of two different artists (e.g. Chagall / Van Gogh) • Reward for pecking when presented a particular artist (e.g. Van Gogh)
  6. • Pigeons were able to discriminate between Van Gogh and

    Chagall with 95% accuracy (when presented with pictures they had been trained on) • Discrimination still 85% successful for previously unseen paintings by the artists • Pigeons do not simply memorise the pictures • They can extract and recognise patterns (the ‘style’) • They generalise from the already seen to make predictions • This is what neural networks (biological and artificial) are good at (unlike conventional computers)
  7. ANNs – The basics • ANNs incorporate the two fundamental

    components of biological neural nets: 1. Neurones (nodes) 2. Synapses (weights)
  8. Three main classes of interconnections (e.g. visual system): − Feedforward

    connections bring input to a given region from another region located at an earlier stage along a particular processing pathway − Recurrent synapses interconnect neurons within a particular region that are considered to be at the same stage along the processing pathway − Top-down connections carry signals back from areas located at later stages.
  9. Feed-forward nets • Information flow is unidirectional • Data is

    presented to the input layer • Passed on to the hidden layer • Passed on to the output layer • Information is distributed • Information processing is parallel • The hidden layer forms an internal representation (interpretation) of the data
  10. • Feeding data through the net: (1 × 0.25) + (0.5 × (−1.5)) = 0.25 − 0.75 = −0.5 • Squashing the result with the sigmoid function: 1 / (1 + e^0.5) = 0.3775
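A minimal sketch of that forward step in Python (the input values and weights are the slide's example; the sigmoid is the "squashing" function):

```python
import math

def sigmoid(x):
    # Logistic "squashing" function: maps any weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

inputs = [1.0, 0.5]      # activations arriving from the previous layer
weights = [0.25, -1.5]   # connection weights from the slide

# Weighted sum: (1 × 0.25) + (0.5 × −1.5) = −0.5
net = sum(i * w for i, w in zip(inputs, weights))

print(net)                      # -0.5
print(round(sigmoid(net), 4))   # 0.3775
```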
  11. • Data is presented to the network in the form

    of activations in the input layer • Examples – Pixel intensity (for pictures) – Molecule concentrations (for artificial nose) – Share prices (for stock market prediction) • Data usually requires preprocessing – Analogous to senses in biology • How to represent more abstract data, e.g. a name? – Choose a pattern, e.g. • 0-0-1 for “Biodun” • 0-1-0 for “Bimbola”
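As a sketch, the slide's name-to-pattern mapping could be held in a simple lookup table (the binary patterns are the arbitrary ones shown on the slide):

```python
# Hypothetical encoding table: each abstract item (here, a name) maps to
# a fixed binary activation pattern for the input layer.
name_patterns = {
    "Biodun":  [0, 0, 1],
    "Bimbola": [0, 1, 0],
}

input_activations = name_patterns["Biodun"]   # [0, 0, 1]
```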
  12. Training the Network - Learning • Backpropagation – Requires training

    set (input / output pairs) – Starts with small random weights – Error is used to adjust weights (supervised learning) → Gradient descent on the error landscape
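A minimal sketch of the idea, assuming a single linear neuron with one weight (a stand-in for full backpropagation, not the presenter's code):

```python
# Gradient descent on the error landscape for one weight.
def train_step(w, x, target, learning_rate=0.1):
    output = w * x                        # forward pass
    error = output - target               # prediction error
    gradient = error * x                  # d(error^2 / 2) / dw
    return w - learning_rate * gradient   # step downhill

w = 0.05                                  # small random starting weight
for _ in range(100):
    w = train_step(w, x=2.0, target=1.0)
print(round(w, 3))                        # approaches 0.5, since 0.5 × 2.0 = 1.0
```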
  13. • Advantages – It works! – Relatively fast • Downsides

    – Requires a training set – Can be slow – Probably not biologically realistic • Alternatives to Backpropagation – Hebbian learning • Not successful in feed-forward nets – Reinforcement learning • Only limited success – Artificial evolution • More general, but can be even slower than backprop
  14. Example: Voice Recognition • Task: Learn to discriminate between two

    different voices saying “Hello” • Data – Sources • Steve Simpson • David Raubenheimer – Format • Frequency distribution (60 bins) • Analogy: cochlea
  15. • Network architecture – Feed forward network • 60 input

    (one for each frequency bin) • 6 hidden • 2 output (0-1 for “Steve”, 1-0 for “David”)
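A sketch of that architecture, assuming NumPy, sigmoid units, and no bias terms (the layer sizes are the slide's; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 60 inputs (frequency bins) -> 6 hidden -> 2 outputs, small random weights.
w_hidden = rng.normal(scale=0.1, size=(60, 6))
w_output = rng.normal(scale=0.1, size=(6, 2))

def forward(freq_bins):
    hidden = sigmoid(freq_bins @ w_hidden)
    return sigmoid(hidden @ w_output)   # ~(0, 1) for "Steve", ~(1, 0) for "David"

outputs = forward(rng.random(60))       # a made-up frequency distribution
```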
  16. • Calculate error (|output − target| at each output node) – “Steve” example (target 0-1): |0.43 − 0| = 0.43, |0.26 − 1| = 0.74 – “David” example (target 1-0): |0.73 − 1| = 0.27, |0.55 − 0| = 0.55
  17. • Backprop error and adjust weights – Total error for the “Steve” example: 0.43 + 0.74 = 1.17 – Total error for the “David” example: 0.27 + 0.55 = 0.82
  18. • Repeat process (sweep) for all training pairs – Present

    data – Calculate error – Backpropagate error – Adjust weights • Repeat process multiple times
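Putting the whole sweep together, a runnable sketch with toy stand-in data (random vectors instead of the deck's recorded audio; the architecture and procedure follow the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in data: random "frequency distributions" for each speaker.
pairs  = [(rng.random(60), np.array([0.0, 1.0])) for _ in range(10)]  # "Steve"
pairs += [(rng.random(60), np.array([1.0, 0.0])) for _ in range(10)]  # "David"

# Small random starting weights, 60 -> 6 -> 2.
w1 = rng.normal(scale=0.1, size=(60, 6))
w2 = rng.normal(scale=0.1, size=(6, 2))
lr = 0.5

for sweep in range(200):              # repeat the process multiple times
    for x, target in pairs:           # present data
        hidden = sigmoid(x @ w1)
        output = sigmoid(hidden @ w2)
        error = output - target       # calculate error
        # Backpropagate error through the sigmoid layers.
        delta_out = error * output * (1 - output)
        delta_hid = (delta_out @ w2.T) * hidden * (1 - hidden)
        # Adjust weights by gradient descent.
        w2 -= lr * np.outer(hidden, delta_out)
        w1 -= lr * np.outer(x, delta_hid)
```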
  19. • Results – Voice Recognition – Performance of trained network

    • Discrimination accuracy between known “Hello”s – 100% • Discrimination accuracy between new “Hello”s – 100% • Demo
  20. • Results – Voice Recognition (contd.) – Network has learnt

    to generalise from original data – Networks with different weight settings can have same functionality – Trained networks ‘concentrate’ on lower frequencies – Network is robust against non-functioning nodes
  21. Applications of Feed-forward nets – Pattern recognition • Character recognition

    • Face Recognition – Sonar mine/rock recognition (Gorman & Sejnowski, 1988) – Navigation of a car (Pomerleau, 1989) – Stock-market prediction – Pronunciation (NETtalk) (Sejnowski & Rosenberg, 1987)
  22. FFNs as Biological Modelling Tools • Signalling / Sexual Selection

    – Enquist & Arak (1994) • Preference for symmetry not selection for ‘good genes’, but instead arises through the need to recognise objects irrespective of their orientation – Johnstone (1994) • Exaggerated, symmetric ornaments facilitate mate recognition (but see Dawkins & Guilford, 1995)
  23. Recurrent Networks • Feed forward networks: – Information only flows

    one way – One input pattern produces one output – No sense of time (or memory of previous state) • Recurrency – Nodes connect back to other nodes or to themselves – Information flow is multidirectional – Sense of time and memory of previous state(s) • Biological nervous systems show high levels of recurrency (but feed-forward structures exist too)
  24. Elman Nets • Elman nets are feed forward networks with

    partial recurrency • Unlike feed forward nets, Elman nets have a memory or sense of time
  25. Classic experiment on language acquisition and processing (Elman, 1990) •

    Task – Elman net to predict successive words in sentences. • Data – Suite of sentences, e.g. • “The boy catches the ball.” • “The girl eats an apple.” – Words are input one at a time • Representation – Binary representation for each word, e.g. • 0-1-0-0-0 for “girl” • Training method – Backpropagation
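A minimal sketch of one Elman time step, assuming sigmoid units and the slide's 5-bit word patterns (the layer sizes and weights are illustrative): the context units copy the hidden layer, which is what gives the net its memory:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 5-bit word input -> 4 hidden units -> 5-bit next-word prediction.
w_in  = rng.normal(scale=0.1, size=(5, 4))   # input   -> hidden
w_ctx = rng.normal(scale=0.1, size=(4, 4))   # context -> hidden
w_out = rng.normal(scale=0.1, size=(4, 5))   # hidden  -> output

context = np.zeros(4)   # memory of the previous time step

def elman_step(word):
    global context
    hidden = sigmoid(word @ w_in + context @ w_ctx)  # partial recurrency
    context = hidden.copy()                          # context copies hidden
    return sigmoid(hidden @ w_out)                   # predicted next word

# Words are presented one at a time, e.g. 0-1-0-0-0 for "girl";
# each prediction now depends on the words seen before it.
for word in np.eye(5):
    prediction = elman_step(word)
```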
  26. Recap – Neural Networks • Components (biological plausibility) –

    Neurone / node – Synapse / weight • Feed-forward networks – Unidirectional flow of information – Good at extracting patterns, generalisation and prediction – Distributed representation of data – Parallel processing of data – Training: backpropagation – Not exact models, but good at demonstrating principles • Recurrent networks – Multidirectional flow of information – Memory / sense of time – Complex temporal dynamics (e.g. CPGs, central pattern generators) – Various training methods (Hebbian, evolution) – Often better biological models than FFNs