
Learning Thought-Based Motor Control using Gaussian Processes


Safwan Choudhury

April 14, 2012

Transcript

  1. Background
     • Machine learning in brain-computer interface (BCI) applications
     • Controlling wheelchair motion with electroencephalography (EEG)
     • Traditional approaches for wheelchair and mobile robot control via EEG
  2. EEG Interface
     • Emotiv EPOC (Research Edition) headset from Sydney, Australia
     • Comes with a training interface and a proprietary machine learning algorithm
     • Raw EEG data collected from 14 electrodes into Matlab via EEGLAB
     • The noise problem
  3. Project Goals
     • Improve classification accuracy of thought-based motor commands
       - Forward, Backward, Turn Left, Turn Right & Neutral
     • Embed external sensory information in decision-making
       - Head orientation from gyroscope readings
       - Speed sensor readings from optical encoders
     • Supervised learning with reasonably small training datasets
       - User should not have to provide 1000 examples of one action
  4. Gaussian Processes
     • A Gaussian Process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution.
     • Completely specified by its mean and covariance functions:
       μ(x) = E[f(x)]
       Σ(x, x′) = E[(f(x) − μ(x))(f(x′) − μ(x′))]
     • Mathematically, a Gaussian Process is expressed by the following notation:
       f(x) ~ GP(μ(x), Σ(x, x′))
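The definition above can be sketched numerically: evaluating the mean and covariance functions on any finite set of inputs gives a mean vector and covariance matrix, and function draws are samples from that joint Gaussian. A minimal sketch with numpy, assuming a zero mean function and a squared-exponential covariance (the kernel parameters here are illustrative, not the project's tuned values):

```python
import numpy as np

def se_kernel(x1, x2, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance: sigma_f^2 * exp(-(x - x')^2 / (2 l^2))."""
    diff = x1[:, None] - x2[None, :]
    return signal_var * np.exp(-0.5 * (diff / length_scale) ** 2)

# Any finite set of inputs yields a joint Gaussian: sample from the GP prior.
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)
mean = np.zeros(len(x))                         # zero mean function
cov = se_kernel(x, x) + 1e-9 * np.eye(len(x))   # small jitter for stability
samples = rng.multivariate_normal(mean, cov, size=3)
print(samples.shape)  # (3, 50): three function draws over 50 inputs
```

Each row of `samples` is one plausible function under the prior; the covariance function controls how smooth those draws are.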
  5. Gaussian Processes
     • One of several machine learning approaches which use kernel methods
       - Others include Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), Principal Component Analysis (PCA), etc.
     • Can be used for regression and classification applications.
       - In this project, GPs are used to learn classification of motor movements (i.e. forward/backward) from EEG signals
     • Novelty: Exploiting the probabilistic output of GP classification
       - We can use the uncertainty in the predictions from the GP, coupled with other sensory information, to improve decision making.
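The novelty point can be made concrete with a toy decision rule: because GP classification outputs a probability rather than a hard label, it can be gated on other sensors. The function below is a hypothetical illustration only; the threshold values and the gyro-gating logic are assumptions, not the project's actual decision algorithm:

```python
# Hypothetical sketch: combine a GP class probability with a gyro reading.
def decide(p_forward, gyro_turn_rate, p_min=0.8, turn_limit=0.5):
    """Issue 'forward' only when the GP is confident AND the head is
    roughly still; otherwise fall back to neutral."""
    if p_forward >= p_min and abs(gyro_turn_rate) < turn_limit:
        return "forward"
    return "neutral"

print(decide(0.93, 0.1))  # confident prediction, head still  -> forward
print(decide(0.93, 2.0))  # confident, but head turning fast  -> neutral
print(decide(0.55, 0.0))  # uncertain prediction              -> neutral
```

A hard classifier (SVM-style ±1 output) could not distinguish the second and third cases; the GP's probability makes the "uncertain, so stay neutral" branch possible.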
  6. Simple Example
     • Very simple supervised learning example where the input-output mapping is learned from training data using a GP.
     • 1-D example illustrating the use of GP in regression
       - Assumption: Mean function centered at zero initially
     • 2-D example illustrating the use of GP in binary classification
       - Assumption: Use of logistic function for “squashing”
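The 1-D regression example can be sketched with the standard zero-mean GP posterior equations (μ* = K*ᵀ(K + σₙ²I)⁻¹y). The training data below is an illustrative toy function, not the project's EEG data:

```python
import numpy as np

def se_kernel(x1, x2, l=1.0, sf2=1.0):
    """Squared-exponential covariance between two 1-D input vectors."""
    diff = x1[:, None] - x2[None, :]
    return sf2 * np.exp(-0.5 * (diff / l) ** 2)

# Toy 1-D training data (illustrative only).
x_train = np.array([-2.0, -1.0, 0.0, 1.5])
y_train = np.sin(x_train)
noise = 1e-4

# Zero-mean GP posterior at test points: mu* = K*^T (K + sn^2 I)^-1 y
x_test = np.array([0.0, 0.75])
K = se_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_star = se_kernel(x_train, x_test)
mu = K_star.T @ np.linalg.solve(K, y_train)
var = se_kernel(x_test, x_test).diagonal() - np.sum(
    K_star * np.linalg.solve(K, K_star), axis=0)
print(mu)   # posterior mean, near-exact at the training point x = 0
print(var)  # predictive variance grows away from the training data
```

The second assumption on the slide (logistic squashing) turns this same latent regression machinery into binary classification by mapping the latent value through 1/(1 + e⁻ᶠ).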
  7. Covariance Function
     • The properties of the sample functions are controlled by the choice of covariance function and its parameters.
       - Squared-Exponential Covariance Function:
         Σ(x, x′) = σ_f² exp(−(x − x′)² / (2ℓ²)) + σ_n² δ_{xx′}
     • The variables ℓ, σ_f and σ_n are known as the hyperparameters
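The effect of the length-scale hyperparameter ℓ can be seen directly by evaluating the covariance for two fixed inputs. A minimal sketch of the formula above (default hyperparameter values are illustrative):

```python
import numpy as np

def se_cov(x, xp, l=1.0, sf=1.0, sn=0.1):
    """SE covariance: sf^2 exp(-(x - xp)^2 / (2 l^2)) + sn^2 * delta_{x,xp}."""
    k = sf**2 * np.exp(-0.5 * ((x - xp) / l) ** 2)
    return k + (sn**2 if x == xp else 0.0)

# Longer length-scale -> distant inputs stay correlated (smooth functions);
# shorter length-scale -> correlation decays quickly (wiggly functions).
print(se_cov(0.0, 1.0, l=2.0))  # ~0.88
print(se_cov(0.0, 1.0, l=0.5))  # ~0.14
```

This is why hyperparameter choice is flagged as a key challenge later in the deck: the same data looks smooth under one ℓ and noisy under another.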
  8. Progress & Implementation
     • 14 simultaneous input signals from electrodes on the Emotiv EPOC headset
       - AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4
     • Using the Gaussian Processes for Machine Learning (GPML) toolbox
       - Available at http://www.gaussianprocess.org by Rasmussen et al.
     • Almost have binary classification working for neutral/forward, with a decision algorithm for incorporating gyroscopic data.
       - Key challenges: picking an appropriate covariance function and hyperparameters to yield “decent” results.
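The neutral/forward setup described above can be sketched end-to-end in simplified form: treat each trial as a 14-dimensional feature vector (one value per electrode), fit a zero-mean GP to ±1 labels, and squash the latent prediction through a logistic function. This is a crude stand-in for the Laplace/EP approximations the GPML toolbox actually uses, and the data here is random placeholder features, not real EEG:

```python
import numpy as np

def se_kernel(A, B, l=1.0, sf2=1.0):
    """Squared-exponential kernel between rows of two feature matrices."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sf2 * np.exp(-0.5 * np.maximum(d2, 0.0) / l**2)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 14))           # 20 trials x 14 electrode features
y = np.where(X[:, 0] > 0, 1.0, -1.0)    # toy labels: neutral=-1, forward=+1
K = se_kernel(X, X, l=3.0) + 0.01 * np.eye(len(X))

x_new = X[:1]                           # classify one trial
latent = se_kernel(x_new, X, l=3.0) @ np.linalg.solve(K, y)
p_forward = 1.0 / (1.0 + np.exp(-latent[0]))  # logistic "squashing"
print(p_forward)  # probabilistic output, usable by a decision algorithm
```

The probability (rather than a hard label) is what the decision algorithm on this slide would consume alongside the gyroscopic data.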