
Automated regularization of M/EEG sensor covariance using cross-validation.

A new method for optimal regularization of MEG and EEG noise covariance estimates, implemented in Python

Based on: Engemann, D.A., Gramfort, A. (2014). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage, ISSN 1053-8119
(http://www.sciencedirect.com/science/article/pii/S1053811914010325)

and

the oral presentation titled "Automated model selection for covariance estimation and spatial whitening of M/EEG signals", given by D. Engemann at the OHBM 2014 meeting, Hamburg, Germany

Denis A. Engemann

December 31, 2014
Transcript

  1. Automated regularization of M/EEG sensor covariance using cross-validation.
     Based on: Engemann, D.A., Gramfort, A. (2014). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage, ISSN 1053-8119.
     Alexandre Gramfort, ParisTech, CEA/INSERM Neurospin, Paris, France.
     Denis A. Engemann, CEA/INSERM Neurospin, Gif-sur-Yvette, France; ICM, Paris, France.
     https://github.com/mne-tools/mne-python
     http://www.sciencedirect.com/science/article/pii/S1053811914010325
  2. Plat du jour ("today's menu"):
     - The problem of estimating the M/EEG noise covariance
     - A statistical learning solution to the problem
     - Impact on M/EEG inverse solutions
     - Implementation and API in MNE-Python
  3. Minimum Norm Estimates, aka Tikhonov regularization, aka ridge regression
     - constrained linear model (cf. beamformer, sLORETA, ...)
     - Gaussian, uncorrelated noise
     - whitening via the noise covariance C

     Inverse operator: $M = R_0 G^\top (G R_0 G^\top + C)^{-1}$

     unwhitened: $\hat{X} = R G^\top (G R G^\top + C)^{-1} Y$
     whitened:   $\hat{X} = R \tilde{G}^\top (\tilde{G} R \tilde{G}^\top + I)^{-1} \tilde{Y}$

     98.749% of M/EEG users used whitening
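The whitening step can be illustrated in a few lines of NumPy (a simulated sketch, not the MNE implementation): the whitener is $C^{-1/2}$, and applying it to noise drawn from $\mathcal{N}(0, C)$ yields data whose covariance is close to the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "noise covariance" C (N sensors), symmetric positive definite
N, T = 5, 10000
A = rng.normal(size=(N, N))
C = A @ A.T + N * np.eye(N)

# Whitener W = C^{-1/2} via eigendecomposition of C
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

# Draw T noise samples with covariance C, then whiten them
Y = np.linalg.cholesky(C) @ rng.normal(size=(N, T))
Y_white = W @ Y

# Empirical covariance of the whitened data is approximately the identity
C_white = Y_white @ Y_white.T / T
print(np.round(C_white, 1))
```

In practice MNE builds the whitener from the estimated noise covariance, but the algebra is the same as in this toy example.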
  4. Model selection: log-likelihood. Given my model C, how likely are unseen data Y? A higher log-likelihood means a better C ➡ better whitening.
     $\mathcal{L}(Y \mid C) = -\frac{1}{2T}\operatorname{Trace}(Y Y^\top C^{-1}) - \frac{1}{2}\log\!\big((2\pi)^N \det(C)\big)$
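The criterion above can be written directly as a function (a minimal sketch with simulated data; as on the slide, Y is an N-sensors-by-T-samples matrix and C a candidate covariance). The true covariance of heteroscedastic noise scores a higher log-likelihood than a misspecified homoscedastic competitor:

```python
import numpy as np

def gaussian_loglik(Y, C):
    """Per-sample Gaussian log-likelihood of data Y under N(0, C)."""
    N, T = Y.shape
    Cinv = np.linalg.inv(C)
    _, logdet = np.linalg.slogdet(C)
    return (-0.5 / T * np.trace(Y @ Y.T @ Cinv)
            - 0.5 * (N * np.log(2 * np.pi) + logdet))

rng = np.random.default_rng(0)
N, T = 4, 5000
C_true = np.diag([1.0, 2.0, 3.0, 4.0])  # heteroscedastic sensor noise
Y = np.sqrt(np.diag(C_true))[:, None] * rng.normal(size=(N, T))

# Compare the true model against a homoscedastic one with the same mean variance
ll_true = gaussian_loglik(Y, C_true)
ll_homo = gaussian_loglik(Y, np.eye(N) * 2.5)
print(ll_true, ll_homo)
```

Evaluating this quantity on held-out data is exactly the cross-validation score used for model selection in the talk.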
  5. We compared 5 strategies (from simple and fast to complex and slow):
     1. Hand-set regularization (REG): $C' = C + \alpha I,\ \alpha > 0$
     2. Ledoit-Wolf (LW): $C_{LW} = (1-\alpha)C + \alpha\mu I$ (analytic $\alpha$)
     3. Cross-validated shrinkage (SC): $C_{SC} = (1-\alpha)C + \alpha\mu I$ ($\alpha$ chosen by cross-validation)
     4. Probabilistic PCA (PPCA): $C_{PPCA} = HH^\top + \sigma^2 I_N$
     5. Factor Analysis (FA): $C_{FA} = HH^\top + \operatorname{diag}(\psi_1, \ldots, \psi_D)$
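The shrinkage family $C' = (1-\alpha)C + \alpha\mu I$ takes only a few lines, and scikit-learn's LedoitWolf picks $\alpha$ analytically. A sketch on simulated data (not the benchmark from the talk), showing how shrinkage repairs the conditioning of a sample covariance estimated from few samples:

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.default_rng(0)
n_sensors, n_samples = 50, 60  # few samples: empirical C is ill-conditioned
X = rng.normal(size=(n_samples, n_sensors))

C_emp = empirical_covariance(X)

# Hand-set shrinkage: C' = (1 - alpha) * C + alpha * mu * I, mu = mean variance
alpha = 0.1
mu = np.trace(C_emp) / n_sensors
C_reg = (1 - alpha) * C_emp + alpha * mu * np.eye(n_sensors)

# Ledoit-Wolf chooses the shrinkage intensity alpha from the data itself
lw = LedoitWolf().fit(X)
print(lw.shrinkage_)  # data-driven shrinkage intensity

# Shrinkage markedly improves the condition number of the estimate
print(np.linalg.cond(C_emp), np.linalg.cond(lw.covariance_))
```

Cross-validated shrinkage (SC) uses the same functional form but scores candidate values of alpha by held-out log-likelihood instead of the analytic Ledoit-Wolf formula.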
  6. [Figure: log-likelihood vs. number of samples for PPCA, FA, LW, and SC; heteroscedastic noise, true rank 10] Factor Analysis wins.
  7. [Figure: log-likelihood vs. number of samples for PPCA, FA, LW, and SC; heteroscedastic noise, true rank 40] Shrinkage estimators win!
  8. Whitened Global Field Power ($\chi^2$): expected value of 1 during baseline, if appropriately whitened.
  9. Whitening based on the best C: 95% of the signals are expected to take values between -1.96 and 1.96 during baseline.
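Both calibration checks, whitened GFP near 1 and roughly 95% of whitened baseline samples inside ±1.96, are just properties of the standard normal distribution, which a quick simulation confirms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Properly whitened baseline samples should be standard normal
samples = rng.normal(size=100_000)

# Fraction inside +/- 1.96 (~95 % for a standard normal)
frac = np.mean(np.abs(samples) < 1.96)
print(round(frac, 2))

# Mean of squared whitened signals (whitened GFP) is ~1
print(round(np.mean(samples ** 2), 1))
```

Deviations from these two numbers in real data are exactly the diagnostic the evoked.plot_white display is built on.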
  10. Our best models: SC shrinkage estimator - 71%; Factor Analysis - 21%; hand-set regularization - 8%.
      Only FA won on combined sensors, even with small N. PPCA was never better than FA.
  11. What have we learned?
      - M/EEG sensor noise is not homoscedastic
      - Cross-validated covariance estimates yield spatial whitening without manual tuning
      - The best model depends on system, data rank, and noise
      - The best model stabilizes source estimates
  12. NEW in MNE-Python:

# Authors: Denis A. Engemann <[email protected]>
#          Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)

import mne

data_path = mne.datasets.sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

raw = mne.io.Raw(raw_fname, preload=True)
raw.info['bads'] += ['MEG 2443']
raw.filter(1, 30)

epochs = mne.Epochs(
    raw, events=mne.read_events(event_fname), event_id=1, tmin=-0.2, tmax=0.5,
    picks=mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads'),
    baseline=None, reject=dict(mag=4e-12, grad=4000e-13, eeg=80e-6))

###############################################################################
# Compute covariance using automated regularization and show whitening
noise_covs = mne.cov.compute_covariance(epochs[:20], tmax=0, method='auto',
                                        return_estimators=True)

evoked = epochs.average()
evoked.plot()                  # plot evoked response
evoked.plot_white(noise_covs)  # compare estimators


  14. Open source implementation (BSD-3 license)
      - tested across datasets
      - finds the optimal solution on unprocessed data
      - but also on rank-reduced data (SSP, SSS, ICA)
      - built on top of scikit-learn (http://scikit-learn.org)
  15. Thanks for your attention. Happy whitening and M/EEG hacking!