
Automated regularization of M/EEG sensor covariance using cross-validation.

A new method for optimal regularization of MEG and EEG noise covariance estimates, implemented in Python

Based on: Engemann, D.A., Gramfort, A. (2014). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage, ISSN 1053-8119
(http://www.sciencedirect.com/science/article/pii/S1053811914010325)

and

the oral presentation "Automated model selection for covariance estimation and spatial whitening of M/EEG signals", given by D. Engemann at the OHBM 2014 meeting, Hamburg, Germany.

Denis A. Engemann

December 31, 2014

Transcript

  1. Automated regularization of M/EEG sensor covariance using cross-validation.
     Based on: Engemann, D.A., Gramfort, A. (2014). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage, ISSN 1053-8119.
     Alexandre Gramfort, ParisTech, CEA/INSERM Neurospin, Paris, France
     Denis A. Engemann, CEA/INSERM Neurospin, Gif-sur-Yvette, France; ICM, Paris, France
     https://github.com/mne-tools/mne-python
     http://www.sciencedirect.com/science/article/pii/S1053811914010325
  2. Plat du jour:
     - The problem of estimating the M/EEG noise covariance
     - A statistical learning solution to the problem
     - Impact on M/EEG inverse solutions
     - Implementation and API in MNE-Python
  3. M/EEG inverse solutions take into account the spatial structure of the sensor noise.
  4. Minimum Norm Estimates, aka Tikhonov regularization, aka ridge regression:
     $M = R_0 G^T (G R_0 G^T + C)^{-1}$
     • constrained linear model (cf. beamformer, sLORETA, ...)
     • Gaussian, uncorrelated noise
     • whitening via covariance (C)
     unwhitened: $\hat{X} = R G^T (G R G^T + C)^{-1} Y$
     whitened: $\hat{X} = R \tilde{G}^T (\tilde{G} R \tilde{G}^T + I)^{-1} \tilde{Y}$
     98.749% of M/EEG users used whitening
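To make the two estimates above concrete, here is a minimal NumPy sketch (not part of the slides; G, R, C and Y are random placeholders) showing that whitening the gain matrix and the data with $W = C^{-1/2}$ turns the noise covariance into the identity while leaving the minimum-norm estimate unchanged:

    import numpy as np

    rng = np.random.RandomState(42)
    n_sensors, n_sources, n_times = 20, 50, 100

    G = rng.randn(n_sensors, n_sources)          # gain (forward) matrix
    R = np.eye(n_sources)                        # source covariance
    A = rng.randn(n_sensors, n_sensors)
    C = A @ A.T / n_sensors                      # noise covariance (SPD)
    Y = rng.randn(n_sensors, n_times)            # sensor data

    # Unwhitened minimum-norm estimate: X = R G^T (G R G^T + C)^{-1} Y
    X_unwhitened = R @ G.T @ np.linalg.solve(G @ R @ G.T + C, Y)

    # Whitened version: with W = C^{-1/2}, G~ = W G and Y~ = W Y,
    # the noise term becomes the identity matrix.
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    G_w, Y_w = W @ G, W @ Y
    X_whitened = R @ G_w.T @ np.linalg.solve(G_w @ R @ G_w.T + np.eye(n_sensors), Y_w)

    print(np.allclose(X_unwhitened, X_whitened))  # True: both forms agree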
  5. With whitened data the covariance would be diagonal: $C = \frac{1}{T} Y Y^T$
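In code, the empirical covariance above is simply the scaled outer product of the data (a NumPy sketch with placeholder data, not from the slides):

    import numpy as np

    Y = np.random.randn(20, 1000)   # placeholder baseline data: n_sensors x T
    T = Y.shape[1]
    C_emp = Y @ Y.T / T             # empirical covariance, C = (1/T) Y Y^T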
  6. None
  7. [Figure: true sources vs. LCMV beamformer estimates given C, after 10 s and 80 s of data; Woolrich, 2011, NeuroImage.]
  8. Regularize your covariance! But let the data tell you how.

  9. Model selection via log-likelihood: given my model C, how likely are unseen data Y? Higher log-likelihood = superior C ➡ better whitening.
     $L(Y \mid C) = -\frac{1}{2T}\,\mathrm{Trace}(Y Y^T C^{-1}) - \frac{1}{2}\log\left((2\pi)^N \det(C)\right)$
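The log-likelihood on this slide can be written down directly; a small NumPy sketch (the function name is mine, not from the paper or MNE):

    import numpy as np

    def gaussian_loglik(Y, C):
        """Average Gaussian log-likelihood of data Y (n_sensors x T) under covariance C.

        Implements L(Y|C) = -1/(2T) Tr(Y Y^T C^{-1}) - 1/2 log((2*pi)^N det(C)).
        """
        N, T = Y.shape
        sign, logdet = np.linalg.slogdet(C)          # stable evaluation of log(det(C))
        return (-0.5 / T * np.trace(Y @ Y.T @ np.linalg.inv(C))
                - 0.5 * (N * np.log(2 * np.pi) + logdet))

    print(gaussian_loglik(np.random.randn(10, 500), np.eye(10)))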
  10. Cross-validation: average the log-likelihood over held-out folds (e.g. -1234.3, -1324.7, -1467.0, -1178.9) and select the best model.
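A sketch of the cross-validation loop described on this slide, with two toy covariance models standing in for the real candidates (REG, LW, SC, PPCA, FA); the names and data are illustrative, not the paper's implementation:

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.model_selection import KFold

    rng = np.random.RandomState(0)
    Y = rng.randn(20, 1000)                      # placeholder data: n_sensors x n_samples

    candidates = {
        'empirical': lambda Ytr: Ytr @ Ytr.T / Ytr.shape[1],
        'diagonal': lambda Ytr: np.diag(Ytr.var(axis=1)),
    }

    scores = {}
    for name, fit in candidates.items():
        fold_scores = []
        for train, test in KFold(n_splits=3).split(Y.T):    # folds over time samples
            C = fit(Y[:, train])                            # fit on the training folds
            ll = multivariate_normal(np.zeros(20), C).logpdf(Y[:, test].T)
            fold_scores.append(ll.mean())                   # log-likelihood of unseen data
        scores[name] = np.mean(fold_scores)

    best = max(scores, key=scores.get)                      # highest average log-likelihood wins
    print(scores, best)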
  11. We compared 5 strategies, ranging from simple and fast to complex and slow:
      1. Hand-set regularization (REG): $C' = C + \alpha I,\ \alpha > 0$
      2. Ledoit-Wolf (LW): $C_{LW} = (1 - \alpha) C + \alpha \mu I$
      3. Cross-validated shrinkage (SC): $C_{SC} = (1 - \alpha) C + \alpha \mu I$
      4. Probabilistic PCA (PPCA): $C_{PPCA} = H H^T + \sigma^2 I_N$
      5. Factor Analysis (FA): $C_{FA} = H H^T + \mathrm{diag}(\sigma_1^2, \ldots, \sigma_D^2)$
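All five families have counterparts in scikit-learn (which, as noted at the end of the deck, is what the MNE implementation builds on). A sketch of comparing them by cross-validated log-likelihood on placeholder data; the hyperparameter grids are illustrative:

    import numpy as np
    from sklearn.covariance import LedoitWolf, ShrunkCovariance
    from sklearn.decomposition import PCA, FactorAnalysis
    from sklearn.model_selection import GridSearchCV, cross_val_score

    rng = np.random.RandomState(0)
    X = rng.randn(500, 20)                       # placeholder data: n_samples x n_sensors

    estimators = {
        'REG (hand-set shrinkage)': ShrunkCovariance(shrinkage=0.1),
        'LW (Ledoit-Wolf)': LedoitWolf(),
        'SC (cross-validated shrinkage)': GridSearchCV(
            ShrunkCovariance(), {'shrinkage': np.logspace(-3, 0, 20)}, cv=3),
        'PPCA (probabilistic PCA)': GridSearchCV(
            PCA(), {'n_components': [5, 10, 15]}, cv=3),
        'FA (factor analysis)': GridSearchCV(
            FactorAnalysis(), {'n_components': [5, 10, 15]}, cv=3),
    }

    for name, est in estimators.items():
        score = cross_val_score(est, X, cv=3).mean()   # mean held-out log-likelihood
        print('%-35s %.2f' % (name, score))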
  12. Which model crashes, which one flies?

  13. Variable noise, true rank 10. [Figure: log-likelihood as a function of the number of samples (300-1900) for PPCA, FA, LW and SC; heteroscedastic noise, true rank 10.] Factor Analysis wins.
  14. Factor Analysis recovers the true rank (variable noise, true rank 10).

  15. Variable noise, true rank 40. [Figure: log-likelihood as a function of the number of samples (300-1900) for PPCA, FA, LW and SC; heteroscedastic noise, true rank 40.] Shrinkage estimators win!
  16. Any estimator can be the best.

  17. MEG and EEG data

  18. Inspect models, apply model selection, whiten data.

  19. Whitened Global Field Power (χ²): expected value of 1 during baseline, if appropriately whitened.
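In MNE terms, the whitened global field power can be computed by applying the whitener derived from a noise covariance to the evoked data. A sketch, assuming an evoked response and a noise covariance cov as produced by the code example at the end of this deck (e.g. cov = noise_covs[0]):

    import numpy as np
    import mne

    # Assumes `evoked` and `cov` from the example at the end of the deck.
    W, ch_names = mne.cov.compute_whitener(cov, evoked.info)   # whitening matrix
    picks = [evoked.ch_names.index(name) for name in ch_names]
    Y_white = W @ evoked.data[picks]                           # whitened evoked data

    gfp_white = (Y_white ** 2).mean(axis=0)                    # whitened GFP per time point
    baseline = evoked.times < 0
    print(gfp_white[baseline].mean())                          # ~1 if whitening is appropriate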
  20. Select the model based on the log-likelihood score (higher, i.e. closer to zero, is better).
  21. Whitening based on the best C: 95% of the signals are expected to assume values between -1.96 and 1.96 during baseline.
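Continuing the previous sketch, the ±1.96 criterion can be checked directly on the whitened baseline samples (Y_white and evoked come from the sketch above):

    import numpy as np

    # Assumes Y_white and evoked from the previous sketch.
    baseline = evoked.times < 0
    frac = np.mean(np.abs(Y_white[:, baseline]) < 1.96)
    print('%.1f%% of whitened baseline samples within +/- 1.96' % (100 * frac))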
  22. Our best models: SC shrinkage estimator - 71%; Factor Analysis - 21%; hand-set regularization - 8%. Only FA won on combined sensors, even with small N. PPCA was never better than FA.
  23. Inspect models, apply model selection, whiten data. Validating the impact on MNE source estimates.
  24. faces > scrambled, SPM faces dataset, Henson (2003). [Figure: source estimates for 20, 40 and 60 epochs; columns: worst, best, std.]
  25. The regularized covariance stabilizes dSPM source amplitudes

  26. What have we learned?
      - M/EEG sensor noise is not homoscedastic
      - Cross-validated covariance estimates yield spatial whitening without manual tuning
      - The best model depends on system, data rank and noise
      - The best model stabilizes source estimates
  27. NEW in MNE-Python:

    # Authors: Denis A. Engemann <denis.engemann@gmail.com>
    #          Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
    #
    # License: BSD (3-clause)

    import mne

    data_path = mne.datasets.sample.data_path()
    raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
    event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

    raw = mne.io.Raw(raw_fname, preload=True)
    raw.info['bads'] += ['MEG 2443']
    raw.filter(1, 30)

    epochs = mne.Epochs(
        raw, events=mne.read_events(event_fname), event_id=1, tmin=-0.2, tmax=0.5,
        picks=mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads'),
        baseline=None, reject=dict(mag=4e-12, grad=4000e-13, eeg=80e-6))

    ###########################################################################
    # Compute covariance using automated regularization and show whitening
    noise_covs = mne.cov.compute_covariance(epochs[:20], tmax=0, method='auto',
                                            return_estimators=True)

    evoked = epochs.average()
    evoked.plot()                  # plot evoked response
    evoked.plot_white(noise_covs)  # compare estimators

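With return_estimators=True, compute_covariance returns the candidate estimates ranked by cross-validated log-likelihood, best first. As a sketch of feeding the winning estimate into source localization (the forward-solution file name below is an assumption based on the standard MNE sample dataset, not part of the slides):

    from mne.minimum_norm import make_inverse_operator, apply_inverse

    best_cov = noise_covs[0]  # estimators are returned best-first
    fwd = mne.read_forward_solution(
        data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
    inv = make_inverse_operator(evoked.info, fwd, best_cov)
    stc = apply_inverse(evoked, inv, lambda2=1. / 9., method='dSPM')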

  29. Open source implementation (BSD 3-clause license)
      - tested across datasets
      - finds the optimal solution on unprocessed data
      - but also on rank-reduced data (SSP, SSS, ICA)
      - built on top of scikit-learn (http://scikit-learn.org)
  30. Thanks for your attention - happy whitening and M/EEG hacking!