Slide 1

Slide 1 text

Automated regularization of M/EEG sensor covariance using cross-validation.

Based on: Engemann, D.A., & Gramfort, A. (2014). Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage (ISSN 1053-8119).

Alexandre Gramfort, ParisTech; CEA/INSERM Neurospin, Paris, France
Denis A. Engemann, CEA/INSERM Neurospin, Gif-sur-Yvette, France; ICM, Paris, France

https://github.com/mne-tools/mne-python
http://www.sciencedirect.com/science/article/pii/S1053811914010325

Slide 2

Slide 2 text

Plat du jour (today's menu)
- The problem of estimating the M/EEG noise covariance
- A statistical learning solution to the problem
- Impact on M/EEG inverse solutions
- Implementation and API in MNE-Python

Slide 3

Slide 3 text

M/EEG inverse solutions take into account the spatial structure of the sensor noise

Slide 4

Slide 4 text

Minimum Norm Estimates, aka Tikhonov regularization, aka ridge regression

Inverse operator: M = R0 G^T (G R0 G^T + C)^{-1}

- constrained linear model (cf. beamformer, S-LORETA, ...)
- Gaussian, uncorrelated noise
- whitening via the noise covariance C

Unwhitened: X̂ = R G^T (G R G^T + C)^{-1} Y
Whitened:   X̂ = R G̃^T (G̃ R G̃^T + I)^{-1} Ỹ

98.749% of M/EEG users used whitening

Slide 5

Slide 5 text

With whitened data the covariance would be diagonal: C = (1/T) Y Y^T
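
The effect of whitening can be checked numerically. Below is a minimal NumPy sketch (toy dimensions and a made-up covariance, not from the paper): noise drawn with a known covariance C and multiplied by C^{-1/2} yields data whose empirical covariance (1/T) Y Y^T is approximately the identity, hence diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 5, 5000

# Toy "sensor noise" covariance with off-diagonal structure (hypothetical).
A = rng.standard_normal((n_sensors, n_sensors))
C = A @ A.T + n_sensors * np.eye(n_sensors)

# Sample T noise vectors with covariance C.
Y = np.linalg.cholesky(C) @ rng.standard_normal((n_sensors, n_times))

# Whitener W = C^{-1/2} via the eigendecomposition of C.
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T

Y_white = W @ Y
C_emp = Y_white @ Y_white.T / n_times  # empirical covariance, C = (1/T) Y Y^T

# Off-diagonal terms vanish up to sampling noise: the covariance is diagonal.
off_diag = C_emp - np.diag(np.diag(C_emp))
print(np.abs(off_diag).max())
```

In practice C is unknown and must be estimated, which is exactly where the regularization question below enters.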

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

[Figure: true sources vs. LCMV beamformer reconstructions given C, estimated from 10 s and 80 s of data. Woolrich, 2011, NeuroImage]

Slide 8

Slide 8 text

Regularize your covariance! But let the data tell you how.

Slide 9

Slide 9 text

Model selection: log-likelihood. Given my model C, how likely are unseen data Y? Higher log-likelihood = better C → better whitening.

L(Y|C) = -(1/(2T)) Trace(Y Y^T C^{-1}) - (1/2) log((2π)^N det(C))
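
The scoring rule above is straightforward to implement. A small sketch (toy data, not from the paper), showing that the covariance that generated the data scores a higher log-likelihood than a mismatched one:

```python
import numpy as np

def log_likelihood(Y, C):
    """Average per-sample Gaussian log-likelihood of data Y (sensors x times)
    under a zero-mean noise model with covariance C:
    L(Y|C) = -(1/(2T)) Tr(Y Y^T C^{-1}) - (1/2) log((2*pi)^N det(C))."""
    N, T = Y.shape
    _, logdet = np.linalg.slogdet(C)
    trace_term = np.trace(Y @ Y.T @ np.linalg.inv(C)) / T
    return -0.5 * trace_term - 0.5 * (N * np.log(2 * np.pi) + logdet)

rng = np.random.default_rng(42)
C_true = np.diag([1.0, 2.0, 3.0])
Y = np.linalg.cholesky(C_true) @ rng.standard_normal((3, 5000))

# The generating model scores higher than a mismatched (identity) model.
print(log_likelihood(Y, C_true) > log_likelihood(Y, np.eye(3)))
```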

Slide 10

Slide 10 text

Cross-validation: average the per-fold log-likelihoods (e.g. -1234.3, -1324.7, -1467.0, -1178.9) and select the best model.
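
The selection loop can be sketched as follows. This is a toy illustration, not the paper's implementation: two hypothetical candidate models (full empirical vs. diagonal covariance) are each fit on training folds and scored on held-out folds, and the model with the highest mean log-likelihood wins.

```python
import numpy as np

def log_likelihood(Y, C):
    # Average Gaussian log-likelihood of Y (sensors x times) under covariance C.
    N, T = Y.shape
    _, logdet = np.linalg.slogdet(C)
    return (-0.5 * np.trace(Y @ Y.T @ np.linalg.inv(C)) / T
            - 0.5 * (N * np.log(2 * np.pi) + logdet))

rng = np.random.default_rng(0)
n_sensors, n_times = 4, 2000
A = rng.standard_normal((n_sensors, n_sensors))
C_true = A @ A.T + np.eye(n_sensors)  # correlated toy noise
Y = np.linalg.cholesky(C_true) @ rng.standard_normal((n_sensors, n_times))

def fit_empirical(Y):
    return Y @ Y.T / Y.shape[1]

def fit_diagonal(Y):
    return np.diag(np.diag(fit_empirical(Y)))

folds = np.array_split(np.arange(n_times), 4)
scores = {}
for name, fit in [('empirical', fit_empirical), ('diagonal', fit_diagonal)]:
    fold_lls = []
    for test in folds:
        train = np.setdiff1d(np.arange(n_times), test)
        C_hat = fit(Y[:, train])                            # fit on training folds
        fold_lls.append(log_likelihood(Y[:, test], C_hat))  # score unseen data
    scores[name] = np.mean(fold_lls)

best = max(scores, key=scores.get)
print(best)  # the full empirical model wins here: the toy noise is correlated
```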

Slide 11

Slide 11 text

We compared 5 strategies, from simple and fast to complex and slow:

1. Hand-set regularization (REG): C' = C + αI, α > 0
2. Ledoit-Wolf (LW): C_LW = (1 - α)C + αμI
3. Cross-validated shrinkage (SC): C_SC = (1 - α)C + αμI, with α chosen by cross-validation
4. Probabilistic PCA (PPCA): C_PPCA = H H^T + σ² I_N
5. Factor Analysis (FA): C_FA = H H^T + diag(σ_1, ..., σ_D)

Slide 12

Slide 12 text

Which model crashes, which one flies?

Slide 13

Slide 13 text

Heteroscedastic (variable) noise, true rank 10.
[Figure: log-likelihood as a function of the number of samples (300-1900) for PPCA, FA, LW and SC]
Factor Analysis wins

Slide 14

Slide 14 text

Factor Analysis recovers the true rank (variable noise, true rank 10)

Slide 15

Slide 15 text

Heteroscedastic (variable) noise, true rank 40.
[Figure: log-likelihood as a function of the number of samples (300-1900) for PPCA, FA, LW and SC]
Shrinkage estimators win!

Slide 16

Slide 16 text

Any estimator can be the best.

Slide 17

Slide 17 text

MEG and EEG data

Slide 18

Slide 18 text

Inspect models, apply model selection, whiten data.

Slide 19

Slide 19 text

Whitened Global Field Power (χ²): expected value of 1 during the baseline, if appropriately whitened.
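
This baseline check is easy to reproduce on toy data (a made-up covariance, not MEG data): the whitened GFP, i.e. the mean squared whitened amplitude across sensors at each time point, hovers around 1 when the whitener matches the noise covariance.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_times = 10, 2000
A = rng.standard_normal((n_sensors, n_sensors))
C = A @ A.T + np.eye(n_sensors)  # hypothetical baseline noise covariance

# "Baseline" noise with covariance C, whitened by C^{-1/2}.
Y = np.linalg.cholesky(C) @ rng.standard_normal((n_sensors, n_times))
eigvals, eigvecs = np.linalg.eigh(C)
Y_white = (eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T) @ Y

# Whitened GFP: mean squared whitened amplitude over sensors per time point.
gfp = np.mean(Y_white ** 2, axis=0)
print(gfp.mean())  # ~1 when the whitener matches the noise covariance
```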

Slide 20

Slide 20 text

Select the model with the highest log-likelihood score (scores are negative, so larger, i.e. closer to zero, is better).

Slide 21

Slide 21 text

Whitening based on the best C: 95% of the whitened signals are expected to lie between -1.96 and 1.96 during the baseline.
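
The 95% figure is simply the two-sided standard-normal interval, which a quick simulation confirms (synthetic samples standing in for properly whitened baseline signals):

```python
import numpy as np

rng = np.random.default_rng(11)
# Properly whitened baseline signals are approximately standard normal,
# so about 95% of values fall within [-1.96, 1.96].
samples = rng.standard_normal(200000)
frac = np.mean(np.abs(samples) < 1.96)
print(frac)
```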

Slide 22

Slide 22 text

Our best models: cross-validated shrinkage (SC) 71%, Factor Analysis 21%, hand-set regularization 8%. Only FA won on combined sensors, even with small N. PPCA was never better than FA.

Slide 23

Slide 23 text

Inspect models, apply model selection, whiten data. Validate the impact on MNE source estimates.

Slide 24

Slide 24 text

SPM faces dataset, Henson (2003): faces > scrambled.
[Figure: source estimates for 20, 40 and 60 epochs; worst vs. best model, with standard deviations]

Slide 25

Slide 25 text

The regularized covariance stabilizes dSPM source amplitudes

Slide 26

Slide 26 text

What have we learned?
- M/EEG sensor noise is not homoscedastic
- Cross-validated covariance estimates yield spatial whitening without manual tuning
- The best model depends on system, data rank and noise
- The best model stabilizes source estimates

Slide 27

Slide 27 text

NEW in MNE-Python

# Authors: Denis A. Engemann
#          Alexandre Gramfort
#
# License: BSD (3-clause)

import mne

data_path = mne.datasets.sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

raw = mne.io.Raw(raw_fname, preload=True)
raw.info['bads'] += ['MEG 2443']  # mark a known bad channel
raw.filter(1, 30)

epochs = mne.Epochs(
    raw, events=mne.read_events(event_fname), event_id=1, tmin=-0.2, tmax=0.5,
    picks=mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads'),
    baseline=None, reject=dict(mag=4e-12, grad=4000e-13, eeg=80e-6))

###############################################################################
# Compute covariance using automated regularization and show whitening
noise_covs = mne.cov.compute_covariance(epochs[:20], tmax=0, method='auto',
                                        return_estimators=True)

evoked = epochs.average()
evoked.plot()                  # plot evoked response
evoked.plot_white(noise_covs)  # compare estimators


Slide 28

Slide 28 text

(Code repeated from Slide 27.)


Slide 29

Slide 29 text

(Code repeated from Slide 27.)

Open-source implementation (BSD-3 license):
- tested across datasets
- finds the optimal solution on unprocessed data
- but also on rank-reduced data (SSP, SSS, ICA)
- built on top of scikit-learn (http://scikit-learn.org)

Slide 30

Slide 30 text

(Code repeated from Slide 27.)

Thanks for your attention. Happy whitening and M/EEG hacking!