In clinical neuroscience, success often depends on reading out multiple modalities, e.g., brain images and physiological signals. However, clinical reality often sets limits on data availability. Is combining multiple modalities for predictive modeling worth the extra effort when data is regularly incomplete? In [1], we proposed a multi-modal machine learning model with explicit support for handling missing modalities. Combining MRI, fMRI and magnetoencephalography (MEG) on the Cam-CAN database not only significantly enhanced age prediction but also facilitated detection of age-related cognitive decline captured by the estimated brain age delta. In particular, combining MEG with MRI improved detection of changes in fluid intelligence, sleep quality and memory function, highlighting the complementarity of these distinct biomedical signals. Strikingly, the added value of MEG was best explained by relatively simple features, i.e., the spatial distribution of fast brain rhythms in the beta/alpha range. These results potentially open the door to clinical translation via EEG technology, which is widely available in hospital settings.
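To make the stacking idea concrete, here is a minimal sketch of opportunistic stacking with missing modalities, assuming scikit-learn: modality-wise linear predictions are combined by a tree-based ensemble that tolerates missing inputs (NaN). The toy data, variable names and estimator choices are illustrative and not the exact pipeline from [1].

```python
# Minimal sketch of opportunistic stacking for multi-modal age prediction.
# Assumptions (not the exact pipeline from [1]): one feature matrix per
# modality, NaN in the stacking layer where a modality is missing.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.RandomState(42)
n_subjects = 200
y = rng.uniform(18, 88, n_subjects)  # toy age labels

# Toy stand-ins for MRI, fMRI and MEG feature matrices.
X_modalities = {
    "mri": rng.randn(n_subjects, 50),
    "fmri": rng.randn(n_subjects, 80),
    "meg": rng.randn(n_subjects, 60),
}
# Simulate subjects for whom a modality was not acquired.
missing = {
    "mri": np.zeros(n_subjects, dtype=bool),
    "fmri": rng.rand(n_subjects) < 0.2,
    "meg": rng.rand(n_subjects) < 0.3,
}

# Layer 1: one linear model per modality, out-of-fold predictions.
layer1 = np.full((n_subjects, len(X_modalities)), np.nan)
for j, (name, X) in enumerate(X_modalities.items()):
    keep = ~missing[name]
    preds = cross_val_predict(
        RidgeCV(alphas=np.logspace(-3, 3, 7)), X[keep], y[keep], cv=5)
    layer1[keep, j] = preds  # subjects lacking this modality stay NaN

# Layer 2: gradient-boosted trees natively handle NaN in the stacked features.
stacker = HistGradientBoostingRegressor(random_state=0)
oof = cross_val_predict(stacker, layer1, y, cv=5)
print("stacked MAE:", np.abs(oof - y).mean())
```

The point of the second layer is that prediction degrades gracefully: subjects with only a subset of modalities still receive a prediction from whatever information is available.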
Unfortunately, MRI scans are not always available, closing the door to source modeling based on individual anatomy. What then? Call linear models to the rescue? While very effective for regressing biomedical outcomes on M/EEG signals, they fail systematically when the cortical generators of an observed behaviour are oscillatory. In that case, volume conduction distorts the extracranial signals (power at the sensors becomes a nonlinear function of the underlying source activity), undermining the applicability of linear models. However, accurately modeling volume conduction depends on the availability of individual MRIs in the first place. In [2,3] we demonstrate, through mathematical analysis, simulations and prediction of age from MEG (Cam-CAN) and EEG (Temple University Hospital), how predictive linear models can nevertheless be constructed in different data-generating scenarios. We conclude that Riemannian geometry offers a practical alternative to source localization when predicting from power spectra, potentially enabling end-to-end learning without preprocessing.
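As a rough illustration of the Riemannian route, the sketch below maps sensor-space covariance matrices to the tangent space of the manifold of symmetric positive definite matrices and fits a linear model there, assuming pyriemann and scikit-learn are available. The toy signals and the ridge regressor are placeholders, not the exact models benchmarked in [2,3].

```python
# Minimal sketch of Riemannian tangent-space regression on M/EEG covariances.
# Assumption: X holds band-limited epochs of shape
# (n_subjects, n_channels, n_times); toy data and estimators are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

rng = np.random.RandomState(0)
n_subjects, n_channels, n_times = 100, 20, 500
X = rng.randn(n_subjects, n_channels, n_times)  # toy band-limited signals
y = rng.uniform(18, 88, n_subjects)             # toy age labels

pipe = make_pipeline(
    Covariances(estimator="oas"),      # spatial covariance per subject
    TangentSpace(metric="riemann"),    # vectorize w.r.t. the Riemannian mean
    RidgeCV(alphas=np.logspace(-3, 3, 7)),
)
scores = cross_val_score(pipe, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE:", -scores.mean())
```

The tangent-space step is what stands in for source localization here: the affine-invariant geometry makes the representation largely insensitive to the unknown linear mixing induced by volume conduction, so a plain linear model can pick up power-related effects.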
References
--------------
[1] Engemann, D. A., Kozynets, O., Sabbagh, D., Lemaitre, G., Varoquaux, G., Liem, F., & Gramfort, A. (2020). Combining magnetoencephalography with MRI enhances learning of surrogate-biomarkers. eLife, 9:e54055. https://doi.org/10.7554/eLife.54055
[2] Sabbagh, D., Ablin, P., Varoquaux, G., Gramfort, A., & Engemann, D. A. (2019). Manifold-regression to predict from MEG/EEG brain signals without source modeling. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 7321–7332).
[3] Sabbagh, D., Ablin, P., Varoquaux, G., Gramfort, A., & Engemann, D. A. (2020). Predictive regression modeling with MEG/EEG: from source power to signals and cognitive states. NeuroImage. https://doi.org/10.1016/j.neuroimage.2020.116893