Visium-SPG = Visium SRT + immunofluorescence (using identical tissue samples)
- Multi-channel fluorescent images captured of the same tissue
- Channels measure proteins marking specific cell types

Fluorescent Protein   Cell Type
TMEM119               Microglia
NeuN                  Neurons
OLIG2                 Oligodendrocytes
GFAP                  Astrocytes

Kristen R. Maynard, Sang Ho Kwon
Spot deconvolution
- Single-cell data lacks spatial information, and SRT data lacks cell-type-composition information

Questions we can’t answer from SRT data alone:
- How are cellular populations distributed spatially?
- Which cell types are communicating in ligand-receptor interactions associated with schizophrenia?

Image from Bo Xia: https://twitter.com/BoXia7/status/1261464021322137600?s=12
How can we assess the performance or accuracy of cell-type predictions?
- Make orthogonal measurements*: image-derived counts (sketched below)
- Leverage prior knowledge: neurons localize to gray matter
- Self-consistency of results: broad vs. fine cell-type results

Stephanie C. Hicks
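As a minimal illustration of the orthogonal-measurement idea, the sketch below correlates per-spot cell-type counts from a deconvolution tool against counts derived from the IF image. The DataFrames, column names, and values are hypothetical stand-ins, not data from the talk.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: per-spot cell-type counts estimated by a
# deconvolution tool, and counts derived from the IF image.
deconvolved = pd.DataFrame(
    {"Neuron": [4, 7, 1], "Astro": [2, 1, 3], "Oligo": [0, 2, 5]},
    index=["spot1", "spot2", "spot3"],
)
image_derived = pd.DataFrame(
    {"Neuron": [5, 6, 1], "Astro": [1, 1, 4], "Oligo": [0, 3, 4]},
    index=["spot1", "spot2", "spot3"],
)

# Orthogonal-measurement check: per cell type, how well do the
# software estimates track the image-derived counts across spots?
for cell_type in deconvolved.columns:
    r, p = pearsonr(deconvolved[cell_type], image_derived[cell_type])
    print(f"{cell_type}: r = {r:.2f} (p = {p:.2g})")
```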
IF images mark for several proteins
- Fluorescence in image channels correlates with counts of measured cell types

Can measure 5 distinct cell types:
• Astrocyte (GFAP)
• Neuron (NeuN)
• Oligodendrocyte (OLIG2)
• Microglia (TMEM119)
• Other (low signal in all channels)

samuibrowser.com
Chaichontat Sriworarat, Stephanie C. Hicks
doi.org/10.1101/2023.01.28.525943
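To make the channel-to-cell-type correspondence concrete, here is a minimal sketch, assuming per-cell mean channel intensities have already been computed from a segmentation; the LOW_SIGNAL cutoff, function name, and max-channel rule are illustrative assumptions, not the talk's actual method (which trains a classifier, as the next slides show).

```python
# Mapping from marker channel to cell type (from the slide's table);
# the threshold value below is a hypothetical assumption.
CHANNELS = {"GFAP": "Astrocyte", "NeuN": "Neuron",
            "OLIG2": "Oligodendrocyte", "TMEM119": "Microglia"}
LOW_SIGNAL = 0.1  # hypothetical cutoff for calling a cell "Other"

def call_cell_type(mean_intensity: dict) -> str:
    """Assign a cell type from per-channel mean fluorescence within a
    segmented cell: the brightest marker channel wins, unless all
    channels are dim, in which case the cell is 'Other'."""
    brightest = max(mean_intensity, key=mean_intensity.get)
    if mean_intensity[brightest] < LOW_SIGNAL:
        return "Other"
    return CHANNELS[brightest]

print(call_cell_type({"GFAP": 0.02, "NeuN": 0.85,
                      "OLIG2": 0.05, "TMEM119": 0.03}))  # -> Neuron
```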
1. Segment cells on IF image
2. Manually label example cells
3. Train cell-type classifier and apply on remaining data

[Workflow figure panels: Image Channels → Cell Mask → Classified Cell Type]

Annie B. Nguyen
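A minimal sketch of step 3, assuming each segmented cell is summarized by per-channel fluorescence features; the feature matrix, label array, and model choice here are illustrative stand-ins, not the talk's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: one row per segmented cell, with mean
# fluorescence per channel (GFAP, NeuN, OLIG2, TMEM119). Labels exist
# only for the manually annotated subset.
rng = np.random.default_rng(0)
X_all = rng.random((1000, 4))        # stand-in for real image features
labeled_idx = np.arange(100)         # cells a human has annotated
y_labeled = rng.integers(0, 5, 100)  # 5 classes: Astro/Neuron/Oligo/Micro/Other

# Fit on the labeled cells, then predict cell types for every
# remaining segmented cell.
clf = LogisticRegression(max_iter=1000).fit(X_all[labeled_idx], y_labeled)
predicted = clf.predict(np.delete(X_all, labeled_idx, axis=0))
```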
- Trained initial model on 600-cell dataset
- Broke cells into 4 quartiles based on model confidence
- Labelled 320 more cells, evenly sampled from all 4 quartiles (sampling sketched below)

4 quartiles * 4 sections * 5 cell types * 4 cells = 320 new cells
600 old cells + 320 new cells = 920 total cells

Less-confident neuron (confidence = 0.45):
Cell Type   Probability
Astro       0.2
Oligo       0.3
Micro       0.1
Neuron      0.45
Other       0.05

More-confident neuron (confidence = 0.93):
Cell Type   Probability
Astro       0.01
Oligo       0.02
Micro       0.01
Neuron      0.93
Other       0.03
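A sketch of the quartile-stratified sampling, assuming a per-cell table holding the model's confidence (the predicted class's probability), tissue section, and predicted type; all column names and values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical table of model outputs, one row per cell.
rng = np.random.default_rng(1)
cells = pd.DataFrame({
    "cell_id": np.arange(2000),
    "confidence": rng.random(2000),
    "section": rng.choice(["A", "B", "C", "D"], 2000),
    "predicted_type": rng.choice(
        ["Astro", "Oligo", "Micro", "Neuron", "Other"], 2000),
})

# Bin cells into 4 quartiles of confidence, then draw 4 cells per
# (quartile, section, predicted type) stratum:
# 4 * 4 * 5 * 4 = 320 cells to annotate next.
cells["quartile"] = pd.qcut(cells["confidence"], 4, labels=False)
to_label = (cells.groupby(["quartile", "section", "predicted_type"])
                 .sample(n=4, random_state=0))
print(len(to_label))  # 320, provided every stratum has at least 4 cells
```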
1. Segment cells on IF image
2. Manually label N cells
3. Train cell-type classifier and apply on remaining data

Data → Model → Final model chosen
Grid search with 5-fold CV for each model to select hyperparameters (see the sketch below)

Dataset    # Cells   # Training   # Test   Split
Old        600       480          120      80/20
New        320       240          80       75/25
Combined   920       720          200      ~78/22

Model                    Accuracy (CV)   Accuracy (test)
Decision tree            0.86            0.87
Logistic regression      0.91            0.90
Support vector machine   0.90            0.90
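A minimal sketch of the model comparison using scikit-learn; the synthetic data and hyperparameter grids are illustrative assumptions, though the 920-cell size and 720/200 split mirror the combined dataset above.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data with the combined dataset's shape: 920 cells, 5 classes,
# 720 training / 200 test (~78/22 split).
X, y = make_classification(n_samples=920, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=720, random_state=0)

# Illustrative hyperparameter grids for the three candidate models.
candidates = {
    "decision tree": (DecisionTreeClassifier(), {"max_depth": [3, 5, 10]}),
    "logistic regression": (LogisticRegression(max_iter=2000), {"C": [0.1, 1, 10]}),
    "support vector machine": (SVC(), {"C": [0.1, 1, 10]}),
}

# Grid search with 5-fold CV on the training set for each model family,
# then report held-out test accuracy for the tuned model.
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)
    print(f"{name}: CV = {search.best_score_:.2f}, "
          f"test = {search.score(X_te, y_te):.2f}")
```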
A. […] spatial domains
B. Cell-cell communication: cell-type-informed ligand-receptor interactions in the context of schizophrenia risk

Boyi Guo, Melissa Grant-Peters
Estimating cell-type composition in Visium/spatial data
- Tangram and Cell2location perform better than SPOTlight, with each scoring best on different metrics
- Tangram matches overall snRNA-seq cell-type proportions (see the sketch below)
- Cell2location slightly more accurately maps cell types to expected layers

Future Directions
- Existing spot deconvolution algorithms have limited accuracy
- Incorporating other data types might improve cell-type predictions
- RNAscope?
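One way to read the "matches overall snRNA-seq proportions" claim is to average the per-spot proportion estimates and compare them to the reference composition; the sketch below does this with random stand-in numbers, not actual Tangram output.

```python
import numpy as np

# Hypothetical spot-by-cell-type matrix of deconvolved proportions
# (rows = spots) and a reference composition from snRNA-seq; the
# values are illustrative, not results from the talk.
rng = np.random.default_rng(2)
spot_props = rng.dirichlet(np.ones(5), size=3000)  # 3000 spots, 5 cell types
snrnaseq_props = np.array([0.10, 0.20, 0.05, 0.55, 0.10])

# "Matches overall proportions" check: average the per-spot estimates
# and compare against the snRNA-seq composition.
overall = spot_props.mean(axis=0)
print(np.abs(overall - snrnaseq_props))  # per-cell-type absolute error
```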
Louise Huuki-Myers, Abby Spangler, Kelsey D. Montgomery, Sang Ho Kwon, Heena R. Divecha, Madhavi Tippani, Matthew N. Tran, Arta Seyedian, Thomas M. Hyde, Joel E. Kleinman, Stephanie C. Page, Keri Martinowich

JHU Biostatistics: Chaichontat Sriworarat, Stephanie C. Hicks, Boyi Guo
JHU Biomedical Engineering: Alexis Battle, Prashanthi Ravichandran
PsychENCODE consortium
University College London, Genetics and Genomic Medicine: Mina Ryten, Melissa Grant-Peters

nick-eagles.github.io · @Nick-Eagles
Feel free to reach out!