Decomposing Dynamics from Different Time Scale for Time-lapse Image Sequences With A Deep CNN

Jason Chin
November 10, 2018

Transcript

  1. Decomposing Dynamics from Different Time Scale for Time-lapse Image Sequences With A Deep CNN
     Nov 10, 2018. Jason Chin (twitter: @infoecho)1, Andrew Carroll1,2, Xiaoran Xin3, Ying Gu3
     1DNAnexus; 2Current affiliation: Google Brain; 3Biochemistry and Molecular Biology, Pennsylvania State University
  2. Cellulose Synthesis Process
     Cellulose:
     - the single most abundant biopolymer on Earth
     - makes up about 95 percent of paper and 90 percent of cotton
     - has also been considered a major component of biofuels
     Understanding how cellulose is synthesized may allow us to optimize its use as a renewable energy source.
     https://news.psu.edu/story/141566/2010/05/18/research/secrets-cellulose
     https://www.greencarcongress.com/2018/04/20180402-pennstate.html
  3. Understand the Biological Mechanism Through Single-Cell Imaging
     Spinning disk confocal microscope system
     Biological model for cellulose synthesis
     http://www.plantcell.org/content/plantcell/27/10/2926.full.pdf
  4. Tagging CESA with GFP to Study the Synthase Mechanism
     GFP-CESA; cellulose synthase complex (CSC); CESA = cellulose synthase proteins
     GFP-CESA3 localizes to the plasma membrane and the Golgi apparatus
  5. Dual-Color Images
     mCherry: TUA5 (an α-tubulin)
     GFP: CESA3
  6. GFP-CESA3-Only Images
     GFP-CESA3 localization:
     - Plasma membrane (~ coalignment with microtubules)
     - Golgi apparatus
     Can we separate them?
     Frame rate: 5 s between frames
  7. Goal: Decompose Each Image as Slow + Fast Components
     Original image (~all GFP-CESA) ≅ Slow component (~CESA on the membrane) + Fast component (~CESA in the Golgi)
  8. How To Catch Different Spatial-Temporal "Features"?
     • Size
     • Brightness
     • Shape
     • Moving speed, etc.
     Can we exploit all these features "automatically" to separate fast and slow components with a deep neural network architecture?
  9. Deep Angel
     http://deepangel.media.mit.edu/
     With a large training set, deep learning can catch "statistical features" about "elephants".
  10. Unsupervised Learning with an Autoencoder
      (2006, Science)
      Encoder network → latent space representation → decoder network; training reduces the reconstruction error.
      A nice hands-on example: https://cs.stanford.edu/people/karpathy/convnetjs/demo/autoencoder.html
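      A minimal sketch of the idea on this slide (not from the deck, assuming PyTorch; the patch size, layer widths, and latent dimension are illustrative placeholders): an encoder compresses a flattened patch into a small latent code, a decoder reconstructs it, and training minimizes the reconstruction error.

```python
# Minimal autoencoder sketch (illustrative only): encode a flattened 32x32
# patch to a small latent vector, decode it back, and train on the
# reconstruction error.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_pixels=32 * 32, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_pixels))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(64, 32 * 32)               # stand-in for real image patches
recon = model(patches)
loss = nn.functional.mse_loss(recon, patches)   # reconstruction error
loss.backward()
optimizer.step()
```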
  11. Exploit Temporal Correlation Structure
      T0, T0 + 5 seconds, T1 + 1 min
      "Slow" components remain correlated over longer time intervals.
      (Plot: autocorrelation vs. time in seconds.)
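      To make this concrete, a small NumPy sketch (not from the deck) that measures frame-to-frame correlation at different lags; at 5 s per frame, lag 1 is about 5 s apart and lag 12 about 1 min apart. On real data, the slow components would keep a high correlation at the longer lag, while the fast components decorrelate quickly.

```python
# Temporal autocorrelation sketch (illustrative only): Pearson correlation
# between frame T and frame T + lag, averaged over all valid T.
import numpy as np

def frame_autocorrelation(stack, lag):
    """stack: array of shape (n_frames, height, width)."""
    corrs = []
    for t in range(stack.shape[0] - lag):
        a = stack[t].ravel()
        b = stack[t + lag].ravel()
        corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs))

stack = np.random.rand(50, 512, 512)         # stand-in for a GFP-CESA3 movie
print(frame_autocorrelation(stack, lag=1))   # ~5 s apart
print(frame_autocorrelation(stack, lag=12))  # ~1 min apart
```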
  12. From (T - Δt) to T
      Encoder network: a convolutional network that captures local spatial features at time T - Δt.
      Decoder network: a de-convolution network that "reconstructs" the image at time T.
      Training reduces the loss ~ the difference between the prediction and the data.
      If Δt is longer than the time scale of the typically "faster" components, then we can catch the slow component using such an autoencoder architecture.
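      A sketch of this architecture (assuming PyTorch; the layer widths and kernel sizes here are placeholders, not the ones on slide 14): a convolutional encoder reads the frame at T - Δt, a transposed-convolution decoder predicts the frame at T, and the loss is the mean-squared difference between prediction and data.

```python
# Predict frame T from frame T - Δt with a conv encoder / deconv decoder
# (illustrative sketch only).
import torch
import torch.nn as nn

class LagAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # local spatial features at T - Δt
            nn.Conv2d(1, 48, kernel_size=3), nn.ReLU(),
            nn.Conv2d(48, 48, kernel_size=5), nn.ReLU(),
            nn.Conv2d(48, 8, kernel_size=1))
        self.decoder = nn.Sequential(                       # "reconstruct" the image at T
            nn.ConvTranspose2d(8, 48, kernel_size=1), nn.ReLU(),
            nn.ConvTranspose2d(48, 48, kernel_size=5), nn.ReLU(),
            nn.ConvTranspose2d(48, 1, kernel_size=3))

    def forward(self, frame_t_minus_dt):
        return self.decoder(self.encoder(frame_t_minus_dt))

model = LagAutoEncoder()
frame_past = torch.rand(16, 1, 32, 32)      # patches at time T - Δt
frame_now = torch.rand(16, 1, 32, 32)       # matching patches at time T
prediction = model(frame_past)
loss = nn.functional.mse_loss(prediction, frame_now)  # difference between prediction and data
```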
  13. Multi-Timescale Reconstruction
      Y_{-8} = reconstruction from frame (T-8)
      R_{-4} = correction for the reconstruction, predicted from frame (T-4)
      R_{-2} = correction for the reconstruction, predicted from frame (T-2)
      R_{-1} = correction for the reconstruction, predicted from frame (T-1)
      Total loss = |I_T - Y_{-8}|^2 + |I_T - Y_{-8} - R_{-4}|^2 + |I_T - Y_{-8} - R_{-4} - R_{-2}|^2 + |I_T - Y_{-8} - R_{-4} - R_{-2} - R_{-1}|^2
      Y_{-8} + R_{-4} + R_{-2} + R_{-1} ≅ I_T, with Y_{-8} the slowest component and R_{-1} the fastest component.
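      A sketch of how this total loss could be computed (assuming PyTorch tensors; the names i_t, y8, r4, r2, r1 are placeholders for the target frame and the four module outputs, not names from the deck):

```python
# Multi-timescale reconstruction loss sketch (illustrative): each component is
# added onto the running reconstruction, and every partial sum is penalized
# against the target frame I_T.
import torch

def total_loss(i_t, y8, r4, r2, r1):
    """i_t: target frame at time T; y8: reconstruction from T-8;
    r4, r2, r1: corrections predicted from T-4, T-2, T-1."""
    loss = 0.0
    partial = torch.zeros_like(i_t)
    for component in (y8, r4, r2, r1):
        partial = partial + component            # Y_{-8}, then Y_{-8} + R_{-4}, ...
        loss = loss + ((i_t - partial) ** 2).mean()
    return loss

i_t, y8, r4, r2, r1 = (torch.rand(1, 1, 32, 32) for _ in range(5))
print(total_loss(i_t, y8, r4, r2, r1))
```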
  14. The Network Module
      (Diagram: each module is a convolutional encoder / de-convolutional decoder over 1x32x32 patches, with 1x1, 3x3, and 5x5 kernels, 48-channel intermediate feature maps (48x32x32, 48x30x30, 48x26x26, 48x22x22), and an 8x22x22 latent tensor.)
      Training: from the N x 512 x 512 image sequence, randomly pick 32x32 regions (e.g., regions a and b) from randomly picked time slices. Four modules are trained jointly: I_{T-8} → Y_{-8}; I_{T-4} → R_{-4} ~ I_T - Y_{-8}; I_{T-2} → R_{-2} ~ I_T - Y_{-8} - R_{-4}; I_{T-1} → R_{-1} ~ I_T - Y_{-8} - R_{-4} - R_{-2}. Compute the loss and optimize the network weights.
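      A sketch of this training loop (illustrative; it reuses the hypothetical LagAutoEncoder and total_loss from the sketches above and assumes the movie is an N x 512 x 512 tensor):

```python
# Training sketch (illustrative): sample random 32x32 regions from random time
# slices, run one module per time lag, and optimize all modules jointly on the
# multi-timescale loss. Assumes LagAutoEncoder and total_loss as defined in
# the earlier sketches.
import torch

movie = torch.rand(100, 512, 512)                    # N x 512 x 512 image stack
lags = (8, 4, 2, 1)
modules = [LagAutoEncoder() for _ in lags]           # one module per lag
params = [p for m in modules for p in m.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):
    t = torch.randint(max(lags), movie.shape[0], (1,)).item()   # random time slice
    y = torch.randint(0, 512 - 32, (1,)).item()                 # random 32x32 region
    x = torch.randint(0, 512 - 32, (1,)).item()
    target = movie[t, y:y + 32, x:x + 32][None, None]           # I_T patch
    outputs = [m(movie[t - lag, y:y + 32, x:x + 32][None, None])
               for m, lag in zip(modules, lags)]                # Y_{-8}, R_{-4}, R_{-2}, R_{-1}
    loss = total_loss(target, *outputs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```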
  15. Same Model, Different Movie
      (Panels: original, slow component, fast component, pseudo-color composite.)
  16. Successfully Capture Temporal Correlation Structures
      (Plots: autocorrelation vs. time in seconds for the slow and fast components, at T0 and T1 + 1 min.)
  17. Other Related Work
      Video prediction with an autoencoder (panels: original, slow/stationary, moving)
      Video background removal with Robust PCA: https://sites.google.com/site/backgroundsubtraction/recent-background-modeling/background-modeling-via-rpca
  18. Jupyter Notebook in the Cloud
      1. Store Jupyter notebooks directly in the DNAnexus platform
      2. Access data objects directly
      3. Flexibility to use different kinds of instances
  19. Summary
      • A deep neural network architecture that can "learn" spatiotemporal correlation structure to separate movements in single-cell images
      • More quantitative benchmarking will help guide better network architectures
      • Can we measure interesting kinetic quantities simultaneously (for example, by improving or incorporating particle tracking)?
      • Build models with better constraints and/or priors, for example, modeling the covariance structure with Gaussian processes
      • + supervised learning -> super-resolution
  20. Thanks For Your Attention
      Questions? Twitter handle: @infoecho