
Unsupervised and Semi-Supervised Deep Learning for Medical Imaging

Availability of labelled data for supervised learning is a major bottleneck for narrow AI in present-day industry. In imaging, the task of semantic segmentation (pixel-level labelling) requires humans to provide strong pixel-level annotations for millions of images, which is far more difficult than generating weak image-level labels. Unsupervised representation learning combined with semi-supervised classification is essential when strong annotations are hard to come by. This talk introduces the techniques available in unsupervised and semi-supervised learning, with a specific focus on brain tumor segmentation from MRI using Stacked De-noising Auto-Encoders [(SDAEs)](http://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf), which achieved results competitive with purely supervised Convolutional Neural Networks [(CNNs)](http://cs231n.github.io/convolutional-networks/), and highlights recent breakthroughs in AI with Generative Adversarial Networks [(GANs)](http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/) for computer vision. Although the focus is on medical imaging, the techniques are presented in a domain-agnostic manner and can easily be translated to other sectors of deep learning.


Kiran Vaidhya

July 29, 2017

Transcript

  1. Unsupervised and Semi-Supervised Deep Learning in Medical Imaging Kiran Vaidhya

    Algorithms Researcher kiran@predible.co
  2. Cancer July 2017 2 Source: http://www.clevelandclinicmeded.com

  3. How does cancer occur? July 2017 3 Uncontrollable cell division

    caused by genetic mutations Treatments: • Surgery • Chemotherapy • Radiation therapy
  4. How do we “see” inside the body? July 2017 4

    MRI scanner Visualization of the 3D image Top Front Side
  5. Spotting brain cancer July 2017 5 • Most aggressive and

    most malignant brain cancer • Only 2% survive post treatment | Median survival of 14 months What is Glioblastoma? Normal brain T1 contrast Flair T2 T1
  6. Knowledge of tumor sub-types helps in treatment July 2017 6

    Whole tumor Tumor core Active tumor Entire tumor map Intra-tumor classification is essential to understand treatment response Source: http://braintumorsegmentation.org/
  7. Plan for treatment Visualize 3D model Glioblastoma treatment requires pixel-wise

    labelling July 2017 7 • Tedious slice-by-slice labelling is carried out by doctors 3D rendering of glioblastoma Generate pixel-wise labels • Labelling can be performed by deep networks
  8. Glioblastoma segmentation from brain MRI is non-trivial July 2017 8

    Shortage of samples: < 300 scans • Dense annotations are very expensive! Cost: $2,449 ~ Rs. 1,60,000 Complexity of data: 4D data • [4 x 155 x 240 x 240] tensors • Only 2% of pixels contain tumor Heterogeneity • Limited amount of annotated data – overfitting Can we leverage unsupervised feature learning?
  9. Deep Unsupervised Feature Extraction July 2017 9

  10. Can we use Auto-encoders to extract features? July 2017 10

    Encoder Decoder Denoising Autoencoder: L = Σ(x − x̂)². Tied weights. Encode h = f(W·x̃ + b), decode x̂ = g(Wᵀ·h + b′), compute loss against the clean input x. Corrupting the input prevents learning an identity mapping! 28x28 → 28x28
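The encode / corrupt / decode / loss steps on this slide can be sketched with tied weights in NumPy. This is a minimal illustration, not the talk's actual code; the layer sizes and initialization are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tied-weight denoising autoencoder: the decoder reuses the encoder's W transposed.
n_in, n_hidden = 1764, 3500          # e.g. a flattened 4 x 21 x 21 patch
W = rng.normal(0, 0.01, (n_hidden, n_in))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, drop=0.2):
    """Masking noise: zero out a random fraction of the inputs."""
    mask = rng.random(x.shape) > drop
    return x * mask

def forward(x):
    x_noisy = corrupt(x)                 # corrupt the input...
    h = sigmoid(W @ x_noisy + b_enc)     # ...encode the noisy version
    x_hat = sigmoid(W.T @ h + b_dec)     # decode with tied weights
    loss = np.sum((x - x_hat) ** 2)      # reconstruct the *clean* input
    return x_hat, loss

x = rng.random(n_in)
x_hat, loss = forward(x)
```

Because the loss compares the reconstruction against the clean input while the encoder only sees the corrupted one, copying the input verbatim cannot minimize the objective.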
  11. How do Auto-encoders learn underlying structure? July 2017 11 Weights

    of Autoencoder Source: Stacked Denoising Autoencoder – Vincent et al (2010) Learn Reconstruction Function
  12. From MNIST to brain MRI? July 2017 12 • Extract

    ROI around tumor • Sample patches from the ROI BRATS 2015 dataset Extract small patches Samples of [4 x 21 x 21] patches extracted around tumor Patch size = [ 4 x 21 x 21 ]
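Sampling [4 x 21 x 21] patches from a BRATS-style volume, as described above, can be sketched as follows (illustrative NumPy; the random stand-in volume and the `sample_patch` helper are assumptions, not the talk's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in BRATS-style volume: 4 MRI sequences, 155 axial slices of 240 x 240.
volume = rng.random((4, 155, 240, 240))

def sample_patch(vol, center, size=21):
    """Extract a [4 x size x size] in-plane patch around a (z, y, x) center."""
    z, y, x = center
    half = size // 2
    return vol[:, z, y - half:y + half + 1, x - half:x + half + 1]

patch = sample_patch(volume, center=(70, 120, 120))
flat = patch.reshape(-1)   # 4 * 21 * 21 = 1764 inputs for the autoencoder
```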
  13. Training Autoencoders on brain MRI July 2017 13 Feed noisy

    patches Reconstruct original patch 3D patch size: 4 x 21 x 21 Drop 20% pixels 1764 – 3500 – 1764 Extract, noise, reconstruct!
  14. July 2017 14

  15. Pre-training How do we train deep Auto-encoders? July 2017 15

    • Deep layers – learn a hierarchy of features • Vanishing and exploding gradients • Pre-train layer by layer Stacking
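The layer-by-layer pre-training idea above can be sketched as: train the first autoencoder on raw patches, then train the next on the first layer's codes, and so on. A minimal NumPy outline, with a deliberately stubbed single-layer trainer (`train_dae_layer` stands in for many SGD epochs on corrupted inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae_layer(data, n_hidden):
    """Placeholder one-layer DAE trainer; returns encoder weights.
    A real trainer would minimize reconstruction error on noisy inputs."""
    n_in = data.shape[1]
    return rng.normal(0, 0.01, (n_in, n_hidden))

def pretrain_stack(data, layer_sizes):
    """Greedy pre-training: each layer trains on the previous layer's codes."""
    weights, codes = [], data
    for n_hidden in layer_sizes:
        W = train_dae_layer(codes, n_hidden)
        weights.append(W)
        codes = sigmoid(codes @ W)   # codes become the next layer's input
    return weights

X = rng.random((32, 1764))                   # 32 flattened patches
stack = pretrain_stack(X, [3500, 2000, 1000])
```

Training one shallow autoencoder at a time sidesteps vanishing/exploding gradients in the deep stack; fine-tuning later adjusts all layers jointly.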
  16. Learn rich latent representations July 2017 16 [4x21x21] patch Extracted

    feature representation Don’t add noise during inference
  17. Fine-tuning for Classification July 2017 17

  18. How do we use Autoencoders for classification? July 2017 18

    Input patch Predicted vector Logistic layer on top of the Stacked Denoising Autoencoder: ŷ = σ(Σᵢ wᵢhᵢ + b)
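The output layer that turns the SDAE's features into a predicted vector can be sketched as a softmax over the tumor classes (a minimal illustration; the logit values are arbitrary):

```python
import numpy as np

def softmax(z):
    """Softmax output layer: p_k = exp(z_k) / sum_j exp(z_j)."""
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# One logit per class: background, edema, non-enhancing, active, necrotic.
logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5])
probs = softmax(logits)      # a probability vector that sums to 1
```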
  19. Fine-tune the network for classification July 2017 19 4 x

    21 x 21 4 sequences of MRI Edema Background Non-enhancing Active tumor Necrotic tumor 1764 3500 2000 1000 5 Architecture Extract, classify, stride, repeat
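"Extract, classify, stride, repeat" can be sketched as a sliding window over one slice, labelling each patch center (illustrative NumPy; `classify_patch` is a stand-in for the fine-tuned SDAE + softmax, not the talk's model):

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 5   # background, edema, non-enhancing, active, necrotic

def classify_patch(patch):
    """Stand-in classifier; a real one would run the fine-tuned SDAE."""
    return int(rng.integers(N_CLASSES))

def segment_slice(vol_slice, patch=21, stride=4):
    """Slide a window over a (4 x H x W) slice and label each patch center."""
    _, H, W = vol_slice.shape
    half = patch // 2
    labels = np.zeros((H, W), dtype=int)
    for y in range(half, H - half, stride):
        for x in range(half, W - half, stride):
            p = vol_slice[:, y - half:y + half + 1, x - half:x + half + 1]
            labels[y, x] = classify_patch(p)
    return labels

seg = segment_slice(rng.random((4, 240, 240)))
```

A larger stride trades segmentation resolution for speed; stride 1 labels every pixel.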
  20. Segmentation results July 2017 20

  21. Performance of semi-supervised learning? July 2017 21

    Model | Scans | Whole Tumor | Tumor Core | Active Tumor
    DeepMedic | 220 | 0.90 | 0.75 | 0.72
    SDAE | 135 | 0.85 | 0.78 | 0.73
    SDAE | 20 (pre-trained on 135) | 0.84 | 0.72 | 0.74
    Dice = 2·|A ∩ B| / (|A| + |B|). Our model: SDAE, architecture 1764 – 3500 – 2000 – 1000 – 5. State-of-the-art: DeepMedic, 11 layers + 3D convolutions + 2 pathways + fully supervised - https://arxiv.org/pdf/1603.05959.pdf
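The Dice score used to compare these models can be computed directly from binary masks (a minimal sketch with toy 2x3 masks):

```python
import numpy as np

def dice(pred, truth):
    """Dice = 2 |A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)   # 2 * 2 / (3 + 3) = 0.666...
```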
  22. Unsupervised classification July 2017 22

  23. Can we perform classification with just unlabelled data? July 2017

    23 Error map: Σ(x − x̂)². Abnormal brain vs. normal brain. Architecture: 882 – 3500 – 882. Training: use only normal data. Testing: plot the error map. Whole tumor dice: 0.80
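The idea above — an autoencoder trained only on normal tissue reconstructs normal tissue well, so abnormal regions light up in the error map — can be sketched with a stand-in reconstruction (the threshold and noise levels here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(x, x_hat):
    """Per-element squared error; large values flag 'novel' (abnormal) tissue."""
    return (x - x_hat) ** 2

x = rng.random(882)
x_hat = x + rng.normal(0, 0.01, 882)   # good reconstruction: normal tissue
x_hat[100:150] += 0.5                  # poor reconstruction: "tumor" region

err = reconstruction_error(x, x_hat)
mask = err > 0.05                      # threshold the error map into a mask
```

No tumor labels are used anywhere: the detector only ever sees normal data at training time.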
  24. How good is novelty detection? July 2017 24

    Model | Scans | Lesion Dice
    Novelty detector (unsupervised) | 28 | 0.64
    DeepMedic (fully supervised) | 28 | 0.66
    ISLES dataset: stroke lesion segmentation. Novelty detector trained on BRATS. DeepMedic - https://arxiv.org/pdf/1603.05959.pdf
  25. More false positive reduction July 2017 25 Use novelty detector

    to reject false positives Novelty detector’s mask Raw prediction Post-processed Ground Truth Semi-Supervised learning in Brain Tumor Segmentation - https://arxiv.org/pdf/1611.08664.pdf
  26. Hybrid architectures July 2017 26

  27. Can we do joint training on labelled and unlabelled data?

    July 2017 27 Ladder networks: • Joint training - reconstruct on unlabelled data; reconstruct and classify on labelled data • Add skip connections to fuse features Loss = R + α·C (R: reconstruction loss, C: classification loss) Source - https://arxiv.org/pdf/1507.02672.pdf
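The joint objective Loss = R + αC can be sketched as a small helper that drops the classification term for unlabelled batches (an illustrative sketch; the function name and α value are assumptions):

```python
def ladder_loss(recon_loss, class_loss, alpha=0.1):
    """Joint objective from the slide: Loss = R + alpha * C.
    Unlabelled batches contribute only R; labelled batches add alpha * C."""
    if class_loss is None:        # unlabelled data: no classification term
        return recon_loss
    return recon_loss + alpha * class_loss

unlabelled = ladder_loss(0.8, None)           # reconstruction only
labelled = ladder_loss(0.8, 2.0, alpha=0.1)   # 0.8 + 0.1 * 2.0
```

Tuning α sets the balance: α → 0 recovers a pure autoencoder, large α approaches purely supervised training.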
  28. Fully convolutional ladder networks July 2017 28 Classification/Regression Reconstruction Or

    Semantic Segmentation The Ψ-Net Convolution + MaxPooling Convolution + MaxPooling Convolution + MaxPooling Deconvolution Deconvolution Fully connected BRATS 2017 Challenge • Segment tumor • Predict patient prognosis Loss = R + αC + βS
  29. The future of data-efficient learning July 2017 29 Learn from

    imaging centres Supervise Clone
  30. > sudo kill cancer Kiran Vaidhya Algorithms Researcher kiran@predible.co Acknowledgments:

    Varghese Alex – co-author Subramaniam Thirunavukkarasu - co-author Dr. Ganapathy Krishnamurthi - Assistant Professor, IIT Madras Dr. C. Kesavdas - Professor, SCTIMST