Machine Learning in Medical Imaging

Plenary Talk at ISMRM Annual Meeting, Montreal, Canada, May 14th, 2019


Jong Chul Ye

May 14, 2019

Transcript

  1. Machine Learning in Medical Imaging

     Jong Chul Ye, Ph.D., Endowed Chair Professor, BISPL (BioImaging, Signal Processing, and Learning Lab), Dept. of Bio & Brain Engineering and Dept. of Mathematical Sciences, KAIST, Korea
  2. Declaration of Financial Interests or Relationships

     Speaker name: Jong Chul Ye. I have no financial interests or relationships to disclose with regard to the subject matter of this presentation.
  3. Deep Learning Era in Medical Imaging

     • Diabetic eye diagnosis: Gulshan V. et al, JAMA, 2016
     • Skin cancer diagnosis: Esteva et al, Nature, 2017
     • OCT diagnosis: De Fauw et al, Nature Medicine, 2018
     • Image registration (figure courtesy of X. Cao & D. Shen)
     • Image segmentation: Ronneberger et al, MICCAI, 2015
  4. Classical Learning vs Deep Learning

     For diagnosis, classical machine learning requires feature engineering, whereas deep learning needs no feature engineering. Esteva et al, Nature Medicine, 2019
  5. Deep Learning for MRI

     Beyond diagnosis & analysis, the focus of this talk is reconstruction.
  6. Unmet Needs in MRI

     • MR is an essential tool for diagnosis
     • MR exam protocol: 30-60 min/patient; the throughput of MR scanning should be increased
     • Cardiac imaging and fMRI: temporal resolution should be improved
     • Multiple-contrast acquisition in a short time
  7. MR Acceleration

     • Fast pulse sequences: Mansfield, JPC, 1977; Ahn et al, TMI, 1986
     • Parallel/multiband imaging: Sodickson et al, MRM, 1997; Pruessmann et al, MRM, 1999; Griswold et al, MRM, 2002
     • Compressed sensing: Lustig et al, MRM, 2007; Jung et al, PMB, 2007 and MRM, 2009; Shin et al, MRM, 2014; Haldar et al, TMI, 2014; Lee et al, MRM, 2016; Ongie et al, 2017
     • MRF: Ma et al, Nature, 2013; Jiang et al, MRM, 2015
  8. Year 2016: Deep Learning Revolution in MR

     • Deep learning prior: Wang et al, ISBI, 2016
     • ADMM-Net: Yang et al, NIPS, 2016
     • Multilayer perceptron: Kwon et al, Medical Physics, 2017
     • Variational network: Hammernik et al, MRM, 2018
  9. None
  10. Image Domain Learning

     Domain adaptation network: Accelerated Projection Reconstruction MR Imaging Using Deep Residual Learning, Han et al, MRM, 2018 (MRM Highlight, September 2018). In vivo golden-angle radial acquisition results (collaboration with KH Sung at UCLA): abdomen target, ground truth from 302 views, input from 75 views, acceleration factor x4, training dataset of 15 slices. The reported error drops from 13.118e-2 (input) to 2.3705e-2 (proposed).
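Image-domain networks of this kind are often trained as residual (artifact) learners: rather than mapping the aliased input directly to the clean image, the CNN is trained to predict the streaking artifact, which is then subtracted from the input. A minimal numpy sketch of that formulation, where `predict_artifact` is a hypothetical stand-in for the trained U-Net, not the paper's model:

```python
import numpy as np

# Residual (artifact) learning: the network is trained to predict the
# aliasing artifact, and the reconstruction is input minus prediction.
# `predict_artifact` is a hypothetical stand-in for a trained CNN.

def predict_artifact(aliased: np.ndarray) -> np.ndarray:
    # toy "artifact": everything outside the valid intensity range [0, 1]
    return aliased - np.clip(aliased, 0.0, 1.0)

def residual_reconstruct(aliased: np.ndarray) -> np.ndarray:
    return aliased - predict_artifact(aliased)

img = np.array([[1.2, 0.4], [-0.1, 0.9]])
recon = residual_reconstruct(img)
```

Learning the residual is generally easier than learning the full image because the artifact has a simpler, more structured distribution than anatomy.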
  11. Image Domain Learning

     GANCS: Mardani et al, IEEE TMI, 2018
  12. Image Domain Learning QSMNet Yoon et al, NeuroImage, 2018 Courtesy

    of Jongho Lee
  13. Image Domain Learning

     Direct parameter mapping (MANTIS): Liu et al, MRM, 2019; courtesy of Fang Liu. Comparison at R = 5: zero-filling, global low rank, local low rank, joint x-p reconstruction, and MANTIS.
  14. Image Domain Learning

     Accelerated MR fingerprinting (R = 8): Fang et al, IEEE TMI, 2019; courtesy of Dinggang Shen. The input high-dimensional MRF signal evolution passes through a feature-extraction (FE) module of fully connected neural networks (FNN) to produce low-dimensional feature maps, and a spatially constrained quantification (SQ) module of U-Nets outputs the tissue property maps (T1, T2). Deep learning is compared against template matching.
  15. Hybrid Domain Learning

     Deep Cascade of CNNs for MRI Reconstruction: Schlemper et al, IEEE TMI, 2017; courtesy of D. Rueckert. Panels: (a) 11x undersampled, (b) CNN reconstruction, (c) ground truth.
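Cascaded architectures like this interleave CNN passes with data-consistency steps that force the intermediate image's k-space to agree with the measured samples. A minimal sketch of one data-consistency step, assuming a single-coil Cartesian model; the names `mask` and `measured_k` are illustrative, not from the paper's code:

```python
import numpy as np

# One data-consistency (DC) step: after a CNN pass, the image's k-space
# is overwritten with the measured samples at the acquired locations,
# so the output never contradicts the measurement.

def data_consistency(image, measured_k, mask):
    k = np.fft.fft2(image)
    k = np.where(mask, measured_k, k)  # keep measured samples as-is
    return np.fft.ifft2(k)             # complex image estimate

rng = np.random.default_rng(0)
truth = rng.random((8, 8))
measured_k = np.fft.fft2(truth)
mask = rng.random((8, 8)) < 0.5                  # ~50% sampled locations
cnn_estimate = truth + 0.1 * rng.random((8, 8))  # imperfect CNN output
dc_out = data_consistency(cnn_estimate, measured_k, mask)
```

Stacking several CNN + DC pairs gives the cascade: each CNN refines the unsampled k-space content while the DC step pins the sampled content to the data.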
  16. Hybrid Domain Learning

     KIKI-net: cross-domain CNN. Eo et al, MRM, 2018; courtesy of Doshik Hwang
  17. Domain-transform Learning AUTOMAP Zhu et al, Nature, 2018

  18. Sensor-domain Learning

     RAKI: Robust ANN for k-space Interpolation. Akçakaya et al, MRM, 2019; courtesy of Mehmet Akçakaya
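RAKI, like GRAPPA, is scan-specific: kernels that synthesize missing k-space lines from acquired neighbors are calibrated on the fully sampled autocalibration (ACS) region of the same scan, with RAKI replacing GRAPPA's linear kernel by a small nonlinear CNN. A linear (GRAPPA-style) analogue of that calibrate-then-interpolate loop, on toy data:

```python
import numpy as np

# Scan-specific k-space interpolation in the GRAPPA/RAKI spirit:
# weights that synthesize a skipped line from the lines above and
# below it are fit by least squares on the ACS region, then applied
# to the undersampled region. RAKI would replace the linear weights
# with a small CNN trained the same way.

def fit_weights(acs):
    # targets: the odd ACS lines; sources: their even neighbours
    above, below, target = acs[:-2:2], acs[2::2], acs[1:-1:2]
    A = np.stack([above.ravel(), below.ravel()], axis=1)
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w

def interpolate_line(above, below, w):
    return w[0] * above + w[1] * below

# toy "k-space" that varies linearly along the phase-encode direction
acs = np.linspace(0.0, 1.0, 6)[:, None] * np.ones((1, 4))
w = fit_weights(acs)
estimated = interpolate_line(acs[2], acs[4], w)  # reconstruct line 3
```

Because the calibration data come from the same scan, no external training database is needed, which is the "robust" part of RAKI's pitch.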
  19. Sensor-domain Learning CNN k-space deep learning Han et al, arXiv:1805.03779

    (2018)
  20. Hmm, ML was already used for MR recon…

  21. What’s so special this time?

     • Accuracy: higher-quality reconstruction than CS
     • Fast reconstruction time
     • Business model: vendor-driven training
     • Interpretable models
     • Flexibility: more than reconstruction
     (Chart: imaging time and reconstruction time for conventional, compressed sensing, and machine learning approaches)
  22. Variational Network (R = 4)

     Comparison of PI (CG-SENSE), PI-CS (TGV), and learning (variational network). Hammernik et al, MRM, 2018; courtesy of Florian Knoll
  23. K-space Deep Learning (Radial R=6) Ground-truth Acceleration Image learning CS

    K-space learning Han et al, arXiv:1805.03779 (2018)
  24. K-space Deep Learning (Radial R=6) Ground-truth Acceleration Image learning CS

    K-space learning Han et al, arXiv:1805.03779 (2018)
  25. Research Goal

     • To improve the temporal resolution of TWIST imaging using deep k-space learning
     • To generate multiple reconstruction results with various spatial and temporal resolutions using one network (e.g., VS = 5 and VS = 2)
     K-space Deep Learning for Time-resolved MRI: Cha et al, arXiv:1806.00806 (2018)
  26. K-space Learning vs. TWIST (VS = 5): true dynamics, TWIST, and k-space learning

  27. Ours with VS = 2: true dynamics, TWIST, and k-space learning
  28. Ours with VS = 5: true dynamics, TWIST, and k-space learning
  29. Further applications

     • pCASL denoising: Kim et al, Radiology, 2018 (comparison against Rec_PF, zero-filling, PD-net, and ADMM-net; PSNR 23.31-41.84 dB, SSIM 0.5980-0.9920)
     • Primal-dual network (DPI-net): Cheng et al, ISMRM 2019, p3983; Jun et al, MRM, 2019
     • Metal artifact removal: Kwon et al, MICCAI, 2018
     • T2 reconstruction from T1 and partial T2 (courtesy of Dinggang Shen)
     • RF design: reinforcement-learning-designed RF vs. conventional minimum-phase RF, peak RF reduced by 2.1x (D Shin, ISMRM 2019, #757)
  30. Deep Learning for Synthetic/Pseudo CT

     Leynes, Hope, Larson, et al, J Nucl Med, 2017. Input: multi-contrast MRI; output: synthetic CT; architecture: 3D convolutional neural network. Slide courtesy of Peder Larson
  31. GAN for Image Translation

     Unsupervised denoising for multiphase coronary CT angiography: Kang et al, Medical Physics, 2018. Panels: input (phase 1), proposed, target (phase 8), input-output.
  32. CollaGAN: Collaborative GAN Lee et al, CVPR, 2019

  33. CollaGAN for MR Contrast Imputation Lee et al, CVPR, 2019

  34. CollaGAN for Contrast Imputation Lee et al, CVPR, 2019

  35. CollaGAN for MAGiC T2-FLAIR Correction

     (Side-by-side comparisons of MAGiC T2-FLAIR and T2-FLAIR across multiple cases)
  36. WHY DOES DEEP LEARNING WORK FOR RECON? DOES IT CREATE ANY ARTIFICIAL FEATURES?
  37. Signal Representation: Key to Classical Recon

     x = \sum_n \langle x, \tilde{b}_n \rangle b_n

     where b_n is the synthesis basis, \tilde{b}_n the analysis basis, and \langle x, \tilde{b}_n \rangle the coefficients. Basis engineering: wavelets, sparse bases (CS), dictionary learning, low-rank bases.
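The basis expansion on this slide can be checked numerically: for a complete orthonormal basis the analysis and synthesis bases coincide and the reconstruction is exact. A small numpy sketch, where a random orthonormal basis stands in for wavelets or a learned dictionary:

```python
import numpy as np

# Classical representation x = sum_n <x, b~_n> b_n: with a complete
# orthonormal basis, analysis and synthesis bases coincide and the
# expansion reproduces the signal exactly. A random orthonormal basis
# (via QR) stands in for wavelets or a learned dictionary.

rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # columns are b_n
x = rng.standard_normal(8)

coeffs = B.T @ x        # analysis:  c_n = <x, b_n>
x_rec = B @ coeffs      # synthesis: x = sum_n c_n b_n
```

Compressed sensing and dictionary learning keep this same linear expansion but engineer the basis so the coefficients are sparse.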
  38. Ultimate Signal Representation for MR?

     x = \sum_n \langle x, \tilde{b}_n \rangle b_n, with a fixed basis b_n and one coefficient \langle x, \tilde{b}_n \rangle per basis element.
  39. Ultimate Signal Representation for MR?

     x = \sum_n \langle x, \tilde{b}_n(x) \rangle b_n(x)

     The ideal basis should be adaptive to the input in real time.
  40. Encoder-Decoder CNN as Signal Representation

     The encoder acts as the analysis basis and the decoder as the synthesis basis. Ye et al, SIAM J. Imaging Sciences, 2018; Ye et al, ICML, 2019
  41. Nonlinear Convolutional Framelet Representation

     Built from learned filters, pooling/un-pooling, and ReLU (input adaptive). Ye et al, ICML, 2019
  42. Input Space Partitioning in ReLU Networks Ye et al, ICML,

    2019
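The partitioning argument can be made concrete: a ReLU network is piecewise linear, so for any fixed input it reduces to a plain linear map whose form is selected by the on/off pattern of the ReLUs, i.e. an input-adaptive representation. A toy numpy check, with random weights standing in for a trained network:

```python
import numpy as np

# A one-hidden-layer ReLU network is piecewise linear: for a fixed
# input x, the ReLU on/off pattern selects which rows of W1 are active,
# and the whole network collapses to a single linear map M. Each
# activation pattern corresponds to one region of the input-space
# partition.

rng = np.random.default_rng(2)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))

def relu_net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(3)
pattern = (W1 @ x > 0).astype(float)    # which neurons fire at x
M = W2 @ (pattern[:, None] * W1)        # equivalent linear map at x
```

Inputs that share an activation pattern share the same linear map, so the network is a family of linear representations indexed by the partition cell the input falls into.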
  43. Channel, Depth, and Skipped Connection

     The number of representations grows with the number of network elements: the number of channels, the network depth, and skipped connections. Ye et al, ICML, 2019
  44. Take-away message

     • Deep learning is very powerful for reconstruction.
     • Deep learning is a novel image representation with automatic input adaptivity.
     • It extends classical regression, CS, PCA, etc.
     • More training data gives a better representation.
     Don’t be afraid of using it!
  45. Acknowledgement

     • Daniel Rueckert (Imperial College) • Florian Knoll (NYU) • Fang Liu (Univ. of Wisconsin) • Mehmet Akcakaya (Univ. of Minnesota) • Dong Liang (SIAT, China) • Dinggang Shen (UNC) • Peder Larson (UCSF) • Hyunwook Park (KAIST) • Sung-hong Park (KAIST) • Jongho Lee (SNU) • Doshik Hwang (Yonsei Univ.) • Won-Jin Moon (Konkuk Univ. Medical Center) • Eungyeop Kim (Gachon Univ. Medical Center) • Leonard Sunwoo (SNUBH) • Kyuhwan Jung (Vuno)
     • Grants: NRF of Korea; Ministry of Trade, Industry and Energy