
Deep Learning for Biomedical Image Reconstruction


Tutorial Talk, IEEE Symp. on Biomedical Imaging (ISBI), April 11th, 2019, Venice, Italy

Jong Chul Ye

April 11, 2019

Transcript

  1. Deep Learning for Biomedical Image Reconstruction
    Jong Chul Ye, Ph.D., Endowed Chair Professor
    BISPL - BioImaging, Signal Processing and Learning Lab, Dept. of Bio & Brain Engineering, KAIST, Korea
    Time: 14:45-18:00, Date: Thursday April 11th, Place: Venetian Ballroom C, Tutorial ThPM1T2
    Latest material can be downloaded from https://bispl.weebly.com
  2. Outline
    • Introduction to biomedical image reconstruction
    • Deep learning: a brief review
    • Examples of deep learning for biomedical image reconstruction
      - MRI
      - Low-dose CT
      - Optics
      - Ultrasound
    • Interpretation of deep image recon
      - Unrolled sparse recovery, FBPConvNet
      - Variational neural network
      - ADMM-Net, learned primal dual
      - Learned projected gradient method
      - Deep convolutional framelets
      - Representation learning
    • Advanced topics of deep image recon
      - Unsupervised learning
      - Contrast/image imputation
  3. Analytic Reconstruction
    (a) MR imaging; (b) delay and sum (DAS): time reversal of a scattered wave.
    Beautiful analytic reconstruction results from fully sampled data.
  4. Analytic Recon: Unmet Needs in Medical Imaging
    • The high radiation dose needed for high-quality CT imaging increases the risk of cancer for patients.
    • The long scan time of MR significantly reduces scanner usage and reduces hospital revenue.
    • The low image quality of US is a technical hurdle for portable ultrasound imaging systems.
  5. Model-Based Iterative Recon (MBIR)
    Forward mapping by physics maps the reconstructed image to the measurement data; prior knowledge (smoothness, sparsity, etc.) enters through the regularizer:
    $\hat{x} = \arg\min_x \|y - Ax\|_2^2 + \lambda\|Dx\|$
    (a gradient-descent sketch follows below)
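    A minimal illustration (not from the talk) of the MBIR objective above, assuming a generic matrix forward operator A and a finite-difference regularizer D; a smooth quadratic penalty is used here so plain gradient descent applies, whereas sparsity-promoting norms would need proximal steps instead. All names and sizes are illustrative.

```python
import numpy as np

def mbir_gradient_descent(y, A, D, lam=0.1, step=0.05, n_iter=500):
    """Minimize ||y - A x||_2^2 + lam * ||D x||_2^2 by gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
        x -= step * grad
    return x

# Toy example: recover a smooth signal from noisy random projections.
rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)          # forward operator
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)         # finite-difference prior
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = mbir_gradient_descent(y, A, D)
```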
  6. Compressed Sensing (CS)
    $b = Ax + n$: incoherent projection, underdetermined system, sparse unknown vector. Here $x$ is $n \times 1$ with $k$ non-zeros, the measurement vector $b$ is $m \times 1$, and $m \approx k \log(n) \ll n$. (An ISTA sketch follows below.) Courtesy of Dr. Dror Baron
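    A minimal sketch of CS recovery via the iterative shrinkage-thresholding algorithm (ISTA), in the same regime the slide describes, m ≈ k log(n) ≪ n; the problem sizes and parameter values are illustrative, not from the talk.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(b, A, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5 * ||b - A x||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x

# Underdetermined system: n = 256 unknowns, m ~ k log(n) measurements.
rng = np.random.default_rng(1)
n, k = 256, 8
m = int(k * np.log(n))                     # ~44 incoherent measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = ista(b, A)
```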
  7. Structured Low-Rank Approaches
    MR applications: Shin et al, MRM, 2014; Haldar et al, TMI, 2014; Lee et al, MRM, 2016; Ongie et al, SIIMS, 2017.
    Image processing/computer vision: Jin et al, TIP, 2015; Jin et al, TIP, 2018.
    Super-resolution microscopy: Min et al, TIP, 2018.
    Theoretical guarantees: Ye et al, TIT, 2017; Ongie et al, SIIMS, 2017; Ongie et al, TSP, 2018.
  8. Year 2016: Deep Learning Breakthrough
    Multilayer perceptron (Kwon et al, Medical Physics, 2017); variational network (Hammernik et al, MRM, 2018); deep learning prior (Wang et al, ISBI, 2016); ADMM-Net (Yang et al, NIPS, 2016)
  9. Year 2016: Deep Learning Breakthrough in MR
    Multilayer perceptron (Kwon et al, Medical Physics, 2017); variational network (Hammernik et al, MRM, 2018); deep learning prior (Wang et al, ISBI, 2016); ADMM-Net (Yang et al, NIPS, 2016)
  10. Challenges to the Imaging Community
    • Real or cosmetic changes?
    • Why does it work?
    • What is the optimal architecture?
    • What is the link to signal processing approaches?
    • No use of domain expertise?
  11. ImageNet Challenge (Fei-Fei, 2009)
    • 1,000 object categories
    • Images: 1.2M (training), 100k (test)
    • ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
    Deng et al, CVPR, 2009
  12. Deep Learning Age • Deep learning has been successfully used

    for classification, low-level computer vision, etc • Even outperforms human observers Figure modified from Kaiming He’s presentation
  13. Generative Adversarial Networks (Goodfellow et al, NIPS, 2014)
    Figures adapted from:
    • BEGAN: Boundary Equilibrium Generative Adversarial Networks, '17.03
    • StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, '16.12
  14. First CNN: LeNet (LeCun, 1998) LeCun, Yann, et al. "Gradient-based

    learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
  15. Pooling & Unpooling
    Input (4x4): [[12, 20, 30, 0], [8, 12, 2, 0], [34, 70, 37, 7], [112, 100, 22, 12]]
    Max pooling (2x2) keeps each block's maximum: [[20, 30], [112, 37]]; average pooling replaces each block by its mean: [[13, 8], [79, 19.5]].
    Unpooling writes each pooled value back into its block and fills the rest with zeros (worked NumPy example below).
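    The pooling arithmetic on this slide can be reproduced directly; a small NumPy sketch using the slide's 4x4 example, with max-location unpooling shown as one common unpooling variant:

```python
import numpy as np

x = np.array([[ 12,  20, 30,  0],
              [  8,  12,  2,  0],
              [ 34,  70, 37,  7],
              [112, 100, 22, 12]], dtype=float)

# Rearrange into four flattened 2x2 blocks, indexed by block row/column.
blocks = x.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(2, 2, 4)

max_pooled = blocks.max(axis=-1)    # [[20, 30], [112, 37]]
avg_pooled = blocks.mean(axis=-1)   # [[13, 8], [79, 19.5]]

# Max unpooling: write each pooled value back at its argmax location,
# zeros elsewhere (spatial detail outside the maxima is lost).
unpooled = np.zeros_like(x)
for i in range(2):
    for j in range(2):
        block = x[2*i:2*i+2, 2*j:2*j+2]
        r, c = np.unravel_index(np.argmax(block), (2, 2))
        unpooled[2*i + r, 2*j + c] = max_pooled[i, j]
```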
  16. Visual Information Processing in the Brain
    Kravitz et al, Trends in Cognitive Sciences, January 2013, Vol. 17, No. 1
  17. Retina, V1 Layer
    Receptive fields of two ganglion cells in the retina → convolution. Orientation columns in V1.
    http://darioprandi.com/docs/talks/image-reconstruction-recognition/graphics/pinwheels.jpg
    Figure courtesy of distillery.com
  18. (figure-only slide)
  19. Unmet Needs in MRI
    • MR is an essential tool for diagnosis
    • MR exam protocol: 30-60 min/patient
      - should increase the throughput of MR scanning
    • Cardiac imaging, fMRI
      - should improve temporal resolution
    • Multiple contrast acquisition
  20. MR Acceleration
    Fast pulse sequences: Mansfield, JPC, 1977; Ahn et al, TMI, 1986.
    Parallel/multiband imaging: Sodickson et al, MRM, 1997; Pruessmann et al, MRM, 1999; Griswold et al, MRM, 2002.
    Compressed sensing, MRF: Lustig et al, MRM, 2007; Jung et al, PMB, 2007; MRM, 2009; Ma et al, Nature, 2013.
    Structured low-rank methods: Shin et al, MRM, 2014; Haldar et al, TMI, 2014; Lee et al, MRM, 2016; Ongie et al, 2017.
  21. Image Domain Learning
    In vivo golden-angle radial acquisition results (collaboration with KH Sung at UCLA). Target: abdomen; ground truth: 302 views; input: 75 views (acceleration factor x4); training dataset: 15 slices. Error values shown on the panels: input 13.118e-2 vs. proposed 2.3705e-2.
    Accelerated Projection Reconstruction MR Imaging using Deep Residual Learning, MRM Highlight, September 2018. Domain adaptation network. Han et al, MRM, 2018; Lee et al, TBME, 2018.
  22. Image Domain Learning
    Direct parameter mapping (MANTIS) at R=5, compared with zero-filling, global low rank, local low rank, and joint x-p recon. Liu et al, MRM, 2019. Courtesy of Fang Liu.
  23. Hybrid Domain Learning Deep Cascade of CNNs for MRI Reconstruction

    Schlemper et al. IEEE TMI 2017 Courtesy of D. Rueckert
  24. Hybrid Domain Learning Eo et al , MRM, 2018 KIKI-net:

    cross-domain CNN Courtesy of Doshik Hwang
  25. What's So Special This Time?
    • High-quality recon: better than CS
    • Fast reconstruction time
    • Business model: vendor-driven training
    • Interpretable models
    (Chart: imaging time vs. reconstruction time for conventional, compressed sensing, and machine learning approaches)
  26. Variational Network (R=4)
    Panels: PI (CG SENSE), PI-CS (TGV), learning (VN). Hammernik et al, MRM, 2018. Courtesy of Florian Knoll.
  27. K-space Deep Learning (Radial R=6)
    Panels: ground truth, acceleration, image learning, CS, k-space learning. Han et al, in revision, 2018.
  28. K-space Deep Learning (Radial R=6)
    Panels: ground truth, acceleration, image learning, CS, k-space learning. Han et al, in revision, 2018.
  29. K-space Deep Learning for Time-Resolved MRI: Research Goals
    • To improve the temporal resolution of TWIST imaging using deep k-space learning
    • To generate multiple reconstruction results with various spatial and temporal resolutions using one network (VS = 5, VS = 2, CNN)
    Cha et al, in revision, 2018.
  30. Ours with VS=2
    Panels: true dynamics, k-space learning (VS = 2), TWIST. Cha et al, in revision, 2018.
  31. Ours with VS=5
    Panels: true dynamics, k-space learning (VS = 5), TWIST. Cha et al, in revision, 2018.
  32. Dose Reduction Techniques
    To reduce radiation exposure: sparse-view CT (down-sampled views), low-dose CT (reduced X-ray dose), and interior tomography (truncated FOV).
  33. Low-Dose CT
    To reduce radiation exposure: sparse-view CT (down-sampled views), low-dose CT (reduced X-ray dose), and interior tomography (truncated FOV).
  34. Energy-Dependent Attenuation Coefficient
    (Figure: attenuation curves for metal, bone, and water & soft tissue; X-ray source and detector geometry)
    Streaking artifacts plus random noise make it difficult to design shrinkage using statistical approaches.
  35. Wavelet-Domain Residual Learning
    Wavelet transform (levels 1-4) → CNN with residual learning on the high-SNR bands → wavelet recomposition; the low-resolution image bypasses the network (a sketch follows below). Kang et al, Medical Physics 44(10).
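    A hedged sketch of the wavelet-domain residual scheme this slide describes, using PyWavelets; the `cnn` module below is a stand-in for the trained denoising network of Kang et al, and the wavelet choice and level count are assumptions.

```python
import numpy as np
import pywt
import torch

def wavelet_residual_denoise(img, cnn, levels=4, wavelet='db3'):
    """Denoise the detail (high-frequency) bands with a residual CNN while
    the coarsest approximation band bypasses the network untouched."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]        # low-res bypass + detail bands
    cleaned = [approx]
    for (cH, cV, cD) in details:
        band = np.stack([cH, cV, cD])              # 3 x h x w detail stack
        t = torch.from_numpy(band).float().unsqueeze(0)
        residual = cnn(t).squeeze(0).detach().numpy()
        cleaned.append(tuple(band - residual))     # residual learning: output = input - noise
    return pywt.waverec2(cleaned, wavelet)

# Stand-in for the trained network: any 3-channel-to-3-channel module works here.
cnn = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
denoised = wavelet_residual_denoise(np.random.rand(128, 128), cnn)
```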
  36. Da Cunha, Arthur L., Jianping Zhou, and Minh N. Do. "The nonsubsampled contourlet transform: theory, design, and applications." IEEE Transactions on Image Processing 15.10 (2006): 3089-3101.
  37. Sparse-View CT
    To reduce radiation exposure: sparse-view CT (down-sampled views), low-dose CT (reduced X-ray dose), and interior tomography (truncated FOV).
  38. Image Domain Learning Tight Frame U-Net JC Ye et al,

    SIAM Journal Imaging Sciences, 2018 Han et al, TMI, 2018
  39. 90-View Recon: U-Net vs. Tight-Frame U-Net
    JC Ye et al, SIAM Journal Imaging Sciences, 2018; Y. Han and J. C. Ye, TMI, 2018
  40. 9-View Dual-Energy CT for Baggage Screening
    Han et al, arXiv preprint arXiv:1712.10248 (2017); CT Meeting (2017)
  41. 1st view 2nd view 3rd view 4th view 5th view

    6th view 7th view 8th view 9th view
  42. FBP

  43. TV

  44. ROI CT (Interior CT)
    To reduce radiation exposure: sparse-view CT (down-sampled views), low-dose CT (reduced X-ray dose), and interior tomography (truncated FOV).
    Ward et al, SIIMS, 2015; Lee et al, SIIMS, 2015
  45. Optical Diffraction Tomography
    Experimental set-up [1]: P: pinhole, L: lens, CL: condenser lens, OL: objective lens, BS: beam splitter, M: mirror, GM: galvano mirror.
    Measurement: total E-field = incident E-field + scattered E-field; k-space trajectory.
    [1] Yoon, Jonghee, et al. "Label-free characterization of white blood cells by measuring 3D refractive index maps." arXiv preprint arXiv:1505.02609 (2015).
  46. B-Mode / Plane-Wave Ultrasound Imaging
    Panels: B-mode imaging; plane-wave imaging. Couade M, JVDI, 2015; Yoon et al, TMI, 2018
  47. Unmet Needs in US Imaging
    • Power consumption
      - many Rx channels must be used
      - every ADC at Rx consumes power
    • Temporal resolution
      - set by the number of scan lines and the echo travel time (speed of sound)
      - limiting factor for 3D or fast acquisition
    • Inaccuracy of the time-reversal model
      - DAS is based on time reversal
      - time reversal is based on a continuous model
      - needs adaptive beamformers
    • Need for new US products
      - portable US
      - 3-D US
      - ultrafast US
  48. Deep Learning for US Reconstruction Luchies AC, et al. TMI,

    2018 Vedula, et al, MICCAI, 2018. Zhou et al. TUFFC , 2018 Gasse et al. TUFFC , 2017 Deep Fourier Beamformer Deep Coherent Compounding ML to SL conversion Super-resolution Plane Wave
  49. Low-Power Fast US using RF Subsampling
    Rx subsampled: use a subset of the receivers; reduce ADC power consumption; portable ultrasound systems.
    SC subsampled: use a subset of the scan lines; reduce RF data acquisition time; ultra-fast ultrasound systems.
    Yoon et al, TMI, 2018
  50. RF Interpolation via Deep Neural Networks
    Alpinion EC12R • Linear (L3-12H): 8.48 MHz • Convex (SC1-4H): 3.2 MHz • Rx-SC (64x384) • depth samples: 15,000
    Yoon et al, TMI, 2018
  51. CNN: Too Simple to Analyze...?
    Convolution & pooling → stone-age tools of signal processing. What do they do?
  52. Many Mysteries...
    • What is the role of the nonlinearity such as the rectified linear unit (ReLU)?
    • Why do we need pooling and unpooling in some architectures?
    • Why do some networks need fully connected layers whereas others do not?
    • What is the role of the bypass connection or residual network?
    • What is the role of the filter channels in a convolutional layer?
  53. Learned ISTA (LISTA), Gregor et al, ICML, 2010
    • Direct connection to sparse recovery
    • Cannot explain the role of channels
    (a PyTorch sketch follows below)
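    A minimal PyTorch sketch of LISTA as described by Gregor & LeCun: ISTA unrolled into a fixed number of layers whose matrices and thresholds are learned from data. The layer count and sizes here are illustrative; training would minimize the error against reference sparse codes.

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Unrolled ISTA: x_{t+1} = soft(W_e b + S x_t, theta_t), with W_e, S,
    and the per-layer thresholds theta learned (Gregor & LeCun, 2010)."""
    def __init__(self, m, n, n_layers=16):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)   # learned encoder (plays A^T / L)
        self.S = nn.Linear(n, n, bias=False)    # learned mutual-inhibition matrix
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))
        self.n_layers = n_layers

    @staticmethod
    def soft(z, t):
        return torch.sign(z) * torch.relu(torch.abs(z) - t)

    def forward(self, b):
        c = self.We(b)
        x = self.soft(c, self.theta[0])
        for t in range(1, self.n_layers):
            x = self.soft(c + self.S(x), self.theta[t])
        return x

net = LISTA(m=44, n=256)
x_hat = net(torch.randn(8, 44))   # batch of 8 measurement vectors
```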
  54. FBPConvNet Jin et al. TIP 2017 • Extension of LISTA

    when the normal operator is shift-invariant
  55. Variational Neural Networks
    • Multichannel filters from the decomposition of the regularization term
    • Different from a standard CNN
    Courtesy of Florian Knoll
  56. Learned Primal Dual, Adler et al, TMI, 2018
    Replacing the primal & dual proximal operators with CNNs (a sketch follows below).
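    A simplified sketch of the learned primal-dual idea; the memory channels of the original method are omitted, the CNN blocks are illustrative, and `fwd`/`adj` are assumed callables for the forward operator A and its adjoint.

```python
import torch
import torch.nn as nn

class LearnedPrimalDual(nn.Module):
    """Unrolled primal-dual scheme in which small residual CNNs replace the
    proximal operators (after Adler & Oktem, 2018; simplified sketch)."""
    def __init__(self, fwd, adj, n_iter=10):
        super().__init__()
        self.fwd, self.adj, self.n_iter = fwd, adj, n_iter
        def block(c_in):
            return nn.Sequential(nn.Conv2d(c_in, 32, 3, padding=1), nn.PReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
        self.dual_nets = nn.ModuleList(block(3) for _ in range(n_iter))    # data space
        self.primal_nets = nn.ModuleList(block(2) for _ in range(n_iter))  # image space

    def forward(self, y):
        h = torch.zeros_like(y)             # dual variable (measurement space)
        f = torch.zeros_like(self.adj(y))   # primal variable (image space)
        for dual_net, primal_net in zip(self.dual_nets, self.primal_nets):
            h = h + dual_net(torch.cat([h, self.fwd(f), y], dim=1))
            f = f + primal_net(torch.cat([f, self.adj(h)], dim=1))
        return f
```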
  57. Generative Model
    • Image reconstruction as distribution matching; however, it is difficult to explain the role of the black-box network.
    Bora et al, Compressed Sensing using Generative Models, arXiv:1703.03208
  58. Convolution Framelets (Yin et al, 2017)
    $H_d(f) = U\Sigma V^\top$, with a non-local basis $\Phi$ and a local basis $\Psi$ satisfying the frame conditions $\Phi\Phi^\top = I$ and $\Psi\Psi^\top = I$.
  59. Deep Convolutional Framelets (Ye, Han, Cha, 2018)
    With the Hankel lift $H_d(f) = U\Sigma V^\top$, a user-defined non-local basis $\Phi$ (pooling/un-pooling) and learnable local filters $\Psi$:
    Encoder (convolution + pooling): $C = \Phi^\top H_d(f)\Psi = \Phi^\top (f \circledast \Psi)$
    Decoder / unlifting (un-pooling + convolution): $f = (\tilde\Phi C) \circledast \tau(\tilde\Psi)$, with the expansion $H_{\phi_i}(g_i) = \sum_{k,l} [C_i]_{kl}\,\tilde B_i^{kl}$
    Frame condition: $\tilde\Phi\Phi^\top = I$; rank condition: $\Psi\tilde\Psi^\top = P_{R(V)}$.
  60. Why Hankel Matrix?
    Missing elements can be found by low-rank Hankel structured matrix completion:
    $\min_m \|\mathcal{H}(m)\|_*$ (nuclear norm) subject to $P_\Omega(m) = P_\Omega(f)$ (projection on sampling positions), where $\mathrm{rank}\,\mathcal{H}(f) = k$. (A numeric illustration follows below.)
    Jin KH et al, IEEE TCI, 2016; Jin KH et al, IEEE TIP, 2015; Ye JC et al, IEEE TIT, 2016
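    A small numeric illustration of why the Hankel lift is low rank: a signal built from two harmonic exponentials (so a length-3 annihilating filter exists) yields a wrap-around Hankel matrix of rank 2. The signal, sizes, and helper name are illustrative.

```python
import numpy as np

def hankel_matrix(f, d):
    """Wrap-around (circulant-style) Hankel matrix H(f) with window length d."""
    n = len(f)
    return np.array([[f[(i + j) % n] for j in range(d)] for i in range(n)])

# Two complex sinusoids at harmonic frequencies -> rank H(f) = 2.
n, d = 64, 8
t = np.arange(n)
f = np.exp(2j * np.pi * 5 * t / n) + 0.5 * np.exp(2j * np.pi * 11 * t / n)
H = hankel_matrix(f, d)
print(np.linalg.matrix_rank(H))   # 2: rank equals the number of spectral spikes
```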
  61. Problem of U-Net
    Pooling does NOT satisfy the frame condition: $\Phi_{ext}\Phi_{ext}^\top = I + \Phi\Phi^\top \neq I$ (numeric check below).
    JC Ye et al, SIAM Journal Imaging Sciences, 2018; Y. Han et al, TMI, 2018.
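    A quick numeric check of the claim, under the assumption of plain average pooling written as a matrix: pooling followed by its adjoint unpooling is not the identity, so detail is lost unless a bypass carries it or a dual/tight-frame decomposition is used.

```python
import numpy as np

n = 8
# Average pooling as a matrix (maps R^n -> R^{n/2}), adjoint used as unpooling.
pool = np.kron(np.eye(n // 2), [0.5, 0.5])   # (n/2) x n
unpool = pool.T                               # duplicate-and-halve unpooling

P = unpool @ pool                             # round trip in signal space
print(np.allclose(P, np.eye(n)))              # False: frame condition violated
# P averages each sample pair, so high-frequency content vanishes on the
# pooling path alone; this is what the dual/tight-frame U-Nets repair.
```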
  62. Improving U-net using Deep Conv Framelets • Dual Frame U-net

    • Tight Frame U-net JC Ye et al, SIAM Journal Imaging Sciences, 2018 Y. Han and J. C. Ye, TMI, 2018
  63. U-Net versus Dual Frame U-Net Y. Han and J. C.

    Ye, TMI, 2018; Yoo et al, SIJAM, 2018
  64. Style Transfer: Power of Wavelet Pooling
    Pix2pix (Isola et al, CVPR, 2017); CycleGAN (Zhu et al, ICCV, 2017)
  65. Eigenface Representation of a Face
    $x = \sum_{n=1}^{N} \langle x, b_n\rangle\, b_n$: PCA basis $b_n$, expansion coefficients $\langle x, b_n\rangle$ (sketch below).
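    A minimal sketch of this expansion with a PCA basis obtained from the SVD; the random matrix stands in for a centered face dataset, and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.standard_normal((100, 32 * 32))    # stand-in for a face dataset
mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)

k = 20
B = Vt[:k]                                     # top-k eigenfaces (PCA basis)
x = faces[0]
coeff = B @ (x - mean)                         # expansion coefficients <x, b_n>
x_hat = mean + B.T @ coeff                     # x ~ sum_n <x, b_n> b_n
```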
  66. GLM Basis Representation of fMRI
    $x = \sum_{n=1}^{N} \beta_n b_n$: GLM basis $b_n$, regression coefficients $\beta_n$.
  67. Sparse Representation in CS
    $x = \sum_n \langle x, \tilde b_n\rangle\, b_n$: wavelet basis or learned dictionary $b_n$, sparse coefficients $\langle x, \tilde b_n\rangle$.
  68. Ultimate Signal Representation?
    $x = \sum_n \langle x, \tilde b_n\rangle\, b_n$: a fixed basis with its coefficients.
  69. Ultimate Signal Representation?
    $x = \sum_n \langle x, \tilde b_n(x)\rangle\, b_n(x)$: the ideal basis should be adaptive to the input.
  70. Nonlinear Convolutional Framelet Representation
    Encoder basis and decoder basis. Ye et al, SIAM Journal Imaging Sciences, 2018; Ye et al, arXiv:1901.07647, 2019.
  71. Nonlinear Convolutional Framelet Representation
    ReLU (input adaptive), pooling/un-pooling, and learned filters: the representation can be obtained from the vectorized version of deep convolutional framelets.
  72. Expressivity of CNN
    The number of distinct linear representations grows with the number of network elements: the number of channels, the network depth, and skipped connections. Ye et al, arXiv:1901.07647, 2019.
  73. Take-Away Message
    $y = \sum_i \langle x, b_i(x)\rangle\, \tilde b_i(x)$
    • Deep learning is a novel image representation with automatic input adaptivity (demo below).
    • Extension of classical regression, CS, PCA, etc.
    • More training data gives a better representation → don't be afraid of using it!
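    One way to see the input adaptivity (an illustration, not from the talk): a bias-free ReLU network is positively homogeneous, so on the activation region containing x it acts as an input-dependent linear map, y = J(x) x, whose rows play the role of the adapted basis.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 32, bias=False), nn.ReLU(),
                    nn.Linear(32, 16, bias=False))

x = torch.randn(16)
J = torch.autograd.functional.jacobian(net, x)   # input-dependent linear map
print(torch.allclose(net(x), J @ x, atol=1e-5))  # True: y = J(x) x for bias-free ReLU nets
```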
  74. Unsupervised Learning for Low-Dose CT
    • Multiphase cardiac CT denoising
      - Phases 1-2: low dose; phases 3-10: normal dose
      - Goal: dynamic changes of heart structure
      - No reference available
    Kang et al, Medical Physics, 2018
  75. Unsupervised Learning for Low-Dose CT
    Cycle-Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography (a loss sketch follows below). Kang et al, Medical Physics, 2018
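    A hedged sketch of the cycle-consistent training losses for unpaired low-dose-to-routine-dose denoising; the generator/discriminator modules, loss choices, and weights below are illustrative, not the authors' exact code. The identity term, whose effect the ablation slides examine, keeps the denoiser from altering already-clean inputs.

```python
import torch
import torch.nn.functional as F

def cyclegan_denoising_loss(G_lr, G_rl, D_r, low, routine, lam_cyc=10.0, lam_id=5.0):
    """Generator-side losses: G_lr maps low-dose -> routine-dose, G_rl the
    reverse; D_r scores routine-dose realism (LSGAN-style)."""
    fake_routine = G_lr(low)
    score = D_r(fake_routine)
    adv = F.mse_loss(score, torch.ones_like(score))      # adversarial term
    cyc = F.l1_loss(G_rl(fake_routine), low) \
        + F.l1_loss(G_lr(G_rl(routine)), routine)        # both cycle directions
    idt = F.l1_loss(G_lr(routine), routine)              # identity: keep clean inputs
    return adv + lam_cyc * cyc + lam_id * idt
```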
  76. Low Dose (20%)
    Panels: input (phase 1), proposed, target (phase 8), input - output. Kang et al, Medical Physics, 2018
  77. Low Dose (5%)
    Panels: input (phase 1), proposed, target (phase 8), input - output. Kang et al, unpublished data
  78. Ablation Study
    Panels (a)-(h): input (phase 1), proposed, without identity loss, GAN. Kang et al, Medical Physics, 2018
  79. Ablation Study
    Panels (a)-(h): input (phase 1), proposed, without identity loss, GAN. Kang et al, Medical Physics, 2018
  80. CycleGAN with Explicit PSF layer for Blind Deconv Lim et

    al, ISBI, 2019; arXiv:1904.02910 (2019)
  81. CycleGAN with Explicit PSF layer for Blind Deconv Lim et

    al, ISBI, 2019; arXiv:1904.02910 (2019)
  82. Synergistic Imputation from Multiple MR Contrasts
    (Tables: missing contrasts across subjects for a radiomics study; columns ID, T1w, T2w, T1-FLAIR, T2-FLAIR)
    Incorrect contrast from synthetic MRI: T2-FLAIR (left) vs. MAGiC T2-FLAIR (right)
    • Partial volume effects
    • Motion artifact
    • Basilar artery and CSF pulsation artifact
  83. Proposed Method: CollaGAN
    • Imputation using multiple inputs: the input images and the target domain are fed to a single generator G, which produces the fake image.
    Multiple inputs / single generator / single discriminator / adversarial model / multiple cycle consistency. Lee et al, CVPR, 2019
  84. Proposed Method: CollaGAN
    • Mask vector for the single generator: the input is images + mask vector, where the mask vector is a 2D one-hot encoding of the target domain.
    Multiple inputs / single generator / single discriminator / adversarial model / multiple cycle consistency. Lee et al, CVPR, 2019
  85. Proposed Method: CollaGAN
    • Adversarial model using D_gan: the discriminator scores (1) real images and (2) fake images as real/fake.
    Multiple inputs / single generator / single discriminator / adversarial model / multiple cycle consistency. Lee et al, CVPR, 2019
  86. Proposed Method: CollaGAN
    • D_clsf for the single discriminator: besides the real/fake score, D_clsf classifies the domain of (2) the fake image.
    Multiple inputs / single generator / single discriminator / adversarial model / multiple cycle consistency. Lee et al, CVPR, 2019
  87. Proposed Method: CollaGAN
    • Multiple cycle consistency loss: the fake image is combined with the remaining inputs and mapped back to each original domain, and every cyclic reconstruction must match its input (a sketch follows below).
    Multiple inputs / single generator / single discriminator / adversarial model / multiple cycle consistency. Lee et al, CVPR, 2019
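    A hedged sketch of the mask-vector input and the multiple cycle-consistency loss described on slides 84-87; the generator `G`, the tensor layout, and the channel-wise one-hot mask are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def collagan_cycle_loss(G, inputs, target_idx, n_domains=4):
    """Multiple cycle consistency (sketch): synthesize the missing contrast
    from the others, then require each remaining contrast to be recoverable
    from the set in which the fake replaces the held-out target.
    `inputs` is a list of n_domains tensors of shape (B, 1, H, W)."""
    def generate(imgs, idx):
        # One-hot mask over domains, broadcast spatially (an assumed layout).
        mask = torch.zeros_like(imgs[0]).expand(-1, n_domains, -1, -1).clone()
        mask[:, idx] = 1.0
        others = [im for j, im in enumerate(imgs) if j != idx]
        return G(torch.cat(others + [mask], dim=1))

    fake = generate(inputs, target_idx)
    loss = 0.0
    for j in range(n_domains):
        if j == target_idx:
            continue
        swapped = [fake if k == target_idx else inputs[k] for k in range(n_domains)]
        loss = loss + F.l1_loss(generate(swapped, j), inputs[j])
    return loss
```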
  88. (Figure: MAGiC T2-FLAIR vs. reconstructed T2-FLAIR, four cases.) Lee et al, unpublished data
  89. Quantitative Evaluation
    Segmentation performance on BRATS [1-2]: a segmentation network is applied to the original contrasts (T1, T2, T2F, T1c) and to the CollaGAN-imputed contrasts (T1_Colla, T2_Colla, T2F_Colla, T1Gd_Colla), and the predicted labels are compared.
    • T1 / T1Gd / T2w / T2-FLAIR
  90. Outlook
    • End-to-end AI for radiological imaging
      - from AI-powered image acquisition to diagnosis, for clear and rapid radiological imaging
    Existing AI solutions: diagnosis. Our future: from acquisition to diagnosis.
  91. Acknowledgement
    • Daniel Rueckert (Imperial College) • Florian Knoll (NYU) • Fang Liu (Univ. of Wisconsin) • Mehmet Akcakaya (Univ. of Minnesota) • Dong Liang (SIAT, China) • Hyunwook Park (KAIST) • Sung-hong Park (KAIST) • Jongho Lee (SNU) • Doshik Hwang (Yonsei Univ.) • Won-Jin Moon (Konkuk Univ. Medical Center) • Eungyeop Kim (Gachon Univ. Medical Center) • Leonard Sunwoo (SNUBH) • Won Chang (SNUBH) • Chang-Min Park (SNUH) • Joon-Beom Seo (AMC) • Donghyun Yang (AMC) • Hakhee Kim (AMC) • Jungu Ri (AMC)
    • Grant: NRF of Korea; Ministry of Trade, Industry and Energy
  92. References (in the order they appeared in the presentation)
    1. Zou, Y., & Pan, X. (2004). Exact image reconstruction on PI-lines from minimum data in helical cone-beam CT. Physics in Medicine & Biology, 49(6), 941.
    2. Lee, D., Jin, K. H., Kim, E. Y., Park, S. H., & Ye, J. C. (2016). Acceleration of MR parameter mapping using annihilating filter-based low rank Hankel matrix (ALOHA). Magnetic Resonance in Medicine, 76(6), 1848-1864.
    3. Lee, J., Jin, K. H., & Ye, J. C. (2016). Reference-free single-pass EPI Nyquist ghost correction using annihilating filter-based low rank Hankel matrix (ALOHA). Magnetic Resonance in Medicine, 76(6), 1775-1789.
    4. Jin, K. H., Lee, D., & Ye, J. C. (2016). A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix. IEEE Transactions on Computational Imaging, 2(4), 480-495.
    5. Jin, K. H., Um, J. Y., Lee, D., Lee, J., Park, S. H., & Ye, J. C. (2017). MRI artifact correction using sparse + low-rank decomposition of annihilating filter-based Hankel matrix. Magnetic Resonance in Medicine, 78(1), 327-340.
    6. Jin, K. H., & Ye, J. C. (2018). Sparse and low-rank decomposition of a Hankel structured matrix for impulse noise removal. IEEE Transactions on Image Processing, 27(3), 1448-1461.
    7. Ye, J. C., Kim, J. M., Jin, K. H., & Lee, K. (2017). Compressive sampling using annihilating filter-based low-rank interpolation. IEEE Transactions on Information Theory, 63(2), 777-801.
  93. 8. Shin, P. J., Larson, P. E., Ohliger, M. A., Elad, M., Pauly, J. M., Vigneron, D. B., & Lustig, M. (2014). Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion. Magnetic Resonance in Medicine, 72(4), 959-970.
    9. Haldar, J. P. (2014). Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI. IEEE Transactions on Medical Imaging, 33(3), 668-681.
    10. Haldar, J. P., & Zhuo, J. (2016). P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data. Magnetic Resonance in Medicine, 75(4), 1499-1514.
    11. Ongie, G., & Jacob, M. (2016). Off-the-grid recovery of piecewise constant images from few Fourier samples. SIAM Journal on Imaging Sciences, 9(3), 1004-1041.
    12. Ongie, G., & Jacob, M. (2017). A fast algorithm for convolutional structured low-rank matrix recovery. IEEE Transactions on Computational Imaging, 3(4), 535-550.
    13. Ongie, G., Biswas, S., & Jacob, M. (2018). Convex recovery of continuous domain piecewise constant images from nonuniform Fourier samples. IEEE Transactions on Signal Processing, 66(1), 236-250.
    14. Jin, K. H., & Ye, J. C. (2015). Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Transactions on Image Processing, 24(11), 3498-3511.
  94. 15. Min, J., Jin, K. H., Unser, M., & Ye, J. C. (2018). Grid-free localization algorithm using low-rank Hankel matrix for super-resolution microscopy. IEEE Transactions on Image Processing, 27(10), 4771-4786.
    16. Kwon, K., Kim, D., & Park, H. (2017). A parallel MR imaging method using multilayer perceptron. Medical Physics, 44(12), 6209-6224.
    17. Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., Pock, T., & Knoll, F. (2018). Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6), 3055-3071.
    18. Wang, S., Su, Z., Ying, L., Peng, X., Zhu, S., Liang, F., ... & Liang, D. (2016, April). Accelerating magnetic resonance imaging via deep learning. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) (pp. 514-517). IEEE.
    19. Sun, J., Li, H., & Xu, Z. (2016). Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems (pp. 10-18).
    20. Kang, E., Min, J., & Ye, J. C. (2017). A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Medical Physics, 44(10), e360-e375.
    21. Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248-255). IEEE.
  95. 22. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).
    23. Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International Conference on Machine Learning (pp. 214-223).
    24. Berthelot, D., Schumm, T., & Metz, L. (2017). BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717.
    25. Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., & Metaxas, D. N. (2017). StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 5907-5915).
    26. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
    27. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
    28. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  96. 29. Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
    30. He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    31. Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015.
    32. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436.
    33. Lee, Honglak, et al. "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations." Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
    34. Kravitz, Dwight J., et al. "The ventral visual pathway: an expanded neural framework for the processing of object quality." Trends in Cognitive Sciences 17.1 (2013): 26-49.
    35. Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11), 1019.
    36. Quiroga, R. Quian, et al. "Invariant visual representation by single neurons in the human brain." Nature 435.7045 (2005): 1102.
    37. Mansfield, Peter. "Multi-planar image formation using NMR spin echoes." Journal of Physics C: Solid State Physics 10.3 (1977): L55.
  97. 38. Ahn, C. B., J. H. Kim, and Z. H. Cho. "High-speed spiral-scan echo planar NMR imaging-I." IEEE Transactions on Medical Imaging 5.1 (1986): 2-7.
    39. Sodickson, Daniel K., and Warren J. Manning. "Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays." Magnetic Resonance in Medicine 38.4 (1997): 591-603.
    40. Pruessmann, Klaas P., et al. "SENSE: sensitivity encoding for fast MRI." Magnetic Resonance in Medicine 42.5 (1999): 952-962.
    41. Griswold, Mark A., et al. "Generalized autocalibrating partially parallel acquisitions (GRAPPA)." Magnetic Resonance in Medicine 47.6 (2002): 1202-1210.
    42. Ma, Dan, et al. "Magnetic resonance fingerprinting." Nature 495.7440 (2013): 187.
    43. Lustig, M., Donoho, D., & Pauly, J. M. (2007). Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6), 1182-1195.
    44. Jung, Hong, et al. "k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI." Magnetic Resonance in Medicine 61.1 (2009): 103-116.
    45. Jung, H., Ye, J. C., & Kim, E. Y. (2007). Improved k-t BLAST and k-t SENSE using FOCUSS. Physics in Medicine & Biology, 52(11), 3201.
  98. 46. Han, Y., Yoo, J., Kim, H. H., Shin, H. J., Sung, K., & Ye, J. C. (2018). Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magnetic Resonance in Medicine, 80(3), 1189-1205.
    47. Lee, D., Yoo, J., Tak, S., & Ye, J. C. (2018). Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Transactions on Biomedical Engineering, 65(9), 1985-1995.
    48. Yoon, Jaeyeon, et al. "Quantitative susceptibility mapping using deep neural network: QSMnet." NeuroImage 179 (2018): 199-206.
    49. Mardani, Morteza, et al. "Deep generative adversarial neural networks for compressive sensing MRI." IEEE Transactions on Medical Imaging 38.1 (2019): 167-179.
    50. Liu, F., Feng, L., & Kijowski, R. (2019). MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magnetic Resonance in Medicine.
    51. Schlemper, J., Caballero, J., Hajnal, J. V., Price, A. N., & Rueckert, D. (2018). A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging, 37(2), 491-503.
    52. Eo, T., Jun, Y., Kim, T., Jang, J., Lee, H. J., & Hwang, D. (2018). KIKI-net: cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images. Magnetic Resonance in Medicine, 80(5), 2188-2201.
  99. 53. Aggarwal, H. K., Mani, M. P., & Jacob, M. (2019). MoDL: Model-based deep learning architecture for inverse problems. IEEE Transactions on Medical Imaging, 38(2), 394-405.
    54. Zhu, B., Liu, J. Z., Cauley, S. F., Rosen, B. R., & Rosen, M. S. (2018). Image reconstruction by domain-transform manifold learning. Nature, 555(7697), 487.
    55. Akçakaya, Mehmet, et al. "Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: Database-free deep learning for fast imaging." Magnetic Resonance in Medicine 81.1 (2019): 439-453.
    56. Han, Y., & Ye, J. C. (2018). k-space deep learning for accelerated MRI. arXiv preprint arXiv:1805.03779.
    57. Cha, E., Kim, E. Y., & Ye, J. C. (2018). k-space deep learning for parallel MRI: Application to time-resolved MR angiography. arXiv preprint arXiv:1806.00806.
    58. Lee, J., Han, Y., & Ye, J. C. (2018). k-space deep learning for reference-free EPI ghost correction. arXiv preprint arXiv:1806.00153.
    59. Yan, H., & Mao, J. (1993). Data truncation artifact reduction in MR imaging using a multilayer neural network. IEEE Transactions on Medical Imaging, 12(1), 73-77.
    60. Kim, K. H., Choi, S. H., & Park, S. H. (2017). Improving arterial spin labeling by using deep learning. Radiology, 287(2), 658-666.
    61. Brenner, D. J., & Hall, E. J. (2007). Computed tomography—an increasing source of radiation exposure. New England Journal of Medicine, 357(22), 2277-2284.
  100. 62. Pearce, Mark S., et al. "Radiation exposure from CT scans in childhood and subsequent risk of leukaemia and brain tumours: a retrospective cohort study." The Lancet 380.9840 (2012): 499-505.
    63. Da Cunha, A. L., Zhou, J., & Do, M. N. (2006). The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 15(10), 3089-3101.
    64. Kang, E., Chang, W., Yoo, J., & Ye, J. C. (2018). Deep convolutional framelet denosing for low-dose CT via wavelet residual network. IEEE Transactions on Medical Imaging, 37(6), 1358-1369.
    65. Kang, E., & Ye, J. C. (2017). Wavelet domain residual network (WavResNet) for low-dose X-ray CT reconstruction. arXiv preprint arXiv:1703.01383.
    66. Yang, Qingsong, et al. "Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss." IEEE Transactions on Medical Imaging 37.6 (2018): 1348-1357.
    67. Chen, Hu, et al. "Low-dose CT with a residual encoder-decoder convolutional neural network." IEEE Transactions on Medical Imaging 36.12 (2017): 2524-2535.
    68. Wolterink, J. M., Leiner, T., Viergever, M. A., & Išgum, I. (2017). Generative adversarial networks for noise reduction in low-dose CT. IEEE Transactions on Medical Imaging, 36(12), 2536-2545.
    69. Würfl, T., Hoffmann, M., Christlein, V., Breininger, K., Huang, Y., Unberath, M., & Maier, A. K. (2018). Deep learning computed tomography: learning projection-domain weights from image domain in limited angle problems. IEEE Transactions on Medical Imaging, 37(6), 1454-1463.
  101. 70. Han, Y., & Ye, J. C. (2018). Framing U-Net via deep convolutional framelets: Application to sparse-view CT. IEEE Transactions on Medical Imaging, 37(6), 1418-1429.
    71. Jin, K. H., McCann, M. T., Froustey, E., & Unser, M. (2017). Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9), 4509-4522.
    72. Ye, J. C., Han, Y., & Cha, E. (2018). Deep convolutional framelets: A general deep learning framework for inverse problems. SIAM Journal on Imaging Sciences, 11(2), 991-1048.
    73. Adler, Jonas, and Ozan Öktem. "Learned primal-dual reconstruction." IEEE Transactions on Medical Imaging 37.6 (2018): 1322-1332.
    74. Gupta, H., Jin, K. H., Nguyen, H. Q., McCann, M. T., & Unser, M. (2018). CNN-based projected gradient descent for consistent CT image reconstruction. IEEE Transactions on Medical Imaging, 37(6), 1440-1453.
    75. Han, Y., Kang, J., & Ye, J. C. (2018). Deep learning reconstruction for 9-view dual energy CT baggage scanner. arXiv preprint arXiv:1801.01258.
    76. Ward, J. P., Lee, M., Ye, J. C., & Unser, M. (2015). Interior tomography using 1D generalized total variation. Part I: Mathematical foundation. SIAM Journal on Imaging Sciences, 8(1), 226-247.
    77. Lee, M., Han, Y., Ward, J. P., Unser, M., & Ye, J. C. (2015). Interior tomography using 1D generalized total variation. Part II: Multiscale implementation. SIAM Journal on Imaging Sciences, 8(4), 2452-2486.
  102. 78. Han, Y., & Ye, J. C. (2018). One network to solve all ROIs: Deep learning CT for any ROI using differentiated backprojection. arXiv preprint arXiv:1810.00500.
    79. Rivenson, Y., Göröcs, Z., Günaydin, H., Zhang, Y., Wang, H., & Ozcan, A. (2017). Deep learning microscopy. Optica, 4(11), 1437-1443.
    80. Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D., & Ozcan, A. (2018). Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Science & Applications, 7(2), 17141.
    81. Li, S., Deng, M., Lee, J., Sinha, A., & Barbastathis, G. (2018). Imaging through glass diffusers using densely connected convolutional networks. Optica, 5(7), 803-813.
    82. Nehme, E., Weiss, L. E., Michaeli, T., & Shechtman, Y. (2018). Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica, 5(4), 458-464.
    83. Lim, J., Lee, K., Jin, K. H., Shin, S., Lee, S., Park, Y., & Ye, J. C. (2015). Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography. Optics Express, 23(13), 16933-16948.
    84. Choi, G., Ryu, D., Jo, Y., Kim, Y. S., Park, W., Min, H. S., & Park, Y. (2019). Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography. Optics Express, 27(4), 4927-4943.
    85. Yoon, Y. H., & Ye, J. C. (2018, April). Deep learning for accelerated ultrasound imaging. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6673-6676). IEEE.
  103. 86. Yoon, Y. H., Khan, S., Huh, J., & Ye, J. C. (2019). Efficient B-mode ultrasound image reconstruction from sub-sampled RF data using deep learning. IEEE Transactions on Medical Imaging, 38(2), 325-336.
    87. Khan, S., Huh, J., & Ye, J. C. (2019). Universal deep beamformer for variable rate ultrasound imaging. arXiv preprint arXiv:1901.01706.
    88. Luchies, A. C., & Byram, B. C. (2018). Deep neural networks for ultrasound beamforming. IEEE Transactions on Medical Imaging, 37(9), 2010-2021.
    89. Zhou, Z., Wang, Y., Yu, J., Guo, Y., Guo, W., & Qi, Y. (2018). High spatial-temporal resolution reconstruction of plane-wave ultrasound images with a multichannel multiscale convolutional neural network. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 65(11), 1983-1996.
    90. Gasse, M., Millioz, F., Roux, E., Garcia, D., Liebgott, H., & Friboulet, D. (2017). High-quality plane wave compounding using convolutional neural networks. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 64(10), 1637-1639.
    91. Vedula, S., Senouf, O., Bronstein, A. M., Michailovich, O. V., & Zibulevsky, M. (2017). Towards CT-quality ultrasound imaging using deep learning. arXiv preprint arXiv:1710.06304.
    92. Senouf, Ortal, et al. "High frame-rate cardiac ultrasound imaging with deep learning." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.
    93. Vedula, Sanketh, et al. "High quality ultrasonic multi-line transmission through deep learning." International Workshop on Machine Learning for Medical Image Reconstruction. Springer, Cham, 2018.
  104. 94. Bora, A., Jalal, A., Price, E., & Dimakis, A. G. (2017, August). Compressed sensing using generative models. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (pp. 537-546).
    95. Yoo, J., Wahab, A., & Ye, J. C. (2018). A mathematical framework for deep learning in elastic source imaging. SIAM Journal on Applied Mathematics, 78(5), 2791-2818.
    96. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223-2232).
    97. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223-2232).
    98. Yoo, J., Uh, Y., Chun, S., Kang, B., & Ha, J. W. (2019). Photorealistic style transfer via wavelet transforms. arXiv preprint arXiv:1903.09760.
    99. Ye, J. C., & Sung, W. K. (2019). Understanding geometry of encoder-decoder CNNs. arXiv preprint arXiv:1901.07647.
    100. Kang, E., Koo, H. J., Yang, D. H., Seo, J. B., & Ye, J. C. (2019). Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Medical Physics, 46(2), 550-562.
    101. Wolterink, Jelmer M., et al. "Deep MR to CT synthesis using unpaired data." International Workshop on Simulation and Synthesis in Medical Imaging. Springer, Cham, 2017.
  105. 102. Dar, S. U., Yurt, M., Karacan, L., Erdem, A., Erdem, E., & Çukur, T. (2019). Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Transactions on Medical Imaging.
    103. Lee, D., Kim, J., Moon, W. J., & Ye, J. C. (2019). CollaGAN: Collaborative GAN for missing image data imputation. CVPR; arXiv preprint arXiv:1901.09764.
    104. Choi, Y., Choi, M., Kim, M., Ha, J. W., Kim, S., & Choo, J. (2018). StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8789-8797).
    105. Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7), 3142-3155.
    106. Shang, W., Sohn, K., Almeida, D., & Lee, H. (2016, June). Understanding and improving convolutional neural networks via concatenated rectified linear units. In International Conference on Machine Learning (pp. 2217-2225).
    107. Gregor, K., & LeCun, Y. (2010, June). Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (pp. 399-406). Omnipress.
    108. Lim et al. Blind deconvolution microscopy using cycle consistent CNN with explicit PSF layer. arXiv:1904.02910 (2019).
    109. Khan et al. Deep learning-based universal beamformer for ultrasound imaging. arXiv:1904.02843 (2019).