
Deep Convolutional Framelets: A general deep learning framework for inverse problems

Jong Chul Ye
September 16, 2018

Keynote Talk by Jong Chul Ye, MLMIR - Machine Learning for Medical Image Reconstruction, MICCAI Workshop, Sept. 16th, 2018, Granada, Spain

Transcript

  1. Deep Convolutional Framelets: A general deep learning framework for inverse problems
     Jong Chul Ye
     Bio-Imaging, Signal Processing, & Learning (BISPL)
     Dept. of Bio & Brain Engineering / Dept. of Mathematical Sciences, KAIST, Korea
  2. Deep Learning for Inverse Problems
     • Successful demonstrations of deep learning for various image reconstruction problems
       – Low-dose x-ray CT (Kang et al., Chen et al., Wolterink et al., Ye et al.)
       – Sparse-view CT (Jin et al., Han et al., Adler et al.)
       – Interior tomography (Han et al.)
       – Stationary CT for baggage inspection (Han et al.)
       – CS-MRI (Hammernik et al., Schlemper et al., Yang et al., Lee et al., Zhu et al.)
       – US imaging (Yoon et al.)
       – Diffuse optical tomography (Yoo et al.)
       – Elastic tomography (Yoo et al.)
       – Optical diffraction tomography (Kamilov et al.)
       – etc.
     • Advantages
       – Very fast reconstruction time
       – Significantly improved results
  3. [Figure-only slide]
  4. Too Simple to Analyze?
     Convolution & pooling → stone-age tools of signal processing. What do they do?
  5. Many Mysteries…
     • What is the role of the nonlinearity, such as the rectified linear unit (ReLU)?
     • Why do we need pooling and unpooling in some architectures?
     • Why do some networks need fully connected layers whereas others do not?
     • What is the role of the bypass (skip) connection or residual network?
     • What is the role of the filter channels in a convolutional layer?
  6. Our Proposal: Deep Learning == Deep Convolutional Framelets
     • Ye et al., "Deep convolutional framelets: A general deep learning framework for inverse problems," SIAM Journal on Imaging Sciences, 11(2), 991–1048, 2018.
  7. Why are we excited about the Hankel matrix?
     [Figure: finite-rate-of-innovation (FRI) signal model]
     * FRI sampling theory (Vetterli et al.) and compressed sensing
  8. Annihilating filter-based low-rank Hankel matrix
     [Figure: matrix representation of a finite-length convolution as a Hankel matrix]
     Finite-length convolution ↔ matrix representation
     * ALOHA: Annihilating filter-based LOw-rank Hankel matrix Approach
     * Jin KH et al., IEEE TCI, 2016; Jin KH et al., IEEE TIP, 2015; Ye JC et al., IEEE TIT, 2016
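Not part of the deck — a minimal NumPy sketch of the low-rank property the slide refers to: a wrap-around Hankel matrix built from a signal with only a few spectral components has rank well below the filter length (the sizes and frequencies below are illustrative choices).

```python
# Minimal sketch: build a wrap-around Hankel matrix from a 1-D signal and check that a
# signal with a k-sparse spectrum yields a low-rank Hankel matrix (the property ALOHA uses).
import numpy as np

def hankel_matrix(f, d):
    """Wrap-around Hankel matrix H_d(f) with window (filter) length d."""
    n = len(f)
    return np.array([[f[(i + j) % n] for j in range(d)] for i in range(n)])

n, d = 64, 16
t = np.arange(n)
# Superposition of 3 on-grid cosines -> 6 nonzero DFT coefficients (an FRI-type signal)
f = sum(np.cos(2 * np.pi * freq * t / n) for freq in (3, 7, 11))

H = hankel_matrix(f, d)
rank = np.linalg.matrix_rank(H, tol=1e-8)
print(f"Hankel matrix shape: {H.shape}, rank: {rank}")  # rank = 6, well below d = 16
```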
  9. Annihilating filter-based low-rank Hankel matrix
     Missing elements can be found by low-rank Hankel structured matrix completion:
     $\min_{m} \|\mathbb{H}(m)\|_{*}$ subject to $P_{\Omega}(m) = P_{\Omega}(f)$, where $\mathrm{rank}\,\mathbb{H}(f) = k$
     (nuclear norm objective; $P_{\Omega}$ projects onto the sampling positions)
     * Jin KH et al., IEEE TCI, 2016; Jin KH et al., IEEE TIP, 2015; Ye JC et al., IEEE TIT, 2016
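A rough illustration of the completion idea (not the ALOHA solver; it reuses `hankel_matrix`, `f`, `n`, `d` from the sketch above and assumes the rank is known).

```python
# Recover missing samples by alternating a hard rank-r truncation of the wrap-around
# Hankel matrix with re-imposing the observed samples (a Cadzow-style iteration).
# The rank r and iteration count are illustrative; ALOHA uses a more elaborate solver.
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random(n) < 0.5          # Omega: ~50% of samples observed
m = f * mask                        # zero-filled initialization

r = 6                               # known/estimated rank of H_d(f)
for _ in range(200):
    H_m = hankel_matrix(m, d)
    U, s, Vt = np.linalg.svd(H_m, full_matrices=False)
    H_low = (U[:, :r] * s[:r]) @ Vt[:r]          # project onto rank-r matrices
    # Average the wrap-around anti-diagonals back into a signal estimate
    est = np.zeros(n)
    for i in range(n):
        for j in range(d):
            est[(i + j) % n] += H_low[i, j]
    est /= d
    m = np.where(mask, f, est)      # data consistency: keep observed samples
print("relative error:", np.linalg.norm(m - f) / np.linalg.norm(f))
```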
  10. Key Observation: data-driven Hankel matrix decomposition => deep learning
     • Ye et al., "Deep convolutional framelets: A general deep learning framework for inverse problems," SIAM Journal on Imaging Sciences, 11(2), 991–1048, 2018.
  11. Convolution Framelets (Yin et al., 2017)
     $\mathbb{H}_d(f) = U\Sigma V^\top$
     $\Phi$: non-local basis, $\Psi$: local basis, with $\Phi\Phi^\top = I$ and $\Psi\Psi^\top = I$,
     so that $\mathbb{H}_d(f) = \Phi\Phi^\top\,\mathbb{H}_d(f)\,\Psi\Psi^\top$.
  12. Deep Convolutional Framelets (Ye, Han, Cha; 2018)
     Hankel decomposition: $\mathbb{H}_d(f) = U\Sigma V^\top = \tilde\Phi\,C\,\tilde\Psi^\top$
     Encoder: $C = \Phi^\top\,\mathbb{H}_d(f)\,\Psi = \Phi^\top (f \circledast \overline{\Psi})$  (convolution, then pooling)
     Decoder (unlifting): $f = (\tilde\Phi C) \circledast \tau(\tilde\Psi)$, i.e. $\mathbb{H}_{p_i}(g_i) = \sum_{k,l} [C_i]_{kl}\,\tilde B^{kl}_i$  (unpooling, then convolution)
     Frame condition: $\tilde\Phi\Phi^\top = I$;  rank condition: $\Psi\tilde\Psi^\top = P_{R(V)}$
     $\Phi$: non-local basis (user-defined pooling);  $\Psi$: local basis (learnable filters)
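A numerical check of the perfect-reconstruction claim (my own sketch, reusing `hankel_matrix`, `f`, `n`, `d` from the earlier snippet): with the trivial identity non-local basis and a local basis spanning the row space of the Hankel matrix, encoder plus decoder returns the signal exactly.

```python
# Verify that the encoder/decoder pair reproduces f when the frame and rank conditions hold.
import numpy as np

H = hankel_matrix(f, d)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(s > 1e-8))

Phi = np.eye(n)                     # non-local basis (trivial "pooling"), frame condition holds
Psi = Vt[:r].T                      # local basis: Psi @ Psi.T = P_R(V), rank condition holds

C = Phi.T @ H @ Psi                 # encoder: framelet coefficients
H_rec = Phi @ C @ Psi.T             # decoder in the lifted (Hankel) domain

# Unlifting: average the wrap-around anti-diagonals back to a signal
f_rec = np.zeros(n)
for i in range(n):
    for j in range(d):
        f_rec[(i + j) % n] += H_rec[i, j]
f_rec /= d
print("reconstruction error:", np.linalg.norm(f_rec - f))   # ~ machine precision
```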
  13. Geometry of CNN
     Linear lifting $f_i \mapsto \mathbb{H}_{p_i}(f_i)$, then linear un-lifting to $g_i$ with
     $\mathbb{H}_{p_i}(g_i) = \sum_{k,l} [C_i]_{kl}\,\tilde B^{kl}_i$, where ReLU keeps $[C_i]_{kl} \ge 0$:
     the output is a conic combination of the decoder frame bases.
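To make the conic-combination statement concrete, a toy continuation of the previous sketch (purely illustrative, reusing `H`, `Phi`, `Psi`): ReLU forces the coefficients to be nonnegative, so the decoded Hankel matrix is a nonnegative combination of rank-one decoder bases.

```python
# ReLU on the framelet coefficients -> conic combination of basis matrices B_kl = phi_k psi_l^T.
import numpy as np

C_relu = np.maximum(Phi.T @ H @ Psi, 0)        # encoder + ReLU: [C]_kl >= 0
assert np.all(C_relu >= 0)

H_out = np.zeros_like(H)
for k in range(Phi.shape[1]):
    for l in range(Psi.shape[1]):
        B_kl = np.outer(Phi[:, k], Psi[:, l])  # rank-one decoder frame basis
        H_out += C_relu[k, l] * B_kl           # nonnegative weights only

# Same result as the matrix form of the decoder
print(np.allclose(H_out, Phi @ C_relu @ Psi.T))   # True
```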
  14. Geometry of Residual CNN
     Lifting $f_i \mapsto \mathbb{H}_{p_i}(f_i)$ and un-lifting to $g_i$, again with
     $\mathbb{H}_{p_i}(g_i) = \sum_{k,l} [C_i]_{kl}\,\tilde B^{kl}_i$ and $[C_i]_{kl} \ge 0$.
  15. Comparison with Kernel PCA
     • Nonlinear lifting to feature space: $f_i \mapsto \varphi(f_i)$; nonlinear pre-image calculation to obtain $g_i$
     • PCA of $C = \frac{1}{N}\sum_{i=1}^{N} \varphi(f_i)\,\varphi^\top(f_i)$
     • Nonlinear lifting & unlifting, deterministic kernel, difficulty in multilevel extension
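For contrast, a minimal kernel-PCA sketch (my own toy example, not from the talk): the nonlinear lifting is implicit in the kernel, and PCA of the feature-space covariance is carried out through the centered Gram matrix.

```python
# Kernel PCA via the kernel trick: eigen-decompose the double-centered Gram matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                  # N = 100 toy signals of length 5
gamma = 0.5

# Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)  (RBF kernel)
sq = np.sum(X**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

# Double-centering is equivalent to centering phi(x_i) in feature space
one_n = np.ones((100, 100)) / 100
Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# Eigenvectors of Kc give the projections onto the leading kernel principal components
evals, evecs = np.linalg.eigh(Kc)
proj = evecs[:, ::-1][:, :3] * np.sqrt(np.maximum(evals[::-1][:3], 0))
print("leading kernel-PCA projections:", proj.shape)   # (100, 3)
```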
  16. Problem of U-net: pooling does NOT satisfy the frame condition
     $\Phi_{ext}\,\Phi_{ext}^\top = I + \Phi\Phi^\top \neq I$
     JC Ye et al., SIAM Journal on Imaging Sciences, 2018; Y. Han et al., IEEE TMI, 2018
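Not from the slides — a quick NumPy check of the identity above for average pooling with a bypass (skip) branch; the signal length is an illustrative choice.

```python
# Extended basis of a U-net stage: bypass (identity) plus average pooling is not a tight frame.
import numpy as np

n = 8
# Orthonormal average pooling: Phi.T maps length-n signals to length-n/2
Phi = np.zeros((n, n // 2))
for k in range(n // 2):
    Phi[2 * k, k] = Phi[2 * k + 1, k] = 1 / np.sqrt(2)

Phi_ext = np.hstack([np.eye(n), Phi])          # bypass + pooling branches
G = Phi_ext @ Phi_ext.T                        # frame operator
print(np.allclose(G, np.eye(n)))               # False: frame condition violated
print(np.allclose(G, np.eye(n) + Phi @ Phi.T)) # True: G = I + Phi Phi^T
```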
  17. Improving U-net using Deep Convolutional Framelets
     • Dual Frame U-net
     • Tight Frame U-net
     JC Ye et al., SIAM Journal on Imaging Sciences, 2018; Y. Han and J. C. Ye, IEEE TMI, 2018
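A companion sketch to the previous check (assuming Haar-wavelet pooling, in the spirit of the tight-frame idea): carrying the Haar high-pass subband over the skip connection restores the frame condition.

```python
# One-level Haar decomposition as the extended basis: low-pass + high-pass is a tight frame.
import numpy as np

n = 8
low = np.zeros((n, n // 2))
high = np.zeros((n, n // 2))
for k in range(n // 2):
    low[2 * k, k], low[2 * k + 1, k] = 1 / np.sqrt(2), 1 / np.sqrt(2)     # Haar low-pass
    high[2 * k, k], high[2 * k + 1, k] = 1 / np.sqrt(2), -1 / np.sqrt(2)  # Haar high-pass

Phi_ext = np.hstack([low, high])
print(np.allclose(Phi_ext @ Phi_ext.T, np.eye(n)))   # True: frame condition holds
```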
  18. Low-Dose CT
     • To reduce radiation exposure: sparse-view CT, low-dose CT, and interior tomography
       – Sparse-view CT (down-sampled views)
       – Low-dose CT (reduced X-ray dose)
       – Interior tomography (truncated FOV)
  19. K-space Deep Learning for Accelerated MRI
     Conventional image-domain learning
     Han, Y., & Ye, J. C. (2018). k-Space Deep Learning for Accelerated MRI. arXiv preprint arXiv:1805.03779.
  20. K-space Deep Learning for Accelerated MRI
     Proposed k-space deep learning: deep neural network in k-space, followed by the inverse Fourier transform (IFT)
     ALOHA: k-space interpolation → k-space interpolation using deep learning? Yes.
     Han, Y., & Ye, J. C. (2018). k-Space Deep Learning for Accelerated MRI. arXiv preprint arXiv:1805.03779.
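A minimal sketch of this pipeline (my own illustration; the network and data below are placeholders, not the architecture of Han & Ye): a small CNN operates on the real/imaginary channels of the undersampled k-space, and the image is formed afterwards by an inverse FFT.

```python
# k-space interpolation with a CNN, then IFT to get the image.
import torch
import torch.nn as nn

class KSpaceCNN(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),
        )

    def forward(self, kspace_zero_filled):      # (B, 2, H, W): real & imaginary channels
        return self.net(kspace_zero_filled)     # interpolated k-space, same layout

model = KSpaceCNN()
k_in = torch.randn(1, 2, 128, 128)              # toy zero-filled, undersampled k-space
k_out = model(k_in)
k_complex = torch.complex(k_out[:, 0], k_out[:, 1])
image = torch.fft.ifft2(k_complex).abs()        # reconstruction after the IFT step
print(image.shape)                              # torch.Size([1, 128, 128])
```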
  21. ALOHA for Compressed Sensing MRI
     ALOHA: Annihilating filter-based low-rank Hankel matrix approach
     • Jin KH et al., IEEE TCI, 2016
     • Lee et al., MRM, 2015
  22. Improved Time-Resolved MRA using k-Space Deep Learning
     Eunju Cha¹, Eung Yeop Kim², and Jong Chul Ye¹
     ¹Dept. of Bio and Brain Engineering, KAIST; ²Dept. of Radiology, Gachon University Gil Medical Center
     Motivation
     • To cover k-space data at different rates
     • Regular sampling pattern following view sharing of several temporal frames
     • Reconstruction using GRAPPA
     TWIST: fixed spatial resolution, limited temporal resolution → how to reconstruct?
  23. Improved Time-Resolved MRA using k-Space Deep Learning
     Research Goal
     • To improve the temporal resolution of TWIST imaging using deep k-space learning
     • To generate multiple reconstructions with various spatial and temporal resolutions using one network
     [Figure: one CNN producing reconstructions at different view-sharing factors, VS = 5 and VS = 2]
  24. Semi-Supervised Learning for Low-Dose CT
     • Multiphase cardiac CT denoising
       – Phases 1–2: low dose; Phases 3–10: normal dose
       – Goal: capture dynamic changes of heart structure
       – No reference available
     Kang et al., arXiv:1806.09748
  25. Semi-Supervised Learning using Cyclic-GAN
     • Cardiac CT denoising
       – Cycle-Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography
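A condensed sketch of a cycle-consistent training objective for unpaired low-dose/normal-dose denoising (my own illustration with placeholder networks; not the architecture or exact losses of Kang et al.).

```python
# Cycle-consistent adversarial objective: two generators map between dose domains,
# adversarial terms fool the discriminators, and cycle terms enforce self-consistency.
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    # Placeholder CNN; the actual generators/discriminators are much larger.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, out_ch, 3, padding=1))

G_LN, G_NL = conv_net(1, 1), conv_net(1, 1)    # low->normal and normal->low generators
D_N, D_L = conv_net(1, 1), conv_net(1, 1)      # patch discriminators (output ~ "realness")

l1, mse = nn.L1Loss(), nn.MSELoss()

def generator_loss(x_low, x_normal, lam=10.0):
    fake_n, fake_l = G_LN(x_low), G_NL(x_normal)
    # Adversarial terms (least-squares GAN form): generated images should look "real"
    adv = mse(D_N(fake_n), torch.ones_like(D_N(fake_n))) \
        + mse(D_L(fake_l), torch.ones_like(D_L(fake_l)))
    # Cycle consistency: low -> normal -> low and normal -> low -> normal
    cyc = l1(G_NL(fake_n), x_low) + l1(G_LN(fake_l), x_normal)
    return adv + lam * cyc

loss = generator_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(loss.item())
```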
  26. [Result figure, case AMC002_20180903: FBP (Phase 1), FBP (Phase 8), ADMIRE, and network RECON, with difference images Phase 1 − ADMIRE and Phase 1 − RECON]
  27. [Result figure: reconstructions for the 1st through 9th views]
  28. Semi-Supervised High-Resolution View Synthesis
     [Network diagram: encoder–decoder CNN with channel widths 64–1024]
     • Key idea
       – Training with measured views
       – Inference with non-measured views
  29. [Result figure: FBP]
  30. [Result figure: TV]
  31. Summary
     • Deep learning for inverse problems
       – Significant performance gain
       – Has become a mainstream topic
     • Deep convolutional framelets
       – A new mathematical tool for understanding deep neural networks for inverse problems
     • Biomedical image reconstruction
       – A key application for machine learning
     • Semi-supervised learning
       – New opportunities