Geometry of Deep Learning for Inverse Problems: A Signal Processing Perspective

One World Imaging & inverse problems (IMAGINE) Webinar Series
https://sites.google.com/view/oneworldimagine

Jong Chul Ye

July 08, 2020

Transcript

  1. Geometry of Deep Learning for Inverse Problems: A Signal Processing Perspective. Jong Chul Ye, Ph.D., Professor, BISPL - BioImaging, Signal Processing, and Learning Lab, KAIST, Korea.
  2. Classical Learning vs. Deep Learning (diagnosis example): classical machine learning relies on feature engineering, whereas deep learning requires no feature engineering. Esteva et al., Nature Medicine, 2019.
  3. (figure-only slide, no text)
  4. Kernel Machines: Limitations. The RKHS is fixed at run time and non-adaptive: limited space (expressivity), top-down definition.
  5. Summary of Classical Machine Learning
     | Method                     | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
     | Kernel machine             | No                | No                 | No           | Yes             | Yes      |
     | Single-layer perceptron    | Yes               | No                 | No           | Yes             | Yes      |
     | Frame                      | No                | No                 | Yes          | No              | No       |
     | Compressed sensing + frame | No                | Yes                | Yes          | No              | Yes      |
  6. Convolution Framelets: Non-local & Local Basis (expansion built from a non-local basis and a local basis). Yin et al., SIAM J. Imaging Sciences, 2017.
  7. Convolution Framelet Expansion: global and local basis, framelet coefficients, and frame (dual) basis. Yin et al., SIAM J. Imaging Sciences, 2017.
  8. Convolution Framelet: Pros and Cons. Convolution framelet + regularization. Pro: data-driven model. Cons: limited expressivity, non-inductive.
  9. Convolution Framelets: Why So Special? The non-local basis corresponds to pooling and the local basis to the convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018.
  10. Convolution Framelets: Why So Special? Encoder: convolution followed by pooling. Ye et al., SIAM J. Imaging Sciences, 2018.
  11. Convolution Framelets: Why So Special? Encoder: convolution followed by pooling; decoder: un-pooling followed by convolution. Ye et al., SIAM J. Imaging Sciences, 2018.
  12. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \, \tilde{b}_i(x)$. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019.
  13. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \, \tilde{b}_i(x)$, where $b_i(x)$ is the analysis basis (encoder). Ye et al., SIIMS, 2018; Ye et al., ICML, 2019.
  14. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \, \tilde{b}_i(x)$, where $b_i(x)$ is the analysis basis (encoder) and $\tilde{b}_i(x)$ is the synthesis basis (decoder). Ye et al., SIIMS, 2018; Ye et al., ICML, 2019.
  15. Linear Encoder-Decoder (E-D) CNN with learned filters, pooling, and un-pooling: $y = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \, \tilde{b}_i$.
  16. Linear E-D CNN with Skipped Connection: a more redundant expression with learned filters, $y = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \, \tilde{b}_i$.
  17. Deep Convolutional Framelets: perfect reconstruction $x = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \, \tilde{b}_i$ under the frame conditions, with and without skipped connection. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019.
  18. Deep Convolutional Framelets: perfect reconstruction $x = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \, \tilde{b}_i$ requires frame conditions for the pooling layers, with and without skipped connection. Ye et al., SIAM J. Imaging Sciences, 2018.
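A minimal numpy sketch of the frame condition behind perfect reconstruction, assuming a toy 1-D signal and an orthonormal Haar low/high split standing in for pooling plus the skipped connection (an illustrative choice, not the talk's architecture):

```python
# Sketch: check the frame condition B~ B^T = I that guarantees x = B~ B^T x.
import numpy as np

n = 8
x = np.random.randn(n)

low  = np.kron(np.eye(n // 2), [1,  1]).T / np.sqrt(2)  # (n, n/2) pooling branch
high = np.kron(np.eye(n // 2), [1, -1]).T / np.sqrt(2)  # (n, n/2) skip (detail) branch

# Pooling alone: B B^T is only a projection, not the identity -> information lost.
print(np.allclose(low @ low.T, np.eye(n)))               # False

# Pooling + skip branch: the columns b_i form a tight frame, B B^T = I.
B = np.hstack([low, high])                               # (n, n) analysis basis
print(np.allclose(B @ B.T, np.eye(n)))                   # True (frame condition)

# Perfect reconstruction with the dual basis (here B~ = B): x = sum_i <x, b_i> b~_i
coeffs = B.T @ x        # encoder: analysis coefficients <x, b_i>
x_rec  = B @ coeffs     # decoder: synthesis
print(np.allclose(x, x_rec))                             # True
```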
  19. Deep Convolutional Framelet: Pros and Cons. Deep convolutional framelet + regularization. Pros: data-driven, expressive. Cons: non-inductive, transductive.
  20. Summary So Far
     | Method                           | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
     | Kernel machine                   | No                | No                 | No           | Yes             | Yes      |
     | Single-layer perceptron          | Yes               | No                 | No           | Yes             | Yes      |
     | Frame                            | No                | No                 | Yes          | No              | No       |
     | Compressed sensing               | No                | Yes                | Yes          | No              | Yes      |
     | Deep convolutional framelet + CS | Yes               | Yes                | Yes          | No              | Yes      |
  21. Role of ReLUs? A generator for multiple expressions: $y = \tilde{B}(x) B(x)^\top x = \sum_i \langle x, b_i(x) \rangle \, \tilde{b}_i(x)$, where $\Sigma_l(x) = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_{m_l})$ is an input-dependent {0,1} diagonal matrix --> input adaptivity.
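A tiny numpy sketch of this observation, using a hypothetical two-layer fully-connected ReLU network (all sizes are illustrative): the ReLU output equals multiplication by an input-dependent diagonal {0,1} matrix, so each input gets its own linear expansion.

```python
# Sketch: ReLU acts as an input-dependent diagonal {0,1} matrix Sigma(x),
# so the output can be written as y = B~(x) B(x)^T x for each input x.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((6, 4)), rng.standard_normal((4, 6))

def relu(z):
    return np.maximum(z, 0)

x = rng.standard_normal(4)
y = W2 @ relu(W1 @ x)

# Same output via the activation-pattern diagonal matrix Sigma(x):
Sigma_x = np.diag((W1 @ x > 0).astype(float))   # {0,1} entries, depends on x
y_linear = W2 @ Sigma_x @ W1 @ x                # the per-input linear expression
print(np.allclose(y, y_linear))                  # True
```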
  22. Input Space Partitioning for Multiple Expressions: a CNN automatically assigns a distinct linear representation depending on the input.
  23. Expressivity of E-D CNN: the number of representations in terms of the number of network elements, the number of channels, and the network depth.
  24. Expressivity of E-D CNN: the number of representations in terms of the number of network elements, the number of channels, the network depth, and the skipped connection.
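One way to see this scaling empirically (a toy experiment of my own, not the talk's counting argument) is to sample inputs and count distinct ReLU activation patterns; each pattern selects a different linear representation, and the count grows as channels and depth are added.

```python
# Sketch: lower-bound the number of linear representations of a toy 2-layer
# ReLU net by counting distinct on/off activation patterns over random inputs.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 2)), rng.standard_normal((8, 8))

patterns = set()
for _ in range(10000):
    x = rng.uniform(-1, 1, size=2)
    h1 = np.maximum(W1 @ x, 0)
    h2 = np.maximum(W2 @ h1, 0)
    patterns.add((tuple(h1 > 0), tuple(h2 > 0)))

print(len(patterns))   # distinct linear regions hit by the sampled inputs
```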
  25. Deep Learning as an Ultimate Learning Machine
     | Method                           | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
     | Kernel machine                   | No                | No                 | No           | Yes             | Yes      |
     | Single-layer perceptron          | Yes               | No                 | No           | Yes             | Yes      |
     | Frame                            | No                | No                 | No           | No              | No       |
     | Compressed sensing               | No                | Yes                | Yes          | No              | Yes      |
     | Deep convolutional framelet + CS | Yes               | Yes                | Yes          | No              | Yes      |
     | Deep learning                    | Yes               | Yes                | Yes          | Yes             | Yes      |
  26. Lipschitz Continuity: $K = \max_p K_p$, with $K_p = \|\tilde{B}(z_p) B(z_p)^\top\|_2$ evaluated at inputs $z_1, \ldots, z_p$. Related to the generalizability; dependent on the local Lipschitz constant.
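A minimal PyTorch sketch of this bound, with a toy MLP standing in for the E-D CNN (sizes and the number of sample points are assumptions): the local Lipschitz constant at $z_p$ is the spectral norm of the local Jacobian, i.e. of $\tilde{B}(z_p) B(z_p)^\top$, and $K$ is the maximum over samples.

```python
# Sketch: estimate K = max_p ||J(z_p)||_2 for a toy ReLU network, where J(z_p)
# is the local Jacobian (the per-input linear map B~(z_p) B(z_p)^T).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))

def local_lipschitz(z):
    J = torch.autograd.functional.jacobian(net, z)    # local Jacobian at z
    return torch.linalg.matrix_norm(J, ord=2).item()  # spectral norm ||.||_2

K = max(local_lipschitz(torch.randn(8)) for _ in range(100))
print(K)   # empirical estimate of K = max_p K_p over the sampled inputs
```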
  27. Which Domain is Good for Learning? Han et al., IEEE Trans. Medical Imaging (in press), 2019; Lee et al., MRM (in press), 2019; Han et al., Medical Physics, 2020.
  28. Is Image Domain Learning Essential? Kravitz et al., Trends in Cognitive Sciences, January 2013, Vol. 17, No. 1.
  29. k-Space Deep Learning (from ALOHA to CNN). Han et al., IEEE TMI, 2019; Jin et al., IEEE TCI, 2016; Ye et al., IEEE TIT, 2017.
  30. k-Space Learning for EPI Ghost Correction: the network input is the multi-coil k-space with ghost, the network label is generated by ALOHA, and the L2 loss is calculated in the image domain after an inverse Fourier transform (IFT) of the k-space. Lee et al., MRM (in press), 2019.
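A minimal sketch of this loss construction, assuming toy single-coil data and a one-layer k-space CNN (the actual pipeline is multi-coil with ALOHA-generated labels): the network acts on k-space, but the L2 loss is evaluated after an inverse FFT back to the image domain.

```python
# Sketch: k-space learning with an image-domain L2 loss (toy single-coil data).
import torch
import torch.nn as nn

net = nn.Conv2d(2, 2, kernel_size=3, padding=1)   # toy k-space CNN (real/imag channels)

def to_image(kspace_2ch):
    k = torch.complex(kspace_2ch[:, 0], kspace_2ch[:, 1])
    return torch.fft.ifft2(k).abs()               # IFT back to the image domain

k_in    = torch.randn(1, 2, 64, 64)               # ghosted k-space network input
k_label = torch.randn(1, 2, 64, 64)               # reference k-space network label

loss = nn.functional.mse_loss(to_image(net(k_in)), to_image(k_label))
loss.backward()                                   # gradients flow through the IFT
```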
  31. 7T EPI result (R=2): comparison of the ghost image, ALOHA, half-ROI learning with reference, PEC-SENSE, and the proposed (full-ROI) method in terms of ghost-to-signal ratio (reported GSR values: 10.48%, 9.71%, 15.04%, 8.80%, 4.92%). Lee et al., MRM (in press), 2019.
  32. DBP (Differentiated Backprojection) Domain Deep Learning. Han et al., Medical Physics 46(12), e855-e872, 2020; Han et al., IEEE TMI, 2020.
  33. Two Approaches for CT Reconstruction. Filtered Backprojection (FBP): ramp filtering, then back-projection. Backprojection Filtration (BPF): differentiation, back-projection, then Hilbert transform. Zou, Y. et al., PMB, 2004.
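To make the FBP filtering step concrete, here is a minimal numpy sketch (parallel-beam toy sinogram; sizes assumed) that applies the $|\omega|$ ramp filter to each projection before back-projection:

```python
# Sketch: the FBP filtering step -- ramp-filter each 1-D projection in Fourier
# space (up to normalization), then back-project over the view angles.
import numpy as np

def ramp_filter(projection):
    """Apply the |omega| ramp filter to one 1-D projection."""
    n = projection.shape[0]
    freqs = np.fft.fftfreq(n)                     # frequencies in cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

sinogram = np.random.rand(180, 256)               # 180 views x 256 detector bins
filtered = np.apply_along_axis(ramp_filter, 1, sinogram)
# ...followed by back-projection over the view angles to form the image.
```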
  34. DBP Domain Conebeam Artifact Removal (standard method: FDK algorithm). Han et al., IEEE TMI, 2020. https://www.ndt.net/article/wcndt00/papers/idn730/idn730.htm
  35. Unsupervised Wavelet Directional Learning. Song et al., "Unsupervised Denoising for Satellite Imagery using Wavelet Subband CycleGAN," arXiv:2002.09847, 2020.
  36. Classical vs. Deep Learning for Inverse Problems (diagnosis example): classical regularized reconstruction relies on basis engineering, whereas deep reconstruction requires no basis engineering.