Geometry of Deep Learning for Inverse Problems: A Signal Processing Perspective

One World Imaging & inverse problems (IMAGINE) Webinar Series
https://sites.google.com/view/oneworldimagine

Jong Chul Ye

July 08, 2020

Transcript

  1. Geometry of Deep Learning for Inverse Problems: A Signal Processing

    Perspective Jong Chul Ye, Ph.D Professor BISPL - BioImaging, Signal Processing, and Learning lab. KAIST, Korea
  2. Classical Learning vs. Deep Learning for diagnosis: classical machine learning requires feature engineering; deep learning needs no feature engineering. Esteva et al, Nature Medicine, 2019
  3. Deep Learning for Inverse Problems: a new trend of deep learning, beyond diagnosis & analysis
  4. None

  5. None
  6. None
  7. WHY DOES DEEP LEARNING WORK FOR RECON? DOES IT CREATE ANY ARTIFICIAL FEATURES?
  8. ANOTHER MYSTERY

  9. CNN Encoder-Decoder CNN for Inverse Problems

  10. CNN Encoder-Decoder CNN for Inverse Problems

  11. CNN Successful applications to various inverse problems Encoder-Decoder CNN for

    Inverse Problems
  12. Why Does the Same Architecture Work for Different Inverse Problems?

  13. Desiderata of Machine Learning

  14. Kernel Machines: Representer Theorem SVM, Kernel regressions RKHS (Reproducing Kernel

    Hilbert Space)
  15. Kernel Machines: Limitations Fixed during run time Non-adaptive RKHS •

    limited space (expressivity) • top-down definition
  16. Perceptron: Universal Approximation Theorem. The representable space can be larger than a kernel machine's
  17. Perceptron: Limitations. Fixed during run time; expressivity still limited, although it can be exponentially large
  18. Frame Expansion • Linear representation → no learning • Expressive, e.g. shift-invariant space
  19. Compressed Sensing with Frame: thresholding is nonlinear → learning. Limitations: • top-down model • transductive → non-inductive
  20. Summary of Classical Machine Learning (data-driven model / adaptive expansion / expressivity / inductive model / learning):

    Kernel machine: No / No / No / Yes / Yes
    Single-layer perceptron: Yes / No / No / Yes / Yes
    Frame: No / No / Yes / No / No
    Compressed sensing + frame: No / Yes / Yes / No / Yes
  21. Ye et al, SIAM J. Imaging Sciences, 2018 Deep Convolutional

    Framelets
  22. Convolution & Hankel Matrix Circular convolution (periodic boundary condition)
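The identity behind this slide can be sanity-checked numerically: under a periodic boundary condition, circular correlation of a signal with a filter is exactly a matrix-vector product with a wrap-around Hankel matrix built from the signal. A minimal NumPy sketch (illustrative only; the function name is my own):

```python
import numpy as np

def wrap_hankel(x, d):
    """Wrap-around Hankel matrix H[i, j] = x[(i + j) mod N] (periodic boundary)."""
    N = len(x)
    return np.array([[x[(i + j) % N] for j in range(d)] for i in range(N)])

rng = np.random.default_rng(0)
N, d = 8, 3
x = rng.standard_normal(N)
h = rng.standard_normal(d)

# Circular correlation y[i] = sum_j x[(i+j) mod N] h[j], i.e. circular
# convolution with the time-reversed filter, as a Hankel matrix product.
y_hankel = wrap_hankel(x, d) @ h

# Reference: direct summation with periodic indexing.
y_direct = np.array([sum(x[(i + j) % N] * h[j] for j in range(d))
                     for i in range(N)])

assert np.allclose(y_hankel, y_direct)
```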

  23. Convolution Framelets: Non-local & Local Basis. Yin et al, SIAM J. Imaging Sciences, 2017

  24. Convolution Framelet Expansion: global and local basis; framelet coefficients; frame (dual) basis. Yin et al, SIAM J. Imaging Sciences, 2017
  25. Convolution Framelet: Pros and Cons Convolution Framelet + Regularization Pro:

    data-driven model Cons: expressivity limited, non-inductive
  26. Convolution Framelets: Why So Special? Non-local basis = pooling; local basis = convolution filters. Ye et al, SIAM J. Imaging Sciences, 2018

  27. Convolution Framelets: Why So Special? Pooling and convolution filters. Ye et al, SIAM J. Imaging Sciences, 2018

  28. Convolution Framelets: Why So Special? Pooling and convolution filters. Ye et al, SIAM J. Imaging Sciences, 2018

  29. Convolution Framelets: Why So Special? Encoder: convolution + pooling. Ye et al, SIAM J. Imaging Sciences, 2018

  30. Convolution Framelets: Why So Special? Encoder: convolution + pooling; Decoder: un-pooling + convolution. Ye et al, SIAM J. Imaging Sciences, 2018
  31. Single Resolution Network Architecture

  32. Multi-Resolution Network Architecture

  33. How Can We Restore Frame Representation? Vectorization of Feature Map

  34. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \tilde{b}_i(x)$. Ye et al, SIIMS, 2018; Ye et al, ICML, 2019

  35. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \tilde{b}_i(x)$. Ye et al, SIIMS, 2018; Ye et al, ICML, 2019

  36. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \tilde{b}_i(x)$. Ye et al, SIIMS, 2018; Ye et al, ICML, 2019
  37. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \tilde{b}_i(x)$, with $b_i(x)$ the analysis basis (Encoder). Ye et al, SIIMS, 2018; Ye et al, ICML, 2019
  38. Our Theoretical Findings: $y = \sum_i \langle b_i(x), x \rangle \tilde{b}_i(x)$, with $b_i(x)$ the analysis basis (Encoder) and $\tilde{b}_i(x)$ the synthesis basis (Decoder). Ye et al, SIIMS, 2018; Ye et al, ICML, 2019
  39. Linear Encoder-Decoder (ED) CNN: learned filters, pooling and un-pooling; $y = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \tilde{b}_i$
  40. Linear E-D CNN w/ Skipped Connection: more redundant expression; learned filters; $y = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \tilde{b}_i$
  41. Deep Convolutional Framelets: $x = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \tilde{b}_i$ (perfect reconstruction). Frame conditions w/ and w/o skipped connection. Ye et al, SIIMS, 2018; Ye et al, ICML, 2019
  42. Deep Convolutional Framelets: $x = \tilde{B} B^\top x = \sum_i \langle x, b_i \rangle \tilde{b}_i$ (perfect reconstruction). Frame conditions for pooling layers, w/ and w/o skipped connection. Ye et al, SIAM J. Imaging Sciences, 2018
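The perfect-reconstruction identity above is easy to verify for a generic (non-convolutional) frame: if $\tilde{B}$ is the canonical dual of $B$, then $\tilde{B} B^\top = I$ and $x = \sum_i \langle x, b_i \rangle \tilde{b}_i$. A minimal NumPy sketch of this frame condition (my own toy example, not the paper's convolutional construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                      # m > n: redundant (overcomplete) frame
B = rng.standard_normal((n, m))  # columns b_i are the frame vectors

# Canonical dual frame: B_dual = (B B^T)^{-1} B, so that B_dual @ B.T = I
B_dual = np.linalg.inv(B @ B.T) @ B

x = rng.standard_normal(n)
coeffs = B.T @ x                 # framelet coefficients <x, b_i>
x_rec = B_dual @ coeffs          # sum_i <x, b_i> b_dual_i

assert np.allclose(x_rec, x)     # perfect reconstruction under the frame condition
```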
  43. Deep Convolution Framelet: Pros and Cons Deep Convolutional Framelet +

    Regularization Pros: data-driven, expressive Cons: non-inductive, transductive
  44. Summary So Far (data-driven model / adaptive expansion / expressivity / inductive model / learning):

    Kernel machine: No / No / No / Yes / Yes
    Single-layer perceptron: Yes / No / No / Yes / Yes
    Frame: No / No / Yes / No / No
    Compressed sensing: No / Yes / Yes / No / Yes
    Deep convolutional framelet + CS: Yes / Yes / Yes / No / Yes
  45. Ye et al, ICML, 2019 How to make it inductive?

    Role of Nonlinearities
  46. Role of ReLUs? Generator for Multiple Expressions: $y = \tilde{B}(x) B(x)^\top x = \sum_i \langle x, b_i(x) \rangle \tilde{b}_i(x)$, where each ReLU acts as $\Sigma_l(x) = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_{m_l})$, an input-dependent $\{0,1\}$ diagonal matrix → input adaptivity
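The ReLU-as-diagonal-mask claim can be checked directly: for any input, $\mathrm{ReLU}(Wx)$ equals $\Sigma(x)\,Wx$, where $\Sigma(x)$ is a diagonal $\{0,1\}$ matrix determined by the sign pattern of $Wx$. A small NumPy sketch (illustrative, with my own toy weights):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))
x = rng.standard_normal(3)

relu_out = np.maximum(W @ x, 0)

# ReLU acts as an input-dependent diagonal {0,1} matrix Sigma(x):
Sigma = np.diag((W @ x > 0).astype(float))
assert np.allclose(relu_out, Sigma @ W @ x)
```

A different input x generally flips some diagonal entries of Sigma, which is exactly the input adaptivity the slide refers to.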
  47. Input Space Partitioning for Multiple Expressions: a CNN automatically assigns a distinct linear representation depending on the input
  48. Geometry of Manifold Partitioning

  49. Toy Examples: Two layer Perceptron with ReLUs
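One way to visualize this partitioning is to count the distinct ReLU activation patterns a two-layer perceptron produces over a grid of inputs; each pattern corresponds to one linear region of the input space. A toy NumPy sketch (my own construction, not the exam question):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 2))   # 8 hidden ReLU neurons on a 2-D input
b1 = rng.standard_normal(8)

# Grid over the input plane; each point gets a binary activation pattern.
g = np.linspace(-2.0, 2.0, 200)
xs = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

# Points sharing a pattern share one linear representation of the network.
acts = xs @ W1.T + b1 > 0
n_regions = len(np.unique(acts, axis=0))

# For 8 hyperplanes in 2-D, at most 1 + 8 + C(8,2) = 37 regions exist.
```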

  50. A final exam question from BiS400@KAIST

  51. Expressivity of E-D CNN: # of representations vs. # of network elements

  52. Expressivity of E-D CNN: # of representations vs. # of network elements (# of channels)

  53. Expressivity of E-D CNN: # of representations vs. # of network elements (# of channels, network depth)

  54. Expressivity of E-D CNN: # of representations vs. # of network elements (# of channels, network depth, skipped connections)
  55. Inductive Bias in Partitioning X. Zhang and D. Wu, ICLR,

    2020.
  56. None
  57. Deep Learning as an Ultimate Learning Machine (data-driven model / adaptive expansion / expressivity / inductive model / learning):

    Kernel machine: No / No / No / Yes / Yes
    Single-layer perceptron: Yes / No / No / Yes / Yes
    Frame: No / No / Yes / No / No
    Compressed sensing: No / Yes / Yes / No / Yes
    Deep convolutional framelet + CS: Yes / Yes / Yes / No / Yes
    Deep learning: Yes / Yes / Yes / Yes / Yes
  58. Ye et al, ICML, 2019 More about ReLUs

  59. Lipschitz Continuity: $K = \max_p K_p$, $K_p = \| \tilde{B}(z_p) B(z_p)^\top \|_2$. Related to generalizability; depends on the local Lipschitz constant at each $z_p$
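This quantity can be illustrated on a toy two-layer ReLU network: on the linear region containing $z_p$, the map is $z \mapsto W_2 \Sigma(z_p) W_1 z$, so the local Lipschitz constant $K_p$ is the spectral norm of that input-dependent matrix, and $K$ is the maximum over sampled inputs. A hedged NumPy sketch (toy weights, not the paper's encoder-decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 4))   # encoder-like layer
W2 = rng.standard_normal((4, 16))   # decoder-like layer

def local_lipschitz(z):
    # On the linear region containing z, x -> W2 relu(W1 x) acts as
    # x -> W2 Sigma(z) W1 x; its local Lipschitz constant is the
    # spectral norm of that matrix.
    Sigma = np.diag((W1 @ z > 0).astype(float))
    return np.linalg.norm(W2 @ Sigma @ W1, 2)

zs = rng.standard_normal((200, 4))
K = max(local_lipschitz(z) for z in zs)   # K = max_p K_p over samples
```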
  60. Backpropagation (Backprop, BP)

  61. Key Step of BP Derivation: Backward Propagation

  62. Rectified Linear Unit (ReLU)

  63. ReLU makes back prop symmetric
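The symmetry claim can be made concrete: the ReLU derivative is the same $\{0,1\}$ mask used in the forward pass, so forward and backward propagation share one diagonal matrix. A small NumPy sketch (illustrative, my own toy weights):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))
x = rng.standard_normal(3)

pre = W @ x
mask = (pre > 0).astype(float)   # forward: relu(pre) = mask * pre
out = mask * pre

# Backward: d relu(pre)/d pre is the SAME mask, so for an upstream
# gradient g the gradient w.r.t. x is W^T diag(mask) g.
g = rng.standard_normal(6)
grad_x = W.T @ (mask * g)

assert np.allclose(out, np.maximum(pre, 0))
```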

  64. Implication to Optimization: positive semidefinite → with a sufficiently small step size, the cost always decreases
  65. GEOMETRY DRIVEN DESIGN

  66. Which Domain is Good for Learning? Han et al, IEEE Trans. Medical Imaging (in press), 2019; Lee et al, MRM (in press), 2019; Han et al, Medical Physics, 2020
  67. Is Image Domain Learning Essential? Kravitz et al, Trends in Cognitive Sciences, January 2013, Vol. 17, No. 1
  68. k-Space Deep Learning Han et al, IEEE Trans. Medical Imaging,

    2019 Lee et al, MRM, 2019
  69. k-Space Deep Learning: ALOHA vs. CNN. Han et al, IEEE TMI, 2019; Jin et al, IEEE TCI, 2016; Ye et al, IEEE TIT, 2017
  70. k-Space Deep Learning (radial, R=6): ground truth, acceleration, image learning, CS, k-space learning. Han et al, IEEE TMI, 2020

  71. k-Space Deep Learning (radial, R=6): ground truth, acceleration, image learning, CS, k-space learning. Han et al, IEEE TMI, 2020
  72. k-Space Learning for EPI Ghost Correction: the network input and label are defined in k-space (multi-coil k-space with ghost, even/odd lines, processed by ALOHA or the neural network), while the L2 loss is computed in the image domain after the inverse Fourier transform (IFT). Lee et al, MRM (in press), 2019
  73. 7T EPI result (R=2): ALOHA, ghost image, half-ROI learning with reference, PEC-SENSE, proposed (full ROI); GSR: 10.48%, 9.71%, 15.04%, 8.80%, 4.92%. Lee et al, MRM (in press), 2019
  74. DBP (Differentiated Backprojection) Domain Deep Learning. Han et al, Medical Physics 46 (12), e855-e872, 2020; Han et al, IEEE TMI, 2020
  75. Two Approaches for CT Reconstruction. Filtered Backprojection (FBP): ramp filtering, back-projection. Backprojection Filtration (BPF): differentiation, back-projection, Hilbert transform. Zou et al, PMB (2004)
  76. DBP Domain ROI Tomography: interior (ROI) tomography → 2-D deconvolution problem. Han et al, Medical Physics, 2019
  77. DBP Domain Conebeam Artifact Removal Han et al, IEEE TMI,

    2020 https://www.ndt.net/article/wcndt00/papers/idn730/idn730.htm Standard Method: FDK Algorithm
  78. DBP Domain Conebeam Artifact Removal: exact factorization → 2-D deconvolution problem. F. Dennerlein et al, IEEE TMI, 2008
  79. DBP Domain Conebeam Artifact Removal Han et al, IEEE TMI,

    2020
  80. Unsupervised Wavelet Directional Learning Song et al. "Unsupervised Denoising for

    Satellite Imagery using Wavelet Subband CycleGAN." arXiv:2002.09847 (2020).
  81. Noise Patterns in Satellite Imagery

  82. Wavelet Directional Residual Learning

  83. Unsupervised Learning with CycleGAN Optimal transport geometry of cycleGAN @

    SPACE Webinar, Tues, July 14th, 11:00am EST
  84. Agricultural area

  85. Cloud

  86. Ocean

  87. WHAT DOES A CNN LEARN?

  88. Classical vs. Deep Learning for Inverse Problems: classical regularized recon relies on basis engineering; deep recon needs no basis engineering
  89. Questions?