Slide 1

Slide 1 text

Geometry of Deep Learning for Inverse Problems: A Signal Processing Perspective. Jong Chul Ye, Ph.D., Professor, BISPL (BioImaging, Signal Processing, and Learning Lab), KAIST, Korea

Slide 2

Slide 2 text

Classical Learning vs. Deep Learning for Diagnosis: classical machine learning relies on feature engineering, while deep learning requires no feature engineering. Esteva et al., Nature Medicine (2019)

Slide 3

Slide 3 text

Deep Learning for Inverse Problems: a new trend of deep learning beyond diagnosis & analysis.

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

WHY DOES DEEP LEARNING WORK FOR RECON? DOES IT CREATE ANY ARTIFICIAL FEATURES?

Slide 8

Slide 8 text

ANOTHER MYSTERY

Slide 9

Slide 9 text

CNN Encoder-Decoder CNN for Inverse Problems

Slide 10

Slide 10 text

CNN Encoder-Decoder CNN for Inverse Problems

Slide 11

Slide 11 text

CNN Successful applications to various inverse problems Encoder-Decoder CNN for Inverse Problems

Slide 12

Slide 12 text

Why Does the Same Architecture Work for Different Inverse Problems?

Slide 13

Slide 13 text

Desiderata of Machine Learning

Slide 14

Slide 14 text

Kernel Machines: Representer Theorem SVM, Kernel regressions RKHS (Reproducing Kernel Hilbert Space)
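For reference, the representer theorem behind these kernel machines takes the standard form (generic loss and regularizer; not the deck's own notation):

$$f^\star(x) \;=\; \sum_{i=1}^{N} \alpha_i\, k(x_i, x),$$

i.e., the minimizer of a regularized empirical risk over an RKHS with kernel $k$ is a fixed, finite expansion in kernel functions centered at the $N$ training points.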

Slide 15

Slide 15 text

Kernel Machines: Limitations. Non-adaptive RKHS, fixed during run time:
• limited space (expressivity)
• top-down definition

Slide 16

Slide 16 text

Perceptron: Universal Approximation Theorem Space can be larger than kernel
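As a reminder of the form the universal approximation theorem takes (classical single-hidden-layer statement, added here for context):

$$f(x) \;\approx\; \sum_{j=1}^{M} c_j\, \sigma\big(w_j^\top x + b_j\big),$$

a single hidden layer with enough units $M$ can approximate any continuous function on a compact set; the caveat on the next slide is that $M$ may need to be very large.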

Slide 17

Slide 17 text

Perceptron: Limitations. Fixed during run time.
• Expressivity still limited: the required number of units can be exponentially large.

Slide 18

Slide 18 text

Frame Expansion
• Linear representation → no learning
• Expressive → e.g., shift-invariant space
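A frame expansion, written in the notation used later in the deck (analysis basis $b_i$, dual/synthesis basis $\tilde b_i$; the frame-bound inequality is the standard textbook condition):

$$f \;=\; \sum_i \langle f, b_i\rangle\, \tilde b_i, \qquad \alpha\,\|f\|^2 \;\le\; \sum_i |\langle f, b_i\rangle|^2 \;\le\; \beta\,\|f\|^2,$$

a linear representation with fixed bases, hence expressive but with no learning.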

Slide 19

Slide 19 text

Compressed Sensing with Frame. Thresholding: nonlinear → learning. Limitations:
• top-down model
• transductive → non-inductive

Slide 20

Slide 20 text

Summary of Classical Machine Learning

| Method | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
|---|---|---|---|---|---|
| Kernel machine | No | No | No | Yes | Yes |
| Single-layer perceptron | Yes | No | No | Yes | Yes |
| Frame | No | No | Yes | No | No |
| Compressed sensing + frame | No | Yes | Yes | No | Yes |

Slide 21

Slide 21 text

Ye et al, SIAM J. Imaging Sciences, 2018 Deep Convolutional Framelets

Slide 22

Slide 22 text

Convolution & Hankel Matrix Circular convolution (periodic boundary condition)
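A minimal numerical sketch of the convolution-Hankel connection (NumPy; the wrap-around Hankel construction and the correlation-style convention are illustrative, since papers differ on the filter-flip convention):

```python
import numpy as np

def wrap_hankel(x, d):
    """Wrap-around Hankel matrix H_d(x): row i is [x[i], x[i+1], ..., x[i+d-1]] (indices mod n)."""
    n = len(x)
    return np.array([[x[(i + j) % n] for j in range(d)] for i in range(n)])

rng = np.random.default_rng(0)
n, d = 8, 3
x, h = rng.standard_normal(n), rng.standard_normal(d)

# Circular correlation y[i] = sum_k x[(i+k) % n] h[k]; ordinary circular
# convolution is the same operation with the time-reversed filter h[::-1].
y_direct = np.array([sum(x[(i + k) % n] * h[k] for k in range(d)) for i in range(n)])
y_hankel = wrap_hankel(x, d) @ h          # convolution expressed as a Hankel-matrix product

assert np.allclose(y_direct, y_hankel)
```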

Slide 23

Slide 23 text

Convolution Framelets: Non-local & Local Basis. Yin et al., SIAM J. Imaging Sciences, 2017

Slide 24

Slide 24 text

Convolution Framelet Expansion: global and local bases, framelet coefficients, frame (dual) basis. Yin et al., SIAM J. Imaging Sciences, 2017

Slide 25

Slide 25 text

Convolution Framelet: Pros and Cons. Convolution framelet + regularization. Pro: data-driven model. Cons: limited expressivity, non-inductive.

Slide 26

Slide 26 text

Convolution Framelets: Why So Special? Legend: non-local basis, local basis, pooling, convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018

Slide 27

Slide 27 text

Convolution Framelets: Why So Special? Legend: pooling, convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018

Slide 28

Slide 28 text

Convolution Framelets: Why So Special? Legend: pooling, convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018

Slide 29

Slide 29 text

Convolution Framelets: Why So Special? Encoder: convolution + pooling. Legend: pooling, convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018

Slide 30

Slide 30 text

Convolution Framelets: Why So Special? Encoder: convolution + pooling; decoder: un-pooling + convolution. Legend: pooling, convolution filters. Ye et al., SIAM J. Imaging Sciences, 2018

Slide 31

Slide 31 text

Single Resolution Network Architecture
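A minimal sketch of such a single-resolution encoder-decoder CNN (PyTorch; the layer sizes and nearest-neighbor un-pooling are illustrative choices, not the networks used in the cited papers):

```python
import torch
import torch.nn as nn

class EncDec(nn.Module):
    """Single-resolution encoder-decoder: (conv + pooling) followed by (un-pooling + conv)."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))                      # encoder: convolution + pooling
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),          # decoder: un-pooling
                                 nn.Conv2d(ch, 1, 3, padding=1))       # decoder: convolution
    def forward(self, x):
        return self.dec(self.enc(x))

y = EncDec()(torch.randn(1, 1, 64, 64))    # e.g. an aliased or noisy reconstruction as input
```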

Slide 32

Slide 32 text

Multi-Resolution Network Architecture

Slide 33

Slide 33 text

How Can We Restore Frame Representation? Vectorization of Feature Map

Slide 34

Slide 34 text

Our Theoretical Findings: $y = \sum_i \langle b_i(x), x\rangle\, \tilde{b}_i(x)$. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019

Slide 35

Slide 35 text

Our Theoretical Findings: $y = \sum_i \langle b_i(x), x\rangle\, \tilde{b}_i(x)$. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019

Slide 36

Slide 36 text

Our Theoretical Findings: $y = \sum_i \langle b_i(x), x\rangle\, \tilde{b}_i(x)$. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019

Slide 37

Slide 37 text

Our Theoretical Findings: $y = \sum_i \langle b_i(x), x\rangle\, \tilde{b}_i(x)$, where $b_i(x)$ is the analysis basis (encoder). Ye et al., SIIMS, 2018; Ye et al., ICML, 2019

Slide 38

Slide 38 text

Our Theoretical Findings: $y = \sum_i \langle b_i(x), x\rangle\, \tilde{b}_i(x)$, where $b_i(x)$ is the analysis basis (encoder) and $\tilde{b}_i(x)$ is the synthesis basis (decoder). Ye et al., SIIMS, 2018; Ye et al., ICML, 2019

Slide 39

Slide 39 text

Linear Encoder-Decoder (ED) CNN: $y = \tilde{B} B^\top x = \sum_i \langle x, b_i\rangle\, \tilde{b}_i$, with learned filters, pooling, and un-pooling.

Slide 40

Slide 40 text

Linear E-D CNN w/ Skipped Connection: $y = \tilde{B} B^\top x = \sum_i \langle x, b_i\rangle\, \tilde{b}_i$, a more redundant expression with learned filters.

Slide 41

Slide 41 text

Deep Convolutional Framelets: perfect reconstruction $x = \tilde{B} B^\top x = \sum_i \langle x, b_i\rangle\, \tilde{b}_i$ under frame conditions, with and without skipped connection. Ye et al., SIIMS, 2018; Ye et al., ICML, 2019
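A tiny numerical check of the frame / perfect-reconstruction condition $\tilde{B} B^\top = I$ (NumPy; an orthonormal Haar basis stands in for the learned filters, with the high-pass branch playing the role of the channels a skipped connection would carry):

```python
import numpy as np

n = 8
x = np.random.default_rng(1).standard_normal(n)

# One-level orthonormal Haar analysis basis: low-pass columns ~ "pooling" channels,
# high-pass columns ~ the extra channels carried via a skipped connection.
B = np.zeros((n, n))
for k in range(n // 2):
    B[2*k:2*k+2, k]         = [1/np.sqrt(2),  1/np.sqrt(2)]
    B[2*k:2*k+2, n//2 + k]  = [1/np.sqrt(2), -1/np.sqrt(2)]
B_tilde = B                               # orthonormal case: synthesis (dual) basis equals B

c = B.T @ x                               # encoder: framelet coefficients <b_i, x>
x_rec = B_tilde @ c                       # decoder: sum_i <x, b_i> b~_i

assert np.allclose(B_tilde @ B.T, np.eye(n))   # frame condition => perfect reconstruction
assert np.allclose(x_rec, x)
```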

Slide 42

Slide 42 text

Deep Convolutional Framelets: perfect reconstruction $x = \tilde{B} B^\top x = \sum_i \langle x, b_i\rangle\, \tilde{b}_i$ under frame conditions on the pooling layers, with and without skipped connection. Ye et al., SIAM J. Imaging Science, 2018

Slide 43

Slide 43 text

Deep Convolutional Framelet: Pros and Cons. Deep convolutional framelet + regularization. Pros: data-driven, expressive. Cons: non-inductive, transductive.

Slide 44

Slide 44 text

Summary So Far

| Method | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
|---|---|---|---|---|---|
| Kernel machine | No | No | No | Yes | Yes |
| Single-layer perceptron | Yes | No | No | Yes | Yes |
| Frame | No | No | Yes | No | No |
| Compressed sensing | No | Yes | Yes | No | Yes |
| Deep convolutional framelet + CS | Yes | Yes | Yes | No | Yes |

Slide 45

Slide 45 text

Role of Nonlinearities: How to Make It Inductive? Ye et al., ICML, 2019

Slide 46

Slide 46 text

Role of ReLUs? Generator for Multiple Expressions: $y = \tilde{B}(x) B(x)^\top x = \sum_i \langle x, b_i(x)\rangle\, \tilde{b}_i(x)$, where the ReLU pattern at layer $l$ is the input-dependent $\{0,1\}$ diagonal matrix $\Sigma_l(x) = \mathrm{diag}\big(\sigma_1(x), \ldots, \sigma_{m_l}(x)\big)$ → input adaptivity.
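A small sketch of this ReLU-as-switch view (NumPy, a one-hidden-layer toy model rather than a CNN): on each input, the network reduces to the linear map $W_2\,\Sigma(x)\,W_1$ with an input-dependent $\{0,1\}$ diagonal $\Sigma(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((8, 16))

def relu_net(x):
    return W2 @ np.maximum(W1 @ x, 0)

x = rng.standard_normal(8)
Sig = np.diag((W1 @ x > 0).astype(float))          # input-dependent {0,1} diagonal Sigma(x)

# On this input the ReLU network acts as a plain linear expansion with a "switched" basis:
assert np.allclose(relu_net(x), W2 @ Sig @ W1 @ x)
```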

Slide 47

Slide 47 text

Input Space Partitioning for Multiple Expressions: a CNN automatically assigns a distinct linear representation depending on the input.

Slide 48

Slide 48 text

Geometry of Manifold Partitioning

Slide 49

Slide 49 text

Toy Examples: Two-Layer Perceptron with ReLUs
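A toy sketch along those lines (NumPy; the random weights, 2-D input plane, and grid sampling are illustrative choices): each ReLU on/off pattern defines one cell of the input-space partition, and the network is a single linear map on each cell.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((6, 2)), rng.standard_normal(6)   # hidden layer of a 2-layer ReLU perceptron

# Sample the 2-D input plane and record the ReLU activation pattern at every point.
g = np.linspace(-3, 3, 200)
xs = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
patterns = (xs @ W1.T + b1 > 0)                     # one {0,1} code per input
n_regions = len(np.unique(patterns, axis=0))        # each distinct code = one linear region
print(f"{n_regions} linear regions found on the sampled grid")
```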

Slide 50

Slide 50 text

A final exam question from BiS400@KAIST

Slide 51

Slide 51 text

Expressivity of E-D CNN: # of representations, # of network elements

Slide 52

Slide 52 text

Expressivity of E-D CNN: # of representations, # of network elements, # of channels

Slide 53

Slide 53 text

Expressivity of E-D CNN: # of representations, # of network elements, # of channels, network depth

Slide 54

Slide 54 text

Expressivity of E-D CNN: # of representations, # of network elements, # of channels, network depth, skipped connection

Slide 55

Slide 55 text

Inductive Bias in Partitioning X. Zhang and D. Wu, ICLR, 2020.

Slide 56

Slide 56 text

No content

Slide 57

Slide 57 text

Deep Learning as an Ultimate Learning Machine

| Method | Data-driven model | Adaptive expansion | Expressivity | Inductive model | Learning |
|---|---|---|---|---|---|
| Kernel machine | No | No | No | Yes | Yes |
| Single-layer perceptron | Yes | No | No | Yes | Yes |
| Frame | No | No | Yes | No | No |
| Compressed sensing | No | Yes | Yes | No | Yes |
| Deep convolutional framelet + CS | Yes | Yes | Yes | No | Yes |
| Deep learning | Yes | Yes | Yes | Yes | Yes |

Slide 58

Slide 58 text

More about ReLUs. Ye et al., ICML, 2019

Slide 59

Slide 59 text

Lipschitz Continuity: $K = \max_p K_p$, where $K_p = \|\tilde{B}(z_p) B(z_p)^\top\|_2$. Related to generalizability; depends on the local Lipschitz constant.
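A hedged numerical sketch of the quantity above (NumPy, again for a one-hidden-layer toy model): the local constant $K_p$ is the spectral norm of the linear map active on region $p$, and $K$ is the maximum over (here, randomly sampled) regions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((4, 16))

def local_lipschitz(z):
    """K_p at anchor input z: spectral norm of W2 Sigma(z) W1, the linear map on z's ReLU region."""
    Sig = np.diag((W1 @ z > 0).astype(float))
    return np.linalg.norm(W2 @ Sig @ W1, 2)

K_p = [local_lipschitz(rng.standard_normal(8)) for _ in range(1000)]
print(f"K = max_p K_p over the sampled regions: {max(K_p):.3f}")
```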

Slide 60

Slide 60 text

Backpropagation (Backprop, BP)

Slide 61

Slide 61 text

Key Step of BP Derivation: Backward Propagation

Slide 62

Slide 62 text

Rectified Linear Unit (ReLU)

Slide 63

Slide 63 text

ReLU makes back prop symmetric
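A one-layer sketch of what that symmetry means in practice (NumPy; toy shapes): the same binary ReLU mask $\Sigma(x)$ that gates the forward pass also gates the backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
W, x = rng.standard_normal((5, 3)), rng.standard_normal(3)

pre  = W @ x
mask = (pre > 0).astype(float)      # forward ReLU pattern Sigma(x)
y    = mask * pre                   # forward:  y     = Sigma(x) W x

g_y = rng.standard_normal(5)        # some upstream gradient dL/dy
g_x = W.T @ (mask * g_y)            # backward: dL/dx = W^T Sigma(x) dL/dy  (same mask)
```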

Slide 64

Slide 64 text

Implication for Optimization: positive semidefinite → with a sufficiently small step size, the cost always decreases.
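One standard way to make that argument precise (the classical descent lemma for an $L$-smooth cost; the slide's positive-semidefinite argument is in the same spirit):

$$f\big(\theta - \eta\,\nabla f(\theta)\big) \;\le\; f(\theta) \;-\; \eta\Big(1 - \tfrac{L\eta}{2}\Big)\,\|\nabla f(\theta)\|^2,$$

so any step size $\eta < 2/L$ strictly decreases the cost whenever the gradient is nonzero.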

Slide 65

Slide 65 text

GEOMETRY DRIVEN DESIGN

Slide 66

Slide 66 text

Which Domain is Good for Learning? Han et al, IEEE Trans. Medical Imaging (in press), 2019 Lee et al, MRM (in press), 2019 Han et al, Medical Physics, 2020

Slide 67

Slide 67 text

Is Image Domain Learning Essential? Kravitz et al., Trends in Cognitive Sciences, January 2013, Vol. 17, No. 1

Slide 68

Slide 68 text

k-Space Deep Learning Han et al, IEEE Trans. Medical Imaging, 2019 Lee et al, MRM, 2019

Slide 69

Slide 69 text

k-Space Deep Learning: ALOHA and CNN. Han et al., IEEE TMI, 2019; Jin et al., IEEE TCI, 2016; Ye et al., IEEE TIT, 2017

Slide 70

Slide 70 text

k-Space Deep Learning (Radial R=6): comparison of ground truth, accelerated acquisition, image-domain learning, CS, and k-space learning. Han et al., IEEE TMI, 2020

Slide 71

Slide 71 text

k-Space Deep Learning (Radial R=6): comparison of ground truth, accelerated acquisition, image-domain learning, CS, and k-space learning. Han et al., IEEE TMI, 2020

Slide 72

Slide 72 text

k-Space Learning for EPI Ghost Correction: the network input and label are multi-coil k-space data with ghosts (interleaved even/odd echoes; labels generated with ALOHA), and the L2 loss is computed in the image domain after the inverse Fourier transform. Lee et al., MRM (in press), 2019

Slide 73

Slide 73 text

7T EPI Result (R=2): ghost-to-signal ratio (GSR) comparison of ALOHA, the ghost image, half-ROI learning, with-reference PEC-SENSE, and the proposed full-ROI learning; reported GSR values are 10.48%, 9.71%, 15.04%, 8.80%, and 4.92%. Lee et al., MRM (in press), 2019

Slide 74

Slide 74 text

DBP (Differentiated Backprojection) Domain Deep Learning. Han et al., Medical Physics 46 (12), e855-e872, 2020; Han et al., IEEE TMI, 2020

Slide 75

Slide 75 text

Two Approaches for CT Reconstruction. Filtered Backprojection (FBP):
• ramp filtering
• back-projection
Backprojection Filtration (BPF):
• differentiation
• back-projection
• Hilbert transform
Zou, Y. et al., PMB (2004).

Slide 76

Slide 76 text

DBP Domain ROI Tomography: interior (ROI) tomography → 2-D deconvolution problem. Han et al., Medical Physics, 2019

Slide 77

Slide 77 text

DBP Domain Conebeam Artifact Removal. Standard method: FDK algorithm. Han et al., IEEE TMI, 2020. https://www.ndt.net/article/wcndt00/papers/idn730/idn730.htm

Slide 78

Slide 78 text

DBP Domain Conebeam Artifact Removal: exact factorization → 2-D deconvolution problem. F. Dennerlein et al., IEEE TMI, 2008

Slide 79

Slide 79 text

DBP Domain Conebeam Artifact Removal Han et al, IEEE TMI, 2020

Slide 80

Slide 80 text

Unsupervised Wavelet Directional Learning Song et al. "Unsupervised Denoising for Satellite Imagery using Wavelet Subband CycleGAN." arXiv:2002.09847 (2020).

Slide 81

Slide 81 text

Noise Patterns in Satellite Imagery

Slide 82

Slide 82 text

Wavelet Directional Residual Learning
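A minimal sketch of the wavelet-subband idea (PyWavelets; the `db3` wavelet, the single decomposition level, and the identity `denoise` placeholder are all illustrative, not the paper's setting): the directional detail subbands are processed separately and the image is then re-synthesized.

```python
import numpy as np
import pywt  # PyWavelets

img = np.random.rand(128, 128).astype(np.float32)        # stand-in for a noisy satellite image

# One-level 2-D DWT: approximation + horizontal/vertical/diagonal detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(img, 'db3')

denoise = lambda band: band                               # placeholder for the subband CNN denoiser
img_rec = pywt.idwt2((cA, (denoise(cH), denoise(cV), denoise(cD))), 'db3')
```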

Slide 83

Slide 83 text

Unsupervised Learning with CycleGAN Optimal transport geometry of cycleGAN @ SPACE Webinar, Tues, July 14th, 11:00am EST

Slide 84

Slide 84 text

Agricultural area

Slide 85

Slide 85 text

Cloud

Slide 86

Slide 86 text

Ocean

Slide 87

Slide 87 text

WHAT DOES A CNN LEARN?

Slide 88

Slide 88 text

Classical vs. Deep Learning for Inverse Problems: classical regularized recon relies on basis engineering, while deep recon requires no basis engineering.

Slide 89

Slide 89 text

Questions?