Slide 1

Slide 1 text

Inverse Problems Regularization www.numerical-tours.com Gabriel Peyré

Slide 2

Slide 2 text

Overview
• Variational Priors
• Gradient Descent and PDEs
• Inverse Problems Regularization

Slide 3

Slide 3 text

Smooth and Cartoon Priors
Sobolev semi-norm: $J(f) = \|f\|_{W^{1,2}}^2 = \int_{\mathbb{R}^2} \|\nabla f(x)\|^2 \, dx$
(figure: $|\nabla f|^2$)

Slide 4

Slide 4 text

Smooth and Cartoon Priors
Sobolev semi-norm: $J(f) = \|f\|_{W^{1,2}}^2 = \int_{\mathbb{R}^2} \|\nabla f(x)\|^2 \, dx$
Total variation semi-norm: $J(f) = \|f\|_{\mathrm{TV}} = \int_{\mathbb{R}^2} \|\nabla f(x)\| \, dx$
(figures: $|\nabla f|$ and $|\nabla f|^2$)

Slide 5

Slide 5 text

Smooth and Cartoon Priors
Sobolev semi-norm: $J(f) = \|f\|_{W^{1,2}}^2 = \int_{\mathbb{R}^2} \|\nabla f(x)\|^2 \, dx$
Total variation semi-norm: $J(f) = \|f\|_{\mathrm{TV}} = \int_{\mathbb{R}^2} \|\nabla f(x)\| \, dx$
(figures: $|\nabla f|$ and $|\nabla f|^2$)
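To make the two priors concrete, here is a minimal numpy sketch (mine, not from the slides) that evaluates discrete analogues of both semi-norms with forward finite differences; the names grad, sobolev and tv are illustrative choices.

```python
import numpy as np

def grad(f):
    """Forward-difference gradient of an n x n image, returned with shape (2, n, n)."""
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def sobolev(f):
    """Discrete Sobolev semi-norm: sum_n ||grad f[n]||^2."""
    return np.sum(grad(f) ** 2)

def tv(f):
    """Discrete total variation: sum_n ||grad f[n]||."""
    g = grad(f)
    return np.sum(np.sqrt(g[0] ** 2 + g[1] ** 2))

f = np.random.default_rng(0).random((64, 64))   # placeholder image
print(sobolev(f), tv(f))
```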

Slide 6

Slide 6 text

Natural Image Priors

Slide 7

Slide 7 text

Discrete Priors

Slide 8

Slide 8 text

Discrete Priors

Slide 9

Slide 9 text

Discrete Differential Operators

Slide 10

Slide 10 text

Discrete Differential Operators
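The slides do not spell out a particular discretization; a common choice (assumed here, my own sketch) uses forward differences for the gradient and defines the divergence as minus its adjoint, so that $\langle \nabla f, p\rangle = \langle f, -\mathrm{div}\, p\rangle$ holds exactly. The snippet checks this numerically and also builds the Laplacian $\Delta = \nabla^*\nabla$ used on the next slides.

```python
import numpy as np

def grad(f):
    """Forward differences with Neumann boundary; output shape (2, n, n)."""
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    """Backward differences, chosen so that div = -grad^* exactly."""
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def laplacian(f):
    """Delta = grad^* grad = -div(grad f); reduces to the 5-point stencil inside the image."""
    return -div(grad(f))

# Numerical check of the adjoint relation <grad f, p> = <f, -div p>.
rng = np.random.default_rng(0)
f, p = rng.standard_normal((16, 16)), rng.standard_normal((2, 16, 16))
print(np.sum(grad(f) * p) - np.sum(f * (-div(p))))   # zero up to rounding
```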

Slide 11

Slide 11 text

Laplacian Operator

Slide 12

Slide 12 text

Laplacian Operator

Slide 13

Slide 13 text

Images vs. Functionals
Function: $\tilde f : x \in \mathbb{R}^2 \mapsto \tilde f(x) \in \mathbb{R}$
Gradient: $\nabla \tilde f(x) = (\partial_1 \tilde f(x), \partial_2 \tilde f(x)) \in \mathbb{R}^2$, so that $\tilde f(x + \varepsilon) = \tilde f(x) + \langle \nabla \tilde f(x), \varepsilon\rangle_{\mathbb{R}^2} + O(\|\varepsilon\|_{\mathbb{R}^2}^2)$

Slide 14

Slide 14 text

Images vs. Functionals
Function: $\tilde f : x \in \mathbb{R}^2 \mapsto \tilde f(x) \in \mathbb{R}$
Gradient: $\nabla \tilde f(x) = (\partial_1 \tilde f(x), \partial_2 \tilde f(x)) \in \mathbb{R}^2$, so that $\tilde f(x + \varepsilon) = \tilde f(x) + \langle \nabla \tilde f(x), \varepsilon\rangle_{\mathbb{R}^2} + O(\|\varepsilon\|_{\mathbb{R}^2}^2)$
Discrete image: $f \in \mathbb{R}^N$, $N = n^2$, $f[i_1, i_2] = \tilde f(i_1/n, i_2/n)$, $\nabla f[i] \approx \nabla \tilde f(i/n)$

Slide 15

Slide 15 text

Images vs. Functionals
Function: $\tilde f : x \in \mathbb{R}^2 \mapsto \tilde f(x) \in \mathbb{R}$
Gradient: $\nabla \tilde f(x) = (\partial_1 \tilde f(x), \partial_2 \tilde f(x)) \in \mathbb{R}^2$, so that $\tilde f(x + \varepsilon) = \tilde f(x) + \langle \nabla \tilde f(x), \varepsilon\rangle_{\mathbb{R}^2} + O(\|\varepsilon\|_{\mathbb{R}^2}^2)$
Discrete image: $f \in \mathbb{R}^N$, $N = n^2$, $f[i_1, i_2] = \tilde f(i_1/n, i_2/n)$, $\nabla f[i] \approx \nabla \tilde f(i/n)$
Functional: $J : f \in \mathbb{R}^N \mapsto J(f) \in \mathbb{R}$, $\nabla J : \mathbb{R}^N \to \mathbb{R}^N$, with $J(f + \eta) = J(f) + \langle \nabla J(f), \eta\rangle_{\mathbb{R}^N} + O(\|\eta\|_{\mathbb{R}^N}^2)$

Slide 16

Slide 16 text

Images vs. Functionals
Function: $\tilde f : x \in \mathbb{R}^2 \mapsto \tilde f(x) \in \mathbb{R}$
Gradient: $\nabla \tilde f(x) = (\partial_1 \tilde f(x), \partial_2 \tilde f(x)) \in \mathbb{R}^2$, so that $\tilde f(x + \varepsilon) = \tilde f(x) + \langle \nabla \tilde f(x), \varepsilon\rangle_{\mathbb{R}^2} + O(\|\varepsilon\|_{\mathbb{R}^2}^2)$
Discrete image: $f \in \mathbb{R}^N$, $N = n^2$, $f[i_1, i_2] = \tilde f(i_1/n, i_2/n)$, $\nabla f[i] \approx \nabla \tilde f(i/n)$
Functional: $J : f \in \mathbb{R}^N \mapsto J(f) \in \mathbb{R}$, $\nabla J : \mathbb{R}^N \to \mathbb{R}^N$, with $J(f + \eta) = J(f) + \langle \nabla J(f), \eta\rangle_{\mathbb{R}^N} + O(\|\eta\|_{\mathbb{R}^N}^2)$
Sobolev: $J(f) = \tfrac{1}{2}\|\nabla f\|^2$, $\nabla J(f) = (\nabla^*\nabla) f = \Delta f$
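The identity $\nabla J(f) = \nabla^*\nabla f = \Delta f$ for the Sobolev functional can be checked with a first-order Taylor test; this is my own sketch, reusing the same forward-difference helpers as above.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def J(f):
    return 0.5 * np.sum(grad(f) ** 2)          # J(f) = 1/2 ||grad f||^2

def grad_J(f):
    return -div(grad(f))                       # nabla J(f) = grad^* grad f = Delta f

rng = np.random.default_rng(1)
f, eta = rng.standard_normal((24, 24)), rng.standard_normal((24, 24))
for t in [1e-2, 1e-3, 1e-4]:
    remainder = J(f + t * eta) - J(f) - np.sum(grad_J(f) * (t * eta))
    print(t, remainder)                        # remainder decays like O(t^2)
```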

Slide 17

Slide 17 text

Total Variation Gradient
If $\forall n,\ \nabla f[n] \neq 0$: $\nabla J(f) = -\mathrm{div}\!\left(\dfrac{\nabla f}{\|\nabla f\|}\right)$
If $\exists n,\ \nabla f[n] = 0$: $J$ is not differentiable at $f$.
(figures: $\|\nabla f\|$ and $\nabla J(f)$)

Slide 18

Slide 18 text

Total Variation Gradient
If $\forall n,\ \nabla f[n] \neq 0$: $\nabla J(f) = -\mathrm{div}\!\left(\dfrac{\nabla f}{\|\nabla f\|}\right)$
If $\exists n,\ \nabla f[n] = 0$: $J$ is not differentiable at $f$.
Sub-differential: $\partial J(f) = \{\, -\mathrm{div}(\alpha)\ ;\ \|\alpha[n]\| \leq 1\ \forall n \ \text{and}\ \alpha \in C_{\nabla f} \,\}$
where $C_u = \{\, \alpha \in \mathbb{R}^{2\times N}\ ;\ (u[n] \neq 0) \Rightarrow (\alpha[n] = u[n]/\|u[n]\|) \,\}$
(figures: $\|\nabla f\|$ and $\nabla J(f)$)

Slide 19

Slide 19 text

Regularized Total Variation
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
(plots of $\sqrt{x^2 + \varepsilon^2}$ and $|x|$)

Slide 20

Slide 20 text

Regularized Total Variation
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
$\nabla J_\varepsilon(f) = -\mathrm{div}\!\left(\dfrac{\nabla f}{\|\nabla f\|_\varepsilon}\right)$, $\quad \nabla J_\varepsilon \sim \Delta/\varepsilon$ as $\varepsilon \to +\infty$
(plots of $\sqrt{x^2 + \varepsilon^2}$, $|x|$ and $\nabla J_\varepsilon(f)$)
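A small sketch (mine) of $J_\varepsilon$ and its gradient with the same finite-difference helpers; the last two lines compare a finite-difference directional derivative with $\langle \nabla J_\varepsilon(f), \eta\rangle$, which should agree closely.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def J_eps(f, eps):
    g = grad(f)
    return np.sum(np.sqrt(g[0] ** 2 + g[1] ** 2 + eps ** 2))    # sum_n ||grad f[n]||_eps

def grad_J_eps(f, eps):
    g = grad(f)
    return -div(g / np.sqrt(g[0] ** 2 + g[1] ** 2 + eps ** 2))  # -div(grad f / ||grad f||_eps)

rng = np.random.default_rng(2)
f, eta, eps, t = rng.standard_normal((24, 24)), rng.standard_normal((24, 24)), 0.1, 1e-5
print((J_eps(f + t * eta, eps) - J_eps(f, eps)) / t,   # finite-difference directional derivative
      np.sum(grad_J_eps(f, eps) * eta))                # <grad J_eps(f), eta>
```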

Slide 21

Slide 21 text

Overview
• Variational Priors
• Gradient Descent and PDEs
• Inverse Problems Regularization

Slide 22

Slide 22 text

Gradient Descent
$f^{(k+1)} = f^{(k)} - \tau_k \nabla J(f^{(k)})$, with $f^{(0)}$ given.

Slide 23

Slide 23 text

Gradient Descent
$f^{(k+1)} = f^{(k)} - \tau_k \nabla J(f^{(k)})$, with $f^{(0)}$ given.
Theorem: if $J$ is convex and $C^1$, $\nabla J$ is $L$-Lipschitz, and $0 < \tau < 2/L$, then $f^{(k)} \xrightarrow{k\to+\infty} f^\star$, a solution of $\min_f J(f)$.

Slide 24

Slide 24 text

Gradient Descent
$f^{(k+1)} = f^{(k)} - \tau_k \nabla J(f^{(k)})$, with $f^{(0)}$ given.
Theorem: if $J$ is convex and $C^1$, $\nabla J$ is $L$-Lipschitz, and $0 < \tau < 2/L$, then $f^{(k)} \xrightarrow{k\to+\infty} f^\star$, a solution of $\min_f J(f)$.
Optimal step size: $\tau_k = \underset{\tau \in \mathbb{R}^+}{\mathrm{argmin}}\ J\big(f^{(k)} - \tau \nabla J(f^{(k)})\big)$
Proposition: one has $\langle \nabla J(f^{(k+1)}), \nabla J(f^{(k)})\rangle = 0$.
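To illustrate the optimal step size and the orthogonality of successive gradients, here is a toy sketch (mine) on a quadratic $J(x) = \tfrac{1}{2}\langle Ax, x\rangle - \langle b, x\rangle$, for which the exact line search has a closed form.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)                  # symmetric positive definite
b = rng.standard_normal(20)
grad_J = lambda x: A @ x - b              # gradient of J(x) = 0.5 <Ax,x> - <b,x>

x = np.zeros(20)
for k in range(50):
    g = grad_J(x)
    tau = (g @ g) / (g @ (A @ g))         # optimal step: argmin_tau J(x - tau g)
    x_new = x - tau * g
    if k == 0:
        print(grad_J(x_new) @ g)          # ~0: successive gradients are orthogonal
    x = x_new

print(np.linalg.norm(A @ x - b))          # residual after 50 exact line-search steps
```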

Slide 25

Slide 25 text

Gradient Flows and PDEs
Fixed step size $\tau_k = \tau$: $\dfrac{f^{(k+1)} - f^{(k)}}{\tau} = -\nabla J(f^{(k)})$

Slide 26

Slide 26 text

Gradient Flows and PDEs
Fixed step size $\tau_k = \tau$: $\dfrac{f^{(k+1)} - f^{(k)}}{\tau} = -\nabla J(f^{(k)})$
Denoting $f_t = f^{(k)}$ for $t = k\tau$, one obtains formally as $\tau \to 0$:
$\forall t > 0, \quad \dfrac{\partial f_t}{\partial t} = -\nabla J(f_t) \quad \text{and} \quad f_0 = f^{(0)}$

Slide 27

Slide 27 text

Gradient Flows and PDEs
Fixed step size $\tau_k = \tau$: $\dfrac{f^{(k+1)} - f^{(k)}}{\tau} = -\nabla J(f^{(k)})$
Denoting $f_t = f^{(k)}$ for $t = k\tau$, one obtains formally as $\tau \to 0$:
$\forall t > 0, \quad \dfrac{\partial f_t}{\partial t} = -\nabla J(f_t) \quad \text{and} \quad f_0 = f^{(0)}$
Sobolev flow: $J(f) = \int \|\nabla f(x)\|^2 \, dx$
Heat equation: $\dfrac{\partial f_t}{\partial t} = \mathrm{div}(\nabla f_t) = -\Delta f_t$
Explicit solution: Gaussian filtering of $f_0$ (convolution with a Gaussian of width $\propto \sqrt{t}$).
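A possible discretization of the Sobolev flow (mine, not from the slides): explicit Euler steps on the heat equation with a 5-point Laplacian; the step bound $\tau \leq 1/4$ is the standard stability condition of this scheme with unit grid spacing, an assumption of the sketch rather than a statement of the slide.

```python
import numpy as np

def heat_flow(f0, t, tau=0.2):
    """Explicit Euler for d f_t/dt = div(grad f_t), 5-point stencil, Neumann boundary."""
    f = f0.copy()
    for _ in range(int(t / tau)):
        p = np.pad(f, 1, mode="edge")
        neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        f = f + tau * (neighbors - 4 * f)   # one heat-equation step
    return f

f0 = np.random.default_rng(4).random((64, 64))
ft = heat_flow(f0, t=5.0)
print(f0.std(), ft.std())                   # the flow progressively smooths the image
```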

Slide 28

Slide 28 text

Total Variation Flow

Slide 29

Slide 29 text

Application: Denoising
Noisy observations: $y = f + w$, with $w \sim \mathcal{N}(0, \mathrm{Id}_N)$.
Flow: $\dfrac{\partial f_t}{\partial t} = -\nabla J(f_t)$ with $f_{t=0} = y$.
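A sketch (mine) of denoising by running the gradient flow of $J_\varepsilon$ from the noisy image $y$; the step size is a heuristic of order $\varepsilon$ chosen to keep the explicit scheme stable, and the test image is a synthetic cartoon.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def tv_flow(y, t, eps=0.05):
    """Explicit scheme for d f_t/dt = -grad J_eps(f_t), starting from f_0 = y."""
    tau = eps / 5.0                                   # heuristic step, tau ~ eps
    f = y.copy()
    for _ in range(int(t / tau)):
        g = grad(f)
        f = f + tau * div(g / np.sqrt(g[0] ** 2 + g[1] ** 2 + eps ** 2))
    return f

rng = np.random.default_rng(5)
clean = np.kron(rng.random((8, 8)) > 0.5, np.ones((8, 8))).astype(float)
y = clean + 0.1 * rng.standard_normal(clean.shape)    # y = f + w
print(np.linalg.norm(y - clean), np.linalg.norm(tv_flow(y, t=0.3) - clean))
```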

Slide 30

Slide 30 text

Optimal Parameter Selection
Optimal choice of $t$: minimize $\|f_t - f\|$ $\to$ not accessible in practice.
$\mathrm{SNR}(f_t, f) = -20 \log_{10}\!\left(\dfrac{\|f - f_t\|}{\|f\|}\right)$
(plots of the error and SNR as functions of $t$)
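As an illustration of oracle parameter selection (my own sketch), the SNR can be swept along a smoothing family when the clean image is known; in practice $f$ is unknown, which is exactly the difficulty the slide points out. The crude averaging filter stands in for the flow $f_t$.

```python
import numpy as np

def snr(ft, f):
    """SNR(ft, f) = -20 log10(||f - ft|| / ||f||), in dB (higher is better)."""
    return -20 * np.log10(np.linalg.norm(f - ft) / np.linalg.norm(f))

def smooth(y, t):
    """Stand-in for f_t: t steps of 5-point averaging."""
    f = y.copy()
    for _ in range(t):
        p = np.pad(f, 1, mode="edge")
        f = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + f) / 5
    return f

rng = np.random.default_rng(6)
f = np.kron(rng.random((8, 8)), np.ones((8, 8)))      # "clean" piecewise-constant image
y = f + 0.2 * rng.standard_normal(f.shape)            # noisy observation
snrs = [snr(smooth(y, t), f) for t in range(30)]
print("oracle t* =", int(np.argmax(snrs)), "SNR =", max(snrs))   # needs the unknown f
```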

Slide 31

Slide 31 text

Overview
• Variational Priors
• Gradient Descent and PDEs
• Inverse Problems Regularization

Slide 32

Slide 32 text

Inverse Problems

Slide 33

Slide 33 text

Inverse Problems

Slide 34

Slide 34 text

Inverse Problems

Slide 35

Slide 35 text

Inverse Problems

Slide 36

Slide 36 text

Inverse Problems

Slide 37

Slide 37 text

Inverse Problem Regularization

Slide 38

Slide 38 text

Inverse Problem Regularization

Slide 39

Slide 39 text

Inverse Problem Regularization

Slide 40

Slide 40 text

Sobolev Regularization
Sobolev prior: $J(f) = \tfrac{1}{2}\|\nabla f\|^2$
$f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \|y - \Phi f\|^2 + \lambda \|\nabla f\|^2$ (assuming $1 \notin \ker(\Phi)$)

Slide 41

Slide 41 text

Sobolev Regularization
Sobolev prior: $J(f) = \tfrac{1}{2}\|\nabla f\|^2$
$f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \|y - \Phi f\|^2 + \lambda \|\nabla f\|^2$ (assuming $1 \notin \ker(\Phi)$)
Proposition: $\nabla E(f^\star) = 0 \iff (\Phi^*\Phi + \lambda\Delta) f^\star = \Phi^* y$ $\to$ large scale linear system.

Slide 42

Slide 42 text

Sobolev Regularization
Sobolev prior: $J(f) = \tfrac{1}{2}\|\nabla f\|^2$
$f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \|y - \Phi f\|^2 + \lambda \|\nabla f\|^2$ (assuming $1 \notin \ker(\Phi)$)
Proposition: $\nabla E(f^\star) = 0 \iff (\Phi^*\Phi + \lambda\Delta) f^\star = \Phi^* y$ $\to$ large scale linear system.
Gradient descent: $f^{(k+1)} = f^{(k)} - \tau \nabla E(f^{(k)})$
Convergence: $\tau < 2/\|\Phi^*\Phi + \lambda\Delta\|$, where $\|A\| = \lambda_{\max}(A)$ $\to$ slow convergence.
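To see the proposition at toy scale (my own sketch, a 1D analogue with n = 100 rather than N = n² image pixels), one can assemble $\Phi^*\Phi + \lambda\Delta$ as a dense matrix and solve the normal equations directly; at image resolution this dense solve is out of reach, which is why the slides turn to gradient descent and conjugate gradient.

```python
import numpy as np

n, lam = 100, 0.5
rng = np.random.default_rng(7)

D = np.diff(np.eye(n), axis=0)                 # 1D forward-difference operator, shape (n-1, n)
Delta = D.T @ D                                # Delta = D^* D

mask = rng.random(n) > 0.4                     # observed entries, so 1 is not in ker(Phi)
Phi = np.diag(mask.astype(float))
f_true = np.cumsum(rng.standard_normal(n)) / 10
y = Phi @ f_true + 0.01 * rng.standard_normal(n)

# Normal equations of E(f) = ||y - Phi f||^2 + lam ||D f||^2:
# (Phi^* Phi + lam Delta) f* = Phi^* y
f_star = np.linalg.solve(Phi.T @ Phi + lam * Delta, Phi.T @ y)
print(np.linalg.norm(f_star - f_true) / np.linalg.norm(f_true))
```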

Slide 43

Slide 43 text

Example: Inpainting
Mask $M$, $\Phi = \mathrm{diag}_i(1_{i \notin M})$: $(\Phi f)[i] = 0$ if $i \in M$, $f[i]$ otherwise.
(figures: measurements $y$ and iterates #1, #3, #50 of the reconstruction; plots of $E(f^{(k)})$ and $\log_{10}(\|f^{(k)} - f^{(\infty)}\|/\|f_0\|)$ versus $k$)
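A sketch (mine) of the inpainting example: the masking operator $\Phi$ and plain gradient descent on $E(f) = \|y - \Phi f\|^2 + \lambda\|\nabla f\|^2$; the step size follows the slide's bound, using $\|\Delta\| \leq 8$ for this finite-difference Laplacian.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

rng = np.random.default_rng(8)
n, lam = 64, 0.1
f_true = np.kron(rng.random((8, 8)), np.ones((8, 8)))      # smooth-ish ground truth
M = rng.random((n, n)) < 0.3                                # mask of missing pixels
Phi = lambda f: np.where(M, 0.0, f)                         # (Phi f)[i] = 0 on M, f[i] elsewhere
y = Phi(f_true)

def grad_E(f):                                              # E(f) = ||y - Phi f||^2 + lam ||grad f||^2
    return 2 * Phi(Phi(f) - y) - 2 * lam * div(grad(f))

tau = 0.9 / (1 + 8 * lam)    # below 2/L, with L = 2 ||Phi^* Phi + lam Delta|| <= 2 (1 + 8 lam)
f = y.copy()
for _ in range(500):
    f = f - tau * grad_E(f)
print(np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```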

Slide 44

Slide 44 text

Conjugate Gradient
Symmetric linear system: $Ax = b \iff \min_{x \in \mathbb{R}^n} E(x) = \tfrac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle$

Slide 45

Slide 45 text

Conjugate Gradient
Symmetric linear system: $Ax = b \iff \min_{x \in \mathbb{R}^n} E(x) = \tfrac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle$
Intuition: $x^{(k+1)} = \mathrm{argmin}\ E(x)$ subject to $x - x^{(k)} \in \mathrm{span}\big(\nabla E(x^{(0)}), \ldots, \nabla E(x^{(k)})\big)$
Proposition: $\forall\, \ell < k,\ \langle \nabla E(x^{(k)}), \nabla E(x^{(\ell)})\rangle = 0$

Slide 46

Slide 46 text

Conjugate Gradient
Symmetric linear system: $Ax = b \iff \min_{x \in \mathbb{R}^n} E(x) = \tfrac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle$
Intuition: $x^{(k+1)} = \mathrm{argmin}\ E(x)$ subject to $x - x^{(k)} \in \mathrm{span}\big(\nabla E(x^{(0)}), \ldots, \nabla E(x^{(k)})\big)$
Proposition: $\forall\, \ell < k,\ \langle \nabla E(x^{(k)}), \nabla E(x^{(\ell)})\rangle = 0$
Initialization: $x^{(0)} \in \mathbb{R}^N$, $r^{(0)} = b - A x^{(0)}$, $d^{(0)} = r^{(0)}$
Iterations:
$v^{(k)} = \nabla E(x^{(k)}) = A x^{(k)} - b$
$d^{(k)} = -v^{(k)} + \dfrac{\|v^{(k)}\|^2}{\|v^{(k-1)}\|^2}\, d^{(k-1)}$
$\rho^{(k)} = -\dfrac{\langle v^{(k)}, d^{(k)}\rangle}{\langle A d^{(k)}, d^{(k)}\rangle}$
$x^{(k+1)} = x^{(k)} + \rho^{(k)} d^{(k)}$
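A compact numpy sketch (mine) of conjugate gradient in the usual residual form; variable names loosely follow the slide (r for the residual, d for the search direction), and the comparison against numpy's direct solver is only a sanity check.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=None):
    """Solve A x = b for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual = -grad E(x)
    d = r.copy()                       # initial search direction
    for _ in range(n_iter or len(b)):
        Ad = A @ d
        rho = (r @ r) / (d @ Ad)       # exact line search along d
        x = x + rho * d
        r_new = r - rho * Ad
        d = r_new + (r_new @ r_new) / (r @ r) * d   # next direction, A-conjugate to previous ones
        r = r_new
    return x

rng = np.random.default_rng(9)
M = rng.standard_normal((50, 50))
A, b = M @ M.T + np.eye(50), rng.standard_normal(50)
print(np.linalg.norm(conjugate_gradient(A, b) - np.linalg.solve(A, b)))
```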

Slide 47

Slide 47 text

Total Variation Regularization
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
TV$_\varepsilon$ regularization: $f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \tfrac{1}{2}\|\Phi f - y\|^2 + \lambda J_\varepsilon(f)$ (assuming $1 \notin \ker(\Phi)$)

Slide 48

Slide 48 text

Total Variation Regularization
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
TV$_\varepsilon$ regularization: $f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \tfrac{1}{2}\|\Phi f - y\|^2 + \lambda J_\varepsilon(f)$ (assuming $1 \notin \ker(\Phi)$)
Gradient descent: $f^{(k+1)} = f^{(k)} - \tau_k \nabla E(f^{(k)})$, with $\nabla E(f) = \Phi^*(\Phi f - y) + \lambda \nabla J_\varepsilon(f)$ and $\nabla J_\varepsilon(f) = -\mathrm{div}\!\left(\dfrac{\nabla f}{\|\nabla f\|_\varepsilon}\right)$
Convergence: requires $\tau \sim \varepsilon$.

Slide 49

Slide 49 text

Total Variation Regularization
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
TV$_\varepsilon$ regularization: $f^\star = \underset{f \in \mathbb{R}^N}{\mathrm{argmin}}\ E(f) = \tfrac{1}{2}\|\Phi f - y\|^2 + \lambda J_\varepsilon(f)$ (assuming $1 \notin \ker(\Phi)$)
Gradient descent: $f^{(k+1)} = f^{(k)} - \tau_k \nabla E(f^{(k)})$, with $\nabla E(f) = \Phi^*(\Phi f - y) + \lambda \nabla J_\varepsilon(f)$ and $\nabla J_\varepsilon(f) = -\mathrm{div}\!\left(\dfrac{\nabla f}{\|\nabla f\|_\varepsilon}\right)$
Convergence: requires $\tau \sim \varepsilon$.
Newton descent: $f^{(k+1)} = f^{(k)} - H_k^{-1}\nabla E(f^{(k)})$, where $H_k = \partial^2 E_\varepsilon(f^{(k)})$.
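A sketch (mine) of the TV$_\varepsilon$-regularized problem solved by plain gradient descent, with the inpainting mask playing the role of $\Phi$; the $\tau$ of order $\varepsilon$ mirrors the slide's remark, and $\lambda$, $\varepsilon$ and the iteration count are illustrative choices.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def grad_J_eps(f, eps):
    g = grad(f)
    return -div(g / np.sqrt(g[0] ** 2 + g[1] ** 2 + eps ** 2))

rng = np.random.default_rng(10)
n, lam, eps = 64, 0.05, 0.02
f_true = np.kron(rng.random((8, 8)) > 0.5, np.ones((8, 8))).astype(float)  # cartoon image
M = rng.random((n, n)) < 0.3
Phi = lambda f: np.where(M, 0.0, f)
y = Phi(f_true)

# E(f) = 0.5 ||Phi f - y||^2 + lam J_eps(f); explicit descent needs tau ~ eps.
tau = eps / (eps + 8 * lam)        # below 2 / (||Phi||^2 + 8 lam / eps)
f = y.copy()
for _ in range(2000):
    f = f - tau * (Phi(Phi(f) - y) + lam * grad_J_eps(f, eps))
print(np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```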

Slide 50

Slide 50 text

TV vs. Sobolev
(convergence plots versus iteration $k$, for large $\varepsilon$ and small $\varepsilon$)

Slide 51

Slide 51 text

Inpainting: Sobolev vs. TV
(figures: observations $y$, Sobolev reconstruction, total variation reconstruction)

Slide 52

Slide 52 text

Projected Gradient Descent
Noiseless problem: $f^\star \in \mathrm{argmin}_f\ J_\varepsilon(f)$ subject to $f \in \mathcal{H}$ $\quad(\star)$
Constraint set: $\mathcal{H} = \{ f\ ;\ \Phi f = y \}$
Projected gradient descent: $f^{(k+1)} = \mathrm{Proj}_{\mathcal{H}}\!\big(f^{(k)} - \tau_k \nabla J_\varepsilon(f^{(k)})\big)$
$\mathrm{Proj}_{\mathcal{H}}(f) = \underset{\Phi g = y}{\mathrm{argmin}}\ \|g - f\|^2 = f + \Phi^*(\Phi\Phi^*)^{-1}(y - \Phi f)$
Inpainting: $\mathrm{Proj}_{\mathcal{H}}(f)[i] = f[i]$ if $i \in M$, $y[i]$ otherwise.
Proposition: if $\nabla J_\varepsilon$ is $L$-Lipschitz and $0 < \tau_k < 2/L$, then $f^{(k)} \xrightarrow{k\to+\infty} f^\star$, a solution of $(\star)$.
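For noiseless inpainting the projection onto $\mathcal{H} = \{f\ ;\ \Phi f = y\}$ simply resets the observed pixels, as the slide states; below is a minimal sketch (mine) of the projected gradient iterations on $J_\varepsilon$, with the step bound $\tau < 2/L$ and $L \leq 8/\varepsilon$ for this discretization.

```python
import numpy as np

def grad(f):
    gx = np.vstack([f[1:] - f[:-1], np.zeros((1, f.shape[1]))])
    gy = np.hstack([f[:, 1:] - f[:, :-1], np.zeros((f.shape[0], 1))])
    return np.stack([gx, gy])

def div(p):
    px, py = p
    dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
    dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
    return dx + dy

def grad_J_eps(f, eps):
    g = grad(f)
    return -div(g / np.sqrt(g[0] ** 2 + g[1] ** 2 + eps ** 2))

rng = np.random.default_rng(11)
n, eps = 64, 0.02
f_true = np.kron(rng.random((8, 8)) > 0.5, np.ones((8, 8))).astype(float)
M = rng.random((n, n)) < 0.3                      # missing pixels
y = np.where(M, 0.0, f_true)                      # noiseless observations
proj_H = lambda f: np.where(M, f, y)              # keep f on M, clamp to y elsewhere

tau = 0.9 * eps / 4                               # tau < 2/L with L <= 8/eps
f = proj_H(y)
for _ in range(2000):
    f = proj_H(f - tau * grad_J_eps(f, eps))
print(np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```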

Slide 53

Slide 53 text

Conclusion
TV priors: non-quadratic $\Longrightarrow$ better edge recovery.
(figures: Sobolev vs. TV reconstructions)

Slide 54

Slide 54 text

Conclusion
TV priors: non-quadratic $\Longrightarrow$ better edge recovery.
Variational regularization $\Longleftrightarrow$ Optimization:
– Gradient descent.
– Newton.
– Projected gradient.
– Conjugate gradient.
Non-smooth optimization?
(figures: Sobolev vs. TV reconstructions)