Slide 1

© J. Malick
Applied mathematics, applicable mathematics, and an application to imaging
Gabriel Peyré
www.numerical-tours.com

Slide 2

Signals, Images and More

Slide 8

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization

Slide 9

Orthogonal Decompositions
Continuous signal/image $f \in L^2([0,1]^d)$.
Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$.

Slide 10

Orthogonal Decompositions
Continuous signal/image $f \in L^2([0,1]^d)$.
Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
$f = \sum_m \langle f, \psi_m \rangle \psi_m$, $\quad \|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$

Slide 11

Orthogonal Decompositions
Continuous signal/image $f \in L^2([0,1]^d)$.
Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
$f = \sum_m \langle f, \psi_m \rangle \psi_m$, $\quad \|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$
(figure: the signal $f$ and basis atoms $\psi_m$)
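Aside (not on the slides): both identities can be checked numerically with any discrete orthonormal basis. A minimal sketch, using scipy's orthonormal DCT-II as a stand-in for $\{\psi_m\}_m$:

```python
# Minimal sketch: orthogonal decomposition and Parseval's identity,
# with scipy's orthonormal DCT-II as a stand-in basis {psi_m}_m.
import numpy as np
from scipy.fft import dct, idct

N = 256
f = np.cumsum(np.random.randn(N))                # stand-in discrete signal

a = dct(f, norm='ortho')                         # a[m] = <f, psi_m>
f_rec = idct(a, norm='ortho')                    # f = sum_m <f, psi_m> psi_m

print(np.allclose(f, f_rec))                     # True: perfect reconstruction
print(np.allclose(np.sum(f**2), np.sum(a**2)))   # True: ||f||^2 = sum_m |<f, psi_m>|^2
```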

Slide 12

1-D Wavelet Basis
Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$
Position $n$, scale $2^j$, $m = (n, j)$.
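A hedged sketch of these atoms, taking the Haar wavelet as the mother wavelet $\psi$ (the slides do not say which wavelet is used): two atoms at different scales and positions have unit norm and are orthogonal.

```python
# Sketch of the atoms psi_{j,n}(x) = 2^{-j/2} psi(2^{-j} x - n), with the
# Haar mother wavelet as a stand-in for the unspecified psi.
import numpy as np

def haar(t):
    # Mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def psi(j, n, x):
    # Position 2^j n, scale 2^j (j < 0 gives fine scales).
    return 2.0 ** (-j / 2) * haar(2.0 ** (-j) * x - n)

dx = 1 / 1024
x = np.arange(1024) * dx + dx / 2                 # midpoint grid on [0, 1]
a, b = psi(-2, 1, x), psi(-3, 2, x)               # two atoms, overlapping supports
print(np.sum(a * a) * dx)                         # ~1.0 (unit norm)
print(np.sum(a * b) * dx)                         # ~0.0 (orthogonality)
```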

Slide 14

2-D Fourier Basis
Basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$.
Tensor product: $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1) \, \psi_{m_2}(x_2)$
Basis $\{\psi_{m_1,m_2}(x_1, x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$.
(figure: atoms $\psi_{m_1}$, $\psi_{m_2}$ and their product $\psi_{m_1,m_2}$)

Slide 15

2-D Fourier Basis
Basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$.
Tensor product: $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1) \, \psi_{m_2}(x_2)$
Basis $\{\psi_{m_1,m_2}(x_1, x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$.
Fourier transform: $f(x) \mapsto \langle f, \psi_{m_1,m_2} \rangle$
(figure: image $f(x)$ and its Fourier coefficients)
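A small numerical illustration of the tensor-product construction (unit-norm discrete Fourier atoms on an $N$-point grid are an assumption here): the coefficient $\langle f, \psi_{m_1,m_2} \rangle$ computed with the separable atom matches the normalized 2-D FFT.

```python
# Sketch: a 2-D Fourier atom is the tensor product of two 1-D atoms,
# and <f, psi_{m1,m2}> matches the (normalized) 2-D FFT coefficient.
import numpy as np

N = 64
def atom(m):
    x = np.arange(N) / N
    return np.exp(2j * np.pi * m * x) / np.sqrt(N)   # unit-norm 1-D atom

m1, m2 = 3, 7
psi_2d = np.outer(atom(m1), atom(m2))                # separable 2-D atom

f = np.random.randn(N, N)
coef_direct = np.sum(f * np.conj(psi_2d))            # <f, psi_{m1,m2}>
coef_fft = np.fft.fft2(f)[m1, m2] / N                # orthonormal normalization
print(np.allclose(coef_direct, coef_fft))            # True
```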

Slide 16

2-D Wavelet Basis
3 elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$.
Orthogonal basis of $L^2([0,1]^2)$:
$\psi^k_{j,n}(x) = 2^{-j} \, \psi^k(2^{-j} x - n)$, for $k = H, V, D$, $j < 0$, $2^j n \in [0,1]^2$.
(figure: $\psi^H(x)$, $\psi^V(x)$, $\psi^D(x)$)

Slide 18

Example of Wavelet Decomposition
Wavelet transform: $f(x) \mapsto \langle f, \psi^k_{j,n} \rangle$, indexed by $(j, n, k)$.
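For experimenting, a 2-D wavelet decomposition along these lines can be computed with PyWavelets (a stand-in for the transform behind the slides; the Haar filter and level are assumptions):

```python
# Sketch of a 2-D wavelet decomposition/reconstruction with PyWavelets.
import numpy as np
import pywt

f = np.random.randn(256, 256)                 # stand-in image
coeffs = pywt.wavedec2(f, 'haar', level=4)    # [cA4, (cH4, cV4, cD4), ..., (cH1, cV1, cD1)]

# Each triplet holds the k = H, V, D orientations at one scale j.
print([d[0].shape for d in coeffs[1:]])

f_rec = pywt.waverec2(coeffs, 'haar')
print(np.allclose(f, f_rec))                  # True: perfect reconstruction
```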

Slide 19

Sparse Approximation in a Basis

Slide 22

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization

Slide 23

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$

Slide 24

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
(figure: quantization bins of width $T$, $\tilde a[m]$ vs. $a[m]$)

Slide 25

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
Entropy coding: exploit statistical redundancy (many 0's).
(figure: quantization bins of width $T$, $\tilde a[m]$ vs. $a[m]$)

Slide 26

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
Entropy coding: exploit statistical redundancy (many 0's).
Decoding: recover $q[m] \in \mathbb{Z}$ from the coded stream.
(figure: quantization bins of width $T$, $\tilde a[m]$ vs. $a[m]$)

Slide 27

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
Entropy coding: exploit statistical redundancy (many 0's).
Decoding: recover $q[m] \in \mathbb{Z}$ from the coded stream.
Dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$
(figure: quantization bins of width $T$, $\tilde a[m]$ vs. $a[m]$)

Slide 28

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
Entropy coding: exploit statistical redundancy (many 0's).
Decoding: recover $q[m] \in \mathbb{Z}$ from the coded stream.
Dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$
Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \, \psi_m$
(figure: zoom on $f$ vs. $f_R$ at $R = 0.2$ bit/pixel)

Slide 29

Compression by Transform-coding
Image $f$ → forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$
Entropy coding: exploit statistical redundancy (many 0's).
Decoding: recover $q[m] \in \mathbb{Z}$ from the coded stream.
Dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$
Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \, \psi_m$
Theorem: $\|f - f_M\|^2 = O(M^{-\alpha}) \;\Rightarrow\; \|f - f_R\|^2 = O(\log^\alpha(R) \, R^{-\alpha})$
(figure: zoom on $f$ vs. $f_R$ at $R = 0.2$ bit/pixel)
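The quantization/dequantization pair is easy to sketch and check in code (the error bounds in the comments follow from the formulas, not from the slides):

```python
# Sketch of the quantization / dequantization step of transform coding.
import numpy as np

def quantize(a, T):
    # q[m] = sign(a[m]) floor(|a[m]| / T), an integer
    return np.sign(a) * np.floor(np.abs(a) / T)

def dequantize(q, T):
    # a~[m] = sign(q[m]) (|q[m]| + 1/2) T; the zero bin stays at 0
    return np.sign(q) * (np.abs(q) + 0.5) * T

a = np.random.randn(100000)                   # stand-in transform coefficients
T = 0.5
q = quantize(a, T)
a_tilde = dequantize(q, T)

print(np.max(np.abs(a - a_tilde)) < T)        # error < T in the zero bin, <= T/2 elsewhere
print(np.mean(q == 0))                        # many zeros: entropy coding pays off
```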

Slide 30

Noise in Images

Slide 31

Denoising
(figure: noisy image $f$)

Slide 32

Denoising
$f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m$
Thresholding: $\tilde f = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$
(figure: noisy $f$ → thresholded $\tilde f$)

Slide 33

Denoising
$f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m$
Thresholding: $\tilde f = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$
In practice: $T \approx 3\sigma$.
Theorem: for $T = \sigma \sqrt{2 \log(N)}$, if $\|f_0 - f_{0,M}\|^2 = O(M^{-\alpha})$ then $E(\|\tilde f - f_0\|^2) = O(\sigma^{\frac{2\alpha}{\alpha+1}})$.
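A hedged sketch of this estimator (DCT as a stand-in orthogonal basis; the slides use wavelets), with the universal threshold $T = \sigma \sqrt{2 \log N}$:

```python
# Sketch: denoising by hard thresholding in an orthogonal basis.
import numpy as np
from scipy.fft import dct, idct

N = 1024
sigma = 0.5
f0 = np.cumsum(np.random.randn(N)) / np.sqrt(N)    # stand-in "regular" signal
f = f0 + sigma * np.random.randn(N)                # noisy observations

a = dct(f, norm='ortho')                           # coefficients <f, psi_m>
T = sigma * np.sqrt(2 * np.log(N))                 # universal threshold
f_tilde = idct(a * (np.abs(a) > T), norm='ortho')  # keep only |<f, psi_m>| > T

print(np.sum((f - f0) ** 2))                       # noise energy ~ N sigma^2
print(np.sum((f_tilde - f0) ** 2))                 # typically much smaller
```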

Slide 34

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization

Slide 35

Inverse Problems
Recovering $f_0 \in \mathbb{R}^N$ from noisy observations: $y = K f_0 + w \in \mathbb{R}^P$
$K : \mathbb{R}^N \to \mathbb{R}^P$ with $P \ll N$ (missing information)

Slide 36

Inverse Problems
Recovering $f_0 \in \mathbb{R}^N$ from noisy observations: $y = K f_0 + w \in \mathbb{R}^P$
$K : \mathbb{R}^N \to \mathbb{R}^P$ with $P \ll N$ (missing information)
Examples: inpainting, super-resolution, ...
(figure: original $f_0$ and examples of degraded observations $K f_0$)

Slide 37

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$

Slide 38

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ → image $f_0 = \Psi x_0 \in \mathbb{R}^Q$

Slide 39

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ → image $f_0 = \Psi x_0 \in \mathbb{R}^Q$ → observations $y = K f_0 + w \in \mathbb{R}^P$ (operator $K$, noise $w$)

Slide 40

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ → image $f_0 = \Psi x_0 \in \mathbb{R}^Q$ → observations $y = K f_0 + w \in \mathbb{R}^P$ (operator $K$, noise $w$)
Equivalent model: $y = K f_0 + w = \Phi x_0 + w \in \mathbb{R}^P$, with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.

Slide 41

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ → image $f_0 = \Psi x_0 \in \mathbb{R}^Q$ → observations $y = K f_0 + w \in \mathbb{R}^P$ (operator $K$, noise $w$)
Equivalent model: $y = K f_0 + w = \Phi x_0 + w \in \mathbb{R}^P$, with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$.

Slide 42

L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ → image $f_0 = \Psi x_0 \in \mathbb{R}^Q$ → observations $y = K f_0 + w \in \mathbb{R}^P$ (operator $K$, noise $w$)
Equivalent model: $y = K f_0 + w = \Phi x_0 + w \in \mathbb{R}^P$, with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$.
Problems: $\Phi^{-1} w$ can explode, and $\Phi$ can even be non-invertible.
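A toy illustration of the first failure mode (the matrix here is hypothetical, built to be badly conditioned): small noise $w$ becomes a huge error in $\Phi^{-1} y$.

```python
# Toy sketch: naive inversion amplifies noise when Phi is ill-conditioned.
import numpy as np

rng = np.random.default_rng(0)
N = 100
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
s = np.logspace(0, -8, N)                     # singular values down to 1e-8
Phi = U @ np.diag(s) @ V.T

x0 = rng.standard_normal(N)
w = 1e-3 * rng.standard_normal(N)             # small noise
y = Phi @ x0 + w

x_naive = np.linalg.solve(Phi, y)             # = x0 + Phi^{-1} w
print(np.linalg.norm(x_naive - x0))           # huge: roughly 1e-3 / 1e-8
```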

Slide 43

Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.

Slide 44

Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.
Example: variational methods
$x(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N} \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$
(data fidelity + regularity)

Slide 45

Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.
Example: variational methods
$x(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N} \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$
(data fidelity + regularity)
Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(x_0)$ of $x_0$.

Slide 46

Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.
Example: variational methods
$x(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N} \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$
(data fidelity + regularity)
Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(x_0)$ of $x_0$.
No noise: $\lambda \to 0^+$; minimize $x(y) \in \operatorname{argmin}_{\Phi x = y} J(x)$.
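As a concrete (non-sparse) instance of this variational estimator, take the quadratic regularizer $J(x) = \tfrac{1}{2}\|x\|^2$ (Tikhonov regularization, not on the slides), which admits a closed form; varying $\lambda$ exposes the fidelity/regularity tradeoff:

```python
# Sketch: variational regularization with J(x) = ||x||^2 / 2 (Tikhonov).
# Closed form: x(y) = (Phi^T Phi + lambda Id)^{-1} Phi^T y.
import numpy as np

rng = np.random.default_rng(1)
P, N = 50, 100                                # P << N: missing information
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = rng.standard_normal(N)
y = Phi @ x0 + 0.05 * rng.standard_normal(P)

for lam in [1e-4, 1e-2, 1.0]:
    x = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
    # Small lambda: good data fit, large ||x||; large lambda: the reverse.
    print(lam, np.linalg.norm(y - Phi @ x), np.linalg.norm(x))
```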

Slide 47

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$

Slide 48

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$
Sparse regularization: $x^\star \in \operatorname{argmin}_x \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$

Slide 49

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$
Sparse regularization: $x^\star \in \operatorname{argmin}_x \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$
Denoising in an ortho-basis: $K = \mathrm{Id}$, $\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$.

Slide 50

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$
Sparse regularization: $x^\star \in \operatorname{argmin}_x \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$
Denoising in an ortho-basis: $K = \mathrm{Id}$, $\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$, so the problem becomes
$\min_x \|\Phi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde x_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$
where $\tilde x = \Phi^* y = \{\langle y, \psi_m \rangle\}_m$ and $T^2 = 2\lambda$.

Slide 51

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$
Sparse regularization: $x^\star \in \operatorname{argmin}_x \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$
Denoising in an ortho-basis: $K = \mathrm{Id}$, $\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$, so the problem becomes
$\min_x \|\Phi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde x_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$
where $\tilde x = \Phi^* y = \{\langle y, \psi_m \rangle\}_m$ and $T^2 = 2\lambda$.
Solution (hard thresholding): $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, $0$ otherwise.
(figure: $y$, $\Phi^* y$, $x^\star$)
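A quick numerical check (not from the slides) that hard thresholding does minimize the per-coordinate objective $|\tilde x_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$:

```python
# Brute-force check that the minimizer of (x~ - x)^2 + T^2 [x != 0]
# is x~ when |x~| > T and 0 otherwise (hard thresholding).
import numpy as np

T = 1.0
grid = np.linspace(-4, 4, 100001)               # includes x = 0 exactly
for x_tilde in [0.3, 0.9, 1.1, 2.5]:
    cost = (x_tilde - grid) ** 2 + T ** 2 * (grid != 0)
    brute = grid[np.argmin(cost)]
    closed = x_tilde if abs(x_tilde) > T else 0.0
    print(x_tilde, brute, closed)               # brute-force matches closed form
```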

Slide 52

Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \;;\; x_m \neq 0\}$
Sparse regularization: $x^\star \in \operatorname{argmin}_x \; \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$
Denoising in an ortho-basis: $K = \mathrm{Id}$, $\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$, so the problem becomes
$\min_x \|\Phi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde x_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$
where $\tilde x = \Phi^* y = \{\langle y, \psi_m \rangle\}_m$ and $T^2 = 2\lambda$.
Solution (hard thresholding): $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, $0$ otherwise.
Non-orthogonal $\Phi$: NP-hard to solve.

Slide 53

Convex Relaxation: L1 Prior
Image with 2 pixels $(x_1, x_2)$:
$J_0(x) = \#\{m \;;\; x_m \neq 0\}$
$J_0(x) = 0$: null image. $J_0(x) = 1$: sparse image. $J_0(x) = 2$: non-sparse image.

Slide 54

Convex Relaxation: L1 Prior
Image with 2 pixels $(x_1, x_2)$:
$J_0(x) = \#\{m \;;\; x_m \neq 0\}$
$J_0(x) = 0$: null image. $J_0(x) = 1$: sparse image. $J_0(x) = 2$: non-sparse image.
$\ell^q$ priors: $J_q(x) = \sum_m |x_m|^q$ (convex for $q \geq 1$)
(figure: unit balls of $J_q$ for $q = 0, 1/2, 1, 3/2, 2$)

Slide 55

Convex Relaxation: L1 Prior
Image with 2 pixels $(x_1, x_2)$:
$J_0(x) = \#\{m \;;\; x_m \neq 0\}$
$J_0(x) = 0$: null image. $J_0(x) = 1$: sparse image. $J_0(x) = 2$: non-sparse image.
$\ell^q$ priors: $J_q(x) = \sum_m |x_m|^q$ (convex for $q \geq 1$)
Sparse $\ell^1$ prior: $J_1(x) = \sum_m |x_m|$
(figure: unit balls of $J_q$ for $q = 0, 1/2, 1, 3/2, 2$)
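A tiny experiment (the vectors are a made-up example) shows why $q < 2$ favors sparsity: at equal $\ell^2$ energy, the sparse vector has the smaller $J_q$:

```python
# J_q on a sparse and a non-sparse vector with the same l2 energy.
import numpy as np

def Jq(x, q):
    return np.sum(np.abs(x) ** q)

sparse = np.array([np.sqrt(2), 0.0])          # one nonzero entry
dense = np.array([1.0, 1.0])                  # same energy, two entries
for q in [0.5, 1.0, 2.0]:
    print(q, Jq(sparse, q), Jq(dense, q))     # equal at q = 2, sparse wins below
```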

Slide 56

Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$
(figure: $\ell^1$ ball touching the affine set $\Phi x = y$)

Slide 57

Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|^2$
(figure: $\ell^1$ vs. $\ell^2$ balls touching the affine set $\Phi x = y$)

Slide 58

Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|^2$
The $\ell^1$ problem is a convex linear program:
• Interior points, cf. [Chen, Donoho, Saunders], "basis pursuit".
• Douglas-Rachford splitting, see [Combettes, Pesquet].
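A hedged sketch of the Douglas-Rachford approach to this program (generic DR iteration, not the code behind the slides; dimensions and test signal are made up), alternating the prox of $\|\cdot\|_1$ (soft thresholding) with the projection onto $\{x : \Phi x = y\}$; the $\ell^2$ solution is shown for contrast:

```python
# Hedged sketch: Douglas-Rachford splitting for min ||x||_1 s.t. Phi x = y.
import numpy as np

rng = np.random.default_rng(2)
P, N, S = 40, 100, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
y = Phi @ x0                                   # noiseless measurements

pinv = Phi.T @ np.linalg.inv(Phi @ Phi.T)      # used by the affine projection

def proj(x):                                   # projection onto {x : Phi x = y}
    return x + pinv @ (y - Phi @ x)

def soft(x, g):                                # prox of g * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - g, 0)

z, gamma = np.zeros(N), 1.0
for _ in range(2000):
    x = soft(z, gamma)                         # prox of the l1 term
    z = z + proj(2 * x - z) - x                # Douglas-Rachford update

x_l1 = proj(x)                                 # feasible l1 solution
x_l2 = pinv @ y                                # min ||x||_2 s.t. Phi x = y
print(np.linalg.norm(x_l1 - x0))               # ~0: l1 recovers the sparse x0
print(np.linalg.norm(x_l2 - x0))               # O(1): l2 does not
```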

Slide 59

Inpainting Problem
$y = K f_0 + w$, where $(K f)_i = 0$ if $i \in \Omega$ and $(K f)_i = f_i$ if $i \notin \Omega$.
(figures: original $f_0$; measures $y$)

Slide 60

Inpainting Problem
$y = K f_0 + w$, where $(K f)_i = 0$ if $i \in \Omega$ and $(K f)_i = f_i$ if $i \notin \Omega$.
Recovered: $f^\star = \Psi x^\star$
(figures: original $f_0$; measures $y$; recovered $f^\star$)
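The masking operator $K$ is one line of code; a small sketch (mask density and image are made up), with the recovery $f^\star = \Psi x^\star$ left to the $\ell^1$ solver sketched above:

```python
# Sketch of the inpainting operator K: zero out the pixels in Omega.
import numpy as np

rng = np.random.default_rng(3)
f0 = rng.standard_normal((64, 64))            # stand-in image
Omega = rng.random((64, 64)) < 0.3            # missing region (~30% of pixels)

def K(f):
    # (K f)_i = 0 if i in Omega, f_i otherwise
    return np.where(Omega, 0.0, f)

y = K(f0) + 0.01 * rng.standard_normal(f0.shape)
print(np.mean(Omega))                          # fraction of missing pixels
# Recovery: f* = Psi x*, with x* the l1-regularized solution for Phi = K Psi.
```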

Slide 61

Conclusion

Slide 62

Conclusion
• Sparse regularization: denoising → inverse problems.

Slide 63

Conclusion
• Sparse regularization: denoising → inverse problems.
• Mathematical challenges:
  – Theoretical performance guarantees.
  – Fast algorithms.