
Des mathématiques appliquées, appliquables, et une application à l'imagerie (Applied mathematics, applicable mathematics, and an application to imaging)

Talk in honor of Michel Pierre

Gabriel Peyré

June 27, 2014

Transcript

  1. Des mathématiques appliquées, appliquables, et une application à l'imagerie — Gabriel Peyré, www.numerical-tours.com. (© J. Malick)
  2–4. Orthogonal Decompositions. Continuous signal/image $f \in L^2([0,1]^d)$, orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
    $f = \sum_m \langle f, \psi_m \rangle \, \psi_m$, with $\|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$.
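
A minimal numerical sketch of the decomposition above (not from the talk): in the discrete setting, the columns of an orthogonal matrix play the role of the basis $\{\psi_m\}_m$, and numpy (assumed available) can check reconstruction and energy conservation.

```python
import numpy as np

# Discrete analogue of the orthogonal decomposition: the columns q_m of Q form an
# orthonormal basis of R^N, so f = sum_m <f, q_m> q_m and ||f||^2 = sum_m |<f, q_m>|^2.
N = 64
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthonormal basis (illustrative)
f = rng.standard_normal(N)                        # discrete "signal"

coeffs = Q.T @ f          # inner products <f, q_m>
f_rec = Q @ coeffs        # resynthesis from the coefficients

assert np.allclose(f, f_rec)                        # perfect reconstruction
assert np.isclose(np.sum(f**2), np.sum(coeffs**2))  # energy conservation (Parseval)
```
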
  5–6. 1-D Wavelet Basis. Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$, position $n$, scale $2^j$, $m = (n, j)$.
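
A small sketch of these dilated and translated atoms (not from the deck): the slide keeps the mother wavelet generic, so the Haar wavelet is used here purely as an example, with numpy assumed.

```python
import numpy as np

def haar(t):
    """Haar mother wavelet (one possible psi; the slide keeps psi generic)."""
    return np.where((0 <= t) & (t < 0.5), 1.0, np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

def wavelet_atom(x, j, n, psi=haar):
    """psi_{j,n}(x) = 2^{-j/2} psi((x - 2^j n) / 2^j): position n, scale 2^j."""
    return 2.0 ** (-j / 2) * psi((x - 2.0 ** j * n) / 2.0 ** j)

# Two atoms at scale 2^{-3}: unit norm and mutually orthogonal (checked on a fine grid).
x = np.linspace(0, 1, 2 ** 12, endpoint=False)
dx = x[1] - x[0]
a = wavelet_atom(x, -3, 2)
b = wavelet_atom(x, -3, 3)
print(np.sum(a * a) * dx)   # ~1.0 (unit norm)
print(np.sum(a * b) * dx)   # ~0.0 (orthogonality: disjoint supports here)
```
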
  7–8. 2-D Fourier Basis. From a basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$, the tensor products $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1) \, \psi_{m_2}(x_2)$ form a basis $\{\psi_{m_1,m_2}(x_1, x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$. [Figure: image $f(x)$ and its Fourier coefficients $\langle f, \psi_{m_1,m_2} \rangle$ (Fourier transform).]
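
A discrete sketch of the tensor-product construction (illustrative, not from the talk): a 2-D Fourier atom is the outer product of two 1-D atoms, and its inner product with an image matches the corresponding 2-D FFT coefficient; numpy assumed.

```python
import numpy as np

# A 2-D basis atom is the (tensor) outer product of two 1-D atoms:
# psi_{m1,m2}(x1, x2) = psi_{m1}(x1) * psi_{m2}(x2).  Discrete Fourier atoms are used
# here as the 1-D basis, so the inner product matches a 2-D FFT coefficient.
N = 16
n = np.arange(N)

def fourier_atom(m):
    return np.exp(2j * np.pi * m * n / N) / np.sqrt(N)  # orthonormal 1-D DFT atom

m1, m2 = 3, 5
psi_2d = np.outer(fourier_atom(m1), fourier_atom(m2))    # tensor-product atom

f = np.random.default_rng(1).standard_normal((N, N))     # discrete "image"
coeff_direct = np.vdot(psi_2d, f)                        # <f, psi_{m1,m2}> (conjugates psi)
coeff_fft = np.fft.fft2(f)[m1, m2] / N                   # same coefficient via the 2-D FFT
print(np.allclose(coeff_direct, coeff_fft))              # True
```
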
  9–10. 2-D Wavelet Basis. Three elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$ generate an orthogonal basis of $L^2([0,1]^2)$: $\bigl\{ \psi^k_{j,n}(x) = 2^{-j} \psi^k(2^{-j} x - n) \bigr\}_{k = H, V, D, \; j < 0, \; 2^j n \in [0,1]^2}$.
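
In the separable discrete transform, the three orientations H, V, D show up as the three detail sub-bands at each scale. A short sketch with the PyWavelets package (an assumption; it is not the tool used in the talk):

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

# Separable 2-D wavelet transform: each scale yields three detail sub-bands,
# corresponding to the horizontal, vertical and diagonal wavelets psi^H, psi^V, psi^D.
img = np.random.default_rng(2).standard_normal((64, 64))
coeffs = pywt.wavedec2(img, wavelet='haar', level=3)
approx = coeffs[0]            # coarse approximation
cH, cV, cD = coeffs[1]        # H/V/D details at the coarsest detail scale
print(approx.shape, cH.shape, cV.shape, cD.shape)   # four 8x8 sub-bands
rec = pywt.waverec2(coeffs, wavelet='haar')
print(np.allclose(rec, img))  # orthogonal wavelets give perfect reconstruction
```
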
  11–16. Compression by Transform-coding. Image $f$:
    – forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$;
    – quantization (bins of width $T$): $q[m] = \mathrm{sign}(a[m]) \, \lfloor |a[m]| / T \rfloor \in \mathbb{Z}$;
    – entropic coding / decoding: exploit the statistical redundancy (many 0's);
    – dequantization: $\tilde{a}[m] = \mathrm{sign}(q[m]) \, \bigl( |q[m]| + \tfrac{1}{2} \bigr) \, T$;
    – backward transform: $f_R = \sum_{m \in I_T} \tilde{a}[m] \, \psi_m$.
    Theorem: $\|f - f_M\|^2 = O(M^{-\alpha}) \;\Longrightarrow\; \|f - f_R\|^2 = O(\log^{\alpha}(R) \, R^{-\alpha})$.
    [Figure: zoom on $f$ and on $f_R$, $R = 0.2$ bit/pixel.]
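
A sketch of the quantization/dequantization step only (the entropy coder is not modelled), with numpy assumed and synthetic Laplace-distributed coefficients standing in for the $a[m]$:

```python
import numpy as np

def quantize(a, T):
    """q[m] = sign(a[m]) * floor(|a[m]| / T): integer bin indices (many are 0)."""
    return (np.sign(a) * np.floor(np.abs(a) / T)).astype(int)

def dequantize(q, T):
    """a~[m] = sign(q[m]) * (|q[m]| + 1/2) * T: mid-point of the quantization bin."""
    return np.sign(q) * (np.abs(q) + 0.5) * T

rng = np.random.default_rng(3)
a = rng.laplace(scale=1.0, size=10_000)   # stand-in for transform coefficients a[m]
T = 1.0
q = quantize(a, T)
a_tilde = dequantize(q, T)

print("fraction of zero bins:", np.mean(q == 0))       # the redundancy used by the coder
print("max |a - a~|:", np.max(np.abs(a - a_tilde)))    # at most T (worst for the zero bin)
```
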
  17–18. Denoising. Noisy $f$: $f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m$; thresholded estimate $\tilde{f} = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$.
    Theorem: if $\|f_0 - f_{0,M}\|^2 = O(M^{-\alpha})$, then $\mathbb{E}(\|\tilde{f} - f_0\|^2) = O(\sigma^{\frac{2\alpha}{\alpha+1}})$ for $T = \sigma \sqrt{2 \log(N)}$. In practice: $T \approx 3\sigma$.
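
A toy hard-thresholding denoiser (not from the deck): the orthonormal DCT from scipy stands in for the basis $\{\psi_m\}_m$, and the test signal is an arbitrary smooth choice.

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal DCT as a stand-in orthogonal basis

def hard_threshold(c, T):
    """Keep only the coefficients whose magnitude exceeds T."""
    return np.where(np.abs(c) > T, c, 0.0)

rng = np.random.default_rng(4)
N, sigma = 1024, 0.1
t = np.linspace(0, 1, N)
f0 = np.sin(2 * np.pi * 3 * t)               # smooth signal, compressible in the DCT basis
f = f0 + sigma * rng.standard_normal(N)      # noisy observation

T = sigma * np.sqrt(2 * np.log(N))           # theoretical threshold (~3*sigma in practice)
coeffs = dct(f, norm='ortho')                # coefficients <f, psi_m>
f_tilde = idct(hard_threshold(coeffs, T), norm='ortho')

print("noisy error:   ", np.linalg.norm(f - f0))
print("denoised error:", np.linalg.norm(f_tilde - f0))   # should be markedly smaller
```
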
  19–20. Inverse Problems. Recovering $f_0 \in \mathbb{R}^N$ from noisy observations $y = K f_0 + w \in \mathbb{R}^P$, where $K : \mathbb{R}^N \to \mathbb{R}^P$ with $P \ll N$ (missing information). Examples: inpainting, super-resolution, ... [Figure: original $f_0$ and measurements $K f_0$.]
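
A minimal example of such an operator (illustrative, numpy assumed): an inpainting-style $K$ that keeps only a random subset of the samples, so that $P \ll N$.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 256
f0 = rng.standard_normal(N)                  # unknown signal
kept = rng.random(N) > 0.7                   # keep roughly 30% of the samples
K = np.eye(N)[kept]                          # K : R^N -> R^P built by deleting rows of Id
w = 0.01 * rng.standard_normal(K.shape[0])   # measurement noise

y = K @ f0 + w                               # available observations
print("P x N =", K.shape)                    # P << N: information is missing
```
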
  21–23. L1 Regularization. Image $f_0 = \Psi x_0 \in \mathbb{R}^Q$, coefficients $x_0 \in \mathbb{R}^N$, observations $y = K f_0 + w \in \mathbb{R}^P$. Equivalent model: $y = K \Psi x_0 + w = \Phi x_0 + w \in \mathbb{R}^P$ with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$. If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$. Problems: $\Phi^{-1} w$ can "explode"; $\Phi$ can even be non-invertible.
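
A quick numerical illustration of why naive inversion fails (not from the talk; numpy assumed, with an artificially ill-conditioned $\Phi$):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthonormal synthesis dictionary
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
s = np.logspace(0, -6, N)                           # singular values decaying to 1e-6
K = U @ np.diag(s) @ U.T                            # ill-conditioned (blur-like) operator
Phi = K @ Psi                                       # Phi = K Psi: invertible but unstable

x0 = rng.standard_normal(N)
w = 1e-3 * rng.standard_normal(N)
y = Phi @ x0 + w

x_naive = np.linalg.solve(Phi, y)                   # Phi^{-1} y = x0 + Phi^{-1} w
print("||Phi^{-1} w|| =", np.linalg.norm(x_naive - x0))  # noise amplified by up to 1e6
```
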
  24–26. Inverse Problem Regularization. Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$. The estimator $x(y)$ depends only on the observations $y$ and on a parameter $\lambda$. Example: variational methods,
    $x_\lambda(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda \, J(x)$ (data fidelity + regularity).
    Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(x_0)$ of $x_0$. No noise: $\lambda \to 0^+$, minimize $x(y) \in \operatorname{argmin}_{\Phi x = y} J(x)$.
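
A sketch of the variational estimator for one concrete prior (my choice, not the talk's): the quadratic $J(x) = \tfrac{1}{2}\|x\|^2$ admits a closed form, which makes the role of $\lambda$ easy to see; numpy assumed.

```python
import numpy as np

def variational_estimator(Phi, y, lam):
    """x_lambda(y) = argmin_x 0.5*||y - Phi x||^2 + lam*J(x) for the quadratic prior
    J(x) = 0.5*||x||^2 (chosen here only because it gives a closed-form solution)."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

rng = np.random.default_rng(7)
N = 50
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
Phi = U @ np.diag(np.logspace(0, -6, N)) @ U.T      # ill-conditioned forward operator
x0 = rng.standard_normal(N)
y = Phi @ x0 + 1e-3 * rng.standard_normal(N)        # noisy observations

# lambda trades data fidelity against regularity: too small and the noise blows up,
# too large and the data are ignored.
for lam in [0.0, 1e-6, 1e-3]:
    x_lam = variational_estimator(Phi, y, lam)
    print(f"lambda={lam:.0e}   ||x_lam - x0|| = {np.linalg.norm(x_lam - x0):.2f}")
```
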
  27–32. Sparse Priors. "Ideal" sparsity: $J_0(x) = \# \{ m \; ; \; x_m \neq 0 \}$. Sparse regularization: $x^\star \in \operatorname{argmin}_x \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda \, J_0(x)$.
    Denoising in an ortho-basis ($K = \mathrm{Id}$, $\Phi = \Psi$ with $\Psi \Psi^* = \Psi^* \Psi = \mathrm{Id}$): the problem becomes
    $\min_x \|\Psi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde{x}_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$, where $\tilde{x} = \Psi^* y = \{ \langle y, \psi_m \rangle \}_m$ and $T^2 = 2\lambda$.
    Solution (hard thresholding): $x^\star_m = \tilde{x}_m$ if $|\tilde{x}_m| > T$, and $x^\star_m = 0$ otherwise.
    Non-orthogonal $\Phi$: NP-hard to solve. [Figure: $y$, $\Psi^* y$, $x^\star$.]
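
A numerical sanity check of the hard-thresholding formula (not from the deck; numpy assumed, using the slide's relation $T^2 = 2\lambda$), compared against brute-force enumeration of all supports on a tiny problem:

```python
import numpy as np

def sparse_denoise_ortho(y, Psi, lam):
    """Minimizer of 0.5*||y - Psi x||^2 + lam*J0(x) for orthogonal Psi:
    hard-threshold the coefficients x~ = Psi^T y at T = sqrt(2*lam)."""
    T = np.sqrt(2 * lam)
    x_tilde = Psi.T @ y
    return np.where(np.abs(x_tilde) > T, x_tilde, 0.0)

rng = np.random.default_rng(8)
N, lam = 6, 0.3
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthogonal basis
y = rng.standard_normal(N)

def objective(x):
    return 0.5 * np.sum((y - Psi @ x) ** 2) + lam * np.count_nonzero(x)

# Brute force over all 2^N supports (the quadratic part decouples, so the best x on a
# given support is x~ restricted to that support).
x_tilde = Psi.T @ y
best = min(objective(np.where(np.array(mask, dtype=bool), x_tilde, 0.0))
           for mask in np.ndindex(*(2,) * N))
print(np.isclose(objective(sparse_denoise_ortho(y, Psi, lam)), best))   # True
```
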
  33–35. Convex Relaxation: L1 Prior. Image with 2 pixels: $J_0(x) = \# \{ m \; ; \; x_m \neq 0 \}$, so $J_0(x) = 0$ for the null image, $J_0(x) = 1$ for a sparse image, $J_0(x) = 2$ for a non-sparse image.
    $\ell^q$ priors: $J_q(x) = \sum_m |x_m|^q$ (convex for $q \geq 1$). Sparse $\ell^1$ prior: $J_1(x) = \sum_m |x_m|$. [Figure: level sets of $J_q$ in the $(x_1, x_2)$ plane for $q = 0, 1/2, 1, 3/2, 2$.]
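
A two-pixel evaluation of these priors (illustrative only, numpy assumed):

```python
import numpy as np

def Jq(x, q):
    """J_q(x) = sum_m |x_m|^q; the counting prior J_0 is the q -> 0 limit."""
    x = np.asarray(x, dtype=float)
    return np.count_nonzero(x) if q == 0 else np.sum(np.abs(x) ** q)

# Two-pixel images: J_0 counts the nonzero pixels, J_1 is its convex surrogate.
for x in ([0.0, 0.0], [0.0, 2.0], [1.0, 2.0]):
    print(x, "  J0 =", Jq(x, 0), "  J1/2 =", round(Jq(x, 0.5), 2), "  J1 =", Jq(x, 1))
```
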
  36–38. Noiseless Sparse Regularization. Noiseless measurements: $y = \Phi x_0$. Compare $x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$ with $x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|^2$. [Figure: the constraint set $\Phi x = y$ against the $\ell^1$ and $\ell^2$ balls.]
    The $\ell^1$ problem is a convex linear program: interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas–Rachford splitting, see [Combettes, Pesquet].
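
A small basis-pursuit sketch (not the talk's solver): the $\ell^1$ problem is recast as a linear program and handed to scipy's generic LP solver, then compared with the minimum $\ell^2$-norm solution; numpy and scipy assumed.

```python
import numpy as np
from scipy.optimize import linprog  # generic LP solver (not the talk's algorithms)

def basis_pursuit(Phi, y):
    """min ||x||_1 s.t. Phi x = y, as a linear program with x = u - v and u, v >= 0."""
    P, N = Phi.shape
    c = np.ones(2 * N)                            # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])                 # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(9)
P, N, k = 40, 100, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)    # random measurement matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0                                      # noiseless measurements

x_l1 = basis_pursuit(Phi, y)
x_l2 = np.linalg.lstsq(Phi, y, rcond=None)[0]     # minimum l2-norm solution
print("l1 error:", np.linalg.norm(x_l1 - x0))     # should be ~0: sparse x0 is recovered
print("l2 error:", np.linalg.norm(x_l2 - x0))     # much larger: the l2 solution is not sparse
```
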
  39–40. Inpainting Problem. $y = K f_0 + w$ with $(K f)_i = 0$ if $i \in \Omega$ and $(K f)_i = f_i$ if $i \notin \Omega$. [Figure: original $f_0$, measurements, recovered $f^\star = \Psi x^\star$.]
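
A toy 1-D inpainting sketch (the talk works on images with wavelets; here an orthonormal DCT and iterative soft thresholding are my stand-ins, with numpy and scipy assumed):

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal DCT as a toy dictionary Psi

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(10)
N = 256
x0 = np.zeros(N)
x0[rng.choice(N, 8, replace=False)] = 5 * rng.standard_normal(8)
f0 = idct(x0, norm='ortho')                 # f0 = Psi x0: sparse in the DCT basis

mask = rng.random(N) > 0.5                  # samples outside Omega are observed
y = np.where(mask, f0, 0.0)                 # y = K f0 (missing samples set to 0, no noise)

# ISTA on 0.5*||y - K Psi x||^2 + lam*||x||_1, with unit step size (|||K Psi||| <= 1).
lam, x = 0.01, np.zeros(N)
for _ in range(500):
    grad = dct(mask * idct(x, norm='ortho') - y, norm='ortho')   # (K Psi)^T (K Psi x - y)
    x = soft_threshold(x - grad, lam)

f_rec = idct(x, norm='ortho')               # recovered f* = Psi x*
err = np.linalg.norm((f_rec - f0)[~mask]) / np.linalg.norm(f0[~mask])
print("relative error on the missing samples:", err)   # should be small
```
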
  41. Conclusion.
    • Sparse regularization: from denoising to inverse problems.
    • Mathematical challenges: – theoretical performance guarantees; – fast algorithms.