Des mathématiques appliquées, appliquables, et une application à l’imagerie (Applied mathematics, applicable mathematics, and an application to imaging)


Talk given in honor of Michel Pierre


Gabriel Peyré

June 27, 2014

Transcript

  1. Des mathématiques appliquées, appliquables, et une application à l’imagerie.
    Gabriel Peyré, www.numerical-tours.com. (Image credit: © J. Malick.)
  2. Signals, Images and More

  3. Signals, Images and More

  4. Signals, Images and More

  5. Signals, Images and More

  6. Signals, Images and More

  7. Signals, Images and More

  8. Overview • Approximation in an Ortho-Basis • Compression and Denoising

    • Sparse Inverse Problem Regularization
  9. Orthogonal Decompositions. Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$;
    continuous signal/image $f \in L^2([0,1]^d)$.
  10. Orthogonal Decompositions. For a continuous signal/image $f \in L^2([0,1]^d)$ and an
    orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
    $f = \sum_m \langle f, \psi_m \rangle \, \psi_m$ and
    $\|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$.
  11. Orthogonal Decompositions. For a continuous signal/image $f \in L^2([0,1]^d)$ and an
    orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
    $f = \sum_m \langle f, \psi_m \rangle \, \psi_m$ and
    $\|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$.
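The decomposition and Parseval identities above are easy to check numerically. A minimal sketch in Python/NumPy, taking the orthonormal discrete Fourier basis as a stand-in for $\{\psi_m\}_m$ (an assumption made only for this example; the slides introduce concrete bases next):

    import numpy as np

    N = 256
    f = np.random.randn(N)                       # a discrete "signal" f

    a = np.fft.fft(f, norm="ortho")              # coefficients a[m] = <f, psi_m>
    f_rec = np.fft.ifft(a, norm="ortho").real    # f = sum_m <f, psi_m> psi_m

    print(np.allclose(f, f_rec))                             # exact reconstruction
    print(np.allclose(np.sum(f**2), np.sum(np.abs(a)**2)))   # Parseval equality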
  12. 1-D Wavelet Basis. Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$;
    position $n$, scale $2^j$, index $m = (n, j)$.
  13. 1-D Wavelet Basis. Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$;
    position $n$, scale $2^j$, index $m = (n, j)$.
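To make the formula concrete, the sketch below evaluates such an atom for the Haar mother wavelet (the choice of Haar is an assumption for illustration only; the slides do not fix a particular $\psi$ here):

    import numpy as np

    def haar(t):
        # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
        return np.where((0 <= t) & (t < 0.5), 1.0,
               np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

    def psi_jn(x, j, n):
        # psi_{j,n}(x) = 2^{-j/2} * psi((x - 2^j n) / 2^j)
        return 2.0 ** (-j / 2) * haar((x - 2.0 ** j * n) / 2.0 ** j)

    x = np.linspace(0, 1, 1024, endpoint=False)
    atom = psi_jn(x, j=-3, n=2)            # position n = 2, scale 2^j = 1/8
    print(np.sum(atom ** 2) / len(x))      # ~1: the atom has unit L2 norm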
  14. 2-D Fourier Basis. Given a basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$, the tensor product
    $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1) \, \psi_{m_2}(x_2)$ gives a basis
    $\{\psi_{m_1,m_2}(x_1, x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$.
  15. 2-D Fourier Basis. Tensor-product basis $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1)\,\psi_{m_2}(x_2)$
    of $L^2([0,1]^2)$; the Fourier transform maps $f(x) \mapsto \langle f, \psi_{m_1,m_2} \rangle$.
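The tensor-product structure is what makes the 2-D transform separable: applying the 1-D transform along each axis yields the 2-D one. A minimal check, again using NumPy's orthonormal DFT as the concrete basis (an assumption for the example):

    import numpy as np

    f = np.random.randn(64, 64)
    F2 = np.fft.fft2(f, norm="ortho")                        # 2-D transform
    F_sep = np.fft.fft(np.fft.fft(f, axis=0, norm="ortho"),  # 1-D transform along one axis,
                       axis=1, norm="ortho")                 # then along the other
    print(np.allclose(F2, F_sep))   # True: the 2-D basis is the tensor product of 1-D bases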
  16. 2-D Wavelet Basis. 3 elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$; orthogonal basis
    of $L^2([0,1]^2)$: $\psi^k_{j,n}(x) = 2^{-j} \, \psi^k(2^{-j} x - n)$ for $k = H, V, D$,
    $j < 0$, $2^j n \in [0,1]^2$.
  17. 2-D Wavelet Basis. 3 elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$; orthogonal basis
    of $L^2([0,1]^2)$: $\psi^k_{j,n}(x) = 2^{-j} \, \psi^k(2^{-j} x - n)$ for $k = H, V, D$,
    $j < 0$, $2^j n \in [0,1]^2$. (Shown: $\psi^V(x)$, $\psi^H(x)$, $\psi^D(x)$.)
  18. Example of Wavelet Decomposition. The wavelet transform maps $f(x)$, indexed by $x$,
    to the coefficients $\langle f, \psi^k_{j,n} \rangle$, indexed by $(j, n, k)$.
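A one-level 2-D Haar decomposition already produces three detail channels analogous to the $H$, $V$, $D$ sub-bands of the slide. This is only a sketch: the slides' transform is multiscale, and the Haar filters are an assumption made for brevity.

    import numpy as np

    f = np.random.randn(128, 128)                  # stand-in image

    # filter rows: averages / differences of adjacent rows
    lo_r = (f[0::2, :] + f[1::2, :]) / np.sqrt(2)
    hi_r = (f[0::2, :] - f[1::2, :]) / np.sqrt(2)

    def split_cols(g):
        return ((g[:, 0::2] + g[:, 1::2]) / np.sqrt(2),
                (g[:, 0::2] - g[:, 1::2]) / np.sqrt(2))

    LL, d1 = split_cols(lo_r)                      # approximation + one detail channel
    d2, d3 = split_cols(hi_r)                      # two more detail channels (H/V/D up to convention)

    # the transform is orthogonal: energy is preserved across the four sub-bands
    print(np.allclose(np.sum(f**2), sum(np.sum(s**2) for s in (LL, d1, d2, d3))))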
  19. Sparse Approximation in a Basis

  20. Sparse Approximation in a Basis

  21. Sparse Approximation in a Basis

  22. Overview • Approximation in an Ortho-Basis • Compression and Denoising

    • Sparse Inverse Problem Regularization
  23. Compression by Transform-coding. Image $f$; forward transform:
    $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.
  24. Compression by Transform-coding. Forward transform $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$;
    quantization: $q[m] = \mathrm{sign}(a[m]) \, \lfloor |a[m]| / T \rfloor \in \mathbb{Z}$.
    (Figure: quantization bins of width $T$ mapping $a[m]$ to $q[m]$ and $\tilde a[m]$.)
  25. Compression by Transform-coding. Forward transform $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$;
    quantization $q[m] = \mathrm{sign}(a[m]) \, \lfloor |a[m]| / T \rfloor \in \mathbb{Z}$;
    entropic coding: use statistical redundancy (many 0's).
  26. Compression by Transform-coding. Forward transform, quantization
    $q[m] = \mathrm{sign}(a[m]) \, \lfloor |a[m]| / T \rfloor \in \mathbb{Z}$, entropic coding
    (use statistical redundancy: many 0's), then decoding of $q[m] \in \mathbb{Z}$.
  27. Compression by Transform-coding. After decoding, dequantization:
    $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$.
  28. Compression by Transform-coding. Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \, \psi_m$.
    (Figure: zoom on $f$ and on $f_R$ for $R = 0.2$ bit/pixel.)
  29. Compression by Transform-coding. Theorem:
    $\|f - f_M\|^2 = O(M^{-\alpha}) \;\Longrightarrow\; \|f - f_R\|^2 = O(\log^{\alpha}(R) \, R^{-\alpha})$.
    (Figure: zoom on $f$ and on $f_R$ for $R = 0.2$ bit/pixel.)
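The quantization / dequantization pair of the preceding slides is easy to sketch. A minimal version following the slides' formulas (the coefficients below are simulated and the entropic coder itself is not shown):

    import numpy as np

    def quantize(a, T):
        # q[m] = sign(a[m]) * floor(|a[m]| / T), an integer
        return np.sign(a) * np.floor(np.abs(a) / T)

    def dequantize(q, T):
        # a_tilde[m] = sign(q[m]) * (|q[m]| + 1/2) * T
        return np.sign(q) * (np.abs(q) + 0.5) * T

    a = np.random.randn(1000)          # stand-in for coefficients <f, psi_m>
    T = 0.5
    q = quantize(a, T)
    a_tilde = dequantize(q, T)
    print(np.max(np.abs(a - a_tilde)[q != 0]))   # error < T/2 on non-zero bins
    print(np.mean(q == 0))                       # many zeros: entropic coding gain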
  30. Noise in Images

  31. Denoising. Noisy image $f$.

  32. Denoising. Noisy $f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m$; thresholded
    estimate: $\tilde f = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$.
  33. Denoising. Thresholded estimate $\tilde f = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$.
    In practice: $T \approx 3\sigma$. Theorem: for $T = \sigma \sqrt{2 \log(N)}$, if
    $\|f_0 - f_{0,M}\|^2 = O(M^{-\alpha})$ then $E(\|\tilde f - f_0\|^2) = O(\sigma^{\frac{2\alpha}{\alpha+1}})$.
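A minimal denoising sketch in the spirit of these slides, using the orthonormal DFT as a stand-in for the wavelet basis and the practical rule $T \approx 3\sigma$ (the clean signal and the noise level are made up for the example):

    import numpy as np

    np.random.seed(0)
    N = 1024
    t = np.arange(N) / N
    f0 = np.sign(np.sin(2 * np.pi * 4 * t))     # clean signal, compressible in Fourier
    sigma = 0.2
    f = f0 + sigma * np.random.randn(N)         # noisy observations

    a = np.fft.fft(f, norm="ortho")             # coefficients <f, psi_m>
    T = 3 * sigma                               # "in practice T ~ 3 sigma"
    f_denoised = np.fft.ifft(a * (np.abs(a) > T), norm="ortho").real

    print(np.mean((f - f0) ** 2), np.mean((f_denoised - f0) ** 2))   # the MSE drops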
  34. Overview • Approximation in an Ortho-Basis • Compression and Denoising

    • Sparse Inverse Problems Regularization
  35. Inverse Problems. Recovering $f_0 \in \mathbb{R}^N$ from noisy observations
    $y = K f_0 + w \in \mathbb{R}^P$, where $K : \mathbb{R}^N \to \mathbb{R}^P$ with $P \ll N$
    (missing information).
  36. Inverse Problems. Recovering $f_0 \in \mathbb{R}^N$ from $y = K f_0 + w \in \mathbb{R}^P$,
    $K : \mathbb{R}^N \to \mathbb{R}^P$ with $P \ll N$ (missing information).
    Examples: inpainting, super-resolution, ... (figure: $f_0$ and the action of $K$).
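A minimal stand-in for such a forward model, taking $K$ to be a random subsampling operator (the operator, sizes and noise level are assumptions made for illustration; the slides' examples are inpainting and super-resolution):

    import numpy as np

    np.random.seed(0)
    N, P = 1024, 128                                 # P << N: missing information
    idx = np.random.choice(N, P, replace=False)      # observed sample locations
    K = lambda f: f[idx]                             # K : R^N -> R^P

    f0 = np.random.randn(N)                          # stand-in signal/image
    sigma = 0.05
    y = K(f0) + sigma * np.random.randn(P)           # observations y = K f0 + w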
  37. L1 Regularization. Coefficients $x_0 \in \mathbb{R}^N$.

  38. L1 Regularization. Coefficients $x_0 \in \mathbb{R}^N$; image $f_0 = \Psi x_0 \in \mathbb{R}^Q$.
  39. L1 Regularization. Coefficients $x_0 \in \mathbb{R}^N$; image $f_0 = \Psi x_0 \in \mathbb{R}^Q$;
    observations $y = K f_0 + w \in \mathbb{R}^P$.
  40. L1 Regularization. Equivalent model: $y = K f_0 + w = \Phi x_0 + w \in \mathbb{R}^P$,
    where $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
  41. L1 Regularization. Equivalent model $y = \Phi x_0 + w$ with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
    If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$.
  42. L1 Regularization. Equivalent model $y = \Phi x_0 + w$ with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$;
    if $\Phi$ is invertible, $\Phi^{-1} y = x_0 + \Phi^{-1} w$. Problems: $\Phi^{-1} w$ can
    "explode", and $\Phi$ can even be non-invertible.
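To see why naive inversion fails, here is a small illustration with an ill-conditioned (but invertible) $\Phi$ built for the purpose; all sizes and numbers are made up. The amplified noise $\Phi^{-1} w$ swamps the sparse $x_0$.

    import numpy as np

    np.random.seed(0)
    N = 100
    U, _ = np.linalg.qr(np.random.randn(N, N))
    V, _ = np.linalg.qr(np.random.randn(N, N))
    s = np.logspace(0, -6, N)                   # singular values decaying to 1e-6
    Phi = U @ np.diag(s) @ V.T                  # ill-conditioned but invertible

    x0 = np.zeros(N); x0[:5] = 1.0              # sparse coefficients
    w = 1e-3 * np.random.randn(N)               # small noise
    y = Phi @ x0 + w

    x_naive = np.linalg.solve(Phi, y)           # Phi^{-1} y = x0 + Phi^{-1} w
    print(np.linalg.norm(x_naive - x0))         # huge: the noise "explodes"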
  43. Inverse Problem Regularization. Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
    Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.
  44. Inverse Problem Regularization. Observations $y = \Phi x_0 + w \in \mathbb{R}^P$; the estimator
    $x(y)$ depends only on $y$ and a parameter $\lambda$. Example: variational methods,
    $x(y) \in \mathrm{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$
    (data fidelity + regularity).
  45. Inverse Problem Regularization. Variational methods:
    $x(y) \in \mathrm{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$
    (data fidelity + regularity). Choice of $\lambda$: tradeoff between the noise level $\|w\|$
    and the regularity $J(x_0)$ of $x_0$.
  46. Inverse Problem Regularization. Variational methods:
    $x(y) \in \mathrm{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$;
    choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(x_0)$.
    No noise: $\lambda \to 0^+$, minimize $x(y) \in \mathrm{argmin}_{\Phi x = y} J(x)$.
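As the simplest instance of this variational estimator, the quadratic prior $J(x) = \|x\|^2$ (Tikhonov/ridge, not the sparse priors introduced next in the slides) has a closed-form minimizer. A small sketch:

    import numpy as np

    def tikhonov(Phi, y, lam):
        # argmin_x 1/2 ||y - Phi x||^2 + lam ||x||^2
        # optimality condition: (Phi^T Phi + 2 lam Id) x = Phi^T y
        N = Phi.shape[1]
        return np.linalg.solve(Phi.T @ Phi + 2 * lam * np.eye(N), Phi.T @ y)

    # usage on a random under-determined system
    Phi = np.random.randn(40, 100)
    y = np.random.randn(40)
    x_hat = tikhonov(Phi, y, lam=0.1)
    print(x_hat.shape)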
  47. Sparse Priors. "Ideal" sparsity: $J_0(x) = \#\{m \,;\, x_m \neq 0\}$.
  48. Sparse Priors. "Ideal" sparsity $J_0(x) = \#\{m \,;\, x_m \neq 0\}$; sparse regularization:
    $x^\star \in \mathrm{argmin}_x \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$.
  49. Sparse Priors. Sparse regularization $x^\star \in \mathrm{argmin}_x \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$.
    Denoising in an ortho-basis: $K = \mathrm{Id}$ and $\Phi = \Psi$ with $\Psi \Psi^* = \Psi^* \Psi = \mathrm{Id}$.
  50. Sparse Priors. Denoising in an ortho-basis ($K = \mathrm{Id}$, $\Phi = \Psi$,
    $\Psi \Psi^* = \Psi^* \Psi = \mathrm{Id}$): the problem becomes
    $\min_x \|\Psi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde x_m - x_m|^2 + T^2 \, \mathbf{1}(x_m \neq 0)$,
    where $\tilde x = \Psi^* y = \{\langle y, \psi_m \rangle\}_m$ and $T^2 = 2\lambda$.
  51. Sparse Priors. In the ortho-basis case the solution is given by thresholding:
    $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, $0$ otherwise, with $\tilde x = \Psi^* y$ and $T^2 = 2\lambda$.
    (Figure: $y$, $\Psi^* y$, $x^\star$.)
  52. Sparse Priors. Ortho-basis case: closed-form solution $x^\star_m = \tilde x_m$ if
    $|\tilde x_m| > T$, $0$ otherwise. Non-orthogonal $\Phi$: NP-hard to solve.
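The closed-form ortho-basis solution is a one-liner. A minimal sketch, assuming $\Psi$ is given explicitly as an orthogonal matrix (the random $\Psi$ and $\lambda$ below are only placeholders):

    import numpy as np

    def denoise_ortho_J0(y, Psi, lam):
        # solves min_x 1/2 ||y - Psi x||^2 + lam J_0(x) when Psi is orthogonal
        x_tilde = Psi.T @ y                       # x_tilde = Psi^* y = {<y, psi_m>}_m
        T = np.sqrt(2 * lam)                      # T^2 = 2 lam
        x_star = x_tilde * (np.abs(x_tilde) > T)  # keep only coefficients above T
        return x_star, Psi @ x_star               # coefficients and denoised signal

    # usage with a random orthogonal Psi
    Psi, _ = np.linalg.qr(np.random.randn(64, 64))
    y = np.random.randn(64)
    x_star, f_star = denoise_ortho_J0(y, Psi, lam=0.05)
    print(np.count_nonzero(x_star))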
  53. Convex Relaxation: L1 Prior. Image with 2 pixels $(x_1, x_2)$:
    $J_0(x) = \#\{m \,;\, x_m \neq 0\}$; $J_0(x) = 0$: null image, $J_0(x) = 1$: sparse image,
    $J_0(x) = 2$: non-sparse image. (Figure: level set for $q = 0$.)
  54. Convex Relaxation: L1 Prior. $\ell^q$ priors: $J_q(x) = \sum_m |x_m|^q$ (convex for $q \geq 1$);
    level sets shown for $q = 0, 1/2, 1, 3/2, 2$. For the 2-pixel image, $J_0(x) = \#\{m \,;\, x_m \neq 0\}$
    is $0$ for the null image, $1$ for a sparse image, $2$ for a non-sparse image.
  55. Convex Relaxation: L1 Prior. $\ell^q$ priors $J_q(x) = \sum_m |x_m|^q$ (convex for $q \geq 1$);
    the sparse $\ell^1$ prior is $J_1(x) = \sum_m |x_m|$.
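The $J_q$ functionals are trivial to code; the sketch below simply evaluates them on a toy vector to make the sparsity-vs-convexity tradeoff tangible (the vector is arbitrary):

    import numpy as np

    def J(x, q):
        if q == 0:
            return np.count_nonzero(x)       # J_0(x) = #{m ; x_m != 0}
        return np.sum(np.abs(x) ** q)        # J_q(x) = sum_m |x_m|^q

    x = np.array([0.0, 0.5, -2.0])
    for q in (0, 0.5, 1, 1.5, 2):
        print(q, J(x, q))                    # J_1 is the convex surrogate for J_0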
  56. Noiseless Sparse Regularization. Noiseless measurements: $y = \Phi x_0$;
    $x^\star \in \mathrm{argmin}_{\Phi x = y} \sum_m |x_m|$.
    (Figure: $x^\star$ and the constraint set $\Phi x = y$.)
  57. Noiseless Sparse Regularization. Noiseless measurements $y = \Phi x_0$; comparison of
    $x^\star \in \mathrm{argmin}_{\Phi x = y} \sum_m |x_m|$ with
    $x^\star \in \mathrm{argmin}_{\Phi x = y} \sum_m |x_m|^2$.
  58. Noiseless Sparse Regularization. $x^\star \in \mathrm{argmin}_{\Phi x = y} \sum_m |x_m|$
    is a convex linear program. Interior points, cf. [Chen, Donoho, Saunders], "basis pursuit";
    Douglas-Rachford splitting, see [Combettes, Pesquet].
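A minimal Douglas-Rachford sketch for basis pursuit, $\min \|x\|_1$ subject to $\Phi x = y$. The step size, iteration count and test problem below are arbitrary assumptions, not the implementation used in the talk:

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def basis_pursuit_dr(Phi, y, gamma=1.0, n_iter=1000):
        pinv = np.linalg.pinv(Phi)
        proj = lambda x: x + pinv @ (y - Phi @ x)            # projection onto {x : Phi x = y}
        z = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            x = proj(z)                                      # prox of the constraint
            z = z + soft_threshold(2 * x - z, gamma) - x     # prox of the l1 term
        return proj(z)

    # tiny synthetic test: sparse x0, random Gaussian Phi with P < N
    np.random.seed(0)
    N, P = 100, 40
    Phi = np.random.randn(P, N) / np.sqrt(P)
    x0 = np.zeros(N)
    x0[np.random.choice(N, 5, replace=False)] = np.random.randn(5)
    y = Phi @ x0
    x_rec = basis_pursuit_dr(Phi, y)
    print(np.linalg.norm(x_rec - x0))   # small when l1 recovery succeeds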
  59. Inpainting Problem. $y = K f_0 + w$ with $(K f)_i = 0$ if $i \in \Omega$, $f_i$ if $i \notin \Omega$.
    (Figure: original $f_0$ and the measures.)
  60. Inpainting Problem. $y = K f_0 + w$ with $(K f)_i = 0$ if $i \in \Omega$, $f_i$ if $i \notin \Omega$.
    (Figure: original $f_0$, the measures, and the recovered $f^\star = \Psi x^\star$.)
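Putting the pieces together, here is a minimal 1-D inpainting sketch by iterative soft thresholding on $\min_x \tfrac{1}{2} \|y - K \Psi x\|^2 + \lambda \|x\|_1$, with the masking $K$ of the slide and the orthonormal DFT standing in for $\Psi$. The signal, mask, $\lambda$ and iteration count are all assumptions made for the example; the slides do not specify the solver.

    import numpy as np

    np.random.seed(0)
    N = 256
    t = np.arange(N) / N
    f0 = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 11 * t)   # Fourier-sparse signal
    mask = np.random.rand(N) > 0.3                # observed pixels (Omega = the missing 30%)
    y = np.where(mask, f0, 0.0)                   # y = K f0 (noiseless here)

    Psi  = lambda x: np.fft.ifft(x, norm="ortho")  # synthesis: f = Psi x
    PsiT = lambda f: np.fft.fft(f, norm="ortho")   # analysis:  Psi^* f

    def soft(x, t):
        mag = np.abs(x)
        return np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12) * x    # complex soft threshold

    lam, n_iter = 0.01, 500
    x = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        r = y - mask * Psi(x)                     # residual on the observed pixels
        x = soft(x + PsiT(mask * r), lam)         # gradient step (step size 1) + shrinkage
    f_rec = np.real(Psi(x))

    # missing-pixel error of the zero-filled y vs overall error of the reconstruction
    print(np.mean((y - f0)[~mask] ** 2), np.mean((f_rec - f0) ** 2))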
  61. Conclusion

  62. Conclusion • Sparse regularization: denoising → inverse problems.

  63. Conclusion • Sparse regularization: denoising → inverse problems.
    • Mathematical challenges: theoretical performance guarantees; fast algorithms.