Signal Processing Course: Compressed Sensing


Gabriel Peyré

January 01, 2012

Transcript

  1. Compressive Sensing Gabriel Peyré www.numerical-tours.com

  2. Overview • Shannon’s World • Compressive Sensing Acquisition • Compressive Sensing Recovery • Theoretical Guarantees • Fourier Domain Measurements
  3. Sampling. Idealization: the acquisition device maps $\tilde f \in L^2([0,1]^d)$ to $f \in \mathbb{R}^N$. Discretization: $f[n] \approx \tilde f(n/N)$.
  4. Data acquisition: Sensors. Pointwise sampling and smoothness: $\tilde f \in L^2$, $f \in \mathbb{R}^N$, $f[i] = \tilde f(i/N)$.
  5. Data acquisition: Sensors. Pointwise sampling $f[i] = \tilde f(i/N)$. Shannon interpolation: if $\mathrm{Supp}(\hat{\tilde f}) \subset [-N\pi, N\pi]$, then $\tilde f(t) = \sum_i f[i]\, h(Nt - i)$ with $h(t) = \sin(\pi t)/(\pi t)$.
  6. Data acquisition: Sensors. Shannon interpolation recovers band-limited signals from their samples, but natural images are not smooth.
  7. Data acquisition: Sensors. Natural images are not smooth, but they can be compressed efficiently (e.g. JPEG-2000: $f \mapsto 0,1,0,\dots$). Sample and compress simultaneously?
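The Shannon interpolation formula of these slides can be checked numerically; a minimal sketch (names and sizes are illustrative), using the fact that `np.sinc` is exactly $h(t) = \sin(\pi t)/(\pi t)$:

```python
import numpy as np

def shannon_interp(f, t, N):
    """Shannon interpolation of samples f[i] = ftilde(i/N):
    ftilde(t) = sum_i f[i] h(N t - i), with h(t) = sin(pi t) / (pi t)."""
    i = np.arange(len(f))
    # np.sinc(x) = sin(pi x) / (pi x), which is exactly h above
    return np.array([np.dot(f, np.sinc(N * t0 - i)) for t0 in np.atleast_1d(t)])

# At the grid points t = i/N the formula reproduces the samples exactly,
# since h(k) vanishes at every nonzero integer k.
N = 32
samples = np.cos(2 * np.pi * 3 * np.arange(N) / N)
recon = shannon_interp(samples, np.arange(N) / N, N)
```

Evaluating at off-grid times gives the band-limited reconstruction that the subsequent slides argue is a poor model for non-smooth natural images.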
  8. Sampling and Periodization. [Figure: panels (a)–(d).]
  9. Sampling and Periodization: Aliasing. [Figure: panels (a)–(d).]

  10. Overview • Shannon’s World • Compressive Sensing Acquisition • Compressive Sensing Recovery • Theoretical Guarantees • Fourier Domain Measurements
  11. Single Pixel Camera (Rice): input scene $\tilde f$.

  12. Single Pixel Camera (Rice): $N$ micro-mirrors, $P$ measures $y[i] = \langle f, \varphi_i \rangle$.
  13. Single Pixel Camera (Rice): reconstructions from $y[i] = \langle f, \varphi_i \rangle$ for $P/N = 1$, $P/N = 0.16$, $P/N = 0.02$.
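The acquisition model of these slides, $y[i] = \langle f, \varphi_i \rangle$ with $P \ll N$, can be simulated in a few lines. This is a toy sketch: the sizes and the on/off pattern choice are assumptions for illustration, not the actual Rice hardware:

```python
import numpy as np

# Toy single-pixel-camera simulation: each measurement y[i] = <f, phi_i>
# correlates the scene with a random micro-mirror pattern, and a single
# photodiode records the resulting inner product.
rng = np.random.default_rng(0)
N = 256                                   # micro-mirrors / scene pixels
P = 40                                    # measurements, P/N ~ 0.16
scene = rng.standard_normal(N)            # stand-in for the scene f
patterns = rng.integers(0, 2, size=(P, N)).astype(float)  # mirrors on/off
y = patterns @ scene                      # one inner product per pattern
```

The point of the slide is that $P$ such scalar measurements, far fewer than $N$, can still determine a compressible scene.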
  14. CS Hardware Model. CS is about designing hardware: input signals $\tilde f \in L^2(\mathbb{R}^2)$. Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$. Pipeline: $\tilde f \in L^2 \to f \in \mathbb{R}^N \to y \in \mathbb{R}^P$ (micro-mirror array resolution, CS hardware $K$).
  15. CS Hardware Model (continued): examples of CS acquisition hardware.
  16. CS Hardware Model (continued): the measurement operator, $f \mapsto K f$.
  17. Overview • Shannon’s World • Compressive Sensing Acquisition • Compressive Sensing Recovery • Theoretical Guarantees • Fourier Domain Measurements
  18. Inversion and Sparsity. Need to solve $y = Kf$: more unknowns than equations, and $\dim(\ker(K)) = N - P$ is huge.
  19. Inversion and Sparsity. Need to solve $y = Kf$: more unknowns than equations, $\dim(\ker(K)) = N - P$ is huge. Prior information: $f$ is sparse in a basis $\{\psi_m\}_m$, i.e. $J_0(f) = \mathrm{Card}\{m : |\langle f, \psi_m \rangle| > \varepsilon\}$ is small.
  20. Convex Relaxation: L1 Prior. Image with 2 pixels: $J_0(f) = \#\{m : \langle f, \psi_m \rangle \neq 0\}$. $J_0(f) = 0$: null image; $J_0(f) = 1$: sparse image; $J_0(f) = 2$: non-sparse image.
  21. Convex Relaxation: L1 Prior. $\ell^q$ priors: $J_q(f) = \sum_m |\langle f, \psi_m \rangle|^q$ (convex for $q \geq 1$), illustrated on a 2-pixel image for $q = 0, 1/2, 1, 3/2, 2$.
  22. Convex Relaxation: L1 Prior. Sparse $\ell^1$ prior: $J_1(f) = \sum_m |\langle f, \psi_m \rangle|$, the convex relaxation of $J_0$.
  23. Sparse CS Recovery. $f_0 \in \mathbb{R}^N$ sparse in an ortho-basis $\Psi$: $f_0 = \Psi x_0$, $x_0 \in \mathbb{R}^N$.
  24. Sparse CS Recovery. (Discretized) sampling acquisition: $y = K f_0 + w = K \Psi(x_0) + w = \Phi x_0 + w$.
  25. Sparse CS Recovery. Acquisition $y = K f_0 + w = \Phi x_0 + w$, with $K$ drawn from the Gaussian matrix ensemble: $K_{i,j} \sim \mathcal{N}(0, P^{-1/2})$ i.i.d.
  26. Sparse CS Recovery. Acquisition $y = \Phi x_0 + w$. Sparse recovery: $\min_{\|\Phi x - y\| \leq \varepsilon} \|x\|_1$ with $\varepsilon \sim \|w\|$, or in penalized form $\min_x \frac{1}{2} \|\Phi x - y\|^2 + \lambda \|x\|_1$.
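The penalized recovery problem $\min_x \frac12\|\Phi x - y\|^2 + \lambda\|x\|_1$ stated on this slide can be solved, for instance, by accelerated iterative soft-thresholding (FISTA); the slides do not prescribe a solver, so this is one standard choice, on a toy noiseless instance:

```python
import numpy as np

def fista(Phi, y, lam, n_iter=2000):
    """Solve min_x 0.5 ||Phi x - y||^2 + lam ||x||_1 by FISTA
    (accelerated iterative soft-thresholding)."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(Phi.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = z - Phi.T @ (Phi @ z - y) / L    # gradient step on the data term
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Toy instance: K_{ij} ~ N(0, 1/P) Gaussian ensemble, k-sparse x0, no noise.
rng = np.random.default_rng(1)
N, P, k = 200, 80, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0
x_rec = fista(Phi, y, lam=1e-3)
```

In this toy run, $P = 80$ Gaussian measurements of a 5-sparse vector in $\mathbb{R}^{200}$ are enough for essentially exact recovery, which is the regime the following theory slides quantify.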
  27. CS Simulation Example. Original $f_0$; $\Psi$ = translation-invariant wavelet frame.

  28. Overview • Shannon’s World • Compressive Sensing Acquisition • Compressive Sensing Recovery • Theoretical Guarantees • Fourier Domain Measurements
  29. CS with RIP. Restricted Isometry Constants: $\forall x$ with $\|x\|_0 \leq k$, $(1 - \delta_k)\|x\|^2 \leq \|\Phi x\|^2 \leq (1 + \delta_k)\|x\|^2$. $\ell^1$ recovery: $x^\star \in \mathrm{argmin}_{\|\Phi x - y\| \leq \varepsilon} \|x\|_1$, where $y = \Phi x_0 + w$ and $\|w\| \leq \varepsilon$.
  30. CS with RIP. Theorem [Candès 2009]: if $\delta_{2k} \leq \sqrt{2} - 1$, then $\|x_0 - x^\star\| \leq \frac{C_0}{\sqrt{k}} \|x_0 - x_k\|_1 + C_1 \varepsilon$, where $x_k$ is the best $k$-term approximation of $x_0$.
  31. Singular Values Distributions. Eigenvalues of $\Phi_I^* \Phi_I$ with $|I| = k$ are essentially in $[a, b]$, where $a = (1 - \sqrt{\beta})^2$, $b = (1 + \sqrt{\beta})^2$ and $\beta = k/P$. When $k = \beta P \to +\infty$, the eigenvalue distribution tends to $f_\beta(\lambda) = \frac{1}{2\pi\beta\lambda} \sqrt{(\lambda - a)_+ (b - \lambda)_+}$ [Marcenko-Pastur]; large deviation inequality [Ledoux]. [Figure: empirical distributions for $P = 200$ and $k = 10, 30, 50$.]
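The Marcenko-Pastur support $[a, b]$ quoted on this slide can be verified empirically; a quick sketch with illustrative sizes:

```python
import numpy as np

# Eigenvalues of Phi_I^* Phi_I, |I| = k, with i.i.d. Gaussian entries
# N(0, 1/P), should concentrate in
# [a, b] = [(1 - sqrt(beta))^2, (1 + sqrt(beta))^2], beta = k/P.
rng = np.random.default_rng(2)
P, k = 2000, 100
beta = k / P
Phi_I = rng.standard_normal((P, k)) / np.sqrt(P)
eigs = np.linalg.eigvalsh(Phi_I.T @ Phi_I)
a, b = (1 - np.sqrt(beta)) ** 2, (1 + np.sqrt(beta)) ** 2
```

All $k$ eigenvalues landing near $[a, b]$ is exactly the restricted-isometry behavior the deck exploits: every small column subset of $\Phi$ acts as a near-isometry.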
  32. RIP for Gaussian Matrices. Link with coherence: $\mu(\Phi) = \max_{i \neq j} |\langle \varphi_i, \varphi_j \rangle|$, with $\delta_2 = \mu(\Phi)$ and $\delta_k \leq (k-1)\mu(\Phi)$.
  33. RIP for Gaussian Matrices. Link with coherence: $\delta_k \leq (k-1)\mu(\Phi)$. For Gaussian matrices: $\mu(\Phi) \sim \sqrt{\log(PN)/P}$.
  34. RIP for Gaussian Matrices. Stronger result. Theorem: if $k \leq C\, P / \log(N/P)$, then $\delta_{2k} \leq \sqrt{2} - 1$ with high probability.
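The coherence scaling $\mu(\Phi) \sim \sqrt{\log(PN)/P}$ for Gaussian matrices can be checked numerically. A sketch with illustrative sizes; the factor-3 brackets in the check are a loose, assumed margin rather than a stated bound:

```python
import numpy as np

# Empirical coherence mu(Phi) = max_{i != j} |<phi_i, phi_j>| of a
# Gaussian matrix with unit-normalized columns, compared with the
# sqrt(log(P N) / P) scaling from the slide.
rng = np.random.default_rng(3)
P, N = 200, 500
Phi = rng.standard_normal((P, N))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm columns phi_i
G = np.abs(Phi.T @ Phi)                  # |<phi_i, phi_j>|
np.fill_diagonal(G, 0.0)
mu = G.max()                             # coherence mu(Phi)
scale = np.sqrt(np.log(P * N) / P)
```

Since $\delta_k \leq (k-1)\mu(\Phi)$, a coherence of this size only certifies very small sparsity levels, which is why the slide's "stronger result" on $\delta_{2k}$ matters.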
  35. Numerics with RIP. Stability constants of $A$: $(1 - \delta_1(A)) \|\alpha\|^2 \leq \|A\alpha\|^2 \leq (1 + \delta_2(A)) \|\alpha\|^2$, given by the smallest/largest eigenvalues of $A^* A$.
  36. Numerics with RIP. Upper/lower restricted isometry constants: $\delta_k^i = \max_{|I| = k} \delta_i(\Phi_I)$. Monte-Carlo estimation: $\hat\delta_k \leq \delta_k$. [Figure: estimates $\hat\delta_k^1, \hat\delta_k^2$ for $N = 4000$, $P = 1000$.]
  37. Polytopes-based Guarantees. $\Phi = (\varphi_i)_i \in \mathbb{R}^{2 \times 3}$. Noiseless recovery: $x^\star \in \mathrm{argmin}_{\Phi x = y} \|x\|_1$ $(\mathcal{P}_0(y))$. $\ell^1$ ball $B_\alpha = \{x : \|x\|_1 \leq \alpha\}$ with $\alpha = \|x_0\|_1$, and its image $\Phi(B_\alpha)$.
  38. Polytopes-based Guarantees. With $B_\alpha = \{x : \|x\|_1 \leq \alpha\}$, $\alpha = \|x_0\|_1$: $x_0$ is a solution of $\mathcal{P}_0(\Phi x_0)$ $\iff$ $\Phi x_0 \in \partial \Phi(B_\alpha)$.
  39. L1 Recovery in 2-D. $\Phi = (\varphi_i)_i \in \mathbb{R}^{2 \times 3}$. Quadrant for a sign pattern $s$: $K_s = \{(\alpha_i s_i)_i \in \mathbb{R}^3 : \alpha_i \geq 0\}$; 2-D cones $C_s = \Phi K_s$ (e.g. $K_{(0,1,1)}$ and $C_{(0,1,1)}$).
  40. Polytope Noiseless Recovery. Counting faces of random polytopes [Donoho]: all $x_0$ such that $\|x_0\|_0 \leq C_{\mathrm{all}}(P/N)\, P$ are identifiable; most $x_0$ such that $\|x_0\|_0 \leq C_{\mathrm{most}}(P/N)\, P$ are identifiable, with $C_{\mathrm{all}}(1/4) \approx 0.065$ and $C_{\mathrm{most}}(1/4) \approx 0.25$. Sharp constants, but no noise robustness (contrast with RIP). [Figure: phase-transition curves.]
  41. Polytope Noiseless Recovery. Same face-counting guarantees [Donoho], plus computation of “pathological” signals [Dossal, Peyré, Fadili, 2010].
  42. Overview • Shannon’s World • Compressive Sensing Acquisition • Compressive Sensing Recovery • Theoretical Guarantees • Fourier Domain Measurements
  43. Tomography and Fourier Measures

  44. Tomography and Fourier Measures. Partial Fourier measurements: $Kf = (\hat f[\omega])_{\omega \in \Omega}$ with $\hat f = \mathrm{FFT}_2(f)$. Fourier slice theorem (1-D $\leftrightarrow$ 2-D Fourier): $\hat p_\theta(\rho) = \hat f(\rho \cos\theta, \rho \sin\theta)$, so this is equivalent to acquiring $K$ projections $\{p_{\theta_k}(t)\}_{t \in \mathbb{R}}$, $0 \leq k < K$.
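Partial Fourier measurements $Kf = (\hat f[\omega])_{\omega \in \Omega}$ are cheap to implement with the FFT; a minimal sketch (the random choice of $\Omega$ is illustrative, not a tomography sampling pattern):

```python
import numpy as np

# Partial 2-D Fourier measurements: compute the full FFT, then keep only
# the frequencies in a subset Omega of size P.
rng = np.random.default_rng(4)
n = 32
f = rng.standard_normal((n, n))
fhat = np.fft.fft2(f)                               # fhat = FFT2(f)
P = 100
Omega = rng.choice(n * n, size=P, replace=False)    # P sampled frequencies
y = fhat.ravel()[Omega]                             # the measurements Kf
```

Because the forward operator and its adjoint both cost one FFT, this is the fast, structured alternative to dense Gaussian matrices discussed later in the deck.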
  45. Regularized Inversion. Noisy measurements: $\forall \omega \in \Omega$, $y[\omega] = \hat f_0[\omega] + w[\omega]$, with white noise $w[\omega] \sim \mathcal{N}(0, \sigma^2)$. $\ell^1$ regularization: $f^\star = \mathrm{argmin}_f\ \frac{1}{2} \sum_{\omega \in \Omega} |y[\omega] - \hat f[\omega]|^2 + \lambda \sum_m |\langle f, \psi_m \rangle|$.
  46. MRI Imaging. From [Lustig et al.]
  47. MRI Reconstruction. Fourier sub-sampling pattern: randomization. [Figure: high-resolution vs. low-resolution vs. linear vs. sparsity reconstructions.] From [Lustig et al.]
  48. Radar Interferometry. Fourier sampling (via Earth’s rotation), linear reconstruction. CARMA (USA).

  49. Structured Measurements. Gaussian matrices: intractable for large $N$. Random partial orthogonal matrix: $\{\varphi_\omega\}_\omega$ an orthogonal basis, $Kf = (\langle \varphi_\omega, f \rangle)_{\omega \in \Omega}$ where $|\Omega| = P$ is drawn uniformly at random. Fast measurements (e.g. Fourier basis).
  50. Structured Measurements. Random partial orthogonal matrix $Kf = (\langle \varphi_\omega, f \rangle)_{\omega \in \Omega}$, $|\Omega| = P$ uniformly random. Mutual incoherence: $\mu = \sqrt{N} \max_{\omega, m} |\langle \varphi_\omega, \psi_m \rangle| \in [1, \sqrt{N}]$.
  51. Structured Measurements. Not universal: requires incoherence. Theorem [Rudelson, Vershynin, 2006]: with high probability on $\Omega$, if $P \geq C\, M\, \mu^2 \log(N)^4$, then $\delta_{2M} \leq \sqrt{2} - 1$ for $\Phi = K \Psi$.
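For the Fourier measurement basis paired with the Dirac sparsity basis, the mutual incoherence $\mu = \sqrt{N} \max_{\omega, m} |\langle \varphi_\omega, \psi_m \rangle|$ attains its best possible value $\mu = 1$, since every entry of the orthonormal DFT matrix has modulus $1/\sqrt{N}$; a quick numerical check:

```python
import numpy as np

# Orthonormal DFT matrix: rows are the Fourier measurement vectors phi_w,
# and the Dirac sparsity basis psi_m is the canonical basis, so the inner
# products <phi_w, psi_m> are just the matrix entries.
N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # orthonormal DFT matrix
mu = np.sqrt(N) * np.abs(F).max()        # mutual incoherence, here = 1
```

This maximal incoherence is what makes the Rudelson-Vershynin bound on the slide most favorable for Fourier sampling of spike-like signals.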
  52. Conclusion. Sparsity: approximate signals with few atoms from a dictionary.

  53. Conclusion. Sparsity: approximate signals with few atoms. Compressed sensing ideas: randomized sensors + sparse recovery; number of measurements $\sim$ signal complexity; CS is about designing new hardware.
  54. Conclusion. Compressed sensing ideas (continued): the devil is in the constants; worst-case analysis is problematic; designing good signal models.
  55. Some Hot Topics. Dictionary learning: learning the dictionary $\Psi$ from exemplars. [Figure: dictionaries of 256 atoms learned on a database of natural images, from Mairal et al., “Sparse Representation for Color Image Restoration”.]
  56. Some Hot Topics. Dictionary learning. Analysis vs. synthesis: synthesis prior $J_s(f) = \min_{f = \Psi x} \|x\|_1$ (image $f = \Psi x$, coefficients $x$).
  57. Some Hot Topics. Dictionary learning. Analysis vs. synthesis: synthesis prior $J_s(f) = \min_{f = \Psi x} \|x\|_1$; analysis prior $J_a(f) = \|D^* f\|_1$ (correlations $c = D^* f$).
  58. Some Hot Topics. Analysis vs. synthesis priors $J_s$, $J_a$. Other sparse priors: $|x_1| + |x_2|$, $\max(|x_1|, |x_2|)$.
  59. Some Hot Topics. Other sparse priors: $|x_1| + |x_2|$, $\max(|x_1|, |x_2|)$, block sparsity $|x_1| + (x_2^2 + x_3^2)^{1/2}$.
  60. Some Hot Topics. Other sparse priors: $|x_1| + |x_2|$, $\max(|x_1|, |x_2|)$, block sparsity $|x_1| + (x_2^2 + x_3^2)^{1/2}$, nuclear norm.