
Signal Processing Course: Compressed Sensing

Gabriel Peyré
January 01, 2012

Transcript

  1. Pointwise Sampling and Smoothness — Data acquisition (sensors): $\tilde f \in L^2$, $f \in \mathbb{R}^N$, pointwise samples $f[i] = \tilde f(i/N)$. Shannon interpolation: $\tilde f(t) = \sum_i f[i]\, h(Nt - i)$ if $\mathrm{Supp}(\hat{\tilde f}) \subset [-N\pi, N\pi]$, where $h(t) = \sin(\pi t)/(\pi t)$.
  2. Same slide, adding: natural images are not smooth.
  3. Same slide, adding: but they can be compressed efficiently (e.g. JPEG-2000 keeps few nonzero coefficients: 0, 1, 0, ...). Sample and compress simultaneously?
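[Editor's illustration: a minimal numpy sketch of the Shannon interpolation formula above; the function name, signal, and grid sizes are illustrative choices, not from the slides.]

```python
import numpy as np

def sinc_interpolate(samples, t):
    """Shannon interpolation: f_tilde(t) = sum_i f[i] * h(N*t - i),
    with h(u) = sin(pi*u)/(pi*u) and N samples taken on [0, 1)."""
    N = len(samples)
    i = np.arange(N)
    # np.sinc(u) = sin(pi*u)/(pi*u), matching h(t) above
    return np.sum(samples[None, :] * np.sinc(N * t[:, None] - i[None, :]), axis=1)

# Example: a band-limited signal sampled at rate N, then re-evaluated off the grid.
N = 64
grid = np.arange(N) / N
f = np.cos(2 * np.pi * 5 * grid)            # frequency 5 << N/2, so band-limited
t = np.linspace(0, 1, 1000, endpoint=False)
f_tilde = sinc_interpolate(f, t)
err = np.abs(f_tilde - np.cos(2 * np.pi * 5 * t))
# Error concentrates near the interval boundary, since the formula assumes samples on all of Z.
print(err.max(), err[200:800].max())
```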
  4. Single Pixel Camera (Rice) — $N$ micro-mirrors, $P$ measures $y[i] = \langle f, \varphi_i \rangle$. Reconstructions of $\tilde f$ shown for $P/N = 1$, $P/N = 0.16$, $P/N = 0.02$.
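[Editor's illustration: a toy simulation of single-pixel-style acquisition $y[i] = \langle f, \varphi_i \rangle$ with random 0/1 mirror patterns; the real Rice hardware uses a DMD and a single optical detector, and all sizes here are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32 * 32            # target resolution (number of micro-mirrors)
P = int(0.16 * N)      # number of measurements, here P/N = 0.16

f = rng.standard_normal(N)                     # stand-in for the vectorized image
patterns = rng.integers(0, 2, size=(P, N)).astype(float)  # random 0/1 mirror patterns
y = patterns @ f                               # y[i] = <f, phi_i>
print(y.shape)                                 # (P,) measurements
```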
  5. CS Hardware Model — CS is about designing hardware: input signals $\tilde f \in L^2(\mathbb{R}^2)$. Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$. Pipeline: $\tilde f \in L^2 \;\to\; f \in \mathbb{R}^N \;\to\; y \in \mathbb{R}^P$ (micro-mirrors array, resolution, CS hardware).
  6. Same slide, adding: examples of mirror patterns $\varphi_1, \varphi_2, \ldots$
  7. Same slide, adding: the acquisition is modeled by a linear operator $K$ applied to $f$.
  8. Inversion and Sparsity — Need to solve $y = Kf$: more unknowns than equations, and $\dim(\ker(K)) = N - P$ is huge.
  9. Same slide, adding the prior information: $f$ is sparse in a basis $\{\psi_m\}_m$, i.e. $J_0(f) = \mathrm{Card}\{m : |\langle f, \psi_m \rangle| > \varepsilon\}$ is small.
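[Editor's illustration: counting significant coefficients $J_0(f)$ in an orthogonal basis; the slides leave the basis $\psi_m$ generic, so the DCT (via scipy) and the test signal below are assumptions for illustration.]

```python
import numpy as np
from scipy.fft import dct   # assumed available; any orthogonal transform would do

def j0(coeffs, eps):
    """Sparsity measure J0(f) = Card{ m : |<f, psi_m>| > eps }."""
    return int(np.sum(np.abs(coeffs) > eps))

# A piecewise-smooth signal has few significant DCT coefficients.
f = np.concatenate([np.ones(100), np.linspace(1, 0, 100)])
x = dct(f, norm="ortho")                       # coefficients <f, psi_m>
print(j0(x, eps=1e-2 * np.abs(x).max()), "significant coefficients out of", len(x))
```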
  10. Convex Relaxation: L1 Prior — Image with 2 pixels: $J_0(f) = \#\{m : \langle f, \psi_m \rangle \ne 0\}$. $J_0(f) = 0$: null image; $J_0(f) = 1$: sparse image; $J_0(f) = 2$: non-sparse image.
  11. Same slide, adding the $\ell^q$ priors $J_q(f) = \sum_m |\langle f, \psi_m \rangle|^q$ (convex for $q \ge 1$); level sets shown for $q = 0, 1/2, 1, 3/2, 2$.
  12. Same slide, adding the sparse $\ell^1$ prior: $J_1(f) = \sum_m |\langle f, \psi_m \rangle|$.
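[Editor's illustration: evaluating $J_q$ on the two-pixel example; the specific vectors and the helper name are illustrative.]

```python
import numpy as np

def jq(x, q):
    """l^q prior J_q(f) = sum_m |<f, psi_m>|^q; counting measure J_0 when q = 0."""
    return np.sum(np.abs(x) ** q) if q > 0 else np.sum(x != 0)

x_sparse = np.array([1.0, 0.0])     # sparse 2-pixel image
x_dense  = np.array([0.5, 0.5])     # same l^1 energy, but not sparse
for q in [0, 0.5, 1, 2]:
    print(q, jq(x_sparse, q), jq(x_dense, q))
# For q < 1 the sparse vector is strictly favored, at q = 1 they tie,
# and for q = 2 the dense vector is favored: l^1 is the smallest convex q.
```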
  13. Sparse CS Recovery — (Discretized) sampling acquisition: $f_0 \in \mathbb{R}^N$ sparse in an ortho-basis $\Psi$, $y = Kf_0 + w = K\Psi(x_0) + w = \Phi x_0 + w$, with $x_0 \in \mathbb{R}^N$.
  14. Same slide, adding: $K$ is drawn from the Gaussian matrix ensemble, $K_{i,j} \sim \mathcal{N}(0, P^{-1/2})$ i.i.d.; then $\Phi = K\Psi$ is also drawn from the Gaussian matrix ensemble.
  15. Same slide, adding the sparse recovery programs: $\min_{\|\Phi x - y\| \le \|w\|} \|x\|_1$ and $\min_x \frac{1}{2}\|\Phi x - y\|^2 + \lambda \|x\|_1$.
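[Editor's illustration: a minimal sketch of the penalized recovery program $\min_x \frac12\|\Phi x - y\|^2 + \lambda\|x\|_1$ solved by iterative soft-thresholding (ISTA); the slides do not prescribe an algorithm, and the dimensions, $\lambda$, and iteration count here are arbitrary test values.]

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ x - y)            # gradient of the data-fidelity term
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

# Synthetic test: Gaussian Phi, k-sparse x0, small noise.
rng = np.random.default_rng(0)
N, P, k = 400, 100, 10
Phi = rng.standard_normal((P, N)) / np.sqrt(P)    # K_{i,j} ~ N(0, P^{-1/2}) i.i.d.
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0 + 0.01 * rng.standard_normal(P)
x_rec = ista(Phi, y, lam=0.02)
print(np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))   # relative recovery error
```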
  16. CS with RIP — Restricted Isometry Constants: for all $x$ with $\|x\|_0 \le k$, $(1 - \delta_k)\|x\|^2 \le \|\Phi x\|^2 \le (1 + \delta_k)\|x\|^2$. $\ell^1$ recovery: $x^\star \in \operatorname{argmin}_{\|\Phi x - y\| \le \varepsilon} \|x\|_1$, where $y = \Phi x_0 + w$ and $\|w\| \le \varepsilon$.
  17. Same slide, adding — Theorem [Candès 2009]: if $\delta_{2k} \le \sqrt{2} - 1$, then $\|x_0 - x^\star\| \le \frac{C_0}{\sqrt{k}} \|x_0 - x_k\|_1 + C_1 \varepsilon$, where $x_k$ is the best $k$-term approximation of $x_0$.
  18. Singular Values Distributions — Eigenvalues of $\Phi_I^* \Phi_I$ with $|I| = k$ are essentially in $[a, b]$, where $a = (1 - \sqrt{\beta})^2$, $b = (1 + \sqrt{\beta})^2$, $\beta = k/P$. When $k = \beta P \to +\infty$, the eigenvalue distribution tends to $f_\beta(\lambda) = \frac{1}{2\pi\beta\lambda}\sqrt{(b - \lambda)_+ (\lambda - a)_+}$ [Marcenko-Pastur]. Large deviation inequality [Ledoux]. [Figure: empirical eigenvalue histograms vs. $f_\beta(\lambda)$ for $P = 200$ and $k = 10, 30, 50$.]
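[Editor's illustration: empirically checking the Marcenko-Pastur support $[a, b]$ for random sub-matrices $\Phi_I$; $P$, $k$, the normalization, and the number of trials are illustrative.]

```python
import numpy as np

# Empirical eigenvalues of Phi_I^* Phi_I for Gaussian Phi with |I| = k, beta = k/P.
rng = np.random.default_rng(0)
P, k, trials = 200, 30, 200
eigs = []
for _ in range(trials):
    Phi_I = rng.standard_normal((P, k)) / np.sqrt(P)   # k columns, variance 1/P
    eigs.append(np.linalg.eigvalsh(Phi_I.T @ Phi_I))
eigs = np.concatenate(eigs)

beta = k / P
a, b = (1 - np.sqrt(beta)) ** 2, (1 + np.sqrt(beta)) ** 2
print("empirical range: [%.3f, %.3f]" % (eigs.min(), eigs.max()))
print("Marcenko-Pastur support: [%.3f, %.3f]" % (a, b))
```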
  19. RIP for Gaussian Matrices — Link with coherence: $\delta_2 = \mu(\Phi)$ and $\delta_k \le (k - 1)\,\mu(\Phi)$, where $\mu(\Phi) = \max_{i \ne j} |\langle \varphi_i, \varphi_j \rangle|$.
  20. Same slide, adding: for Gaussian matrices, $\mu(\Phi) \sim \sqrt{\log(PN)/P}$.
  21. Same slide, adding the stronger result — Theorem: if $k \le \frac{C P}{\log(N/P)}$, then $\delta_{2k} \le \sqrt{2} - 1$ with high probability.
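[Editor's illustration: computing the mutual coherence $\mu(\Phi)$ of a Gaussian matrix and comparing it to the $\sqrt{\log(PN)/P}$ scaling; column normalization and the sizes used are illustrative choices.]

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence mu(Phi) = max_{i != j} |<phi_i, phi_j>| (columns normalized)."""
    Phi = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
P, N = 200, 1000
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
print("mu                =", coherence(Phi))
print("sqrt(log(P*N)/P)  =", np.sqrt(np.log(P * N) / P))   # predicted order of magnitude
```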
  22. Numerics with RIP — Stability constants of $A$: $(1 - \delta_1(A))\|\alpha\|^2 \le \|A\alpha\|^2 \le (1 + \delta_2(A))\|\alpha\|^2$, given by the smallest/largest eigenvalues of $A^* A$.
  23. Same slide, adding the upper/lower RIC: $\delta_k^i = \max_{|I| = k} \delta_i(\Phi_I)$, $\delta_k = \min(\delta_k^1, \delta_k^2)$, with the Monte-Carlo estimation $\hat\delta_k \le \delta_k$. [Figure: $\hat\delta_k^1$, $\hat\delta_k^2$ for $N = 4000$, $P = 1000$.]
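[Editor's illustration: a minimal Monte-Carlo estimation of the lower/upper restricted isometry constants by sampling random supports $I$; the sample size is far too small to be a true bound, and all parameters are illustrative.]

```python
import numpy as np

def ric_estimate(Phi, k, trials=500, rng=None):
    """Monte-Carlo lower bounds on the lower/upper RIC delta_k^1, delta_k^2:
    sample random supports |I| = k and track extreme eigenvalues of Phi_I^* Phi_I."""
    if rng is None:
        rng = np.random.default_rng(0)
    N = Phi.shape[1]
    d1, d2 = 0.0, 0.0
    for _ in range(trials):
        I = rng.choice(N, k, replace=False)
        ev = np.linalg.eigvalsh(Phi[:, I].T @ Phi[:, I])
        d1 = max(d1, 1.0 - ev[0])     # how far the smallest eigenvalue drops below 1
        d2 = max(d2, ev[-1] - 1.0)    # how far the largest eigenvalue rises above 1
    return d1, d2

rng = np.random.default_rng(0)
P, N = 250, 1000
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
print(ric_estimate(Phi, k=10, rng=rng))   # underestimates the true constants
```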
  24. Polytopes-based Guarantees — Noiseless recovery: $x^\star \in \operatorname{argmin}_{\Phi x = y} \|x\|_1$ $(\mathcal{P}_0(y))$. Example: $\Phi = (\varphi_i)_i \in \mathbb{R}^{2 \times 3}$, $B_\alpha = \{x : \|x\|_1 \le \alpha\}$ with $\alpha = \|x_0\|_1$, and its image $\Phi(B_\alpha)$.
  25. Same slide, adding: $x_0$ is a solution of $\mathcal{P}_0(\Phi x_0)$ $\iff$ $\Phi x_0 \in \partial\,\Phi(B_\alpha)$.
  26. L1 Recovery in 2-D — $\Phi = (\varphi_i)_i \in \mathbb{R}^{2 \times 3}$, $y = \Phi x$. 2-D quadrants $K_s = \{(\alpha_i s_i)_i \in \mathbb{R}^3 : \alpha_i \ge 0\}$ and 2-D cones $C_s = \Phi K_s$ (e.g. $K_{(0,1,1)}$ and $C_{(0,1,1)}$).
  27. Polytope Noiseless Recovery — Counting faces of random polytopes [Donoho]: all $x_0$ such that $\|x_0\|_0 \le C_{\mathrm{all}}(P/N)\,P$ are identifiable; most $x_0$ such that $\|x_0\|_0 \le C_{\mathrm{most}}(P/N)\,P$ are identifiable; $C_{\mathrm{all}}(1/4) \approx 0.065$, $C_{\mathrm{most}}(1/4) \approx 0.25$. Compared to RIP: sharp constants, but no noise robustness. [Figure: "all" / "most" / RIP phase-transition curves.]
  28. Same slide, adding: computation of "pathological" signals [Dossal, Peyré, Fadili, 2010].
  29. Tomography and Fourier Measures — Fourier slice theorem (1-D vs. 2-D Fourier): $\hat p_\theta(\rho) = \hat f(\rho \cos\theta, \rho \sin\theta)$. Partial Fourier measurements: $Kf = (\hat f[\omega])_{\omega \in \Omega}$ with $\hat f = \mathrm{FFT2}(f)$, equivalent to acquiring the projections $\{p_{\theta_k}(t)\}_{t \in \mathbb{R},\ 0 \le k < K}$.
  30. Regularized Inversion — Noisy measurements: $\forall \omega \in \Omega$, $y[\omega] = \hat f_0[\omega] + w[\omega]$, with white noise $w[\omega] \sim \mathcal{N}(0, \sigma)$. $\ell^1$ regularization: $f^\star = \operatorname{argmin}_f \frac{1}{2} \sum_{\omega \in \Omega} |y[\omega] - \hat f[\omega]|^2 + \lambda \sum_m |\langle f, \psi_m \rangle|$.
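[Editor's illustration: a sketch of the partial 2-D Fourier measurement operator $Kf = (\hat f[\omega])_{\omega\in\Omega}$ and its adjoint, implemented as a mask on FFT2; the image, mask density, complex noise model, and noise level are illustrative, and the $\ell^1$ solve itself is not shown.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f0 = rng.standard_normal((n, n))          # stand-in for the image
Omega = rng.random((n, n)) < 0.3          # keep ~30% of the frequencies

def K(f):
    """Measured Fourier coefficients (f_hat[w])_{w in Omega}."""
    return np.fft.fft2(f, norm="ortho")[Omega]

def K_adj(y):
    """Adjoint of K: zero-fill the unmeasured frequencies, inverse FFT."""
    F = np.zeros((n, n), dtype=complex)
    F[Omega] = y
    return np.fft.ifft2(F, norm="ortho")

sigma = 0.05
y = K(f0) + sigma * (rng.standard_normal(Omega.sum()) + 1j * rng.standard_normal(Omega.sum()))
f_zero_fill = np.real(K_adj(y))           # naive reconstruction; the l^1 regularization
print(f_zero_fill.shape)                  # above would sharpen it via a sparsifying basis
```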
  31. Structured Measurements — Gaussian matrices: intractable for large $N$. Random partial orthogonal matrix: $\{\varphi_\omega\}_\omega$ an orthogonal basis, $Kf = (\langle \varphi_\omega, f \rangle)_{\omega \in \Omega}$ where $|\Omega| = P$ is drawn uniformly at random. Fast measurements (e.g. Fourier basis).
  32. Same slide, adding the mutual incoherence: $\mu = \sqrt{N} \max_{\omega, m} |\langle \varphi_\omega, \psi_m \rangle| \in [1, \sqrt{N}]$.
  33. Same slide, adding — Theorem [Rudelson, Vershynin, 2006]: with high probability on $\Omega$, if $M \le \frac{C P}{\mu^2 \log(N)^4}$, then $\delta_{2M} \le \sqrt{2} - 1$ for $\Phi = K\Psi$. Not universal: requires incoherence.
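[Editor's illustration: computing the mutual incoherence $\mu$ between the Fourier measurement basis and the Dirac (identity) sparsity basis; the choice of these two bases and the size $N$ are illustrative, the slides leave $\psi_m$ generic.]

```python
import numpy as np

N = 256
F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)      # orthonormal Fourier basis (columns)
Psi = np.eye(N)                                      # Dirac basis
mu = np.sqrt(N) * np.max(np.abs(F.conj().T @ Psi))
print(mu)   # = 1: Fourier/Dirac is maximally incoherent, the best case in [1, sqrt(N)]
```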
  34. Conclusion — Sparsity: approximate signals with few atoms from a dictionary $\Psi$. Compressed sensing ideas: CS is about designing new hardware; randomized sensors + sparse recovery; number of measurements $\sim$ signal complexity.
  35. Same slide, adding: the devil is in the constants; worst-case analysis is problematic; designing good signal models.
  36. Some Hot Topics — Dictionary learning: learning the dictionary $\Psi$ from data. [Figure from Mairal et al., "Sparse Representation for Color Image Restoration": dictionaries of 256 atoms learned on a generic database of natural images (5×5×3 and 8×8×3 patches), and color-image denoising comparisons.]
  37. Same slide, adding — analysis vs. synthesis: synthesis prior $J_s(f) = \min_{f = \Psi x} \|x\|_1$ (image $f = \Psi x$, coefficients $x$).
  38. Same slide, adding the analysis prior $J_a(f) = \|D^* f\|_1$ (correlations $c = D^* f$).
  39. Same slide, adding other sparse priors: $|x_1| + |x_2|$ and $\max(|x_1|, |x_2|)$.
  40. Same slide, adding the block-sparse prior $|x_1| + (x_2^2 + x_3^2)^{1/2}$.
  41. Same slide, adding the nuclear norm.
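[Editor's illustration: the block-sparse and nuclear-norm priors above are typically used through their proximal operators; here is a minimal sketch of both (block soft-thresholding and singular-value thresholding). The function names, grouping, and test values are illustrative, not from the slides.]

```python
import numpy as np

def prox_group_l1(x, groups, lam):
    """Block soft-thresholding: prox of lam * sum_g ||x_g||_2,
    e.g. the prior |x1| + (x2^2 + x3^2)^(1/2) with groups [[0], [1, 2]]."""
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= lam else (1 - lam / nrm) * x[g]
    return out

def prox_nuclear(X, lam):
    """Singular-value thresholding: prox of the nuclear norm lam * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

x = np.array([0.3, 1.0, -2.0])
print(prox_group_l1(x, groups=[[0], [1, 2]], lam=0.5))     # first block shrinks to 0
print(np.round(prox_nuclear(np.outer(x, x), lam=1.0), 3))  # rank-1 matrix, singular value shrunk
```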