Slide 1

Compressive Sensing
Gabriel Peyré
www.numerical-tours.com

Slide 2

Overview
• Shannon's World
• Compressive Sensing Acquisition
• Compressive Sensing Recovery
• Theoretical Guarantees
• Fourier Domain Measurements

Slide 3

Sampling: idealization of the acquisition device.
Input f̃ ∈ L²([0,1]^d); discretization f ∈ ℝ^N with f[n] ≈ f̃(n/N).

Slide 4

Pointwise Sampling and Smoothness
Data acquisition: sensors. f̃ ∈ L², f ∈ ℝ^N, f[i] = f̃(i/N).

Slide 5

Pointwise Sampling and Smoothness
Data acquisition: sensors. f̃ ∈ L², f ∈ ℝ^N, f[i] = f̃(i/N).
Shannon interpolation: if the Fourier transform of f̃ is supported in [−Nπ, Nπ], then f̃(t) = Σ_i f[i] h(Nt − i), with h(t) = sin(πt)/(πt).

Slide 6

Pointwise Sampling and Smoothness
Data acquisition: sensors. f̃ ∈ L², f ∈ ℝ^N, f[i] = f̃(i/N).
Shannon interpolation: if the Fourier transform of f̃ is supported in [−Nπ, Nπ], then f̃(t) = Σ_i f[i] h(Nt − i), with h(t) = sin(πt)/(πt).
Natural images are not smooth.

Slide 7

Pointwise Sampling and Smoothness
Data acquisition: sensors. f̃ ∈ L², f ∈ ℝ^N, f[i] = f̃(i/N).
Shannon interpolation: if the Fourier transform of f̃ is supported in [−Nπ, Nπ], then f̃(t) = Σ_i f[i] h(Nt − i), with h(t) = sin(πt)/(πt).
Natural images are not smooth, but they can be compressed efficiently (JPEG-2000: 0,1,0,...).
Sample and compress simultaneously?
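A minimal numerical sketch of the Shannon interpolation formula above (the test signal and sizes are illustrative): reconstruct an off-grid value of a bandlimited signal from its N pointwise samples, using that numpy's `sinc` is exactly the kernel h.

```python
import numpy as np

# Shannon interpolation sketch: f~(t) = sum_i f[i] h(N t - i), h(t) = sin(pi t)/(pi t)
N = 64
i = np.arange(N)
f_tilde = lambda t: np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
f = f_tilde(i / N)                         # pointwise samples f[i] = f~(i/N)

def shannon_interp(t):
    # np.sinc(x) = sin(pi x)/(pi x), i.e. the kernel h of the slide
    return np.sum(f * np.sinc(N * t - i))

t0 = 0.3141                                # an off-grid point
err = abs(shannon_interp(t0) - f_tilde(t0))  # small: only truncation error remains
```

The error is not exactly zero because the infinite sum is truncated to the N available samples.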

Slide 8

Sampling and Periodization
[figure: panels (a)-(d)]

Slide 9

Sampling and Periodization: Aliasing
[figure: panels (a)-(d)]
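A small sketch of the aliasing phenomenon (parameters are illustrative): on a grid of N samples, the frequencies k and N − k produce identical samples, so the high frequency folds onto the low one.

```python
import numpy as np

N = 16
t = np.arange(N) / N
k = 3
high = np.cos(2 * np.pi * (N - k) * t)   # frequency N - k = 13, above Nyquist N/2
low = np.cos(2 * np.pi * k * t)          # aliased frequency k = 3
alias_gap = np.max(np.abs(high - low))   # ~ 0: the samples cannot tell them apart
```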

Slide 10

Overview
• Shannon's World
• Compressive Sensing Acquisition
• Compressive Sensing Recovery
• Theoretical Guarantees
• Fourier Domain Measurements

Slide 11

Single Pixel Camera (Rice)
Input scene f̃.

Slide 12

Single Pixel Camera (Rice)
N micro-mirrors; P measurements y[i] = ⟨f, φ_i⟩.

Slide 13

Single Pixel Camera (Rice)
N micro-mirrors; P measurements y[i] = ⟨f, φ_i⟩.
Reconstructions for P/N = 1, P/N = 0.16, P/N = 0.02.
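A toy simulation of single-pixel-style acquisition (sizes and the ±1 pattern choice are illustrative, not the actual Rice hardware): each measurement correlates the vectorized scene with one random mirror pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32 * 32                    # micro-mirror array resolution
P = int(0.16 * N)              # P/N = 0.16 measurements, as on the slide
f = rng.standard_normal(N)     # stand-in for the (vectorized) scene f
Phi = rng.choice([-1.0, 1.0], size=(P, N)) / np.sqrt(P)  # random mirror patterns
y = Phi @ f                    # y[i] = <f, phi_i>
```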

Slide 14

CS Hardware Model
CS is about designing hardware: input signals f̃ ∈ L²(ℝ²).
Physical hardware resolution limit: target resolution f ∈ ℝ^N.
Pipeline: f̃ ∈ L² → f ∈ ℝ^N (micro-mirror array resolution) → y ∈ ℝ^P (CS hardware).

Slide 15

CS Hardware Model
CS is about designing hardware: input signals f̃ ∈ L²(ℝ²).
Physical hardware resolution limit: target resolution f ∈ ℝ^N.
Pipeline: f̃ ∈ L² → f ∈ ℝ^N (micro-mirror array resolution) → y ∈ ℝ^P (CS hardware).
[figure: example measurement patterns]

Slide 16

CS Hardware Model
CS is about designing hardware: input signals f̃ ∈ L²(ℝ²).
Physical hardware resolution limit: target resolution f ∈ ℝ^N.
Pipeline: f̃ ∈ L² → f ∈ ℝ^N (micro-mirror array resolution) → y ∈ ℝ^P (CS hardware).
Measurement operator K: f ↦ y.

Slide 17

Overview
• Shannon's World
• Compressive Sensing Acquisition
• Compressive Sensing Recovery
• Theoretical Guarantees
• Fourier Domain Measurements

Slide 18

Inversion and Sparsity
Need to solve y = Kf: more unknowns than equations; dim(ker K) = N − P is huge.

Slide 19

Inversion and Sparsity
Need to solve y = Kf: more unknowns than equations; dim(ker K) = N − P is huge.
Prior information: f is sparse in a basis {ψ_m}_m, i.e. J_ε(f) = Card{m : |⟨f, ψ_m⟩| > ε} is small.
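A quick numerical check of the underdetermination (the sizes are arbitrary): a random P × N matrix has rank P almost surely, so its kernel has dimension N − P.

```python
import numpy as np

rng = np.random.default_rng(7)
N, P = 50, 20
K = rng.standard_normal((P, N))      # generic measurement matrix
rank = np.linalg.matrix_rank(K)      # = P with probability 1
ker_dim = N - rank                   # = N - P: a huge space of candidate solutions
```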

Slide 20

Convex Relaxation: L1 Prior
Image with 2 pixels: J_0(f) = #{m : ⟨f, ψ_m⟩ ≠ 0}.
J_0(f) = 0: null image; J_0(f) = 1: sparse image; J_0(f) = 2: non-sparse image.

Slide 21

Convex Relaxation: L1 Prior
Image with 2 pixels: J_0(f) = #{m : ⟨f, ψ_m⟩ ≠ 0}.
J_0(f) = 0: null image; J_0(f) = 1: sparse image; J_0(f) = 2: non-sparse image.
ℓ_q priors: J_q(f) = Σ_m |⟨f, ψ_m⟩|^q (convex for q ≥ 1); unit balls for q = 0, 1/2, 1, 3/2, 2.

Slide 22

Convex Relaxation: L1 Prior
Image with 2 pixels: J_0(f) = #{m : ⟨f, ψ_m⟩ ≠ 0}.
J_0(f) = 0: null image; J_0(f) = 1: sparse image; J_0(f) = 2: non-sparse image.
ℓ_q priors: J_q(f) = Σ_m |⟨f, ψ_m⟩|^q (convex for q ≥ 1); unit balls for q = 0, 1/2, 1, 3/2, 2.
Sparse ℓ¹ prior: J_1(f) = Σ_m |⟨f, ψ_m⟩|.
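A tiny check of the convexity claim (the two test points are arbitrary): the midpoint inequality holds for q = 1 and fails for q = 1/2, which witnesses the non-convexity of J_{1/2}.

```python
import numpy as np

Jq = lambda x, q: np.sum(np.abs(x) ** q)   # J_q(x) = sum_m |x_m|^q

a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = (a + b) / 2
# consistent with convexity at q = 1: 1 <= (1 + 1)/2
convex_l1 = Jq(mid, 1.0) <= (Jq(a, 1.0) + Jq(b, 1.0)) / 2
# violated at q = 1/2: 2*sqrt(1/2) ~ 1.41 > 1, so J_{1/2} is not convex
convex_half = Jq(mid, 0.5) <= (Jq(a, 0.5) + Jq(b, 0.5)) / 2
```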

Slide 23

Sparse CS Recovery
f_0 ∈ ℝ^N sparse in an ortho-basis Ψ: f_0 = Ψ x_0, coefficients x_0 ∈ ℝ^N.

Slide 24

Sparse CS Recovery
f_0 ∈ ℝ^N sparse in an ortho-basis Ψ: f_0 = Ψ x_0, coefficients x_0 ∈ ℝ^N.
(Discretized) sampling acquisition: y = K f_0 + w = K Ψ(x_0) + w = Φ x_0 + w.

Slide 25

Sparse CS Recovery
f_0 ∈ ℝ^N sparse in an ortho-basis Ψ: f_0 = Ψ x_0, coefficients x_0 ∈ ℝ^N.
(Discretized) sampling acquisition: y = K f_0 + w = K Ψ(x_0) + w = Φ x_0 + w.
Φ = KΨ drawn from the Gaussian matrix ensemble: Φ_{i,j} ~ N(0, P^{−1/2}) i.i.d.

Slide 26

Sparse CS Recovery
f_0 ∈ ℝ^N sparse in an ortho-basis Ψ: f_0 = Ψ x_0, coefficients x_0 ∈ ℝ^N.
(Discretized) sampling acquisition: y = K f_0 + w = K Ψ(x_0) + w = Φ x_0 + w.
Φ = KΨ drawn from the Gaussian matrix ensemble: Φ_{i,j} ~ N(0, P^{−1/2}) i.i.d.
Sparse recovery: min_{||Φx − y|| ≤ ε} ||x||_1 with ε ~ ||w||, or min_x (1/2)||Φx − y||² + λ||x||_1 with λ ~ ||w||.
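A sketch of the Lagrangian recovery problem solved with ISTA (iterative soft thresholding), one standard solver for this ℓ¹ program; the dimensions, sparsity level and λ below are illustrative choices, not prescribed by the slides.

```python
import numpy as np

# min_x 0.5 ||Phi x - y||^2 + lam ||x||_1, Phi Gaussian with entries N(0, 1/sqrt(P))-scaled
rng = np.random.default_rng(1)
N, P, k = 200, 80, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
support = rng.choice(N, k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], k)   # k-sparse signal with +/-1 entries
y = Phi @ x0                               # noiseless measurements

lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the data-term gradient
x = np.zeros(N)
for _ in range(2000):
    x = x - (Phi.T @ (Phi @ x - y)) / L    # gradient step on 0.5||Phi x - y||^2
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold (prox of lam||.||_1)

rec_err = np.linalg.norm(x - x0)           # small: support and signs are recovered
```

With P = 80 Gaussian measurements of a 5-sparse vector in dimension 200, this is comfortably inside the recovery regime, so the iterates converge to a slightly biased copy of x_0.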

Slide 27

CS Simulation Example
Ψ = translation-invariant wavelet frame. Original f_0.

Slide 28

Overview
• Shannon's World
• Compressive Sensing Acquisition
• Compressive Sensing Recovery
• Theoretical Guarantees
• Fourier Domain Measurements

Slide 29

CS with RIP
Restricted Isometry Constants: for all x with ||x||_0 ≤ k, (1 − δ_k)||x||² ≤ ||Φx||² ≤ (1 + δ_k)||x||².
ℓ¹ recovery: x* ∈ argmin_{||Φx − y|| ≤ ε} ||x||_1, where y = Φ x_0 + w, ||w|| ≤ ε.

Slide 30

CS with RIP
Restricted Isometry Constants: for all x with ||x||_0 ≤ k, (1 − δ_k)||x||² ≤ ||Φx||² ≤ (1 + δ_k)||x||².
ℓ¹ recovery: x* ∈ argmin_{||Φx − y|| ≤ ε} ||x||_1, where y = Φ x_0 + w, ||w|| ≤ ε.
Theorem [Candès 2009]: if δ_2k ≤ √2 − 1, then ||x_0 − x*|| ≤ (C_0/√k) ||x_0 − x_k||_1 + C_1 ε, where x_k is the best k-term approximation of x_0.

Slide 31

Singular Values Distributions
Eigenvalues of Φ_I* Φ_I with |I| = k are essentially in [a, b], where a = (1 − √β)², b = (1 + √β)², β = k/P.
When k = βP → +∞, the eigenvalue distribution tends to f(λ) = (1/(2πβλ)) √((b − λ)_+ (λ − a)_+) [Marcenko-Pastur].
Large deviation inequality [Ledoux].
[figure: empirical spectra vs f(λ) for P = 200, k = 10, 30, 50]
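A Monte-Carlo sketch of the Marcenko-Pastur prediction (trial counts and the slack are illustrative): for Gaussian Φ with entries of variance 1/P, the eigenvalues of Φ_I* Φ_I concentrate in [a, b].

```python
import numpy as np

rng = np.random.default_rng(2)
P, k = 200, 10
beta = k / P
a, b = (1 - np.sqrt(beta)) ** 2, (1 + np.sqrt(beta)) ** 2   # edges of the MP support

eigs = []
for _ in range(100):
    Phi_I = rng.standard_normal((P, k)) / np.sqrt(P)        # a random k-column submatrix
    eigs.extend(np.linalg.eigvalsh(Phi_I.T @ Phi_I))
eigs = np.array(eigs)
# fraction of eigenvalues inside [a, b], with finite-size slack
inside = np.mean((eigs > a - 0.25) & (eigs < b + 0.25))
```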

Slide 32

RIP for Gaussian Matrices
Mutual coherence: µ(Φ) = max_{i≠j} |⟨φ_i, φ_j⟩|.
Link with coherence: δ_k ≤ (k − 1) µ(Φ); in particular δ_2 = µ(Φ).

Slide 33

RIP for Gaussian Matrices
Mutual coherence: µ(Φ) = max_{i≠j} |⟨φ_i, φ_j⟩|.
Link with coherence: δ_k ≤ (k − 1) µ(Φ); in particular δ_2 = µ(Φ).
For Gaussian matrices: µ(Φ) ~ √(log(PN)/P).

Slide 34

RIP for Gaussian Matrices
Mutual coherence: µ(Φ) = max_{i≠j} |⟨φ_i, φ_j⟩|.
Link with coherence: δ_k ≤ (k − 1) µ(Φ); in particular δ_2 = µ(Φ).
For Gaussian matrices: µ(Φ) ~ √(log(PN)/P).
Stronger result. Theorem: if k ≤ C P / log(N/P), then δ_2k ≤ √2 − 1 with high probability.
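A numerical sanity check of the coherence scaling (sizes and the tolerance band are illustrative): the measured µ(Φ) of a column-normalized Gaussian matrix stays within a small constant factor of √(log(PN)/P).

```python
import numpy as np

rng = np.random.default_rng(3)
P, N = 400, 1000
Phi = rng.standard_normal((P, N))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm columns
G = np.abs(Phi.T @ Phi)                    # |<phi_i, phi_j>| for all pairs
np.fill_diagonal(G, 0.0)
mu = G.max()                               # mutual coherence mu(Phi)
predicted = np.sqrt(np.log(P * N) / P)     # the scaling from the slide
ratio = mu / predicted                     # O(1): same order of magnitude
```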

Slide 35

Numerics with RIP
Stability constants of A: (1 − δ_1(A))||α||² ≤ ||Aα||² ≤ (1 + δ_2(A))||α||², given by the smallest/largest eigenvalues of A*A.

Slide 36

Numerics with RIP
Stability constants of A: (1 − δ_1(A))||α||² ≤ ||Aα||² ≤ (1 + δ_2(A))||α||², given by the smallest/largest eigenvalues of A*A.
Upper/lower RICs: δ_k^i = max_{|I|=k} δ_i(Φ_I); δ_k = min(δ_k^1, δ_k^2).
Monte-Carlo estimation: δ̂_k ≤ δ_k.
[figure: estimates δ̂_k^1, δ̂_k^2 as functions of k, for N = 4000, P = 1000]
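A sketch of the Monte-Carlo RIC estimate (sizes are smaller than the slide's N = 4000, P = 1000 to keep it quick): sample random supports I of size k and track the extreme eigenvalues of Φ_I* Φ_I. Since only a few of the C(N,k) supports are visited, the estimate lower-bounds the true constants.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, k = 400, 100, 10
Phi = rng.standard_normal((P, N)) / np.sqrt(P)

d_lo, d_hi = 0.0, 0.0
for _ in range(200):
    I = rng.choice(N, k, replace=False)
    e = np.linalg.eigvalsh(Phi[:, I].T @ Phi[:, I])
    d_lo = max(d_lo, 1.0 - e[0])     # worst lower-isometry defect seen so far
    d_hi = max(d_hi, e[-1] - 1.0)    # worst upper-isometry defect seen so far
delta_hat = max(d_lo, d_hi)          # Monte-Carlo estimate (a lower bound on delta_k)
```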

Slide 37

Polytopes-based Guarantees
Noiseless recovery: x* ∈ argmin_{Φx = y} ||x||_1 (P_0(y)).
Example: Φ = (φ_i)_i ∈ ℝ^{2×3}; ℓ¹ ball B_α = {x : ||x||_1 ≤ α} with α = ||x_0||_1; projected polytope Φ(B_α).

Slide 38

Polytopes-based Guarantees
Noiseless recovery: x* ∈ argmin_{Φx = y} ||x||_1 (P_0(y)).
Example: Φ = (φ_i)_i ∈ ℝ^{2×3}; ℓ¹ ball B_α = {x : ||x||_1 ≤ α} with α = ||x_0||_1; projected polytope Φ(B_α).
x_0 is a solution of P_0(Φ x_0) ⟺ Φ x_0 ∈ ∂Φ(B_α), i.e. it lies on the boundary of the projected polytope.

Slide 39

L1 Recovery in 2-D
Φ = (φ_i)_i ∈ ℝ^{2×3}. For a sign pattern s: 2-D quadrants K_s = {(α_i s_i)_i ∈ ℝ³ : α_i ≥ 0}; 2-D cones C_s = Φ K_s (e.g. K_(0,1,1) and C_(0,1,1)).

Slide 40

Polytope Noiseless Recovery
Counting faces of random polytopes [Donoho]:
All x_0 such that ||x_0||_0 ≤ C_all(P/N) P are identifiable; C_all(1/4) ≈ 0.065.
Most x_0 such that ||x_0||_0 ≤ C_most(P/N) P are identifiable; C_most(1/4) ≈ 0.25.
Versus RIP: sharp constants, but no noise robustness.
[figure: all/most phase-transition curves]

Slide 41

Polytope Noiseless Recovery
Counting faces of random polytopes [Donoho]:
All x_0 such that ||x_0||_0 ≤ C_all(P/N) P are identifiable; C_all(1/4) ≈ 0.065.
Most x_0 such that ||x_0||_0 ≤ C_most(P/N) P are identifiable; C_most(1/4) ≈ 0.25.
Versus RIP: sharp constants, but no noise robustness.
Computation of "pathological" signals [Dossal, Peyré, Fadili, 2010].
[figure: all/most phase-transition curves]

Slide 42

Overview
• Shannon's World
• Compressive Sensing Acquisition
• Compressive Sensing Recovery
• Theoretical Guarantees
• Fourier Domain Measurements

Slide 43

Tomography and Fourier Measures

Slide 44

Tomography and Fourier Measures
Fourier slice theorem: p̂_θ(ρ) = f̂(ρ cos(θ), ρ sin(θ)), i.e. the 1-D Fourier transform of a projection is a slice of the 2-D Fourier transform.
Partial Fourier measurements: Kf = (f̂[ω])_{ω∈Ω}, with f̂ = FFT2(f).
Equivalent to acquiring the projections {p_{θ_k}(t)}_{t ∈ ℝ, 0 ≤ k < K}.
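A minimal sketch of the partial Fourier measurement operator (the image is a random stand-in and the sampling set is uniform, whereas tomography would sample along radial lines): Kf keeps the 2-D FFT coefficients of f on a set Ω of P frequencies.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 32
f = rng.standard_normal((n, n))              # stand-in image
F = np.fft.fft2(f)                           # f_hat = FFT2(f)
P = n * n // 8                               # number of kept frequencies
Omega = rng.choice(n * n, P, replace=False)  # frequency index set, |Omega| = P
y = F.ravel()[Omega]                         # Kf = (f_hat[w])_{w in Omega}
```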

Slide 45

Regularized Inversion
Noisy measurements: for all ω ∈ Ω, y[ω] = f̂_0[ω] + w[ω], with white noise w[ω] ~ N(0, σ²).
ℓ¹ regularization: f* = argmin_f (1/2) Σ_{ω∈Ω} |y[ω] − f̂[ω]|² + λ Σ_m |⟨f, ψ_m⟩|.

Slide 46

MRI Imaging
From [Lustig et al.]

Slide 47

MRI Reconstruction
Fourier sub-sampling pattern: randomization.
High resolution vs low resolution; linear vs sparsity-based reconstructions. From [Lustig et al.]

Slide 48

Radio Interferometry
Fourier sampling (via the Earth's rotation); linear reconstruction. CARMA (USA).

Slide 49

Structured Measurements
Gaussian matrices: intractable for large N.
Fast measurements: random partial orthogonal matrix, with {φ_ω}_ω an orthogonal basis (e.g. the Fourier basis): Kf = (⟨φ_ω, f⟩)_{ω∈Ω}, where |Ω| = P is drawn uniformly at random.

Slide 50

Structured Measurements
Gaussian matrices: intractable for large N.
Fast measurements: random partial orthogonal matrix, with {φ_ω}_ω an orthogonal basis (e.g. the Fourier basis): Kf = (⟨φ_ω, f⟩)_{ω∈Ω}, where |Ω| = P is drawn uniformly at random.
Mutual incoherence: µ = √N max_{ω,m} |⟨φ_ω, ψ_m⟩| ∈ [1, √N].

Slide 51

Structured Measurements
Gaussian matrices: intractable for large N.
Fast measurements: random partial orthogonal matrix, with {φ_ω}_ω an orthogonal basis (e.g. the Fourier basis): Kf = (⟨φ_ω, f⟩)_{ω∈Ω}, where |Ω| = P is drawn uniformly at random.
Mutual incoherence: µ = √N max_{ω,m} |⟨φ_ω, ψ_m⟩| ∈ [1, √N].
Theorem [Rudelson, Vershynin, 2006]: with high probability on Ω, if M ≤ C P / (µ² log(N)⁴), then δ_2M ≤ √2 − 1, where Φ = KΨ.
Not universal: requires incoherence.
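A small numerical illustration of the incoherence µ (the basis pair is the standard textbook example): for the unitary DFT basis against the canonical (Dirac) basis, every inner product has magnitude 1/√N, so µ = 1, the maximally incoherent case.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # orthonormal DFT basis, rows phi_w
Psi = np.eye(N)                           # canonical (Dirac) sparsity basis
# mu = sqrt(N) * max |<phi_w, psi_m>| = sqrt(N) * (1/sqrt(N)) = 1
mu = np.sqrt(N) * np.abs(F @ Psi).max()
```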

Slide 52

Conclusion
Sparsity: approximate signals with few atoms from a dictionary.

Slide 53

Conclusion
Sparsity: approximate signals with few atoms from a dictionary.
Compressed sensing ideas: randomized sensors + sparse recovery; number of measurements ~ signal complexity; CS is about designing new hardware.

Slide 54

Conclusion
Sparsity: approximate signals with few atoms from a dictionary.
Compressed sensing ideas: randomized sensors + sparse recovery; number of measurements ~ signal complexity; CS is about designing new hardware.
The devil is in the constants: worst-case analysis is problematic; designing good signal models.

Slide 55

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
[figures from Mairal et al., "Sparse Representation for Color Image Restoration": dictionaries of 256 atoms learned on natural color image patches; denoising comparisons]

Slide 56

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
Analysis vs. synthesis. Synthesis prior: J_s(f) = min_{f = Ψx} ||x||_1 (image f = Ψx, coefficients x).
[figures from Mairal et al., "Sparse Representation for Color Image Restoration"]

Slide 57

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
Analysis vs. synthesis. Synthesis prior: J_s(f) = min_{f = Ψx} ||x||_1 (image f = Ψx, coefficients x). Analysis prior: J_a(f) = ||D* f||_1 (correlations c = D* f).
[figures from Mairal et al., "Sparse Representation for Color Image Restoration"]
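A small numerical check of a classical special case (the random orthobasis is illustrative): when the dictionary is orthonormal, the synthesis and analysis priors coincide, since x = Ψ*f is the unique representation f = Ψx.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 16
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthonormal dictionary Psi = Q
f = rng.standard_normal(N)

Ja = np.abs(Q.T @ f).sum()   # analysis prior: ||Psi* f||_1
x = Q.T @ f                  # the only x with f = Q x (Q is orthonormal)
Js = np.abs(x).sum()         # synthesis prior: min over representations reduces to this
gap = abs(Ja - Js)           # ~ 0: the two priors agree for orthobases
```

For redundant dictionaries the two priors genuinely differ, which is what makes the comparison a "hot topic".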

Slide 58

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
Analysis vs. synthesis. Synthesis prior: J_s(f) = min_{f = Ψx} ||x||_1 (image f = Ψx, coefficients x). Analysis prior: J_a(f) = ||D* f||_1 (correlations c = D* f).
Other sparse priors: |x_1| + |x_2|; max(|x_1|, |x_2|).
[figures from Mairal et al., "Sparse Representation for Color Image Restoration"]

Slide 59

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
Analysis vs. synthesis. Synthesis prior: J_s(f) = min_{f = Ψx} ||x||_1 (image f = Ψx, coefficients x). Analysis prior: J_a(f) = ||D* f||_1 (correlations c = D* f).
Other sparse priors: |x_1| + |x_2|; max(|x_1|, |x_2|); |x_1| + (x_2² + x_3²)^{1/2} (group sparsity).
[figures from Mairal et al., "Sparse Representation for Color Image Restoration"]

Slide 60

Some Hot Topics
Dictionary learning: learning the dictionary from exemplars.
Analysis vs. synthesis. Synthesis prior: J_s(f) = min_{f = Ψx} ||x||_1 (image f = Ψx, coefficients x). Analysis prior: J_a(f) = ||D* f||_1 (correlations c = D* f).
Other sparse priors: |x_1| + |x_2|; max(|x_1|, |x_2|); |x_1| + (x_2² + x_3²)^{1/2} (group sparsity); nuclear norm (low-rank matrices).
[figures from Mairal et al., "Sparse Representation for Color Image Restoration"]