Slide 1

Sparsity and Compressed Sensing
Gabriel Peyré
www.numerical-tours.com

Slides 2–6

Signals, Images and More

Slide 7

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Compressed Sensing

Slides 8–10

Orthogonal Decompositions

Continuous signal/image $f \in L^2([0,1]^d)$.
Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
$f = \sum_m \langle f, \psi_m \rangle \, \psi_m$,
$\|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$.
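A minimal numerical sketch of this decomposition (my addition, using a random orthogonal basis of $\mathbb{R}^N$ built by QR factorization): reconstruction from the coefficients is exact and the energy identity holds.

```python
import numpy as np

# Random orthogonal basis of R^N: the columns of Psi are orthonormal.
N = 64
rng = np.random.default_rng(0)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

f = rng.standard_normal(N)        # a discrete "signal" f
a = Psi.T @ f                     # coefficients a[m] = <f, psi_m>
f_rec = Psi @ a                   # f = sum_m <f, psi_m> psi_m

assert np.allclose(f, f_rec)                    # exact reconstruction
assert np.allclose(np.sum(f**2), np.sum(a**2))  # ||f||^2 = sum_m |<f, psi_m>|^2
```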

Slides 11–12

1-D Wavelet Basis

Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$.
Position $n$, scale $2^j$, index $m = (n, j)$.
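As a concrete illustration (my addition, assuming the PyWavelets package), a discrete wavelet decomposition of a piecewise-smooth signal concentrates almost all the energy in a few coefficients:

```python
import numpy as np
import pywt  # PyWavelets

# Piecewise-smooth test signal: its wavelet coefficients are mostly negligible.
N = 1024
x = np.linspace(0, 1, N)
f = np.sin(8 * np.pi * x) * (x < 0.5) + (x >= 0.5)

coeffs = pywt.wavedec(f, 'db4', level=5)   # coefficients <f, psi_{j,n}> per scale
a = np.concatenate(coeffs)
print("fraction above 1% of max:", np.mean(np.abs(a) > 0.01 * np.abs(a).max()))
```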

Slides 13–14

2-D Fourier Basis

Basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$.
Tensor product: $\psi_{m_1,m_2}(x_1, x_2) = \psi_{m_1}(x_1) \, \psi_{m_2}(x_2)$,
giving a basis $\{\psi_{m_1,m_2}(x_1, x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$.
Fourier transform: $f(x) \mapsto \langle f, \psi_{m_1,m_2} \rangle$.
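The tensor-product structure is what makes the 2-D transform cheap: transforming the rows and then the columns equals the full 2-D transform. A quick numpy check (my addition):

```python
import numpy as np

# Tensor product: 1-D FFT along one axis, then the other, is the 2-D FFT.
rng = np.random.default_rng(1)
f = rng.standard_normal((32, 32))
assert np.allclose(np.fft.fft(np.fft.fft(f, axis=0), axis=1), np.fft.fft2(f))
```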

Slides 15–16

2-D Wavelet Basis

3 elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$.
Orthogonal basis of $L^2([0,1]^2)$:
$\left\{\psi^k_{j,n}(x) = 2^{-j} \psi^k(2^{-j} x - n)\right\}_{k = H,V,D,\; j < 0,\; 2^j n \in [0,1]^2}$.

Slide 17

Example of Wavelet Decomposition

Wavelet transform: $f(x) \mapsto \langle f, \psi^k_{j,n} \rangle$, indexed by $(j, n, k)$.

Slides 18–20

Discrete Computations

Discrete orthogonal basis $\{\psi_m\}$ of $\mathbb{C}^N$:
$f = \sum_m \langle f, \psi_m \rangle \, \psi_m$.
Fourier basis: $\psi_m[n] = \frac{1}{\sqrt{N}} e^{\frac{2i\pi}{N} nm}$;
Fast Fourier Transform (FFT), $O(N \log(N))$ operations.
Discrete wavelet basis: no closed-form expression;
Fast Wavelet Transform, $O(N)$ operations.
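For instance (my addition), the normalized FFT realizes exactly this orthogonal decomposition in the discrete Fourier basis:

```python
import numpy as np

N = 256
rng = np.random.default_rng(2)
f = rng.standard_normal(N)

# Coefficients <f, psi_m> for psi_m[n] = e^{2i pi nm / N} / sqrt(N).
a = np.fft.fft(f) / np.sqrt(N)
assert np.allclose(np.sum(np.abs(f)**2), np.sum(np.abs(a)**2))  # orthogonality
f_rec = np.sqrt(N) * np.fft.ifft(a)                             # inverse transform
assert np.allclose(f, f_rec.real)
```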

Slides 21–23

Sparse Approximation in a Basis

Slide 24

Efficiency of Transforms

Best basis = fastest decay of the approximation error $\|f - f_M\|^2$.
[Figure: $\log(\|f - f_M\|)$ versus $\log(M)$ for Fourier, DCT, local DCT and wavelet bases.]
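This decay can be measured numerically: keep the $M$ largest wavelet coefficients to form the best $M$-term approximation $f_M$. A sketch (my addition, assuming PyWavelets):

```python
import numpy as np
import pywt

# Best M-term approximation f_M: keep the M largest wavelet coefficients.
N = 2048
x = np.linspace(0, 1, N)
f = np.cumsum(np.sign(np.sin(40 * x**2))) / N   # piecewise-regular signal

arr, slices = pywt.coeffs_to_array(pywt.wavedec(f, 'db4'))
order = np.argsort(np.abs(arr))[::-1]           # coefficients by decreasing magnitude
for M in (16, 64, 256):
    aM = np.zeros_like(arr)
    aM[order[:M]] = arr[order[:M]]
    fM = pywt.waverec(pywt.array_to_coeffs(aM, slices, output_format='wavedec'), 'db4')
    print(M, np.sum((f - fM[:N])**2))           # ||f - f_M||^2 decays with M
```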

Slide 25

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Compressed Sensing

Slide 26

JPEG-2000 vs. JPEG, 0.2 bit/pixel

Slides 27–33

Compression by Transform-coding

[Figures: image $f$, zoom on $f$, and decompressed $f_R$ at $R = 0.2$ bit/pixel.]

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.
Quantization (bin width $T$): $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.
Entropic coding: use statistical redundancy (many 0's).
Decoding, then dequantization: $\tilde{a}[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \frac{1}{2} \right) T$.
Backward transform: $f_R = \sum_{m \in I_T} \tilde{a}[m] \, \psi_m$.

"Theorem": $\|f - f_M\|^2 = O(M^{-\alpha}) \implies \|f - f_R\|^2 = O(\log^\alpha(R) \, R^{-\alpha})$.
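A minimal sketch of the quantizer/dequantizer pair (my addition): the dead zone around 0 produces the many zeros that entropic coding exploits, and the per-coefficient reconstruction error is bounded by $T$.

```python
import numpy as np

def quantize(a, T):
    """q[m] = sign(a[m]) * floor(|a[m]| / T): uniform quantizer with dead zone."""
    return np.sign(a) * np.floor(np.abs(a) / T)

def dequantize(q, T):
    """a~[m] = sign(q[m]) * (|q[m]| + 1/2) * T: mid-bin reconstruction."""
    return np.sign(q) * (np.abs(q) + 0.5) * T

rng = np.random.default_rng(3)
a = rng.laplace(scale=1.0, size=10_000)   # transform coefficients: many near 0
T = 0.5
q = quantize(a, T)
print("zeros:", np.mean(q == 0))                         # redundancy for the coder
print("max error:", np.abs(a - dequantize(q, T)).max())  # < T in the dead zone, else <= T/2
```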

Slide 34

Noise in Images

Slide 35

Denoising

Slides 36–37

Denoising

$f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m
\;\xrightarrow{\text{thresh.}}\;
\tilde{f} = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$

In practice: $T \approx 3\sigma$.
Theorem: for $T = \sigma \sqrt{2 \log(N)}$,
if $\|f_0 - f_{0,M}\|^2 = O(M^{-\alpha})$, then $E(\|\tilde{f} - f_0\|^2) = O\!\left(\sigma^{\frac{2\alpha}{\alpha+1}}\right)$.
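Hard thresholding in an orthogonal wavelet basis, as a runnable sketch (my addition, assuming PyWavelets; the coarse-scale coefficients are left untouched):

```python
import numpy as np
import pywt

N = 1024
x = np.linspace(0, 1, N)
f0 = np.sign(np.sin(6 * np.pi * x))        # clean piecewise-constant signal
sigma = 0.2
rng = np.random.default_rng(4)
f = f0 + sigma * rng.standard_normal(N)    # noisy observation

coeffs = pywt.wavedec(f, 'haar')
T = 3 * sigma                              # the practical choice T ~ 3 sigma
den = [coeffs[0]] + [c * (np.abs(c) > T) for c in coeffs[1:]]  # hard threshold
f_tilde = pywt.waverec(den, 'haar')[:N]

snr = lambda g: 10 * np.log10(np.sum(f0**2) / np.sum((g - f0)**2))
print("SNR noisy -> denoised:", snr(f), "->", snr(f_tilde))
```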

Slide 38

Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Compressed Sensing

Slide 39

Discretization

Idealization of the acquisition device.
Sampling: $\tilde{f} \in L^2([0,1]^d) \mapsto f \in \mathbb{R}^N$, $f[n] \approx f_0(n/N)$.

Slides 40–43

Pointwise Sampling and Smoothness

Data acquisition: sensors. $\tilde{f} \in L^2 \mapsto f \in \mathbb{R}^N$, $f[i] = \tilde{f}(i/N)$.

Shannon interpolation: if $\mathrm{Supp}(\hat{\tilde{f}}) \subset [-N\pi, N\pi]$, then
$\tilde{f}(t) = \sum_i f[i] \, h(Nt - i)$, where $h(t) = \frac{\sin(\pi t)}{\pi t}$.

Natural images are not smooth, but they can be compressed efficiently
(JPEG-2k $\to$ 0,1,0,...). Sample and compress simultaneously?
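A sketch of Shannon interpolation from pointwise samples (my addition; with finitely many samples the series is truncated, so a small error remains near the boundaries):

```python
import numpy as np

N = 32
i = np.arange(N)
f = np.cos(2 * np.pi * 3 * i / N)          # bandlimited: frequency 3 << N/2

t = np.linspace(0, 1, 1000, endpoint=False)
h = np.sinc(N * t[:, None] - i[None, :])   # np.sinc(x) = sin(pi x) / (pi x)
f_tilde = h @ f                            # f~(t) = sum_i f[i] h(Nt - i)

err = np.abs(f_tilde - np.cos(2 * np.pi * 3 * t))
print("max error:", err.max())             # small, concentrated near the edges
```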

Slides 44–45

Single Pixel Camera (Rice)

$y[i] = \langle f_0, \varphi_i \rangle$
[Figures: original $f_0$, $N = 256^2$; reconstructions $f^\star$ at $P/N = 0.16$ and $P/N = 0.02$.]

Slides 46–48

CS Hardware Model

CS is about designing hardware: input signals $\tilde{f} \in L^2(\mathbb{R}^2)$.
Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$
(micro-mirror array resolution).
$\tilde{f} \in L^2 \;\to\; f \in \mathbb{R}^N \;\xrightarrow{\text{CS hardware}}\; y \in \mathbb{R}^P$:
the CS hardware applies an operator $K$, $f \mapsto Kf$.
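A toy simulation of such an acquisition (my addition): each $y[i]$ correlates the high-resolution image with one random $\pm 1$ mirror pattern.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64 * 64                    # target resolution
P = N // 8                     # far fewer measurements than pixels
f = rng.standard_normal(N)     # stand-in for the (vectorized) image

K = rng.choice([-1.0, 1.0], size=(P, N)) / np.sqrt(N)  # random mirror patterns
y = K @ f                      # y[i] = <f, phi_i>
print(y.shape)                 # (512,): the whole acquisition
```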

Slides 49–50

Inversion and Sparsity

Need to solve $y = Kf$: more unknowns than equations, and
$\dim(\ker(K)) = N - P$ is huge.

Prior information: $f$ is sparse in a basis $\{\psi_m\}_m$, i.e.
$J_\varepsilon(f) = \mathrm{Card}\{m : |\langle f, \psi_m \rangle| > \varepsilon\}$ is small.

Slides 51–53

CS Reconstruction

$\ell^0$ reconstruction: minimize $J_0(f) = \mathrm{Card}\{m : \langle f, \psi_m \rangle \neq 0\}$
subject to $Kf = y$. NP-hard to solve.

$\ell^1$ reconstruction: minimize $\sum_m |\langle f, \psi_m \rangle|$
subject to $Kf = y$. Polynomial-time algorithms.
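The $\ell^1$ problem (basis pursuit) is a linear program. A self-contained sketch (my addition, taking $\psi_m$ to be the canonical basis so the sparse coefficient vector is $f$ itself, and solving with SciPy's LP solver):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
N, P, k = 128, 48, 5
K = rng.standard_normal((P, N)) / np.sqrt(P)   # random measurement operator

f_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
f_true[support] = rng.standard_normal(k)       # k-sparse signal
y = K @ f_true                                 # observations y = K f

# LP reformulation of  min ||f||_1  s.t.  K f = y :
# write f = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([K, -K])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
assert res.success
f_rec = res.x[:N] - res.x[N:]
print("recovery error:", np.max(np.abs(f_rec - f_true)))   # ~0: exact recovery
```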

Slides 54–56

Theoretical Performance Guarantees

Theorem [Candès, Romberg, Tao, Donoho, 2004]:
if $f$ is $k$-sparse, i.e. $J_0(f) \leq k$, and if $P \geq C \log(N/k) \, k$,
then $\ell^1$-CS reconstruction is exact.

Extensions to:
• noisy observations $y = Kf + \varepsilon$;
• approximate sparsity $f = f_k + \varepsilon$ with $f_k$ sparse.

Research problem: optimal value of $C$?
For $N/k = 4$, $C \log(N/k) \approx 5$: "CS is 5× less efficient than JPEG-2k".
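The arithmetic behind the "5×" remark, spelled out (my addition): a transform coder stores about $k$ numbers, while CS at this sparsity level needs $P \approx C \log(N/k) \, k \approx 5k$ measurements.

```python
import numpy as np

N = 256**2                 # image resolution
k = N // 4                 # retained coefficients, so N/k = 4
P = 5 * k                  # C log(N/k) ~ 5 at this ratio
print(np.log(N / k))       # log(4) ~ 1.39, hence C ~ 3.6
print(P / k)               # 5.0: five measurements per stored coefficient
```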

Slides 57–59

Conclusion

• Compressed sensing:
  random acquisition;
  optimization for reconstruction;
  #measures ∝ sparsity.