Discrete Computations

Discrete orthogonal basis $\{\psi_m\}_m$ of $\mathbb{C}^N$: $f = \sum_m \langle f, \psi_m \rangle \psi_m$.
Discrete Fourier basis: $\psi_m[n] = \frac{1}{\sqrt{N}} e^{\frac{2i\pi}{N} nm}$, computed with the FFT in $O(N \log N)$ operations.
Discrete wavelet basis: no closed-form expression, but the fast wavelet transform requires only $O(N)$ operations.
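A minimal numpy sketch of this decomposition, using the orthonormal Fourier basis (numpy's `norm="ortho"` FFT matches the $1/\sqrt{N}$ normalization above; the test signal and size are illustrative):

```python
import numpy as np

N = 256
f = np.random.randn(N)                 # arbitrary test signal

# Coefficients a[m] = <f, psi_m> in the orthonormal Fourier basis,
# computed in O(N log N) with the FFT.
a = np.fft.fft(f, norm="ortho")

# Reconstruction f = sum_m <f, psi_m> psi_m (inverse transform).
f_rec = np.fft.ifft(a, norm="ortho").real
assert np.allclose(f, f_rec)
```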
Transform Coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor |a[m]| / T \right\rfloor \in \mathbb{Z}$ (bins of width $T$).
[Figure: quantization staircase $a[m] \mapsto q[m]$, bins of width $T$ between $-2T$ and $2T$.]
Entropic coding: exploits the statistical redundancy of the quantized coefficients (many 0's).
Decoding and dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \frac{1}{2} \right) T$.
Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \psi_m$, where $I_T$ indexes the nonzero quantized coefficients.
[Figure: compressed image $f_R$ at $R = 0.2$ bit/pixel.]

Theorem: $\|f - f_M\|^2 = O(M^{-\alpha}) \implies \|f - f_R\|^2 = O(\log^\alpha(R) \, R^{-\alpha})$, where $f_M$ is the best $M$-term approximation of $f$.
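A small numpy sketch of the quantization/dequantization round trip above (the bin width $T$ and the coefficients are illustrative; entropic coding is not shown):

```python
import numpy as np

T = 0.5                                 # quantization bin width
a = 2 * np.random.randn(1000)           # transform coefficients a[m]

# Quantization: q[m] = sign(a[m]) * floor(|a[m]| / T), an integer.
q = np.sign(a) * np.floor(np.abs(a) / T)

# Dequantization: restore the center of each nonzero bin.
a_tilde = np.sign(q) * (np.abs(q) + 0.5) * T

# Many coefficients quantize to 0, which is what entropic coding exploits;
# the dequantization error is at most T (and at most T/2 on nonzero bins).
print("fraction of zeros:", np.mean(q == 0))
print("max |a - a_tilde|:", np.abs(a - a_tilde).max())
```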
$\ell^1$ Regularization

Coefficients $x_0 \in \mathbb{R}^N$, image $f_0 = \Psi x_0 \in \mathbb{R}^Q$, observations $y = K f_0 + w \in \mathbb{R}^P$.
Equivalent model: $y = \Phi x_0 + w$, with $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$.
Problems: $\Phi^{-1} w$ can "explode", and $\Phi$ can even be non-invertible.
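A quick numpy illustration, with an artificially ill-conditioned $\Phi$, of how $\Phi^{-1} w$ explodes even for small noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Ill-conditioned Phi: random orthogonal factors, fast-decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
Phi = U @ np.diag(np.logspace(0, -6, N)) @ V.T

x0 = rng.standard_normal(N)
w = 1e-3 * rng.standard_normal(N)       # small observation noise
y = Phi @ x0 + w

x_naive = np.linalg.solve(Phi, y)       # Phi^{-1} y = x0 + Phi^{-1} w
print("||Phi^{-1} w|| / ||w|| =", np.linalg.norm(x_naive - x0) / np.linalg.norm(w))
```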
Inverse Problem Regularization

Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.
Example: variational methods,
$x(y) \in \operatorname*{argmin}_{x \in \mathbb{R}^N} \frac{1}{2} \| y - \Phi x \|^2 + \lambda J(x)$
(data fidelity + regularity).
Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity of the solution.
No noise: $\lambda \to 0^+$, i.e. minimize $x(y) \in \operatorname*{argmin}_{\Phi x = y} J(x)$.
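For intuition only, a hedged sketch with the simplest regularizer $J(x) = \frac{1}{2}\|x\|^2$ (Tikhonov, not the sparse priors discussed below), where the estimator has a closed form; sizes, noise level and the $\lambda$ grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 40, 60
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = rng.standard_normal(N)
y = Phi @ x0 + 0.1 * rng.standard_normal(P)   # y = Phi x0 + w

# argmin_x 1/2 ||y - Phi x||^2 + lam/2 ||x||^2 = (Phi^T Phi + lam Id)^{-1} Phi^T y
for lam in [1e-4, 1e-2, 1.0]:
    x = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
    print(f"lambda={lam:.0e}  ||x - x0|| = {np.linalg.norm(x - x0):.3f}")
```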
Sparse Priors

"Ideal" sparsity prior: $J_0(x) = \#\{ m \, ; \, x_m \neq 0 \}$,
$x^\star \in \operatorname*{argmin}_x \frac{1}{2} \| y - \Phi x \|^2 + \lambda J_0(x)$.
Orthogonal case ($\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$): writing $\tilde x = \Phi^* y = \{ \langle y, \phi_m \rangle \}_m$ and $T^2 = 2\lambda$,
$\min_x \| \Phi^* y - x \|^2 + T^2 J_0(x) = \min_x \sum_m |\tilde x_m - x_m|^2 + T^2 \delta(x_m)$,
where $\delta(a) = 0$ if $a = 0$ and $1$ otherwise.
Solution (hard thresholding): $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, $0$ otherwise.
Non-orthogonal $\Phi$: NP-hard to solve.
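A brute-force numerical check that hard thresholding at $T = \sqrt{2\lambda}$ indeed minimizes the coordinate-wise $\ell^0$ objective in the orthogonal case (the coefficients $\tilde x$ and $\lambda$ are illustrative):

```python
import numpy as np

lam = 0.3
T = np.sqrt(2 * lam)
x_tilde = np.array([-1.2, 0.1, 0.9, -0.4, 2.0])   # x~ = Phi^* y

# Hard thresholding: keep x~_m when |x~_m| > T, zero it otherwise.
hard = np.where(np.abs(x_tilde) > T, x_tilde, 0.0)

def energy(x):
    return np.sum((x_tilde - x) ** 2) + T**2 * np.count_nonzero(x)

# Each coordinate is either kept (x_m = x~_m) or zeroed, so brute-forcing
# all 2^5 sparsity patterns explores every candidate minimizer.
best = min(energy(np.where(np.array(mask, dtype=bool), x_tilde, 0.0))
           for mask in np.ndindex(*(2,) * x_tilde.size))
assert np.isclose(energy(hard), best)
print("hard-thresholded solution:", hard)
```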
Noiseless Sparse Regularization

Noiseless measurements: $y = \Phi x_0$.
$\ell^2$: $x^\star \in \operatorname*{argmin}_{\Phi x = y} \sum_m |x_m|^2$ — not sparse.
$\ell^1$: $x^\star \in \operatorname*{argmin}_{\Phi x = y} \sum_m |x_m|$ — a convex problem, equivalent to a linear program.
[Figure: the $\ell^1$ ball touches the constraint set $\{ x \, ; \, \Phi x = y \}$ at a sparse point.]
Algorithms: interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas-Rachford splitting, see [Combettes, Pesquet].
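A compact sketch of basis pursuit as a linear program with scipy, using the standard split $x = u - v$, $u, v \ge 0$ (this is a generic LP solver, not the interior-point code of the reference; sizes and data are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
P, N = 20, 60
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)  # 4-sparse signal
y = Phi @ x0                                                  # noiseless y = Phi x0

# min 1^T (u + v)  s.t.  Phi (u - v) = y, u >= 0, v >= 0
# is equivalent to  min ||x||_1  s.t.  Phi x = y,  with x = u - v.
res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x - x0))
```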
Noisy Sparse Regularization

Noisy measurements: $y = \Phi x_0 + w$.
Penalized form: $x^\star \in \operatorname*{argmin}_{x \in \mathbb{R}^Q} \frac{1}{2} \| y - \Phi x \|^2 + \lambda \| x \|_1$.
Constrained form: $x^\star \in \operatorname*{argmin}_{\| \Phi x - y \| \le \varepsilon} \| x \|_1$.
Algorithms: iterative thresholding and Nesterov multi-step schemes, see [Daubechies et al], [Pesquet et al], etc.
Pointwise Sampling and Smoothness

Analog signal $\tilde f \in L^2$, discrete samples $f \in \mathbb{R}^N$ with $f[i] = \tilde f(i/N)$.
Shannon interpolation: if $\mathrm{Supp}(\hat{\tilde f}) \subset [-N\pi, N\pi]$, then $\tilde f(t) = \sum_i f[i] \, h(N t - i)$ with $h(t) = \frac{\sin(\pi t)}{\pi t}$.
Problem: images are not smooth, but they can be compressed efficiently (e.g. JPEG-2k).
Question: can one sample and compress simultaneously?
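A small numpy sketch of Shannon interpolation from the impulse samples $0, 1, 0, \dots$ (`np.sinc` is the normalized sinc $\sin(\pi t)/(\pi t)$; $N$ is illustrative):

```python
import numpy as np

N = 16
i = np.arange(N)
f = (i == N // 2).astype(float)          # impulse samples 0, ..., 0, 1, 0, ..., 0

# Shannon interpolation: f~(t) = sum_i f[i] h(N t - i), h(t) = sin(pi t) / (pi t).
t = np.linspace(0.0, 1.0, 1000)
f_tilde = np.sum(f[None, :] * np.sinc(N * t[:, None] - i[None, :]), axis=1)

# Interpolating an impulse reproduces the sinc kernel centered at i = N/2.
assert np.allclose(f_tilde, np.sinc(N * t - N // 2))
```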
CS Hardware Model

CS is about designing hardware: the input signals are analog, $\tilde f \in L^2(\mathbb{R}^2)$.
[Figure: examples of CS hardware, e.g. a micro-mirror array.]
The device implements a measurement operator $K$ mapping the full-resolution signal $f \in \mathbb{R}^N$ to observations $y \in \mathbb{R}^P$.
Inversion and Sparsity

$\dim(\ker(K)) = N - P$ is huge.
Prior information: $f$ is sparse in a basis $\{\psi_m\}_m$, i.e.
$J_\varepsilon(f) = \mathrm{Card}\{ m \, ; \, |\langle f, \psi_m \rangle| > \varepsilon \}$ is small.
Theoretical Performance Guarantees

If $f$ is $k$-sparse, i.e. $J_0(f) \le k$, and $P \ge C \, k \log(N/k)$, then $\ell^1$-CS reconstruction is exact.
Extensions: noisy observations $y = K f + w$; approximate sparsity $f = f_k + \varepsilon$ with $f_k$ sparse.
Research problem: what is the optimal value of $C$? For $N/k = 4$, $C \log(N/k) \approx 5$: "CS is 5 times less efficient than JPEG-2k".
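An empirical sketch of this guarantee, reusing the basis pursuit LP from above: recovery becomes exact once $P$ exceeds a few multiples of $k \log(N/k)$ (sizes and trial counts are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, k = 64, 4

def recovers(P):
    """One random trial: draw Phi and a k-sparse x0, try basis pursuit."""
    Phi = rng.standard_normal((P, N)) / np.sqrt(P)
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=Phi @ x0,
                  bounds=[(0, None)] * (2 * N))
    return np.linalg.norm(res.x[:N] - res.x[N:] - x0) < 1e-6

# k log(N/k) ~ 11 here; exact recovery kicks in as P grows past a few times that.
for P in [8, 16, 24, 32]:
    print(f"P={P:2d}  exact-recovery rate: {np.mean([recovers(P) for _ in range(20)]):.2f}")
```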
Sparse Regularization Denoising

Denoising: $y = x_0 + w \in \mathbb{R}^N$, $K = \mathrm{Id}$.
Regularization-based denoising: $x^\star = \operatorname*{argmin}_{x \in \mathbb{R}^N} \frac{1}{2} \| x - y \|^2 + \lambda J(x)$.
Sparse regularization: $J(x) = \sum_m |x_m|^q$ (where $|a|^0 = \delta(a)$, i.e. $0$ if $a = 0$ and $1$ otherwise).
Solution: coordinate-wise thresholding, $x^\star_m = S^q_T(y_m)$, with $T$ determined by $\lambda$: hard thresholding for $q = 0$, soft thresholding for $q = 1$.
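A sketch of the two thresholding operators, $S^0_T$ (hard) and $S^1_T$ (soft); the input and $T$ are illustrative:

```python
import numpy as np

def S_hard(x, T):   # q = 0: keep coefficients above T, zero the rest
    return np.where(np.abs(x) > T, x, 0.0)

def S_soft(x, T):   # q = 1: shrink every coefficient toward 0 by T
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

y = np.array([-2.0, -0.5, 0.2, 0.9, 3.0])
T = 1.0
print("hard:", S_hard(y, T))   # [-2, 0, 0, 0, 3]
print("soft:", S_soft(y, T))   # [-1, 0, 0, 0, 2]
```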
Surrogate Functionals

$E(x) = \frac{1}{2} \| y - \Phi x \|^2 + \lambda \| x \|_1$.
Surrogate functional, for $\tau < 1 / \| \Phi^* \Phi \|$:
$E(x, \tilde x) = E(x) - \frac{1}{2} \| \Phi (x - \tilde x) \|^2 + \frac{1}{2\tau} \| x - \tilde x \|^2$.
Theorem: $\operatorname*{argmin}_x E(x, \tilde x) = S_{\lambda\tau}(u)$, where $u = \tilde x - \tau \Phi^* (\Phi \tilde x - y)$.
Proof: $E(x, \tilde x) \propto \frac{1}{2} \| u - x \|^2 + \lambda\tau \| x \|_1 + \mathrm{cst}$, whose minimizer is the soft thresholding $S_{\lambda\tau}(u)$.
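A numerical sanity check of the theorem on a tiny illustrative instance: the closed-form soft threshold $S_{\lambda\tau}(u)$ should beat any perturbation of itself as a minimizer of $E(\cdot, \tilde x)$:

```python
import numpy as np

rng = np.random.default_rng(4)
P, N = 10, 6
Phi = rng.standard_normal((P, N))
y = rng.standard_normal(P)
lam = 0.5
x_tilde = rng.standard_normal(N)
tau = 0.9 / np.linalg.norm(Phi.T @ Phi, 2)   # tau < 1 / ||Phi^* Phi||

def E(x):      # E(x) = 1/2 ||y - Phi x||^2 + lam ||x||_1
    return 0.5 * np.sum((y - Phi @ x) ** 2) + lam * np.sum(np.abs(x))

def E_sur(x):  # E(x, x~) = E(x) - 1/2 ||Phi (x - x~)||^2 + 1/(2 tau) ||x - x~||^2
    d = x - x_tilde
    return E(x) - 0.5 * np.sum((Phi @ d) ** 2) + np.sum(d ** 2) / (2 * tau)

u = x_tilde - tau * Phi.T @ (Phi @ x_tilde - y)
s = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)   # S_{lam tau}(u)

# The claimed minimizer should beat random perturbations of itself.
assert all(E_sur(s) <= E_sur(s + 0.1 * rng.standard_normal(N)) for _ in range(1000))
print("minimum of the surrogate:", E_sur(s))
```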
Iterative Thresholding

Initialize $x^{(0)}$ and set $\ell = 0$. Iterate $x^{(\ell+1)} = \operatorname*{argmin}_x E(x, x^{(\ell)})$, i.e.
$u^{(\ell)} = x^{(\ell)} - \tau \Phi^* (\Phi x^{(\ell)} - y)$,
$x^{(\ell+1)} = S^1_{\lambda\tau}(u^{(\ell)})$.
Remark: $x^{(\ell)} \mapsto u^{(\ell)}$ is a gradient descent step on $\frac{1}{2} \| \Phi x - y \|^2$, and $S^1_{\lambda\tau}$ is the proximal step associated to $\lambda \| x \|_1$.
Theorem: $x^{(\ell)} \to x^\star$.
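A minimal numpy implementation of the resulting scheme (ISTA); the problem sizes, $\lambda$ and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
P, N = 30, 80
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse truth
y = Phi @ x0 + 0.01 * rng.standard_normal(P)

lam = 0.05
tau = 0.9 / np.linalg.norm(Phi.T @ Phi, 2)   # step size, tau < 1 / ||Phi^* Phi||

x = np.zeros(N)                              # x^(0)
for _ in range(2000):
    u = x - tau * Phi.T @ (Phi @ x - y)      # gradient step on 1/2 ||Phi x - y||^2
    x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)  # proximal step S^1_{lam tau}

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.flatnonzero(x0))
```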