Transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.
Coding: quantization $q[m] = \mathrm{sign}(a[m]) \, \lfloor |a[m]|/T \rfloor \in \mathbb{Z}$, then entropic coding (exploit statistical redundancy: many 0's).
Decoding: de-quantization $\tilde a[m] = \mathrm{sign}(q[m]) \, \big(|q[m]| + \tfrac{1}{2}\big) \, T$.
Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \, \psi_m$.
Theorem: $\|f - f_M\|^2 = O(M^{-\alpha}) \;\Longrightarrow\; \|f - f_R\|^2 = O(\log^\alpha(R) \, R^{-\alpha})$.
[Figure: quantization bins of width $T$ mapping $a[m]$ to $q[m]$; zoom on $f$ and on $f_R$, $R = 0.2$ bit/pixel.]
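A minimal NumPy sketch of the quantization/de-quantization step above; the Laplace draw merely stands in for transform coefficients $a[m]$, and the threshold $T$ is illustrative.

```python
import numpy as np

def quantize(a, T):
    # q[m] = sign(a[m]) * floor(|a[m]| / T): bin index sent to the entropic coder
    return np.sign(a) * np.floor(np.abs(a) / T)

def dequantize(q, T):
    # restore a value at the center of the bin: sign(q[m]) * (|q[m]| + 1/2) * T
    # (coefficients quantized to the zero bin stay exactly 0, since sign(0) = 0)
    return np.sign(q) * (np.abs(q) + 0.5) * T

rng = np.random.default_rng(0)
a = rng.laplace(scale=1.0, size=8)   # stand-in for coefficients a[m] = <f, psi_m>
T = 0.5
q = quantize(a, T)
a_tilde = dequantize(q, T)
print(q)                             # many zeros: the source of statistical redundancy
print(np.max(np.abs(a - a_tilde)))   # at most T/2 on non-zero bins, at most T on the zero bin
```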
L1 Regularization
Observations: $y = K f_0 + w \in \mathbb{R}^P$, image $f_0 = \Psi x_0 \in \mathbb{R}^Q$, coefficients $x_0 \in \mathbb{R}^N$, $\Phi = K \Psi \in \mathbb{R}^{P \times N}$.
Equivalent model: $y = \Phi x_0 + w$; if $\Phi$ is invertible, $\Phi^{-1} y = x_0 + \Phi^{-1} w$.
Problems: $\Phi^{-1} w$ can "explode", and $\Phi$ can even be non-invertible.
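The remark that $\Phi^{-1} w$ can "explode" is easy to check numerically. A small sketch, assuming an arbitrary badly conditioned (but invertible) $\Phi$; all sizes, the singular-value decay and the noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# build an invertible but badly conditioned Phi = U diag(s) V^T (illustrative choice)
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
s = np.logspace(0, -6, N)                 # singular values decaying to 1e-6
Phi = U @ np.diag(s) @ V.T

x0 = rng.standard_normal(N)
w = 1e-3 * rng.standard_normal(N)         # small measurement noise
y = Phi @ x0 + w

x_naive = np.linalg.solve(Phi, y)         # Phi^{-1} y = x0 + Phi^{-1} w
print(np.linalg.norm(w))                  # small
print(np.linalg.norm(x_naive - x0))       # = ||Phi^{-1} w||, typically huge
```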
Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$ and on a parameter $\lambda$.
Example: variational methods, $x(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N} \frac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$ (data fidelity + regularity).
Choice of $\lambda$: tradeoff between data fidelity and regularity, driven by the noise level $\|w\|$.
No noise: $\lambda \to 0^+$, minimize $x(y) \in \operatorname{argmin}_{\Phi x = y} J(x)$.
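The prior $J$ is kept generic above. As one concrete (non-sparse) instance, the quadratic prior $J(x) = \frac{1}{2}\|x\|^2$ gives a closed-form estimator; the sketch below uses it only to illustrate how $\lambda$ trades data fidelity against regularity (sizes, noise level and the $\lambda$ grid are illustrative).

```python
import numpy as np

def variational_estimator(Phi, y, lam):
    # x(y) in argmin_x 1/2 ||y - Phi x||^2 + lam * J(x) with J(x) = 1/2 ||x||^2,
    # whose minimizer is (Phi^T Phi + lam Id)^{-1} Phi^T y
    N = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)

rng = np.random.default_rng(1)
P, N = 30, 30
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = rng.standard_normal(N)
y = Phi @ x0 + 0.1 * rng.standard_normal(P)
for lam in [1e-4, 1e-2, 1e0]:
    x = variational_estimator(Phi, y, lam)
    print(lam, np.linalg.norm(x - x0))   # too small a lam fits the noise, too large over-smooths
```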
Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \,;\, x_m \neq 0\}$.
$x^\star \in \operatorname{argmin}_x \frac{1}{2} \|y - \Phi x\|^2 + \lambda J_0(x)$.
Orthogonal $\Phi$ ($\Phi \Phi^* = \Phi^* \Phi = \mathrm{Id}$): the problem becomes $\min_x \|\Phi^* y - x\|^2 + T^2 J_0(x) = \min_x \sum_m |\tilde x_m - x_m|^2 + T^2 \delta(x_m \neq 0)$, where $\tilde x = \Phi^* y = \{\langle y, \varphi_m \rangle\}_m$ and $T^2 = 2\lambda$.
Solution: hard thresholding, $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, and $x^\star_m = 0$ otherwise.
Non-orthogonal $\Phi$: NP-hard to solve.
[Figure: $y$, $\Phi^* y$, $x^\star$.]
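For orthogonal $\Phi$, the hard-thresholding solution is immediate to implement. A minimal sketch, using a random orthogonal matrix as a stand-in for $\Phi$ (the coefficients and $\lambda$ are illustrative).

```python
import numpy as np

def hard_threshold_estimator(Phi, y, lam):
    # For orthogonal Phi (Phi Phi^T = Phi^T Phi = Id) the J0-regularized problem
    # decouples over the coefficients x_tilde = Phi^T y and is solved by hard
    # thresholding at T = sqrt(2 * lam): keep x_tilde[m] if |x_tilde[m]| > T, else 0.
    T = np.sqrt(2 * lam)
    x_tilde = Phi.T @ y
    return np.where(np.abs(x_tilde) > T, x_tilde, 0.0)

rng = np.random.default_rng(2)
N = 8
Phi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal basis
x0 = np.array([3.0, -2.5, 0, 0, 0, 0, 0, 0.2])       # mostly sparse coefficients
y = Phi @ x0 + 0.05 * rng.standard_normal(N)
print(hard_threshold_estimator(Phi, y, lam=0.5))     # small entries are set to 0
```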
Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$.
$x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$.
Convex linear program: interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas-Rachford splitting, see [Combettes, Pesquet].
[Figure: the constraint set $\{\Phi x = y\}$ and the solution $x^\star$.]
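A minimal sketch of the Douglas-Rachford route for basis pursuit, assuming $\Phi$ has full row rank so the projection onto $\{x \,;\, \Phi x = y\}$ can use $\Phi^+ = \Phi^* (\Phi \Phi^*)^{-1}$; the step size, iteration count and problem sizes are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def basis_pursuit_dr(Phi, y, gamma=1.0, n_iter=500):
    # Douglas-Rachford splitting for  min ||x||_1  s.t.  Phi x = y
    pinv = Phi.T @ np.linalg.inv(Phi @ Phi.T)    # assumes Phi has full row rank
    proj = lambda x: x + pinv @ (y - Phi @ x)    # projection on {x : Phi x = y}
    z = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = proj(z)                              # prox of the constraint indicator
        z = z + soft_threshold(2 * x - z, gamma) - x
    return proj(z)

# recover a sparse vector from random noiseless measurements
rng = np.random.default_rng(3)
P, N = 20, 60
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, size=4, replace=False)] = rng.standard_normal(4)
y = Phi @ x0
x_rec = basis_pursuit_dr(Phi, y)
print(np.linalg.norm(x_rec - x0))                # close to 0 when recovery succeeds
```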