Inverse Problems

Forward model: $y = K f_0 + w \in \mathbb{R}^P$, with measurement operator $K : \mathbb{R}^Q \to \mathbb{R}^P$.

Examples:
- Denoising: $K = \mathrm{Id}_Q$, $P = Q$.
- Inpainting: for $\Omega$ the set of missing pixels, $(Kf)(x) = 0$ if $x \in \Omega$ and $(Kf)(x) = f(x)$ if $x \notin \Omega$; $P = Q - |\Omega|$.
- Super-resolution: $Kf = (f \star k)\downarrow_s$ (blur, then subsample); $P = Q/s$.
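To make the operators concrete, here is a minimal NumPy sketch (an illustration, not part of the slides) of the three examples on a 1-D signal; the names K_denoise, K_inpaint, K_subsample, the mask Omega and the box kernel k are placeholders chosen for this example.

```python
import numpy as np

Q, s = 64, 2
f0 = np.random.randn(Q)                     # a signal f0 in R^Q

# Denoising: K = Id_Q, so P = Q.
K_denoise = lambda f: f

# Inpainting: discard the pixels in the missing set Omega, so P = Q - |Omega|.
Omega = np.random.rand(Q) < 0.3             # boolean mask of missing pixels
K_inpaint = lambda f: f[~Omega]

# Super-resolution: blur with a kernel k, then subsample by s, so P = Q/s.
k = np.ones(4) / 4                          # a small box blur, for instance
K_subsample = lambda f: np.convolve(f, k, mode="same")[::s]

y = K_subsample(f0) + 0.05 * np.random.randn(Q // s)   # y = K f0 + w
```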
Inverse Problem Regularization

Prior model: $J : \mathbb{R}^Q \to \mathbb{R}$ assigns a score to images.

$f^\star \in \underset{f \in \mathbb{R}^Q}{\operatorname{argmin}} \; \underbrace{\tfrac{1}{2}\|y - Kf\|^2}_{\text{data fidelity}} + \lambda \underbrace{J(f)}_{\text{regularity}}$

Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(f_0)$ of $f_0$.

No noise ($w = 0$): minimize under an exact-fit constraint,

$f^\star \in \underset{f \in \mathbb{R}^Q,\; Kf = y}{\operatorname{argmin}} \; J(f)$
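For intuition, the special case of a quadratic prior $J(f) = \frac{1}{2}\|f\|^2$ admits a closed-form minimizer; the sketch below (an illustration with a random $K$, not from the slides) solves it via the normal equations.

```python
import numpy as np

# For J(f) = 1/2 ||f||^2, the minimizer of 1/2 ||y - K f||^2 + lam * J(f)
# satisfies the normal equations (K^T K + lam * Id) f = K^T y.
Q, P, lam = 32, 16, 0.1
rng = np.random.default_rng(0)
K = rng.standard_normal((P, Q))             # a random measurement operator
f0 = rng.standard_normal(Q)
y = K @ f0 + 0.01 * rng.standard_normal(P)  # y = K f0 + w

f_star = np.linalg.solve(K.T @ K + lam * np.eye(Q), K.T @ y)
```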
Redundant Dictionaries

Dictionary $\Phi = (\phi_m)_m \in \mathbb{R}^{Q \times N}$, $N \geq Q$. Synthesis: image $f = \Phi x$, with coefficients $x \in \mathbb{R}^N$.

- Wavelets: $\phi_m = \psi(2^{-j} R_\theta \cdot - n)$, $m = (j, \theta, n)$: scale, orientation, position.
- Fourier: $\phi_m = e^{i\langle \cdot,\, \omega_m \rangle}$, $\omega_m$: frequency.
- DCT, curvelets, bandlets, ...
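As a toy example of redundancy (an illustration, not from the slides), one can take the union of the Dirac basis and the orthonormal DCT basis, giving $N = 2Q$ atoms:

```python
import numpy as np
from scipy.fft import idct

# A simple redundant dictionary: Diracs plus orthonormal DCT atoms, N = 2Q.
Q = 64
diracs = np.eye(Q)
dct_atoms = idct(np.eye(Q), norm="ortho", axis=0)   # columns = DCT atoms
Phi = np.hstack([diracs, dct_atoms])                # Phi in R^{Q x N}, N = 2Q

# Synthesis f = Phi x: a spike plus an oscillation, each sparse in one part.
x = np.zeros(2 * Q)
x[3], x[Q + 5] = 1.0, 0.5
f = Phi @ x
```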
Sparse Priors

Image $f_0$, coefficients $x$. Sparsity is measured by the $\ell^0$ pseudo-norm

$J_0(x) = \#\{m \;;\; x_m \neq 0\}$

Sparse approximation: $f^\star = \Phi x^\star$ where

$x^\star \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; \|f_0 - \Phi x\|^2 + T^2 J_0(x)$

Orthogonal $\Phi$ ($\Phi\Phi^* = \Phi^*\Phi = \mathrm{Id}_N$): the solution is a hard thresholding of the coefficients,

$x_m^\star = \begin{cases} \langle f_0, \phi_m \rangle & \text{if } |\langle f_0, \phi_m \rangle| > T, \\ 0 & \text{otherwise,} \end{cases}$

i.e. $f^\star = S_T(f_0)$.

Non-orthogonal $\Phi$: the problem is NP-hard.
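In the orthogonal case the whole procedure is two transforms and one thresholding. A minimal sketch using the DCT as the orthogonal basis (an assumption made for the example; the signal and T are arbitrary):

```python
import numpy as np
from scipy.fft import dct, idct

def hard_threshold(a, T):
    # S_T: keep a coefficient only if its magnitude exceeds T.
    return a * (np.abs(a) > T)

Q, T = 256, 0.5
rng = np.random.default_rng(0)
f0 = np.cumsum(rng.standard_normal(Q)) / np.sqrt(Q)     # a toy test signal

coeffs = dct(f0, norm="ortho")                          # x_m = <f0, phi_m>
f_star = idct(hard_threshold(coeffs, T), norm="ortho")  # f* = S_T(f0)
```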
Noiseless Sparse Regularization

Noiseless measurements: $y = \Phi x_0$. Replace $J_0$ by its convex relaxation, the $\ell^1$ norm:

$x^\star \in \underset{\Phi x = y}{\operatorname{argmin}} \; \sum_m |x_m|$

This is a convex problem, equivalent to a linear program. Algorithms:
- interior points, cf. [Chen, Donoho, Saunders] "basis pursuit";
- Douglas-Rachford splitting, see [Combettes, Pesquet].
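A minimal Douglas-Rachford sketch for this problem (the step size gamma, iteration count and random test data are arbitrary choices for the illustration): the prox of $\|\cdot\|_1$ is soft thresholding, and the prox of the constraint $\{\Phi x = y\}$ is the affine projection.

```python
import numpy as np

def soft_threshold(a, T):
    return np.sign(a) * np.maximum(np.abs(a) - T, 0.0)

rng = np.random.default_rng(1)
P, N = 20, 60
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ x0                                  # noiseless measurements

pinv = Phi.T @ np.linalg.inv(Phi @ Phi.T)     # Phi^+ (Phi has full row rank)
project = lambda x: x - pinv @ (Phi @ x - y)  # projection onto {Phi x = y}

gamma, z = 1.0, np.zeros(N)
for _ in range(500):
    x = project(z)                            # prox of the constraint
    z = z + soft_threshold(2 * x - z, gamma) - x
x_star = project(z)                           # approaches the sparse x0
```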
Noisy Sparse Regularization

Noisy measurements: $y = \Phi x_0 + w$. Lagrangian formulation:

$x^\star \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$

Equivalent constrained formulation:

$x^\star \in \underset{\|\Phi x - y\| \leq \varepsilon}{\operatorname{argmin}} \; \|x\|_1$

Algorithms: iterative thresholding, see [Daubechies et al], [Pesquet et al], etc.; Nesterov multi-step schemes.
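Off the shelf, the Lagrangian formulation is (up to a rescaling of $\lambda$) the Lasso; a sketch using scikit-learn on synthetic data, where alpha = lam / P accounts for the 1/(2P) normalization in scikit-learn's objective:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
P, N, lam = 40, 100, 0.5
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, 5, replace=False)] = 2.0
y = Phi @ x0 + 0.05 * rng.standard_normal(P)   # y = Phi x0 + w

# scikit-learn minimizes 1/(2P) ||y - Phi x||^2 + alpha ||x||_1.
model = Lasso(alpha=lam / P, fit_intercept=False, max_iter=10000).fit(Phi, y)
x_star = model.coef_
```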
Image De-blurring

Original image $f_0$; observations $y = h \star f_0 + w$.

Sobolev regularization: penalize $\|\nabla f\|^2$; the minimizer is a linear filter, computed in the Fourier domain:

$\hat f^\star(\omega) = \frac{\overline{\hat h(\omega)}}{|\hat h(\omega)|^2 + \lambda |\omega|^2}\, \hat y(\omega)$

Sparsity regularization: $f^\star = \Psi x^\star$, where $\Psi$ = translation-invariant wavelets and

$x^\star \in \underset{x}{\operatorname{argmin}} \; \frac{1}{2}\|h \star (\Psi x) - y\|^2 + \lambda \|x\|_1$

Results: Sobolev: SNR = 22.7 dB; sparsity: SNR = 24.7 dB.
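The Sobolev filter is a single Fourier-domain division. A 1-D sketch (the Gaussian kernel, noise level and $\lambda$ are arbitrary choices for the illustration):

```python
import numpy as np

Q, lam = 256, 1e-2
t = np.arange(Q)
f0 = np.sin(2 * np.pi * t / Q) + (t > Q // 2)   # toy signal with a jump

h = np.exp(-0.5 * ((t - Q // 2) / 3.0) ** 2)    # Gaussian blur kernel
h = np.roll(h / h.sum(), -Q // 2)               # centered at 0, unit mass
blur = lambda f: np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))
y = blur(f0) + 0.02 * np.random.randn(Q)        # y = h * f0 + w

# Sobolev deconvolution: f_hat = conj(h_hat) / (|h_hat|^2 + lam |w|^2) * y_hat.
w = 2 * np.pi * np.fft.fftfreq(Q)
h_hat, y_hat = np.fft.fft(h), np.fft.fft(y)
f_star = np.real(np.fft.ifft(np.conj(h_hat) * y_hat /
                             (np.abs(h_hat) ** 2 + lam * np.abs(w) ** 2)))
```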
Sparse Regularization Denoising

Denoising: $y = x_0 + w \in \mathbb{R}^N$, $K = \mathrm{Id}$.

Regularization-based denoising:

$x^\star = \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; \frac{1}{2}\|x - y\|^2 + \lambda J(x)$

Sparse regularization: $J(x) = \sum_m |x_m|^q$ (where $|a|^0 = \mathbf{1}_{a \neq 0}$).

The problem decouples over the coordinates, and the solution is an entrywise thresholding

$x_m^\star = S_T^q(y_m)$

(hard thresholding for $q = 0$, soft thresholding for $q = 1$).
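The two classical instances, written out (a small sketch; the test values are arbitrary):

```python
import numpy as np

def S_hard(y, T):
    # q = 0: hard thresholding, kill entries with |y_m| <= T.
    return y * (np.abs(y) > T)

def S_soft(y, T):
    # q = 1: soft thresholding, shrink every entry toward 0 by T.
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

y = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
S_hard(y, 0.5)   # [-2. ,  0. ,  0. ,  0.8,  3. ]
S_soft(y, 0.5)   # [-1.5,  0. ,  0. ,  0.3,  2.5]
```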
Surrogate Functionals

Energy to minimize: $E(x) = \frac{1}{2}\|y - \Phi x\|^2 + \lambda\|x\|_1$.

Surrogate functional, for $\tau < 1/\|\Phi^*\Phi\|$:

$E(x, \tilde x) = E(x) - \frac{1}{2}\|\Phi(x - \tilde x)\|^2 + \frac{1}{2\tau}\|x - \tilde x\|^2$

Theorem: $\underset{x}{\operatorname{argmin}} \; E(x, \tilde x) = S_{\lambda\tau}^1(u)$ where $u = \tilde x - \tau\,\Phi^*(\Phi\tilde x - y)$.

Proof: expanding the quadratic terms, $E(x, \tilde x) = \frac{1}{2\tau}\|u - x\|^2 + \lambda\|x\|_1 + \text{cst}$, whose minimizer is the soft thresholding $S_{\lambda\tau}^1(u)$.

[Figure: $E(\cdot)$ and the surrogate $E(\cdot, \tilde x)$, which touches it at $\tilde x$ and is minimized at $S_{\lambda\tau}^1(u)$.]
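A quick numerical sanity check of the theorem (a sketch with random data; the perturbation test is just one crude way to probe optimality):

```python
import numpy as np

rng = np.random.default_rng(2)
P, N, lam = 10, 15, 0.3
Phi = rng.standard_normal((P, N))
tau = 0.9 / np.linalg.norm(Phi, 2) ** 2            # tau < 1 / ||Phi^* Phi||
y, x_tilde = rng.standard_normal(P), rng.standard_normal(N)

E = lambda x: 0.5 * np.sum((y - Phi @ x) ** 2) + lam * np.sum(np.abs(x))
E_surr = lambda x: (E(x) - 0.5 * np.sum((Phi @ (x - x_tilde)) ** 2)
                    + np.sum((x - x_tilde) ** 2) / (2 * tau))

u = x_tilde - tau * Phi.T @ (Phi @ x_tilde - y)
x_star = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)  # S^1_{lam tau}(u)

# The closed form should beat random perturbations of itself:
best = min(E_surr(x_star + 1e-3 * rng.standard_normal(N)) for _ in range(1000))
assert E_surr(x_star) <= best
```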
Iterative Thresholding

Algorithm: initialize $x^{(0)}$, set $\ell = 0$, and iterate $x^{(\ell+1)} = \underset{x}{\operatorname{argmin}} \; E(x, x^{(\ell)})$, i.e.

$u^{(\ell)} = x^{(\ell)} - \tau\,\Phi^*(\Phi x^{(\ell)} - y)$
$x^{(\ell+1)} = S_{\lambda\tau}^1(u^{(\ell)})$

Remark: $x^{(\ell)} \mapsto u^{(\ell)}$ is a gradient descent step on $\frac{1}{2}\|\Phi x - y\|^2$; $S_{\lambda\tau}^1$ is the proximal step associated to $\lambda\|x\|_1$.

Theorem: $x^{(\ell)} \to x^\star$, a minimizer of $E$.

[Figure: iterates $x^{(0)}, x^{(1)}, x^{(2)}, \ldots$ descending on the energy $E(\cdot)$.]
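The full iteration fits in a few lines; a sketch on random data (the step-size safety factor 0.9 and $\lambda$ are arbitrary choices):

```python
import numpy as np

def ista(Phi, y, lam, n_iter=2000):
    # Iterative thresholding for E(x) = 1/2 ||y - Phi x||^2 + lam ||x||_1.
    tau = 0.9 / np.linalg.norm(Phi, 2) ** 2      # ensures tau ||Phi||^2 < 1
    x = np.zeros(Phi.shape[1])                   # x^(0) = 0
    for _ in range(n_iter):
        u = x - tau * Phi.T @ (Phi @ x - y)      # gradient step on fidelity
        x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)  # prox step
    return x

rng = np.random.default_rng(3)
P, N = 30, 80
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, 5, replace=False)] = 1.0
y = Phi @ x0 + 0.01 * rng.standard_normal(P)
x_star = ista(Phi, y, lam=0.05)
```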
Convergence Study

Sparse deconvolution: $f^\star = \underset{f \in \mathbb{R}^N}{\operatorname{argmin}} \; E(f)$, with energy

$E(f) = \frac{1}{2}\|h \star f - y\|^2 + \lambda \sum_m |f[m]|$

$E$ is not strictly convex $\Rightarrow$ no convergence speed guarantee on the iterates.

[Figure: decay of the energy $E(f^{(k)})$ with the iteration $k$.]
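To observe the (rate-less) decay empirically, one can specialize the iteration to $\Phi$ = convolution by $h$ and record $E(f^{(\ell)})$; a sketch with a symmetric Gaussian kernel (so $\Phi^* = \Phi$), all parameters being arbitrary:

```python
import numpy as np

N, lam = 128, 0.02
rng = np.random.default_rng(4)
freqs = np.fft.fftfreq(N, d=1.0 / N)             # integer spatial indices
h = np.exp(-0.5 * (freqs / 2.0) ** 2)            # symmetric Gaussian kernel
h /= h.sum()
conv = lambda f: np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))

f0 = np.zeros(N)
f0[rng.choice(N, 6, replace=False)] = rng.standard_normal(6)
y = conv(f0) + 0.01 * rng.standard_normal(N)     # y = h * f0 + w

tau = 0.9 / np.max(np.abs(np.fft.fft(h))) ** 2   # ||Phi||^2 = max |h_hat|^2
f, energies = np.zeros(N), []
for _ in range(1000):
    u = f - tau * conv(conv(f) - y)              # h symmetric => Phi^* = Phi
    f = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)
    energies.append(0.5 * np.sum((conv(f) - y) ** 2) + lam * np.sum(np.abs(f)))
# energies is non-increasing, but no rate follows from convexity alone.
```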