Slide 1

Regularization of Inverse Problems: A Unified Analysis of Robustness
Samuel VAITER, CNRS, CEREMADE, Université Paris-Dauphine, France
Joint work with M. GOLBABAEE, G. PEYRÉ and J. FADILI

Slide 2

Linear Inverse Problems: inpainting, denoising, super-resolution.
Forward model: $y = \Phi x_0 + w$, with observations $y$, operator $\Phi$, input $x_0$, and noise $w$.
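
As a toy illustration (not from the slides), the forward model can be instantiated for inpainting, where $\Phi$ is a row-subsampled identity; all sizes and names below are illustrative choices.

```python
import numpy as np

# Toy instance of the forward model y = Phi x0 + w:
# inpainting observes only a subset of the samples of x0.
rng = np.random.default_rng(0)
N = 16
x0 = np.sin(np.linspace(0.0, 2.0 * np.pi, N))   # input signal x0
keep = np.sort(rng.permutation(N)[:10])          # indices kept by the inpainting mask
Phi = np.eye(N)[keep]                            # masking operator, shape (10, N)
w = 0.01 * rng.standard_normal(10)               # additive noise
y = Phi @ x0 + w                                 # observations
```

Denoising corresponds to $\Phi = \operatorname{Id}$ and super-resolution to a blur-then-subsample $\Phi$; only the operator changes, the model $y = \Phi x_0 + w$ stays the same.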

Slide 3

The Variational Approach
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
(data fidelity + regularity)

Slide 4

The Variational Approach
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
(data fidelity + regularity)
Two examples: $J(x) = \|x\|_2$ and $J(x) = \|x\|_1$.

Slide 5

The Variational Approach
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
(data fidelity + regularity)
Two examples: $J(x) = \|x\|_2$ and $J(x) = \|x\|_1$.
Candidate $J$: sparsity, analysis sparsity, group sparsity, nuclear norm, Tikhonov, total variation, anti-sparse, polyhedral, $\ell^1$ + TV, atomic norm, decomposable norm, ...
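
For $J = \|\cdot\|_1$ the variational problem is the Lasso; below is a minimal proximal-gradient (ISTA) sketch, with an illustrative random $\Phi$ and a noiseless $y$ (dimensions and the regularization level are arbitrary choices, not from the slides).

```python
import numpy as np

def ista(Phi, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Phi x||_2^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(Phi, 2) ** 2                 # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L           # gradient step on the data fidelity
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding prox
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 40)) / np.sqrt(20.0)
x0 = np.zeros(40)
x0[[3, 17]] = [2.0, -1.5]                           # a 2-sparse signal
y = Phi @ x0                                        # noiseless observations
x_hat = ista(Phi, y, lam=0.01)
```

Swapping the prox (block soft-thresholding, singular-value thresholding, ...) handles the other candidate gauges listed above within the same algorithmic scheme.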

Slide 6

Objectives
Model selection performance: from the input $x_0$ and the noise $w$ to the solution $x^\star$, under the prior model $J$.

Slide 8

Objectives
Model selection performance: from the input $x_0$ and the noise $w$ to the solution $x^\star$, under the prior model $J$.
How close is $x^\star$ to $x_0$? In terms of SNR; in terms of features.

Slide 9

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.

Slide 10

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.
Example: sparsity.

Slide 11

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.
Examples: sparsity, block sparsity.

Slide 12

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.
Examples: sparsity, block sparsity, analysis sparsity.

Slide 13

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.
Examples: sparsity, block sparsity, analysis sparsity, low rank.

Slide 14

Union of Linear Models
Union of models: $(T)_{T \in \mathcal{T}}$, a family of linear spaces.
Examples: sparsity, block sparsity, analysis sparsity, low rank.
Objective: encode $\mathcal{T}$ in a function $J$.

Slide 15

Gauges
$J : \mathbb{R}^N \to \mathbb{R}_+$ convex, positively homogeneous: $J(\lambda x) = \lambda J(x)$ for all $\lambda \geq 0$.

Slide 16

Gauges
$J : \mathbb{R}^N \to \mathbb{R}_+$ convex, positively homogeneous: $J(\lambda x) = \lambda J(x)$ for all $\lambda \geq 0$.
Unit ball: $C = \{x : J(x) \leq 1\}$.

Slide 17

Gauges
$J : \mathbb{R}^N \to \mathbb{R}_+$ convex, positively homogeneous: $J(\lambda x) = \lambda J(x)$ for all $\lambda \geq 0$.
Unit ball: $C = \{x : J(x) \leq 1\}$.
Geometry of $C$ $\leftrightarrow$ union of models $(T)_{T \in \mathcal{T}}$.
(Figure: unit balls of $\|x\|_1$, $|x_1| + \|x_{2,3}\|$, $\|x\|_\infty$, with the models $T$, $T_0$ attached to their singular points.)

Slide 18

Subdifferential
(Figure: the graph of $|x|$ and its supporting lines at $0$.)

Slide 20

Subdifferential
$\partial J(x) = \{\eta \in \mathbb{R}^N : \forall x', \; J(x') \geq J(x) + \langle \eta, x' - x \rangle\}$

Slide 21

Subdifferential
$\partial J(x) = \{\eta \in \mathbb{R}^N : \forall x', \; J(x') \geq J(x) + \langle \eta, x' - x \rangle\}$
Example: $\partial|\cdot|(0) = [-1, 1]$; for $x \neq 0$, $\partial|\cdot|(x) = \{\operatorname{sign}(x)\}$.
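
The two formulas for $\partial|\cdot|$ can be checked numerically against the defining inequality; a small sketch (the grid and tolerance are illustrative):

```python
import numpy as np

# Check the subgradient inequality |x'| >= |x| + eta*(x' - x) on a grid:
# at x = 0 the subdifferential of |.| is [-1, 1]; at x != 0 it is {sign(x)}.
xs = np.linspace(-2.0, 2.0, 401)

def is_subgradient(eta, x):
    """True if eta satisfies the subgradient inequality for |.| at x (on the grid)."""
    return bool(np.all(np.abs(xs) >= np.abs(x) + eta * (xs - x) - 1e-12))

inside = is_subgradient(0.7, 0.0)     # 0.7 lies in [-1, 1], the subdifferential at 0
outside = is_subgradient(1.2, 0.0)    # 1.2 is outside [-1, 1]
at_half = is_subgradient(1.0, 0.5)    # sign(0.5) = 1 is the only subgradient at 0.5
not_sub = is_subgradient(0.9, 0.5)    # 0.9 != sign(0.5): inequality fails near x' = 0.4
```

The large subdifferential at $0$ is exactly the "singular point" picture: the kink of $|x|$ admits a whole interval of supporting slopes.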

Slide 22

From the Subdifferential to the Model
(Figure: the subdifferential $\partial J(x)$ at two points $x$.)

Slide 23

From the Subdifferential to the Model
(Figure: the subdifferential $\partial J(x)$ at two points $x$.)
Model subspace: $T_x = \operatorname{VectHull}(\partial J(x))^\perp$. For $\ell^1$: $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.

Slide 24

From the Subdifferential to the Model
(Figure: the subdifferential $\partial J(x)$ at two points $x$.)
Model subspace: $T_x = \operatorname{VectHull}(\partial J(x))^\perp$. For $\ell^1$: $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
Model vector: $e_x = \operatorname{Proj}_{T_x}(\partial J(x))$. For $\ell^1$: $e_x = \operatorname{sign}(x)$.

Slide 25

Regularizations and their Models
- $J(x) = \|x\|_1$: $e_x = \operatorname{sign}(x)$, $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \sum_b \|x_b\|$: $e_x = (\mathcal{N}(x_b))_{b \in B}$ with $\mathcal{N}(x_b) = x_b / \|x_b\|$, $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \|x\|_*$: for $x = U \Lambda V^*$, $e_x = U V^*$, $T_x = \{\eta : U_\perp^* \eta V_\perp = 0\}$.
- $J(x) = \|x\|_\infty$: with $I = \{i : |x_i| = \|x\|_\infty\}$, $e_x = |I|^{-1} \operatorname{sign}(x)$, $T_x = \{\eta : \eta_I \propto \operatorname{sign}(x_I)\}$.
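
Two rows of this table can be computed directly; a small sketch with illustrative inputs:

```python
import numpy as np

# e_x for the l1 norm: sign(x), supported on supp(x).
x = np.array([0.0, 3.0, -1.0, 0.0])
e_l1 = np.sign(x)

# e_x for the nuclear norm: for x = U Lambda V*, e_x = U V* (restricted to the rank).
X = np.array([[2.0, 0.0], [0.0, 0.0]])          # a rank-1 matrix
U, s, Vt = np.linalg.svd(X)
r = int(np.sum(s > 1e-10))                      # numerical rank
e_nuc = U[:, :r] @ Vt[:r, :]                    # e_X = U V*
```

In both cases $e_x$ is the "center" of the subdifferential projected on the model space: the sign pattern for $\ell^1$, the angular part $U V^*$ of the SVD for the nuclear norm.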

Slide 26

Dual Certificates and Model Selection
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
Hypotheses: $\operatorname{Ker} \Phi \cap T_{x_0} = \{0\}$; $J$ regular enough.

Slide 27

Dual Certificates and Model Selection
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
Hypotheses: $\operatorname{Ker} \Phi \cap T_{x_0} = \{0\}$; $J$ regular enough.
Tight dual certificates: $\bar{D} = \operatorname{Im} \Phi^* \cap \operatorname{ri}(\partial J(x_0))$. (Noiseless case: $x^\star = x_0$ when a certificate lies in $\partial J(x_0)$.)

Slide 28

Dual Certificates and Model Selection
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$
Hypotheses: $\operatorname{Ker} \Phi \cap T_{x_0} = \{0\}$; $J$ regular enough.
Tight dual certificates: $\bar{D} = \operatorname{Im} \Phi^* \cap \operatorname{ri}(\partial J(x_0))$. (Noiseless case: $x^\star = x_0$ when a certificate lies in $\partial J(x_0)$.)
Minimal-norm pre-certificate: $\eta_0 = \Phi^* \Phi_{T_{x_0}}^{+,*} e_{x_0}$.
Theorem [V. et al. 2013]: if $\eta_0 \in \bar{D}$, $\|w\|$ is small enough and $\lambda \sim \|w\|$, then $x^\star$ is the unique solution; moreover $T_{x^\star} = T_{x_0}$ and $\|x^\star - x_0\| = O(\|w\|)$. ($\ell^1$: [Fuchs 2004]; $\ell^1$-$\ell^2$: [Bach 2008].)
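
For $J = \|\cdot\|_1$ the theorem's conclusion can be probed empirically (Lasso with $\lambda$ proportional to $\|w\|$); the dimensions, seed, constant in $\lambda$, and the plain proximal-gradient solver below are all illustrative choices, not part of the talk:

```python
import numpy as np

# Empirical check for J = ||.||_1: with lambda ~ ||w|| and small noise, the
# solution x* keeps the model T_{x0} (here: the support of x0) and the error
# ||x* - x0|| stays of the order of ||w||.
rng = np.random.default_rng(3)
n, N = 100, 200
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[[10, 50, 120]] = [2.0, -3.0, 1.5]
w = 0.01 * rng.standard_normal(n)
y = Phi @ x0 + w
lam = 4.0 * np.linalg.norm(w) / np.sqrt(n)      # lambda proportional to the noise level

L = np.linalg.norm(Phi, 2) ** 2
x = np.zeros(N)
for _ in range(3000):                           # plain proximal gradient (ISTA)
    z = x - Phi.T @ (Phi @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

same_model = set(np.flatnonzero(np.abs(x) > 0.1)) == set(np.flatnonzero(x0))
err = np.linalg.norm(x - x0)
```

Here `same_model` plays the role of $T_{x^\star} = T_{x_0}$, and `err` of the $O(\|w\|)$ bound.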

Slide 29

Example: Sparse Deconvolution
$\Phi x = \sum_i x_i \varphi(\cdot - i)$, $J(x) = \|x\|_1$.
Increasing the scale parameter: reduces correlation; reduces resolution.
(Figure: $x_0$ and $\Phi x_0$.)

Slide 30

Example: Sparse Deconvolution
$\Phi x = \sum_i x_i \varphi(\cdot - i)$, $J(x) = \|x\|_1$.
Increasing the scale parameter: reduces correlation; reduces resolution.
$I = \{j : x_0[j] \neq 0\}$. $\|\eta_{0, I^c}\|_\infty < 1 \Leftrightarrow \eta_0 \in \bar{D}$ $\Rightarrow$ support recovery.
(Figure: $\|\eta_{0, I^c}\|_\infty$ plotted against the scale parameter.)
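
The condition $\|\eta_{0,I^c}\|_\infty < 1$ can be evaluated numerically; a sketch for $J = \|\cdot\|_1$, using a hypothetical random $\Phi$ in place of the slide's convolution operator:

```python
import numpy as np

# Minimal-norm pre-certificate for J = ||.||_1, written on the support I:
#   eta0 = Phi^T Phi_I (Phi_I^T Phi_I)^{-1} sign(x0_I);
# support recovery is certified when ||eta0 restricted to I^c||_inf < 1.
rng = np.random.default_rng(1)
n, N = 100, 200
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[[5, 60, 141]] = [1.0, -2.0, 1.5]
I = np.flatnonzero(x0)

Phi_I = Phi[:, I]
s = np.sign(x0[I])
eta0 = Phi.T @ Phi_I @ np.linalg.solve(Phi_I.T @ Phi_I, s)

Ic = np.setdiff1d(np.arange(N), I)
certified = bool(np.max(np.abs(eta0[Ic])) < 1.0)    # Fuchs-type condition
```

For a convolution operator, rerunning this check while widening the kernel traces exactly the curve $\|\eta_{0,I^c}\|_\infty$ sketched on the slide.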

Slide 31

Example: 1D TV Denoising
$J(x) = \|\nabla x\|_1$, $\Phi = \operatorname{Id}$, $I = \{i : (\nabla x_0)_i \neq 0\}$.
(Figure: a piecewise-constant $x_0$.)

Slide 32

Example: 1D TV Denoising
$J(x) = \|\nabla x\|_1$, $\Phi = \operatorname{Id}$, $I = \{i : (\nabla x_0)_i \neq 0\}$.
$\eta_0 = \operatorname{div}(u_0)$ where $\forall j \in I$, $(u_0)_j = \operatorname{sign}((\nabla x_0)_j) = \pm 1$.
$\|u_{0, I^c}\|_\infty < 1$ $\Rightarrow$ support stability.

Slide 33

Example: 1D TV Denoising
$J(x) = \|\nabla x\|_1$, $\Phi = \operatorname{Id}$, $I = \{i : (\nabla x_0)_i \neq 0\}$.
$\eta_0 = \operatorname{div}(u_0)$ where $\forall j \in I$, $(u_0)_j = \operatorname{sign}((\nabla x_0)_j) = \pm 1$.
$\|u_{0, I^c}\|_\infty < 1$ $\Rightarrow$ support stability.
Second example: $\|u_{0, I^c}\|_\infty = 1$ $\Rightarrow$ $\ell^2$-stability only.
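
The vector $u_0$ of the slides can be computed for a tiny example by fixing $(u_0)_I = \pm 1$ and filling $u_0$ on $I^c$ by least squares; this minimal-energy fill-in is one natural choice, and the exact construction used in the talk may differ.

```python
import numpy as np

# 1D TV pre-certificate sketch: eta0 = D^T u0 with (u0)_j = sign((D x0)_j) on I,
# and u0 on I^c filled by minimizing ||D^T u0||_2 (a least-squares choice).
n = 4
x0 = np.array([0.0, 0.0, 1.0, 1.0])              # one interior jump
D = np.diff(np.eye(n), axis=0)                   # forward differences, shape (n-1, n)
g = D @ x0
I = np.flatnonzero(np.abs(g) > 1e-12)            # jump set
Ic = np.setdiff1d(np.arange(n - 1), I)

A = D.T                                          # divergence-like operator
s = np.sign(g[I])
v, *_ = np.linalg.lstsq(A[:, Ic], -A[:, I] @ s, rcond=None)

u0 = np.zeros(n - 1)
u0[I] = s
u0[Ic] = v
support_stable = bool(np.max(np.abs(u0[Ic])) < 1.0)   # the slide's condition
```

For this single interior jump the fill-in stays strictly inside $(-1, 1)$ off the jump set, matching the "support stability" case of the slides.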

Slide 34

Conclusion
Gauges: encode linear models as singular points.

Slide 35

Conclusion
Gauges: encode linear models as singular points.
Certificates: guarantees of model selection / $\ell^2$ robustness (see poster 208 for a pure robustness result).

Slide 36

Conclusion
Gauges: encode linear models as singular points.
Certificates: guarantees of model selection / $\ell^2$ robustness (see poster 208 for a pure robustness result).
Thank you for your attention!