Regularizations of Inverse Problems
GRETSI'13, Quartz, Brest, September 2013.
Samuel Vaiter
September 06, 2013
Transcript
Regularization of Inverse Problems: A Unified Robustness Analysis. Samuel Vaiter, CNRS, CEREMADE, Université Paris-Dauphine, France. Joint work with M. Golbabaee, G. Peyré, and J. Fadili.
Linear Inverse Problems. Typical instances: inpainting, denoising, super-resolution. Forward model: $y = \Phi x_0 + w$, where $y$ are the observations, $\Phi$ the linear operator, $x_0$ the input signal, and $w$ the noise.
The Variational Approach. $x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$, where the first term is the data fidelity and $J$ encodes the regularity prior. Two basic examples: $J(x) = \|x\|_2^2$ (Tikhonov) and $J(x) = \|x\|_1$ (sparsity). Candidate $J$: sparsity, analysis-sparsity, group-sparsity, nuclear norm, Tikhonov, total variation, anti-sparse, polyhedral, $\ell^1 + \mathrm{TV}$, ..., atomic norms, decomposable norms.
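The slides stop at the formula, but the variational approach with $J = \|\cdot\|_1$ (the Lasso) can be made concrete with a minimal proximal-gradient (ISTA) sketch; the operator, sizes, and parameter values below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def soft_threshold(u, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def ista(Phi, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - Phi x||_2^2 + lam * ||x||_1 by proximal gradient."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)       # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative forward model y = Phi x0 + w with a sparse x0 (values assumed).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 200)) / np.sqrt(50)
x0 = np.zeros(200)
x0[[10, 70, 150]] = [2.0, -1.5, 1.0]
y = Phi @ x0 + 0.02 * rng.standard_normal(50)
x_star = ista(Phi, y, lam=0.05)
```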
Objectives: model selection performance. Given $y = \Phi x_0 + w$ and a prior model encoded by $J$, how close is the recovered $x^\star$ to $x_0$? In terms of SNR; in terms of features.
Union of Linear Models. A union of models is a collection $\mathcal{T}$ of linear subspaces $T \in \mathcal{T}$. Examples: sparsity, block sparsity, analysis sparsity, low rank. Objective: encode $\mathcal{T}$ in a function $J$.
Gauges. $J : \mathbb{R}^N \to \mathbb{R}^+$ convex with $J(\lambda x) = \lambda J(x)$ for all $\lambda \geq 0$, and unit ball $C = \{x : J(x) \leq 1\}$. The geometry of $C$ reflects the union of models $(T)_{T \in \mathcal{T}}$: the model spaces are attached to the singular points of the boundary of $C$. [Figure: unit balls of $\|x\|_1$, $|x_1| + \|x_{2,3}\|$, $\|x\|_2$, and $\|x\|_\infty$, with the model space $T$ attached to boundary points $x$.]
Subdifferential. $\partial J(x) = \{\eta \in \mathbb{R}^N : \forall x',\; J(x') \geq J(x) + \langle \eta, x' - x \rangle\}$. For the absolute value: $\partial|\cdot|(0) = [-1, 1]$, while for $x \neq 0$, $\partial|\cdot|(x) = \{\operatorname{sign}(x)\}$.
From the Subdifferential to the Model. The model tangent space at $x$ is $T_x = \operatorname{VectHull}(\partial J(x))^\perp$ and the model vector is $e_x = \operatorname{Proj}_{T_x}(\partial J(x))$. For $J = \|\cdot\|_1$: $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$ and $e_x = \operatorname{sign}(x)$.
Regularizations and their Models.
- $J(x) = \|x\|_1$: $e_x = \operatorname{sign}(x)$ and $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \sum_{b \in B} \|x_b\|$ (group sparsity): $e_x = (\mathcal{N}(x_b))_{b \in B}$ with $\mathcal{N}(x_b) = x_b / \|x_b\|$, and $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \|x\|_*$ (nuclear norm), with SVD $x = U \Lambda V^*$: $e_x = U V^*$ and $T_x = \{\eta : U_\perp^* \eta V_\perp = 0\}$.
- $J(x) = \|x\|_\infty$, with $I = \{i : |x_i| = \|x\|_\infty\}$: $e_x = |I|^{-1} \operatorname{sign}(x)$ and $T_x = \{\eta : \eta_I \propto \operatorname{sign}(x_I)\}$.
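As a hedged companion to this table, the pair $(e_x, T_x)$ can be computed numerically for two of the rows; the helper names and the tolerance `tol` below are assumptions, and $T_x$ is represented by its orthogonal projector.

```python
import numpy as np

def l1_model(x, tol=1e-10):
    """(e_x, projector onto T_x) for J = ||.||_1."""
    supp = np.abs(x) > tol
    e_x = np.sign(x) * supp                # e_x = sign(x) on the support
    proj_T = np.diag(supp.astype(float))   # T_x = {eta : supp(eta) in supp(x)}
    return e_x, proj_T

def nuclear_model(X, tol=1e-10):
    """(e_X, projector onto T_X) for the nuclear norm, X = U Lam V*."""
    U, s, Vt = np.linalg.svd(X)
    r = int(np.sum(s > tol))               # rank of X
    U_r, V_r = U[:, :r], Vt[:r, :].T
    e_X = U_r @ V_r.T                      # e_X = U V*
    PU, PV = U_r @ U_r.T, V_r @ V_r.T
    def proj_T(eta):
        # Projection onto T_X = {eta : U_perp* eta V_perp = 0}
        return PU @ eta + eta @ PV - PU @ eta @ PV
    return e_X, proj_T
```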
Dual Certificates and Model Selection. Consider $x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$. Hypotheses: $\operatorname{Ker}\Phi \cap T_{x_0} = \{0\}$ and $J$ regular enough. Tight dual certificates: $\bar{D} = \operatorname{Im}\Phi^* \cap \operatorname{ri}(\partial J(x_0))$. Minimal-norm pre-certificate: $\eta_0 = \Phi^* (\Phi_{T_{x_0}}^{+})^* e_{x_0}$. Theorem [Vaiter et al. 2013]: if $\eta_0 \in \bar{D}$, $\|w\|$ is small enough and $\lambda \sim \|w\|$, then $x^\star$ is the unique solution; moreover $T_{x^\star} = T_{x_0}$ and $\|x^\star - x_0\| = O(\|w\|)$. Special cases: $\ell^1$ [Fuchs 2004], $\ell^1$-$\ell^2$ [Bach 2008].
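For $J = \|\cdot\|_1$, the pre-certificate specializes to the Fuchs vector $\eta_0 = \Phi^* \Phi_I (\Phi_I^* \Phi_I)^{-1} \operatorname{sign}(x_{0,I})$, and the theorem's condition reduces to $\|\eta_{0,I^c}\|_\infty < 1$. A minimal numerical check might look as follows (a sketch, under the assumption that $\Phi_I$ has full column rank):

```python
import numpy as np

def fuchs_precertificate(Phi, x0, tol=1e-10):
    """eta_0 = Phi^* (Phi_I^+)^* sign(x0_I) for J = ||.||_1, plus its
    sup-norm off the support (< 1  <=>  eta_0 in ri(dJ(x0)))."""
    I = np.where(np.abs(x0) > tol)[0]
    Phi_I = Phi[:, I]
    # (Phi_I^+)^* sign(x0_I) = Phi_I (Phi_I^* Phi_I)^{-1} sign(x0_I)
    p = Phi_I @ np.linalg.solve(Phi_I.T @ Phi_I, np.sign(x0[I]))
    eta0 = Phi.T @ p                       # on I, eta0 equals sign(x0_I) exactly
    Ic = np.setdiff1d(np.arange(Phi.shape[1]), I)
    return eta0, np.max(np.abs(eta0[Ic]))
```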
Example: Sparse Deconvolution. $\Phi x = \sum_i x_i \varphi(\cdot - i)$ with $J(x) = \|x\|_1$, i.e. the observation is a sparse spike train $x_0$ convolved with a kernel $\varphi$. Increasing the spacing between the spikes of $x_0$ reduces their correlation through $\Phi$, but also reduces the resolution. With $I = \{j : x_0[j] \neq 0\}$: if $\|\eta_{0,I^c}\|_\infty < 1$ then $\eta_0 \in \bar{D}$, hence support recovery. [Figure: $\|\eta_{0,I^c}\|_\infty$ as a function of the spike spacing (between 2 and 20), compared with the critical value 1.]
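One way to reproduce the qualitative behaviour of the figure, reusing the `fuchs_precertificate` sketch above (the Gaussian kernel, its width, and the two-spike signal are assumptions):

```python
import numpy as np

n, sigma = 200, 3.0
t = np.arange(n)
# Columns of Phi are shifted copies of a Gaussian kernel phi(. - i).
Phi = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma**2))
Phi /= np.linalg.norm(Phi, axis=0)

for sep in (2, 5, 10, 20):                 # spacing between two opposite spikes
    x0 = np.zeros(n)
    x0[n // 2], x0[n // 2 + sep] = 1.0, -1.0
    _, off_supp = fuchs_precertificate(Phi, x0)
    print(sep, off_supp)                   # drops below 1 as the spikes separate
```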
Example: 1D TV Denoising. $J(x) = \|\nabla x\|_1$, $\Phi = \operatorname{Id}$, and $I = \{i : (\nabla x_0)_i \neq 0\}$ is the jump set of $x_0$. The minimal-norm pre-certificate reads $\eta_0 = \operatorname{div}(\rho_0)$ where, for $j \in I$, $(\rho_0)_j = \operatorname{sign}((\nabla x_0)_j) = \pm 1$. If $\|\rho_{0,I^c}\|_\infty < 1$: support stability. For a staircasing signal $x_0$, $\|\rho_{0,I^c}\|_\infty = 1$: $\ell^2$-stability only.
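This pre-certificate can be checked numerically: $\rho_0$ is fixed to $\pm 1$ on the jumps and chosen to minimize the energy of $\operatorname{div}(\rho_0)$ elsewhere, a small least-squares problem. The sketch below (the finite-difference convention, signal, and sizes are assumptions) recovers the staircasing effect, where the value saturates at 1.

```python
import numpy as np

def tv_precertificate_sup(x0, tol=1e-10):
    """||rho_{0,I^c}||_inf for J(x) = ||grad x||_1, Phi = Id: rho_0 equals
    sign(grad x0) on the jump set I, least-squares-minimal elsewhere."""
    n = x0.size
    D = np.diff(np.eye(n), axis=0)         # forward differences, (Dx)_i = x_{i+1} - x_i
    g = D @ x0
    I = np.abs(g) > tol                    # jump set of x0
    # min over rho_{I^c} of ||D^T rho||_2 with rho_I = sign(g_I)
    b = D.T[:, I] @ np.sign(g[I])
    rho_Ic, *_ = np.linalg.lstsq(D.T[:, ~I], -b, rcond=None)
    return np.max(np.abs(rho_Ic))

# Monotone staircase: all jumps share the same sign, so rho_0 stays at 1
# between jumps and the certificate saturates (l2-stability only).
x0 = np.repeat([0.0, 1.0, 2.0, 3.0], 16)
print(tv_precertificate_sup(x0))           # approximately 1.0
```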
Conclusion. Gauges encode linear models as singular points of their unit balls. Certificates give guarantees of model selection / $\ell^2$ robustness (see poster 208 for a pure robustness result). Thank you for your attention!