Regularizations of Inverse Problems
GRETSI'13, Quartz, Brest, September 2013.
Samuel Vaiter
September 06, 2013
Transcript
Regularization of Inverse Problems: A Unified Analysis of Robustness. Samuel VAITER, CNRS, CEREMADE, Université Paris-Dauphine, France. Joint work with M. GOLBABAEE, G. PEYRÉ and J. FADILI.
Linear Inverse Problems (inpainting, denoising, super-resolution). Forward model: $y = \Phi x_0 + w$, where $y$ are the observations, $x_0$ the input, $\Phi$ the operator and $w$ the noise.
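As a concrete illustration (not from the slides), here is a minimal numpy sketch of this forward model for a deconvolution task; the kernel width, spike positions and noise level are arbitrary choices.

```python
# Minimal sketch of the forward model y = Phi x0 + w (hypothetical
# deconvolution setup: Phi is circular convolution with a Gaussian kernel).
import numpy as np

rng = np.random.default_rng(0)
N = 128
x0 = np.zeros(N)
x0[[20, 50, 90]] = [1.0, -0.7, 0.5]              # sparse input signal

t = np.arange(N)
phi = np.exp(-0.5 * ((t - N // 2) / 3.0) ** 2)   # Gaussian blur kernel
phi_hat = np.fft.fft(np.fft.ifftshift(phi))      # kernel in the Fourier domain

def Phi(x):
    """Circular convolution of x with phi."""
    return np.real(np.fft.ifft(np.fft.fft(x) * phi_hat))

w = 0.02 * rng.standard_normal(N)                # additive noise
y = Phi(x0) + w                                  # observations
```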
The Variational Approach: $x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$, a data-fidelity term plus a regularity term. Pictured examples: $J(x) = \|\nabla x\|_2$ and $J(x) = \|\nabla x\|_1$. Candidate $J$: sparsity, analysis sparsity, group sparsity, nuclear norm, Tikhonov, total variation, anti-sparse, polyhedral, $\ell^1$ + TV, ..., atomic norms, decomposable norms.
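To make the variational problem concrete, here is a sketch of a standard solver for the $\ell^1$ case (ISTA, i.e. proximal gradient descent); the synthetic matrix, $\lambda$ and iteration count below are placeholder choices, not values from the talk.

```python
# Sketch: solve 0.5*||y - Phi @ x||_2^2 + lam*||x||_1 by proximal gradient (ISTA).
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L        # gradient step on the data fidelity
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
x0 = np.zeros(100); x0[[5, 37, 81]] = [1.0, -2.0, 1.5]
y = Phi @ x0 + 0.05 * rng.standard_normal(40)
x_star = ista(Phi, y, lam=0.5)                   # sparse estimate of x0
```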
Objectives: model selection performance. From $x_0$ through $y = \Phi x_0 + w$ and the prior model $J$ to the estimate $x^\star$. How close is $x^\star$ to $x_0$? In terms of SNR; in terms of features.
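The two notions of closeness can be made concrete as follows; the function names, and taking "features" to mean the support, are illustrative assumptions.

```python
# Sketch: two figures of merit for a recovery x of x0.
import numpy as np

def snr_db(x0, x):
    """Signal-to-noise ratio of the recovery, in decibels."""
    return 20.0 * np.log10(np.linalg.norm(x0) / np.linalg.norm(x - x0))

def same_features(x0, x, tol=1e-8):
    """Feature agreement, here: do x and x0 share the same support?"""
    return np.array_equal(np.abs(x0) > tol, np.abs(x) > tol)
```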
Union of Linear Models. Union of models: a collection $\mathcal{T}$ of linear spaces $T \in \mathcal{T}$. Examples: sparsity, block sparsity, analysis sparsity, low rank. Objective: encode $\mathcal{T}$ in a function $J$.
Gauges. $J : \mathbb{R}^N \to \mathbb{R}_+$ convex with $J(\lambda x) = \lambda J(x)$ for all $\lambda \geq 0$; its unit ball is $C = \{x : J(x) \leq 1\}$. The geometry of $C$ encodes the union of models $(T)_{T \in \mathcal{T}}$: the model spaces $T$ appear at the singular points of $C$. [Figure: unit balls of $\|x\|_1$, $|x_1| + \|x_{2,3}\|_2$, $\|x\|_2$ and $\|x\|_\infty$, with the spaces $T$, $T_0$ attached to points $x$, $x_0$.]
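A quick numerical sanity check of positive homogeneity for a few of the gauges above (a sketch; the test vector is arbitrary):

```python
# Sketch: check J(lambda * x) = lambda * J(x) for lambda >= 0.
import numpy as np

gauges = {
    "l1":   lambda x: np.abs(x).sum(),
    "l2":   np.linalg.norm,
    "linf": lambda x: np.abs(x).max(),
}
x = np.random.default_rng(1).standard_normal(5)
for name, J in gauges.items():
    assert np.isclose(J(3.0 * x), 3.0 * J(x)), name
```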
Subdifferential. $\partial J(x) = \{\eta \in \mathbb{R}^N : \forall x',\ J(x') \geq J(x) + \langle \eta, x' - x \rangle\}$. Example for $J = |\cdot|$: $\partial|\cdot|(0) = [-1, 1]$ and, for $x \neq 0$, $\partial|\cdot|(x) = \{\operatorname{sign}(x)\}$.
From the Subdifferential to the Model. Model space: $T_x = \mathrm{VectHull}(\partial J(x))^{\perp}$; for $J = \|\cdot\|_1$, $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$. Model vector: $e_x = \operatorname{Proj}_{T_x}(\partial J(x))$; for $J = \|\cdot\|_1$, $e_x = \operatorname{sign}(x)$.
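For $J = \|\cdot\|_1$ these two objects are easy to compute explicitly; a sketch (the function name is ours):

```python
# Sketch: model space T_x (as a projector) and model vector e_x for J = ||.||_1.
import numpy as np

def model_l1(x, tol=1e-10):
    I = np.abs(x) > tol                           # support of x
    P_T = np.diag(I.astype(float))                # orthogonal projector onto T_x
    e_x = np.sign(x) * I                          # e_x = Proj_{T_x}(dJ(x)) = sign(x)
    return P_T, e_x
```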
Regularizations and their Models.
- $J(x) = \|x\|_1$: $e_x = \operatorname{sign}(x)$, $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \sum_{b \in B} \|x_b\|$: $e_x = (\mathcal{N}(x_b))_{b \in B}$ where $\mathcal{N}(x_b) = x_b / \|x_b\|$, $T_x = \{\eta : \operatorname{supp}(\eta) \subseteq \operatorname{supp}(x)\}$.
- $J(x) = \|x\|_*$: with $x = U \Lambda V^*$, $e_x = U V^*$, $T_x = \{\eta : U_\perp^* \eta V_\perp = 0\}$.
- $J(x) = \|x\|_\infty$: with $I = \{i : |x_i| = \|x\|_\infty\}$, $e_x = |I|^{-1} \operatorname{sign}(x)$, $T_x = \{\eta : \eta_I \in \mathbb{R}\,\operatorname{sign}(x_I)\}$.
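For the nuclear-norm row, $e_x$ and the projector onto $T_x$ follow from an SVD; a sketch using the standard low-rank tangent-space formula $\operatorname{Proj}_{T_x}(Z) = P_U Z + Z P_V - P_U Z P_V$:

```python
# Sketch: model objects for the nuclear norm J = ||.||_* at a matrix X.
import numpy as np

def model_nuclear(X, tol=1e-10):
    U, s, Vt = np.linalg.svd(X)
    r = int((s > tol).sum())                      # rank of X
    U, Vt = U[:, :r], Vt[:r, :]
    e_x = U @ Vt                                  # e_x = U V*
    P_U, P_V = U @ U.T, Vt.T @ Vt                 # projectors onto col/row spaces
    proj_T = lambda Z: P_U @ Z + Z @ P_V - P_U @ Z @ P_V
    return proj_T, e_x
```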
Dual Certificates and Model Selection. $x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x)$. Hypotheses: $\operatorname{Ker} \Phi \cap T_{x_0} = \{0\}$ and $J$ regular enough. Tight dual certificates: $\bar{D} = \operatorname{Im} \Phi^* \cap \operatorname{ri}(\partial J(x_0))$; any $\eta \in \bar{D}$ certifies that $x_0$ solves $\min_{\Phi x = \Phi x_0} J(x)$. Minimal-norm pre-certificate: $\eta_0 = \Phi^* (\Phi_{T_{x_0}}^{+})^* e_{x_0}$. If $\eta_0 \in \bar{D}$, $\|w\|$ is small enough and $\lambda \sim \|w\|$, then $x^\star$ is the unique solution; moreover, $T_{x^\star} = T_{x_0}$ and $\|x^\star - x_0\| = O(\|w\|)$ [V. et al. 2013]. Special cases: $\ell^1$: [Fuchs 2004]; $\ell^1$-$\ell^2$: [Bach 2008].
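A sketch of computing the minimal-norm pre-certificate in the $\ell^1$ case, together with the Fuchs-type quantity $\|\eta_{0,I^c}\|_\infty$ (the function name is ours). If the returned sup-norm is strictly below 1, the theorem above applies in the small-noise regime.

```python
# Sketch: minimal-norm pre-certificate eta_0 = Phi^* (Phi_T^+)^* e_{x0} for l1.
import numpy as np

def precertificate_l1(Phi, x0, tol=1e-10):
    I = np.abs(x0) > tol                          # support of x0, i.e. T_{x0}
    e_T = np.sign(x0[I])                          # e_{x0} restricted to the support
    eta0 = Phi.T @ np.linalg.pinv(Phi[:, I]).T @ e_T
    return eta0, np.abs(eta0[~I]).max()           # eta_0 and ||eta_{0,I^c}||_inf
```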
Example: Sparse Deconvolution. $\Phi x = \sum_i x_i \varphi(\cdot - i)$, $J(x) = \|x\|_1$. Increasing $\gamma$: reduces correlation, reduces resolution. With $I = \{j : x_0[j] \neq 0\}$: $\|\eta_{0,I^c}\|_\infty < 1 \iff \eta_0 \in \bar{D} \implies$ support recovery. [Figure: $\|\eta_{0,I^c}\|_\infty$ as a function of $\gamma$, crossing the critical level 1.]
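A sketch of this experiment: build the convolution operator for several (hypothetical) kernel widths and monitor the certificate criterion; support recovery is guaranteed in the small-noise regime exactly when the printed value is below 1.

```python
# Sketch: certificate criterion ||eta_{0,I^c}||_inf versus kernel width.
import numpy as np

N = 64
x0 = np.zeros(N); x0[[20, 30, 45]] = [1.0, -1.0, 0.8]
I = x0 != 0
t = np.arange(N)
for width in [0.5, 1.0, 2.0, 4.0]:
    phi = np.exp(-0.5 * ((t - N // 2) / width) ** 2)
    # circulant matrix whose i-th column is phi shifted to position i
    Phi = np.stack([np.roll(np.fft.ifftshift(phi), i) for i in range(N)], axis=1)
    eta0 = Phi.T @ np.linalg.pinv(Phi[:, I]).T @ np.sign(x0[I])
    print(f"width={width:3.1f}  ||eta_0,Ic||_inf = {np.abs(eta0[~I]).max():.3f}")
```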
Example: 1D TV Denoising. $J(x) = \|\nabla x\|_1$, $\Phi = \operatorname{Id}$, $I = \{i : (\nabla x_0)_i \neq 0\}$. [Figure: staircase signal $x_0$ with jump set $I$ marked $\pm 1$.] Pre-certificate: $\eta_0 = \operatorname{div}(\bar\eta_0)$ where $(\bar\eta_0)_j = \pm 1$ for $j \in I$. If $\|\bar\eta_{0,I^c}\|_\infty < 1$: support stability. For a signal $x_0$ with $\|\bar\eta_{0,I^c}\|_\infty = 1$: $\ell^2$-stability only.
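A sketch of 1D TV denoising by projected gradient on the dual problem (a Chambolle-type scheme; the signal and parameters are illustrative), which lets one observe whether the jump set of $x_0$ survives in the solution.

```python
# Sketch: minimize 0.5*||y - x||_2^2 + lam*||grad x||_1 via its dual.
import numpy as np

def grad(x):                                      # forward differences
    return x[1:] - x[:-1]

def grad_T(p):                                    # adjoint of grad (= -div)
    return np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))

def tv_denoise_1d(y, lam, n_iter=5000):
    p = np.zeros(len(y) - 1)                      # dual variable, one per difference
    for _ in range(n_iter):
        x = y - grad_T(p)                         # primal iterate from the dual
        p = np.clip(p + 0.25 * grad(x), -lam, lam)  # projected gradient step
    return y - grad_T(p)

rng = np.random.default_rng(0)
x0 = np.concatenate([np.zeros(30), np.ones(30), 0.3 * np.ones(30)])
y = x0 + 0.05 * rng.standard_normal(len(x0))
x = tv_denoise_1d(y, lam=0.5)
print(np.flatnonzero(np.abs(grad(x)) > 0.05))     # compare with the jump set {29, 59}
```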
Conclusion. Gauges: encode linear models as singular points. Certificates: guarantees of model selection / $\ell^2$ robustness (see poster 208 for a pure robustness result). Thank you for your attention!