A Sparse Tour of Imaging Sciences
Gabriel Peyré
www.numerical-tours.com
Signals, Images and More
Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization
• Compressed Sensing
• Iterative Soft Thresholding Algorithm
Orthogonal Decompositions
Continuous signal/image $f \in L^2([0,1]^d)$.
Orthogonal basis $\{\psi_m\}_m$ of $L^2([0,1]^d)$:
$f = \sum_m \langle f, \psi_m \rangle \, \psi_m$
$\|f\|^2 = \int |f(x)|^2 \, dx = \sum_m |\langle f, \psi_m \rangle|^2$
1-D Wavelet Basis
Wavelets: $\psi_{j,n}(x) = \frac{1}{2^{j/2}} \, \psi\!\left(\frac{x - 2^j n}{2^j}\right)$
Position $n$, scale $2^j$, $m = (n, j)$.

2-D Fourier Basis
Basis $\{\psi_m(x)\}_m$ of $L^2([0,1])$.
Tensor product: $\psi_{m_1,m_2}(x_1,x_2) = \psi_{m_1}(x_1)\, \psi_{m_2}(x_2)$,
giving a basis $\{\psi_{m_1,m_2}(x_1,x_2)\}_{m_1,m_2}$ of $L^2([0,1]^2)$.
Fourier transform: $f(x) \mapsto \langle f, \psi_{m_1,m_2} \rangle$.
2-D Wavelet Basis
3 elementary wavelets $\{\psi^H, \psi^V, \psi^D\}$.
Orthogonal basis of $L^2([0,1]^2)$:
$\left\{ \psi^k_{j,n}(x) = 2^{-j}\, \psi^k(2^{-j}x - n) \right\}_{k \in \{H,V,D\},\ j<0,\ 2^j n \in [0,1]^2}$

Example of Wavelet Decomposition
Wavelet transform: $f(x) \mapsto \{\langle f, \psi^k_{j,n} \rangle\}_{(j,n,k)}$.
Discrete Computations
Discrete orthogonal basis $\{\psi_m\}$ of $\mathbb{C}^N$: $f = \sum_m \langle f, \psi_m \rangle \, \psi_m$.
Discrete Fourier basis: $\psi_m[n] = \frac{1}{\sqrt{N}} e^{\frac{2i\pi}{N} nm}$; Fast Fourier Transform (FFT), $O(N\log(N))$ operations.
Discrete wavelet basis: no closed-form expression; Fast Wavelet Transform, $O(N)$ operations.
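As a concrete illustration (not from the slides), here is a minimal NumPy sketch of the discrete orthogonal Fourier decomposition above: the unitary FFT (`norm="ortho"`) computes the coefficients $\langle f, \psi_m \rangle$, the inverse FFT resynthesizes $f$, and Parseval's identity is checked numerically. The test signal is an arbitrary choice.

```python
# Minimal sketch: decomposition in the orthonormal Fourier basis
# psi_m[n] = e^{2i pi n m / N} / sqrt(N), reconstruction, and Parseval check.
import numpy as np

N = 256
t = np.arange(N) / N
f = np.sin(2 * np.pi * 7 * t) + 0.5 * np.cos(2 * np.pi * 19 * t)

a = np.fft.fft(f, norm="ortho")            # a[m] = <f, psi_m>
f_rec = np.fft.ifft(a, norm="ortho").real  # f = sum_m a[m] psi_m

assert np.allclose(f, f_rec)                           # perfect reconstruction
assert np.isclose(np.sum(f**2), np.sum(np.abs(a)**2))  # ||f||^2 = sum_m |a[m]|^2
```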
Sparse Approximation in a Basis
Nonlinear $M$-term approximation: keep the $M$ largest coefficients, $f_M = \sum_{m \in I_M} \langle f, \psi_m \rangle \, \psi_m$, where $I_M$ indexes the $M$ largest $|\langle f, \psi_m \rangle|$.

Efficiency of Transforms
Best basis $\Longleftrightarrow$ fastest error decay $\|f - f_M\|^2$.
[Figure: $\log(\|f - f_M\|)$ as a function of $\log(M)$ for the Fourier, DCT, local DCT and wavelet bases.]
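The following sketch (an illustration under simple assumptions: a 1-D piecewise-smooth test signal, and the Fourier basis standing in for the transforms compared in the figure) computes the nonlinear $M$-term approximation by keeping the $M$ largest coefficients and reports the error decay.

```python
# Best M-term approximation in an orthonormal basis (unitary FFT here).
import numpy as np

N = 1024
t = np.arange(N) / N
f = np.where(t < 0.5, np.sin(8 * np.pi * t), 0.3)   # piecewise-smooth test signal

a = np.fft.fft(f, norm="ortho")
order = np.argsort(np.abs(a))[::-1]                 # coefficients sorted by magnitude

for M in [16, 64, 256]:
    aM = np.zeros_like(a)
    aM[order[:M]] = a[order[:M]]                    # keep the M largest coefficients
    fM = np.fft.ifft(aM, norm="ortho").real
    print(M, np.sum((f - fM) ** 2))                 # ||f - f_M||^2 decays with M
```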
Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization
• Compressed Sensing
• Iterative Soft Thresholding Algorithm
[Figure: JPEG-2000 vs. JPEG at 0.2 bit/pixel.]
Compression by Transform-coding
[Figure: image $f$, zoom on $f$, and decompressed image $f_R$ at $R = 0.2$ bit/pixel.]
Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.
Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$ (bins of width $T$).
Entropic coding: use statistical redundancy (many 0's).
Decoding, then dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \frac{1}{2} \right) T$.
Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m] \, \psi_m$ ($I_T$: indices of the non-zero quantized coefficients).
Theorem: $\|f - f_M\|^2 = O(M^{-\alpha}) \implies \|f - f_R\|^2 = O(\log^\alpha(R)\, R^{-\alpha})$.
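A hedged sketch of the quantizer and dequantizer defined above; the coefficients are drawn at random here (a Laplacian toy model of transform coefficients, an illustrative assumption), and the entropic coding step is not implemented.

```python
# Dead-zone quantizer / dequantizer of the transform-coding scheme.
import numpy as np

def quantize(a, T):
    """q[m] = sign(a[m]) * floor(|a[m]| / T): integer bin index, bin width T."""
    return np.sign(a) * np.floor(np.abs(a) / T)

def dequantize(q, T):
    """a~[m] = sign(q[m]) * (|q[m]| + 1/2) * T: center of the quantization bin."""
    return np.sign(q) * (np.abs(q) + 0.5) * T

rng = np.random.default_rng(0)
a = rng.laplace(scale=1.0, size=10_000)   # heavy-tailed, like wavelet coefficients
T = 0.5
q = quantize(a, T)
a_tilde = dequantize(q, T)

print("fraction of zeros:", np.mean(q == 0))      # many zeros -> efficient entropic coding
print("max error:", np.max(np.abs(a - a_tilde)))  # at most T in the dead zone, T/2 elsewhere
```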
Noise in Images
Denoising
Noisy observations $f = f_0 + w$ (noise level $\sigma$), decomposed as $f = \sum_{m=0}^{N-1} \langle f, \psi_m \rangle \, \psi_m$.
Thresholding: $\tilde f = \sum_{|\langle f, \psi_m \rangle| > T} \langle f, \psi_m \rangle \, \psi_m$.
Theorem: if $\|f_0 - f_{0,M}\|^2 = O(M^{-\alpha})$, then $\mathbb{E}(\|\tilde f - f_0\|^2) = O\!\left(\sigma^{\frac{2\alpha}{\alpha+1}}\right)$ for $T = \sigma\sqrt{2\log(N)}$.
In practice: $T \approx 3\sigma$.
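A small denoising sketch under stated assumptions: a synthetic 1-D signal, the orthonormal Fourier basis standing in for the wavelet basis of the slides, and hard thresholding at $T = \sigma\sqrt{2\log(N)}$.

```python
# Denoising by thresholding the coefficients in an orthonormal basis.
import numpy as np

N = 2048
rng = np.random.default_rng(1)
t = np.arange(N) / N
f0 = np.sin(2 * np.pi * 5 * t) * (t < 0.6)      # clean signal
sigma = 0.3
f = f0 + sigma * rng.standard_normal(N)         # noisy observation

a = np.fft.fft(f, norm="ortho")
T = sigma * np.sqrt(2 * np.log(N))
a_thresh = a * (np.abs(a) > T)                  # keep only coefficients above T
f_denoised = np.fft.ifft(a_thresh, norm="ortho").real

print("noisy error   :", np.sum((f - f0) ** 2))
print("denoised error:", np.sum((f_denoised - f0) ** 2))
```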
Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization
• Compressed Sensing
• Iterative Soft Thresholding Algorithm
Inverse Problems
Recovering $f_0 \in \mathbb{R}^N$ from noisy observations: $y = K f_0 + w \in \mathbb{R}^P$,
with $K : \mathbb{R}^N \to \mathbb{R}^P$ and $P \ll N$ (missing information).
Examples: inpainting, super-resolution, ...

Inverse Problems in Medical Imaging
Tomography (scanner): $Kf = (p_{\theta_k})_k$, projections along a set of angles $\theta_k$.
Magnetic resonance imaging (MRI): $Kf = (\hat f(\omega))_{\omega \in \Omega}$, Fourier samples over a frequency set $\Omega$.
Other examples: MEG, EEG, ...
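A toy sketch of the MRI-type operator $Kf = (\hat f(\omega))_{\omega \in \Omega}$ in 1-D; the random frequency set $\Omega$, the unitary FFT and the zero-filled adjoint reconstruction are illustrative choices, not taken from the slides.

```python
# Partial-Fourier measurement operator and its adjoint (1-D toy model of MRI).
import numpy as np

N, P = 256, 64
rng = np.random.default_rng(2)
Omega = rng.choice(N, size=P, replace=False)    # measured frequencies

def K(f):
    """P partial Fourier measurements of f."""
    return np.fft.fft(f, norm="ortho")[Omega]

def K_adjoint(y):
    """Adjoint: place the measurements back at their frequencies, inverse FFT."""
    fhat = np.zeros(N, dtype=complex)
    fhat[Omega] = y
    return np.fft.ifft(fhat, norm="ortho")

f0 = rng.standard_normal(N)
y = K(f0) + 0.01 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
f_zero_filled = K_adjoint(y).real   # crude reconstruction; sparse regularization does better
print("relative error:", np.linalg.norm(f_zero_filled - f0) / np.linalg.norm(f0))
```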
L1 Regularization
Coefficients $x_0 \in \mathbb{R}^N$ $\mapsto$ image $f_0 = \Psi x_0 \in \mathbb{R}^Q$ $\mapsto$ observations $y = K f_0 + w \in \mathbb{R}^P$.
Equivalent model: $y = K \Psi x_0 + w = \Phi x_0 + w \in \mathbb{R}^P$, where $\Phi = K\Psi \in \mathbb{R}^{P \times N}$.
If $\Phi$ is invertible: $\Phi^{-1} y = x_0 + \Phi^{-1} w$.
Problems: $\Phi^{-1} w$ can "explode"; $\Phi$ can even be non-invertible.
Inverse Problem Regularization
Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.
Estimator: $x(y)$ depends only on the observations $y$.
Example: variational methods, with a regularization parameter $\lambda$:
$x_\lambda(y) \in \operatorname{argmin}_{x \in \mathbb{R}^N}\ \tfrac{1}{2}\|y - \Phi x\|^2 + \lambda\, J(x)$
(data fidelity + regularity).
Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(x_0)$ of $x_0$.
No noise: $\lambda \to 0^+$, minimize $x(y) \in \operatorname{argmin}_{\Phi x = y} J(x)$.
Sparse Priors
"Ideal" sparsity: $J_0(x) = \#\{m \ ;\ x_m \neq 0\}$.
Sparse regularization: $x^\star \in \operatorname{argmin}_x\ \tfrac{1}{2}\|y - \Phi x\|^2 + \lambda\, J_0(x)$.
Denoising in an ortho-basis: $K = \mathrm{Id}$, $\Phi = \Psi$ with $\Phi^*\Phi = \Phi\Phi^* = \mathrm{Id}$. The problem becomes
$\min_x\ \|\Phi^* y - x\|^2 + T^2 J_0(x) = \sum_m |\tilde x_m - x_m|^2 + T^2\, \mathbf{1}(x_m \neq 0)$,
where $\tilde x = \Phi^* y = \{\langle y, \psi_m\rangle\}_m$ and $T^2 = 2\lambda$.
Solution (hard thresholding): $x^\star_m = \tilde x_m$ if $|\tilde x_m| > T$, $0$ otherwise.
Non-orthogonal $\Phi$: NP-hard to solve.
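A quick numerical check (illustration only) of why hard thresholding solves the ortho-basis $\ell^0$ problem above: for each coordinate the only candidates are $x_m = 0$, with cost $\tilde x_m^2$, and $x_m = \tilde x_m$, with cost $T^2$, so $\tilde x_m$ is kept exactly when $|\tilde x_m| > T$.

```python
# Per-coordinate minimization of |x_tilde - x|^2 + T^2 * 1(x != 0).
import numpy as np

def hard_thresh(x_tilde, T):
    return x_tilde * (np.abs(x_tilde) > T)

rng = np.random.default_rng(3)
x_tilde = rng.standard_normal(1000)
T = 0.8

cost_zero = x_tilde ** 2                    # cost of choosing x_m = 0
cost_keep = np.full_like(x_tilde, T ** 2)   # cost of choosing x_m = x_tilde_m
best = np.where(cost_keep < cost_zero, x_tilde, 0.0)
assert np.allclose(best, hard_thresh(x_tilde, T))
```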
Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$.
$\ell^1$ recovery: $x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|$ (compare with $\ell^2$: $x^\star \in \operatorname{argmin}_{\Phi x = y} \sum_m |x_m|^2$).
Convex linear program.
Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit".
Douglas-Rachford splitting, see [Combettes, Pesquet].
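A sketch of noiseless $\ell^1$ recovery written as a linear program and solved with `scipy.optimize.linprog` (standing in for the interior-point solvers cited above); the splitting $x = u - v$ with $u, v \geq 0$ is a standard reformulation, and the dimensions are illustrative.

```python
# Basis pursuit: min ||x||_1 subject to Phi x = y, solved as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, P, k = 128, 48, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
y = Phi @ x0                                                   # noiseless measurements

c = np.ones(2 * N)                  # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])       # Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_rec = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_rec - x0))   # close to 0: exact recovery
```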
Noisy Sparse Regularization
Noisy measurements: $y = \Phi x_0 + w$.
$x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^Q}\ \tfrac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$
(data fidelity + regularization).
Equivalence with the constrained formulation: $x^\star \in \operatorname{argmin}_{\|\Phi x - y\| \leq \varepsilon} \|x\|_1$.
Algorithms:
Iterative soft thresholding / forward-backward splitting, see [Daubechies et al], [Pesquet et al], etc.
Nesterov multi-step schemes.
Inpainting Problem
Measurements: $y = K f_0 + w$, with $(Kf)_i = 0$ if $i \in \Omega$ and $(Kf)_i = f_i$ if $i \notin \Omega$.
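A minimal sketch of the masking operator above (assumptions: a vectorized image and a random missing set $\Omega$); note that this $K$ is diagonal with 0/1 entries, so it equals its own adjoint.

```python
# Inpainting measurement operator: zero out the pixels in the missing region Omega.
import numpy as np

rng = np.random.default_rng(5)
N = 64 * 64
Omega = rng.random(N) < 0.3          # 30% of the pixels are missing

def K(f):
    g = f.copy()
    g[Omega] = 0.0                   # (Kf)_i = 0 if i in Omega, f_i otherwise
    return g

f0 = rng.standard_normal(N)
y = K(f0)                            # damaged image (noise could be added)
```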
Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization
• Compressed Sensing
• Iterative Soft Thresholding Algorithm
Discretization
Idealization of the acquisition device: $\tilde f \in L^2([0,1]^d) \ \mapsto\ f \in \mathbb{R}^N$, with $f[n] \approx \tilde f(n/N)$.

Pointwise Sampling and Smoothness
Data acquisition by sensors: $\tilde f \in L^2 \ \mapsto\ f \in \mathbb{R}^N$, $f[i] = \tilde f(i/N)$.
Shannon interpolation: if $\mathrm{Supp}(\hat{\tilde f}) \subset [-N\pi, N\pi]$, then $\tilde f(t) = \sum_i f[i]\, h(Nt - i)$ with $h(t) = \frac{\sin(\pi t)}{\pi t}$.
Natural images are not smooth, but they can be compressed efficiently (e.g. JPEG-2k).
Sample and compress simultaneously?
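A small sketch of the Shannon interpolation formula above, assuming a low-frequency sine sampled on $[0,1)$; truncating the infinite sum to the $N$ available samples causes some error near the boundaries.

```python
# Shannon (sinc) interpolation from the samples f[i] = f~(i/N).
import numpy as np

N = 32
i = np.arange(N)
f_samples = np.sin(2 * np.pi * 3 * i / N)          # samples of a band-limited signal

def shannon_interp(t):
    # np.sinc(x) = sin(pi x) / (pi x), which is exactly the kernel h of the slide
    return np.sum(f_samples * np.sinc(N * t - i))

t_fine = np.linspace(0, 1, 500, endpoint=False)
f_interp = np.array([shannon_interp(t) for t in t_fine])
# Error is small in the middle of the interval; the truncated sum causes boundary artifacts.
print(np.max(np.abs(f_interp - np.sin(2 * np.pi * 3 * t_fine))))
```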
Single Pixel Camera (Rice)
$P$ measurements from $N$ micro-mirrors: $y[i] = \langle f, \varphi_i \rangle$.
[Figure: reconstructions of $\tilde f$ for $P/N = 1$, $P/N = 0.16$ and $P/N = 0.02$.]
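A toy model of the single-pixel measurements $y[i] = \langle f, \varphi_i \rangle$; the random $\pm 1$ mirror patterns and the normalization are illustrative assumptions, not a description of the actual device.

```python
# Toy single-pixel camera: each measurement correlates the scene with a mirror pattern.
import numpy as np

rng = np.random.default_rng(6)
N = 64 * 64            # number of micro-mirrors (pixels of the target resolution)
P = int(0.16 * N)      # number of measurements, P/N = 0.16

Phi = rng.choice([-1.0, 1.0], size=(P, N)) / np.sqrt(N)   # one mirror pattern per row
f = rng.standard_normal(N)                                # scene at the mirror resolution
y = Phi @ f                                               # P scalar photodiode measurements
```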
CS Hardware Model
CS is about designing hardware: input signals $\tilde f \in L^2(\mathbb{R}^2)$.
Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$.
Pipeline: $\tilde f \in L^2$ (micro-mirrors array, resolution limit) $\to$ $f \in \mathbb{R}^N$ (CS hardware $K$) $\to$ $y \in \mathbb{R}^P$.
[Figure: the operator $K$ applied to $f$.]
Inversion and Sparsity
Need to solve $y = Kf$: more unknowns than equations, and $\dim(\ker(K)) = N - P$ is huge.
Prior information: $f$ is sparse in a basis $\{\psi_m\}_m$, i.e.
$J_\varepsilon(f) = \mathrm{Card}\{m \ ;\ |\langle f, \psi_m \rangle| > \varepsilon\}$ is small.
CS Reconstruction
$\ell^0$ reconstruction: minimize $J_0(f) = \mathrm{Card}\{m \ ;\ \langle f, \psi_m \rangle \neq 0\}$ subject to $Kf = y$. NP-hard to solve.
$\ell^1$ reconstruction: minimize $\sum_m |\langle f, \psi_m \rangle|$ subject to $Kf = y$. Polynomial-time algorithms.
Theoretical Performance Guarantees
Theorem [Candès, Romberg, Tao, Donoho, 2004]: if $f$ is $k$-sparse, i.e. $J_0(f) \leq k$, and if $P \geq C\, k \log(N/k)$, then $\ell^1$-CS reconstruction is exact.
Extensions to:
noisy observations $y = Kf + w$;
approximate sparsity $f = f_k + \text{residual}$, with $f_k$ $k$-sparse.
Research problem: optimal value of $C$? For $N/k = 4$, $C \log(N/k) \approx 5$: "CS is 5$\times$ less efficient than JPEG-2k".
Overview
• Approximation in an Ortho-Basis
• Compression and Denoising
• Sparse Inverse Problem Regularization
• Compressed Sensing
• Iterative Soft Thresholding Algorithm
Sparse Regularization Denoising
Denoising: $y = x_0 + w \in \mathbb{R}^N$, $K = \mathrm{Id}$.
Orthogonal basis: $\Phi\Phi^* = \Phi^*\Phi = \mathrm{Id}_N$, $x = \Phi^* f$.
Regularization-based denoising: $x^\star = \operatorname{argmin}_{x \in \mathbb{R}^N}\ \tfrac{1}{2}\|x - y\|^2 + \lambda\, J(x)$.
Sparse regularization: $J(x) = \sum_m |x_m|^q$ (where $|a|^0 = \mathbf{1}(a \neq 0)$).
Solution: coordinate-wise thresholding, $x^\star_m = S^q_T(y_m)$.
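For concreteness, here are sketches of the two thresholding operators usually behind $S^q_T$: hard thresholding for $q = 0$ and soft thresholding for $q = 1$; the slides keep the generic notation $S^q_T$.

```python
# Hard (q = 0) and soft (q = 1) thresholding operators at level T.
import numpy as np

def hard_thresh(x, T):
    """q = 0: keep x where |x| > T, set the rest to 0."""
    return x * (np.abs(x) > T)

def soft_thresh(x, T):
    """q = 1: shrink towards 0 by T, S_T(x) = sign(x) * max(|x| - T, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

x = np.linspace(-3, 3, 7)
print(hard_thresh(x, 1.0))
print(soft_thresh(x, 1.0))
```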
Surrogate Functionals
Sparse regularization: $x^\star \in \operatorname{argmin}_{x \in \mathbb{R}^N} E(x) = \tfrac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$.
Surrogate functional:
$E(x, \tilde x) = E(x) - \tfrac{1}{2}\|\Phi(x - \tilde x)\|^2 + \tfrac{1}{2\tau}\|x - \tilde x\|^2$, with $\tau < 1/\|\Phi^*\Phi\|$.
Theorem: $\operatorname{argmin}_x E(x, \tilde x) = S_{\lambda\tau}(u)$, where $u = \tilde x - \tau\, \Phi^*(\Phi\tilde x - y)$.
Proof: $E(x, \tilde x) = \tfrac{1}{2\tau}\|u - x\|^2 + \lambda\|x\|_1 + \text{cst}$.
[Figure: $E(\cdot)$ and its surrogate $E(\cdot, \tilde x)$, whose minimizer is $S_{\lambda\tau}(u)$.]
Iterative Thresholding
Initialize $x^{(0)}$, set $\ell = 0$.
Algorithm: $x^{(\ell+1)} = \operatorname{argmin}_x E(x, x^{(\ell)})$, i.e.
$u^{(\ell)} = x^{(\ell)} - \tau\, \Phi^*(\Phi x^{(\ell)} - y)$
$x^{(\ell+1)} = S^1_{\lambda\tau}(u^{(\ell)})$
Remark: $x^{(\ell)} \mapsto u^{(\ell)}$ is a gradient-descent step on $\tfrac{1}{2}\|\Phi x - y\|^2$; $S^1_{\lambda\tau}$ is the proximal step associated to $\lambda\|x\|_1$.
Theorem: if $\tau < 2/\|\Phi^*\Phi\|$, then $x^{(\ell)} \to x^\star$.
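A compact sketch of the resulting iterative soft thresholding algorithm (assumptions: an explicit matrix $\Phi$, a fixed $\lambda$, and the safe step $\tau = 1/\|\Phi^*\Phi\|$); it is an illustration of the scheme above, not the slides' own code.

```python
# Iterative soft thresholding (ISTA) for 1/2 ||Phi x - y||^2 + lam ||x||_1.
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2            # ||Phi||_2^2 = ||Phi^T Phi||
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        u = x - tau * Phi.T @ (Phi @ x - y)             # gradient step on 1/2 ||Phi x - y||^2
        x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)   # prox of lam*tau*||.||_1
    return x

rng = np.random.default_rng(7)
P, N, k = 64, 256, 8
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 1.0
y = Phi @ x0 + 0.01 * rng.standard_normal(P)

x_star = ista(Phi, y, lam=0.02)
print("relative error:", np.linalg.norm(x_star - x0) / np.linalg.norm(x0))
```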
Conclusion
• Compressed sensing: random acquisition; optimization for reconstruction; #measurements $\sim$ sparsity.