
Romain Petit (University of Genoa, Italy)

S³ Seminar, November 16, 2023

Title — Reconstruction of piecewise constant images via total (gradient) variation regularization

Abstract — In this talk, I will consider the reconstruction of some unknown image from possibly noisy linear measurements. I will focus on a variational reconstruction method that uses a specific regularizer: the total (gradient) variation. It is well known that minimizing this functional produces piecewise constant reconstructions. It is therefore natural to study the case where the unknown image has precisely this structure. I will present two works on this topic, which are collaborations with Yohann De Castro and Vincent Duval. The first concerns a noise robustness result, stating that, in a low noise regime, the reconstruction is also piecewise constant, and one exactly recovers the number of shapes in the unknown image. The second is about introducing a new numerical method for solving the inverse problem. Its main feature is that it does not rely on the introduction of a fixed spatial discretization (e.g. a pixel grid), and builds a sequence of iterates that are linear combinations of indicator functions.

Bio
Romain Petit is a postdoctoral researcher at the University of Genoa (Italy), hosted by Giovanni S. Alberti. He received his PhD in mathematics from Université Paris Dauphine, under the supervision of Yohann De Castro and Vincent Duval. His main research interests are inverse problems in imaging and (grid-free) sparse estimation.

Transcript

  1. Total (gradient) variation regularization: exact support recovery and grid-free numerical methods. Romain Petit, joint work with Yohann De Castro and Vincent Duval. Paris-Saclay Signal Seminar, 16 November 2023.
  2. Inverse problems in imaging. Unknown image u0 : R² → R; observations y0 = Φu0 ∈ H; noisy observations y = y0 + w.
  3. Inverse problems in imaging. Unknown image u0 : R² → R; observations y0 = Φu0 ∈ H; noisy observations y = y0 + w. Inverse problem: recover u0 from y.
  4. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. The total (gradient) variation: TV(u) := sup { −∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ‖φ‖_∞ ≤ 1 } "=" ∫_{R²} |∇u|.
  5. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. TV(u) := sup { −∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ‖φ‖_∞ ≤ 1 } "=" ∫_{R²} |∇u|. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)).
  6. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. TV(u) := sup { −∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ‖φ‖_∞ ≤ 1 } "=" ∫_{R²} |∇u|. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)). [Figure: noisy observations y.]
  7. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. TV(u) := sup { −∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ‖φ‖_∞ ≤ 1 } "=" ∫_{R²} |∇u|. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)). [Figures: noisy observations y; solution for small λ.]
  8. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. TV(u) := sup { −∫_{R²} u div φ : φ ∈ C_c^∞(R², R²), ‖φ‖_∞ ≤ 1 } "=" ∫_{R²} |∇u|. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)). [Figures: noisy observations y; solution for small λ; solution for large λ.]
  9. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. Representer theorem [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of (Pλ(y)) are linear combinations of 1_E with E simple. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)).
  10. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. Representer theorem [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of (Pλ(y)) are linear combinations of 1_E with E simple. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)).
  11. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. Representer theorem [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of (Pλ(y)) are linear combinations of 1_E with E simple. The sparse objects associated with TV are the piecewise constant functions. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)).
  12. Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]. Representer theorem [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of (Pλ(y)) are linear combinations of 1_E with E simple. The sparse objects associated with TV are the piecewise constant functions. Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)).
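
For readability, the definition and problem on the slides above can be written out as follows, together with the standard facts (via [Fleming, 1957] and the coarea formula) that explain why indicators of simple sets act as the sparse atoms of TV; this restates standard material rather than adding anything beyond what the slides assert.

```latex
\[
  \mathrm{TV}(u) \;:=\; \sup\Big\{ -\int_{\mathbb{R}^2} u \,\operatorname{div}\varphi \;:\;
    \varphi \in C_c^{\infty}(\mathbb{R}^2;\mathbb{R}^2),\ \|\varphi\|_{\infty} \le 1 \Big\},
  \qquad
  (\mathcal{P}_{\lambda}(y)):\;\; \min_{u \in L^2(\mathbb{R}^2)}
    \tfrac{1}{2}\,\|\Phi u - y\|^2 \;+\; \lambda\,\mathrm{TV}(u).
\]
% For a set E of finite perimeter, the TV of its indicator is its perimeter, and the coarea
% formula decomposes the TV of any BV function into perimeters of its level sets:
\[
  \mathrm{TV}(\mathbf{1}_E) = \mathrm{Per}(E),
  \qquad
  \mathrm{TV}(u) = \int_{\mathbb{R}} \mathrm{Per}(\{u \ge t\})\,\mathrm{d}t.
\]
```

In this sense, a linear combination Σ_i ai 1_{Ei} of indicators of simple sets is to TV what a sparse spike train is to the ℓ¹ norm, which is the analogy behind the representer theorem quoted on slides 9–12.
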
  13. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0.
  14. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0. Reconstruction algorithm: numerical resolution of (Pλ(y)).
  15. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0. Reconstruction algorithm: numerical resolution of (Pλ(y)). Assumptions:
  16. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0. Reconstruction algorithm: numerical resolution of (Pλ(y)). Assumptions: • u0 piecewise constant.
  17. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0. Reconstruction algorithm: numerical resolution of (Pλ(y)). Assumptions: • u0 piecewise constant.
  18. Considered problems. Unknown image u0; observations y0 = Φu0; noisy observations y = y0 + w. Noise robustness: convergence of solutions of (Pλ(y0 + w)) as λ, w → 0. Reconstruction algorithm: numerical resolution of (Pλ(y)). Assumptions: • u0 piecewise constant; • Im(Φ∗) ⊂ C¹(R²).
  19. First convergence result. (Pλ(y0 + w)): min_{u ∈ L²(R²)} TV(u) + (1/(2λ)) ‖Φu − (y0 + w)‖².  (P0(y0)): min_{u ∈ L²(R²)} TV(u) s.t. Φu = y0.
  20. First convergence result. (Pλ(y0 + w)): min_{u ∈ L²(R²)} TV(u) + (1/(2λ)) ‖Φu − (y0 + w)‖².  (P0(y0)): min_{u ∈ L²(R²)} TV(u) s.t. Φu = y0. Proposition [Chambolle et al., 2016, Iglesias et al., 2018]: if λn → 0 and ‖wn‖ = O(λn) (+ source condition), then un → u0 (strictly in BV(R²)), |Un^(t) Δ U0^(t)| → 0, and ∂Un^(t) → ∂U0^(t) in the Hausdorff sense, with U^(t) = {u ≥ t} if t ≥ 0 and U^(t) = {u ≤ t} otherwise.
  21. First convergence result. (Pλ(y0 + w)): min_{u ∈ L²(R²)} TV(u) + (1/(2λ)) ‖Φu − (y0 + w)‖².  (P0(y0)): min_{u ∈ L²(R²)} TV(u) s.t. Φu = y0. [Figures: u0, un, and the level sets Un^(t), t ∈ R.]
  22. The prescribed curvature problem. Optimality condition (regularized problem): if u solves (Pλ(y0 + w)), then for every t > 0, {u ≥ t} solves (Q(+ηλ,w)), and for every t < 0, {u ≤ t} solves (Q(−ηλ,w)), for some ηλ,w. [Figure: u.]
  23. The prescribed curvature problem. Optimality condition (regularized problem): if u solves (Pλ(y0 + w)), then for every t > 0, {u ≥ t} solves (Q(+ηλ,w)), and for every t < 0, {u ≤ t} solves (Q(−ηλ,w)), for some ηλ,w. [Figure: u.]
  24. The prescribed curvature problem. Prescribed curvature problem: min_{E ⊂ R², |E| < +∞} Per(E) − ∫_E η   (Q(η)). Optimality condition (regularized problem): if u solves (Pλ(y0 + w)), then for every t > 0, {u ≥ t} solves (Q(+ηλ,w)), and for every t < 0, {u ≤ t} solves (Q(−ηλ,w)), for some ηλ,w. [Figure: u.]
  25. The prescribed curvature problem. Prescribed curvature problem: min_{E ⊂ R², |E| < +∞} Per(E) − ∫_E η   (Q(η)). Optimality condition (regularized problem): if u solves (Pλ(y0 + w)), then for every t > 0, {u ≥ t} solves (Q(+ηλ,w)), and for every t < 0, {u ≤ t} solves (Q(−ηλ,w)), for some ηλ,w. [Figure: u.]
  26. The prescribed curvature problem. Prescribed curvature problem: min_{E ⊂ R², |E| < +∞} Per(E) − ∫_E η   (Q(η)). Optimality condition (regularized problem): if u solves (Pλ(y0 + w)), then for every t > 0, {u ≥ t} solves (Q(+ηλ,w)), and for every t < 0, {u ≤ t} solves (Q(−ηλ,w)), for some ηλ,w. Convergence of curvature functionals: if λn → 0 and ‖wn‖/λn → 0 (+ source condition), then ηλn,wn → η0. [Figure: u.]
  27. The prescribed curvature problem. Prescribed curvature problem: min_{E ⊂ R², |E| < +∞} Per(E) − ∫_E η   (Q(η)). Optimality condition (constrained problem): for every t > 0, {u0 ≥ t} solves (Q(+η0)), and for every t < 0, {u0 ≤ t} solves (Q(−η0)), for some η0. Convergence of curvature functionals: if λn → 0 and ‖wn‖/λn → 0 (+ source condition), then ηλn,wn → η0. [Figure: u.]
  28. Stability of the prescribed curvature problem. Assumptions: • η close to η0 in L²(R²) and C¹(R²); • (Q(η0)) has finitely many minimizers, all strictly stable.
  29. Stability of the prescribed curvature problem. Assumptions: • η close to η0 in L²(R²) and C¹(R²); • (Q(η0)) has finitely many minimizers, all strictly stable. Proposition (informal): around each solution E of (Q(η0)) there is exactly one solution F of (Q(η)), and ∂F = (Id + ϕ νE)(∂E) with ϕ ∈ C²(∂E).
  30. Exact support recovery. Assumptions: • u0 = Σ_{i=1}^N ai 1_{Ei} with Ei simple and ∂Ei ∩ ∂Ej = ∅ for every i ≠ j; • non-degenerate source condition + injectivity condition.
  31. Exact support recovery. Assumptions: • u0 = Σ_{i=1}^N ai 1_{Ei} with Ei simple and ∂Ei ∩ ∂Ej = ∅ for every i ≠ j; • non-degenerate source condition + injectivity condition. [Figures: u0, uλ,w.]
  32. Exact support recovery. Assumptions: • u0 = Σ_{i=1}^N ai 1_{Ei} with Ei simple and ∂Ei ∩ ∂Ej = ∅ for every i ≠ j; • non-degenerate source condition + injectivity condition. Theorem: there exist α, λ0 ∈ R*_+ such that, if λ ≤ λ0 and ‖w‖/λ ≤ α, then uλ,w = Σ_{i=1}^N bi 1_{Fi} with ∂Fi = (Id + ϕi νEi)(∂Ei), and bi → ai, ‖ϕi‖_{C²(∂Ei)} → 0 as λ, w → 0. [Figures: u0, uλ,w.]
  33. Numerical verification of the non-degenerate source condition. Φu = h ∗ u with h(x) = exp(−‖x‖²/(2σ²)). github.com/rpetit/2023-support-recovery-tv
  34. Numerical verification of the non-degenerate source condition. Φu = h ∗ u with h(x) = exp(−‖x‖²/(2σ²)). Deconvolution of a disk: u0 = a 1_{B(0,R)}; the condition is satisfied if σ ≤ σ0(R). github.com/rpetit/2023-support-recovery-tv
  35. Numerical verification of the non-degenerate source condition. Φu = h ∗ u with h(x) = exp(−‖x‖²/(2σ²)). Deconvolution of a disk: u0 = a 1_{B(0,R)}; the condition is satisfied if σ ≤ σ0(R). Deconvolution of radial images: u0 = a1 1_{B(0,R1)} + a2 1_{B(0,R2)}. github.com/rpetit/2023-support-recovery-tv
  36. Numerical verification of the non-degenerate source condition. Φu = h ∗ u with h(x) = exp(−‖x‖²/(2σ²)). Deconvolution of a disk: u0 = a 1_{B(0,R)}; the condition is satisfied if σ ≤ σ0(R). Deconvolution of radial images: u0 = a1 1_{B(0,R1)} + a2 1_{B(0,R2)}. • If sign(a1) = −sign(a2): need |R1 − R2| > Δ. github.com/rpetit/2023-support-recovery-tv
  37. Numerical verification of the non-degenerate source condition. Φu = h ∗ u with h(x) = exp(−‖x‖²/(2σ²)). Deconvolution of a disk: u0 = a 1_{B(0,R)}; the condition is satisfied if σ ≤ σ0(R). Deconvolution of radial images: u0 = a1 1_{B(0,R1)} + a2 1_{B(0,R2)}. • If sign(a1) = −sign(a2): need |R1 − R2| > Δ. • If sign(a1) = sign(a2): super-resolution. github.com/rpetit/2023-support-recovery-tv
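
As a concrete illustration of the forward operator on slides 33–37, here is a minimal NumPy/SciPy sketch of a Gaussian convolution Φu = h ∗ u applied to the indicator of a disk; the grid size, σ, noise level and variable names are illustrative choices of ours, not the settings of the talk's experiments (the actual code is at github.com/rpetit/2023-support-recovery-tv).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_gaussian_convolution(sigma_pixels):
    """Forward operator Phi u = h * u with a Gaussian kernel (gaussian_filter normalizes h)."""
    def Phi(u):
        return gaussian_filter(u, sigma=sigma_pixels, mode="constant")
    return Phi

# Deconvolution of a disk: u0 = a * 1_{B(0, R)}, sampled on an n x n grid over [-1/2, 1/2]^2.
n, a, R = 128, 1.0, 0.25
xs = np.linspace(-0.5, 0.5, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u0 = a * (X**2 + Y**2 <= R**2).astype(float)

Phi = make_gaussian_convolution(sigma_pixels=3.0)
y0 = Phi(u0)                             # noiseless observations y0 = Phi u0
y = y0 + 0.01 * np.random.randn(n, n)    # noisy observations y = y0 + w
```
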
  38. Numerical resolution of (Pλ(y)). Solve min_{u ∈ L²(R²)} ½ ‖Φu − y‖² + λ TV(u)   (Pλ(y)). Fixed grid approximation: u = Σ_i Σ_j uij 1_{Cij}.
  39. Discretizations of the total variation (images: [Tabti et al., 2018]). Anisotropic: Σ_ij ( |(Dx u)ij| + |(Dy u)ij| ); sharp edges, grid bias. Isotropic: Σ_ij √( (Dx u)ij² + (Dy u)ij² ); blur. State-of-the-art review: [Chambolle and Pock, 2021].
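
To make the two discretizations concrete, here is a small NumPy sketch of both finite-difference total variations, using forward differences with a replicated boundary (one common convention among several); the example image and all names are ours.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient; the difference at the last row/column is set to zero."""
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return dx, dy

def tv_anisotropic(u):
    """sum_ij |(Dx u)_ij| + |(Dy u)_ij|."""
    dx, dy = grad(u)
    return np.abs(dx).sum() + np.abs(dy).sum()

def tv_isotropic(u):
    """sum_ij sqrt((Dx u)_ij^2 + (Dy u)_ij^2)."""
    dx, dy = grad(u)
    return np.sqrt(dx**2 + dy**2).sum()

# For the indicator of a disk (values in {0, 1}), both sums overestimate the true perimeter
# 2*pi*R because of the staircased boundary, the anisotropic one more so: this is the grid
# bias mentioned on the slide.
n, R = 256, 0.3
xs = np.linspace(-0.5, 0.5, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
disk = (X**2 + Y**2 <= R**2).astype(float)
h = xs[1] - xs[0]
print(h * tv_anisotropic(disk), h * tv_isotropic(disk), 2 * np.pi * R)
```
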
  40. Numerical representation of simple images. Fixed grid: • O(1/h²) pixels; • O(1/h) “relevant” pixels; • u ↦ TV(u) convex.
  41. Numerical representation of simple images. Fixed grid: • O(1/h²) pixels; • O(1/h) “relevant” pixels; • u ↦ TV(u) convex. Boundary discretization: • more efficient for simple images; • numerically more involved; • E ↦ TV(1_E) “non-convex”.
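
A minimal sketch of the boundary-discretization idea from this slide, not the talk's implementation: a simple set E is stored through its polygonal boundary, independently of any pixel grid, and TV(1_E) = Per(E) is then simply the length of that polygon; all names are ours.

```python
import numpy as np

def perimeter(vertices):
    """Length of the closed polygon whose consecutive boundary points are the rows of `vertices`."""
    edges = np.roll(vertices, -1, axis=0) - vertices
    return np.linalg.norm(edges, axis=1).sum()

def area(vertices):
    """Unsigned area of the same polygon (shoelace formula)."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# m boundary points describe E with O(m) unknowns, versus O(1/h^2) pixels for a grid, but
# functionals of the vertices (perimeter, area, integrals over E) are non-convex in them.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
disk_boundary = np.stack([0.3 * np.cos(theta), 0.3 * np.sin(theta)], axis=1)
print(perimeter(disk_boundary), 2 * np.pi * 0.3)   # close to the true perimeter
print(area(disk_boundary), np.pi * 0.3**2)         # close to the true area
```
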
  42. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y).
  43. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk.
  44. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk. Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞.
  45. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk. Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞.
  46. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk. Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞.
  47. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk; • uk+1 = αk uk + βk 1_{Ek+1}. Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞.
  48. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk; • uk+1 = αk uk + βk 1_{Ek+1}. Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞. Iterates are linear combinations of indicator functions of simple sets.
  49. Frank-Wolfe-based algorithm [Bredies and Pikkarainen, 2013; Boyd et al., 2017; Denoyelle et al., 2019]. Algorithm: • ηk = −(1/λ) Φ∗(Φuk − y); • Ek+1 ∈ Argmax_{E simple} (1/P(E)) ∫_E ηk; • uk+1 = αk uk + βk 1_{Ek+1}; • local optimization (sliding step) of (a, E) ↦ F(Σ_i ai 1_{Ei}). Generalized Cheeger problem: max_{E ⊂ R²} (1/P(E)) ∫_E η s.t. |E| < +∞ and 0 < P(E) < +∞. Iterates are linear combinations of indicator functions of simple sets.
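
To fix ideas, here is a structural sketch of the outer loop of slides 42–49 in a fully discrete setting; the Cheeger "oracle" below is a crude stand-in that only searches among super-level sets of ηk on a pixel grid, whereas the talk's method solves the generalized Cheeger problem grid-free and adds a sliding (local optimization) step. All names and the amplitude update are simplifications of ours, not the tvsfw implementation.

```python
import numpy as np

def perimeter_grid(mask, h):
    """Crude perimeter of a pixel set: h times the number of horizontal/vertical sign changes."""
    m = mask.astype(float)
    return h * (np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum())

def cheeger_oracle(eta, h):
    """Approximate argmax_E (1/Per(E)) * integral_E eta, searching super-level sets of eta only."""
    best_set, best_val = None, -np.inf
    for t in np.quantile(eta, np.linspace(0.50, 0.99, 25)):
        E = eta >= t
        per = perimeter_grid(E, h)
        if per > 0:
            val = h**2 * eta[E].sum() / per
            if val > best_val:
                best_set, best_val = E, val
    return best_set

def frank_wolfe_tv(Phi, Phi_adj, y, lam, shape, h, n_iter=20):
    """Frank-Wolfe-style loop: each iterate is a linear combination of indicator functions."""
    u = np.zeros(shape)
    atoms = []                                   # indicator functions added so far
    for _ in range(n_iter):
        eta = -Phi_adj(Phi(u) - y) / lam         # eta_k = -(1/lambda) Phi*(Phi u_k - y)
        E = cheeger_oracle(eta, h)
        if E is None:                            # degenerate eta, stop early
            break
        atoms.append(E.astype(float))
        # Refit all amplitudes by least squares on the data term: a simplification of the
        # talk's update u_{k+1} = alpha_k u_k + beta_k 1_{E_{k+1}} followed by sliding.
        A = np.stack([Phi(atom).ravel() for atom in atoms], axis=1)
        coef, *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
        u = sum(c * atom for c, atom in zip(coef, atoms))
    return u
```

With the Gaussian convolution Phi from the sketch after slide 37 (self-adjoint, since its kernel is symmetric and the padding is zero), one could for instance call frank_wolfe_tv(Phi, Phi, y, lam=0.1, shape=y.shape, h=1.0/128); the value of λ and the grid step are arbitrary here.
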
  50. Numerical results. github.com/rpetit/tvsfw. [Figures, for k = 1, 2, 3: u[k] before sliding, u[k], u0 − u[k], and Supp(Du[k]) vs. Supp(Du0).]
  51. Numerical results. github.com/rpetit/tvsfw. Left to right: observations, signal, ours, isotropic TV, “Condat's” TV. [Figures: signal u0, observations, ours ûλ,w, isotropic TV, “Condat's” TV, and Supp(Du0) vs. Supp(Dûλ,w).]
  52. Perspectives (i). [Figures: u0, Φu0 + w, ûλ,w, ûλ,w − u0, Du0 and Dûλ,w.] Recovery guarantees: • non-degenerate source condition; • implicit bias of TV regularization.
  53. Perspectives (i). [Figures: u0, Φu0 + w, ûλ,w, ûλ,w − u0, Du0 and Dûλ,w.] Recovery guarantees: • non-degenerate source condition; • implicit bias of TV regularization. Numerical resolution: • robust sliding step (topology changes).
  54. Perspectives (i). [Figures: u0, Φu0 + w, ûλ,w, ûλ,w − u0, Du0 and Dûλ,w.] Recovery guarantees: • non-degenerate source condition; • implicit bias of TV regularization. Numerical resolution: • robust sliding step (topology changes); • “continuous” TV regularization benchmark.
  55. Perspectives (i). [Figures: u0, Φu0 + w, ûλ,w, ûλ,w − u0, Du0 and Dûλ,w.] Recovery guarantees: • non-degenerate source condition; • implicit bias of TV regularization. Numerical resolution: • robust sliding step (topology changes); • “continuous” TV regularization benchmark; • applications (cell imaging, piecewise homogeneous textures).
  56. Perspectives (ii). Optimal transport regularization for dynamic inverse problems [Bredies et al., 2022]: • recover t ↦ Σ_i ai(t) δ_{xi(t)}; • support stability, implicit bias, fast reconstruction? [Figures: reconstructed measures at times t = 0.0, 0.5, 1.0.]
  57. References i
      Boyd, N., Schiebinger, G., and Recht, B. (2017). The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems. SIAM Journal on Optimization, 27(2):616–639.
      Boyer, C., Chambolle, A., De Castro, Y., Duval, V., de Gournay, F., and Weiss, P. (2019). On Representer Theorems and Convex Regularization. SIAM Journal on Optimization, 29(2):1260–1281.
      Bredies, K. and Carioni, M. (2019). Sparsity of solutions for variational inverse problems with finite-dimensional data. Calculus of Variations and Partial Differential Equations, 59(1):14.
  58. References ii
      Bredies, K., Carioni, M., Fanzon, S., and Romero, F. (2022). A Generalized Conditional Gradient Method for Dynamic Inverse Problems with Optimal Transport Regularization. Foundations of Computational Mathematics.
      Bredies, K. and Pikkarainen, H. K. (2013). Inverse problems in spaces of measures. ESAIM: Control, Optimisation and Calculus of Variations, 19(1):190–218.
      Carlier, G., Comte, M., and Peyré, G. (2009). Approximation of maximal Cheeger sets by projection. ESAIM: Mathematical Modelling and Numerical Analysis, 43(1):139–150.
  59. References iii
      Chambolle, A., Duval, V., Peyré, G., and Poon, C. (2016). Geometric properties of solutions to the total variation denoising problem. Inverse Problems, 33(1):015002.
      Chambolle, A. and Lions, P.-L. (1997). Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188.
      Chambolle, A. and Pock, T. (2021). Approximating the total variation with finite differences or finite elements. In Bonito, A. and Nochetto, R. H., editors, Handbook of Numerical Analysis, volume 22 of Geometric Partial Differential Equations - Part II, pages 383–417. Elsevier.
  60. References iv
      Denoyelle, Q., Duval, V., Peyré, G., and Soubies, E. (2019). The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy. Inverse Problems.
      Fleming, W. H. (1957). Functions with generalized gradient and generalized surfaces. Annali di Matematica Pura ed Applicata, 44(1):93–103.
      Iglesias, J. A., Mercier, G., and Scherzer, O. (2018). A note on convergence of solutions of total variation regularized linear inverse problems. Inverse Problems, 34(5):055011.
      Rudin, L. I., Osher, S., and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268.
  61. References v
      Tabti, S., Rabin, J., and Elmoataz, A. (2018). Symmetric Upwind Scheme for Discrete Weighted Total Variation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1827–1831.
  62. Two-step approximation of generalized Cheeger sets. Fixed grid initialization [Carlier et al., 2009]: solve min_{u ∈ Eh} ⟨ηh, u⟩ s.t. TVh(u) ≤ 1. Shape gradient algorithm: • θn ∈ Argmax_{θ ∈ Θ} lim_{ε→0⁺} (1/ε) [ J((Id + εθ)(En)) − J(En) ]; • En+1 = (Id + εn θn)(En). Implementation: github.com/rpetit/PyCheeger
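
As a rough illustration of the second step on this slide (the shape-gradient ascent), here is a sketch that represents E by its polygonal boundary, evaluates J(E) = (∫_E η)/Per(E) on a fixed sampling grid, and ascends the vertex coordinates; the finite-difference surrogate used here is a crude stand-in for the shape derivative the slide maximizes, and all sizes and step lengths are arbitrary choices of ours (see github.com/rpetit/PyCheeger for the actual implementation).

```python
import numpy as np
from matplotlib.path import Path

def cheeger_ratio(vertices, eta_grid, xs):
    """J(E) = (integral of eta over E) / Per(E) for a polygonal E, on a fixed sampling grid."""
    edges = np.roll(vertices, -1, axis=0) - vertices
    per = np.linalg.norm(edges, axis=1).sum()
    h = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    inside = Path(vertices).contains_points(pts)
    return h**2 * eta_grid.ravel()[inside].sum() / per

def shape_gradient_step(vertices, eta_grid, xs, step=1e-2, fd=None):
    """One ascent step on the vertex coordinates. Finite differences stand in for the shape
    derivative maximized on the slide; fd is on the order of the grid spacing so that the
    sampled integral actually changes under the perturbation."""
    if fd is None:
        fd = xs[1] - xs[0]
    base = cheeger_ratio(vertices, eta_grid, xs)
    grad = np.zeros_like(vertices)
    for i in range(vertices.shape[0]):
        for d in range(2):
            pert = vertices.copy()
            pert[i, d] += fd
            grad[i, d] = (cheeger_ratio(pert, eta_grid, xs) - base) / fd
    return vertices + step * grad

# Example (untuned): eta is a Gaussian bump, E starts as a 16-gon approximating a circle.
xs = np.linspace(-0.5, 0.5, 256)
X, Y = np.meshgrid(xs, xs, indexing="ij")
eta_grid = np.exp(-(X**2 + Y**2) / (2 * 0.2**2))
angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
poly = np.stack([0.2 * np.cos(angles), 0.2 * np.sin(angles)], axis=1)
print(cheeger_ratio(poly, eta_grid, xs))
poly = shape_gradient_step(poly, eta_grid, xs)
print(cheeger_ratio(poly, eta_grid, xs))
```
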