Slide 1

Total (gradient) variation regularization: exact support recovery and grid-free numerical methods
Romain Petit, joint work with Yohann De Castro and Vincent Duval
Paris-Saclay Signal Seminar, 16 November 2023

Slide 2

Inverse problems in imaging
Unknown image $u_0 : \mathbb{R}^2 \to \mathbb{R}$

Slide 3

Inverse problems in imaging
Unknown image $u_0 : \mathbb{R}^2 \to \mathbb{R}$
Obs. $y_0 = \Phi u_0 \in \mathcal{H}$

Slide 4

Inverse problems in imaging
Unknown image $u_0 : \mathbb{R}^2 \to \mathbb{R}$
Obs. $y_0 = \Phi u_0 \in \mathcal{H}$
Noisy obs. $y = y_0 + w$

Slide 5

Inverse problems in imaging
Unknown image $u_0 : \mathbb{R}^2 \to \mathbb{R}$
Obs. $y_0 = \Phi u_0 \in \mathcal{H}$
Noisy obs. $y = y_0 + w$
Inverse problem: recover $u_0$ from $y$

Slide 6

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
The total (gradient) variation:
$\mathrm{TV}(u) \stackrel{\text{def.}}{=} \sup \left\{ -\int_{\mathbb{R}^2} u \, \mathrm{div}\,\varphi \;\middle|\; \varphi \in \mathcal{C}^\infty_c(\mathbb{R}^2, \mathbb{R}^2), \ \|\varphi\|_\infty \leq 1 \right\} \;\text{“=”}\; \int_{\mathbb{R}^2} |\nabla u|$

Slide 7

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
The total (gradient) variation:
$\mathrm{TV}(u) \stackrel{\text{def.}}{=} \sup \left\{ -\int_{\mathbb{R}^2} u \, \mathrm{div}\,\varphi \;\middle|\; \varphi \in \mathcal{C}^\infty_c(\mathbb{R}^2, \mathbb{R}^2), \ \|\varphi\|_\infty \leq 1 \right\} \;\text{“=”}\; \int_{\mathbb{R}^2} |\nabla u|$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$

Slide 8

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
The total (gradient) variation:
$\mathrm{TV}(u) \stackrel{\text{def.}}{=} \sup \left\{ -\int_{\mathbb{R}^2} u \, \mathrm{div}\,\varphi \;\middle|\; \varphi \in \mathcal{C}^\infty_c(\mathbb{R}^2, \mathbb{R}^2), \ \|\varphi\|_\infty \leq 1 \right\} \;\text{“=”}\; \int_{\mathbb{R}^2} |\nabla u|$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$
(Figure: noisy obs. $y$.)

Slide 9

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
The total (gradient) variation:
$\mathrm{TV}(u) \stackrel{\text{def.}}{=} \sup \left\{ -\int_{\mathbb{R}^2} u \, \mathrm{div}\,\varphi \;\middle|\; \varphi \in \mathcal{C}^\infty_c(\mathbb{R}^2, \mathbb{R}^2), \ \|\varphi\|_\infty \leq 1 \right\} \;\text{“=”}\; \int_{\mathbb{R}^2} |\nabla u|$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$
(Figures: noisy obs. $y$; solution for small $\lambda$.)

Slide 10

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
The total (gradient) variation:
$\mathrm{TV}(u) \stackrel{\text{def.}}{=} \sup \left\{ -\int_{\mathbb{R}^2} u \, \mathrm{div}\,\varphi \;\middle|\; \varphi \in \mathcal{C}^\infty_c(\mathbb{R}^2, \mathbb{R}^2), \ \|\varphi\|_\infty \leq 1 \right\} \;\text{“=”}\; \int_{\mathbb{R}^2} |\nabla u|$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$
(Figures: noisy obs. $y$; solution for small $\lambda$; solution for large $\lambda$.)
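
For orientation, the denoising case ($\Phi = \mathrm{Id}$) of $(\mathcal{P}_\lambda(y))$ can be solved on a fixed grid with Chambolle's dual projection algorithm. A minimal sketch, assuming the image $y$ is a NumPy array; the step size, iteration count, and finite-difference convention are standard illustrative choices, and this fixed-grid baseline is exactly what the grid-free method of the last part of the talk avoids:

    import numpy as np

    def grad(u):
        # Forward differences with Neumann boundary conditions.
        gx = np.zeros_like(u)
        gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        # Discrete divergence, the negative adjoint of grad.
        dx = np.zeros_like(px)
        dx[0, :] = px[0, :]
        dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
        dx[-1, :] = -px[-2, :]
        dy = np.zeros_like(py)
        dy[:, 0] = py[:, 0]
        dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
        dy[:, -1] = -py[:, -2]
        return dx + dy

    def tv_denoise(y, lam, n_iter=200, tau=0.125):
        # Dual projection iteration for min_u 0.5 * ||u - y||^2 + lam * TV(u).
        px = np.zeros_like(y)
        py = np.zeros_like(y)
        for _ in range(n_iter):
            gx, gy = grad(div(px, py) - y / lam)
            norm = np.sqrt(gx**2 + gy**2)
            px = (px + tau * gx) / (1.0 + tau * norm)
            py = (py + tau * gy) / (1.0 + tau * norm)
        return y - lam * div(px, py)  # primal solution recovered from the dual variable

Running it with increasing lam reproduces the qualitative behaviour of the figures above: mild smoothing for small $\lambda$, merging of flat regions for large $\lambda$.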

Slide 11

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
Representer th. [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of $(\mathcal{P}_\lambda(y))$ are linear combinations of $1_E$ with $E$ simple.
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$

Slide 13

Variational approach [Rudin et al., 1992, Chambolle and Lions, 1997]
Representer th. [Boyer et al., 2019, Bredies and Carioni, 2019] + [Fleming, 1957]: some solutions of $(\mathcal{P}_\lambda(y))$ are linear combinations of $1_E$ with $E$ simple.
The sparse objects associated to TV are the piecewise constant functions.
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$

Slide 15

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$

Slide 16

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$

Slide 17

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$
Reconstruction algorithm: numerical resolution of $(\mathcal{P}_\lambda(y))$

Slide 18

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$
Reconstruction algorithm: numerical resolution of $(\mathcal{P}_\lambda(y))$
Assumptions

Slide 19

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$
Reconstruction algorithm: numerical resolution of $(\mathcal{P}_\lambda(y))$
Assumptions
• $u_0$ piecewise constant

Slide 20

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$
Reconstruction algorithm: numerical resolution of $(\mathcal{P}_\lambda(y))$
Assumptions
• $u_0$ piecewise constant

Slide 21

Considered problems
unknown im. $u_0$, obs. $y_0 = \Phi u_0$, noisy obs. $y = y_0 + w$
Noise robustness: convergence of solutions of $(\mathcal{P}_\lambda(y_0 + w))$ when $\lambda, w \to 0$
Reconstruction algorithm: numerical resolution of $(\mathcal{P}_\lambda(y))$
Assumptions
• $u_0$ piecewise constant
• $\mathrm{Im}(\Phi^*) \subset \mathcal{C}^1(\mathbb{R}^2)$

Slide 22

Noise robustness: exact support recovery

Slide 23

What kind of convergence?

Slide 24

First convergence result
$(\mathcal{P}_\lambda(y_0 + w))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u) + \frac{1}{2\lambda} \|\Phi u - (y_0 + w)\|^2$
$(\mathcal{P}_0(y_0))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u)$ s.t. $\Phi u = y_0$

Slide 25

First convergence result
$(\mathcal{P}_\lambda(y_0 + w))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u) + \frac{1}{2\lambda} \|\Phi u - (y_0 + w)\|^2$
$(\mathcal{P}_0(y_0))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u)$ s.t. $\Phi u = y_0$
Proposition [Chambolle et al., 2016, Iglesias et al., 2018]
If $\lambda_n \to 0$ and $\|w_n\| = O(\lambda_n)$ (+ source cond.), then $u_n \to u_0$ (strictly in $\mathrm{BV}(\mathbb{R}^2)$), $|U_n^{(t)} \,\triangle\, U_0^{(t)}| \to 0$, and $\partial U_n^{(t)} \to \partial U_0^{(t)}$ in the Hausdorff sense, where $U^{(t)} = \{u \geq t\}$ if $t \geq 0$ and $U^{(t)} = \{u \leq t\}$ otherwise.
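
For reference, the source condition invoked here is usually stated as the existence of a dual certificate in the range of $\Phi^*$; a standard formulation from this literature (the slide itself does not spell it out) is:

$\exists \, p_0 \in \mathcal{H} \quad \text{s.t.} \quad \eta_0 = \Phi^* p_0 \in \partial \, \mathrm{TV}(u_0).$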

Slide 26

First convergence result
$(\mathcal{P}_\lambda(y_0 + w))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u) + \frac{1}{2\lambda} \|\Phi u - (y_0 + w)\|^2$
$(\mathcal{P}_0(y_0))$: $\min_{u \in L^2(\mathbb{R}^2)} \mathrm{TV}(u)$ s.t. $\Phi u = y_0$
(Figures: $u_0$, $u_n$, and the level sets $U_n^{(t)}$, $t \in \mathbb{R}$.)

Slide 27

The prescribed curvature problem
Optimality condition (regularized pb.): if $u$ solves $(\mathcal{P}_\lambda(y_0 + w))$, then
• $\forall t > 0$, $\{u \geq t\}$ solves $(\mathcal{Q}(+\eta_{\lambda,w}))$
• $\forall t < 0$, $\{u \leq t\}$ solves $(\mathcal{Q}(-\eta_{\lambda,w}))$
for some $\eta_{\lambda,w}$.

Slide 29

The prescribed curvature problem
Prescribed curvature problem:
$\min_{E \subset \mathbb{R}^2, \ |E| < +\infty} \mathrm{Per}(E) - \int_E \eta \qquad (\mathcal{Q}(\eta))$
Optimality condition (regularized pb.): if $u$ solves $(\mathcal{P}_\lambda(y_0 + w))$, then
• $\forall t > 0$, $\{u \geq t\}$ solves $(\mathcal{Q}(+\eta_{\lambda,w}))$
• $\forall t < 0$, $\{u \leq t\}$ solves $(\mathcal{Q}(-\eta_{\lambda,w}))$
for some $\eta_{\lambda,w}$.

Slide 31

The prescribed curvature problem
Prescribed curvature problem:
$\min_{E \subset \mathbb{R}^2, \ |E| < +\infty} \mathrm{Per}(E) - \int_E \eta \qquad (\mathcal{Q}(\eta))$
Optimality condition (regularized pb.): if $u$ solves $(\mathcal{P}_\lambda(y_0 + w))$, then
• $\forall t > 0$, $\{u \geq t\}$ solves $(\mathcal{Q}(+\eta_{\lambda,w}))$
• $\forall t < 0$, $\{u \leq t\}$ solves $(\mathcal{Q}(-\eta_{\lambda,w}))$
for some $\eta_{\lambda,w}$.
Convergence of curvature functionals: if $\lambda_n \to 0$ and $\|w_n\|/\lambda_n \to 0$ (+ source cond.), then $\eta_{\lambda_n, w_n} \to \eta_0$.
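
The reason individual level sets inherit an optimality property is that the coarea and layer-cake formulas decompose both terms of the objective over the level sets of $u$; these are standard identities, recalled here for orientation:

$\mathrm{TV}(u) = \int_{-\infty}^{+\infty} \mathrm{Per}(\{u \geq t\}) \, \mathrm{d}t, \qquad \int_{\mathbb{R}^2} u \, \eta = \int_0^{+\infty} \left( \int_{\{u \geq t\}} \eta \right) \mathrm{d}t - \int_{-\infty}^{0} \left( \int_{\{u \leq t\}} \eta \right) \mathrm{d}t,$

so the objective splits into independent problems of the form $(\mathcal{Q}(\pm\eta))$ at each level $t$.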

Slide 32

The prescribed curvature problem
Prescribed curvature problem:
$\min_{E \subset \mathbb{R}^2, \ |E| < +\infty} \mathrm{Per}(E) - \int_E \eta \qquad (\mathcal{Q}(\eta))$
Optimality condition (constrained pb.): we have that
• $\forall t > 0$, $\{u_0 \geq t\}$ solves $(\mathcal{Q}(+\eta_0))$
• $\forall t < 0$, $\{u_0 \leq t\}$ solves $(\mathcal{Q}(-\eta_0))$
for some $\eta_0$.
Convergence of curvature functionals: if $\lambda_n \to 0$ and $\|w_n\|/\lambda_n \to 0$ (+ source cond.), then $\eta_{\lambda_n, w_n} \to \eta_0$.

Slide 33

Stability of the prescribed curvature problem
Assumptions
• $\eta$ close to $\eta_0$ in $L^2(\mathbb{R}^2)$ and $\mathcal{C}^1(\mathbb{R}^2)$
• $(\mathcal{Q}(\eta_0))$ has finitely many minimizers, all strictly stable

Slide 34

Stability of the prescribed curvature problem
Assumptions
• $\eta$ close to $\eta_0$ in $L^2(\mathbb{R}^2)$ and $\mathcal{C}^1(\mathbb{R}^2)$
• $(\mathcal{Q}(\eta_0))$ has finitely many minimizers, all strictly stable
Proposition (informal): around each solution $E$ of $(\mathcal{Q}(\eta_0))$ there is exactly one solution $F$ of $(\mathcal{Q}(\eta))$, and $\partial F = (\mathrm{Id} + \varphi \, \nu_E)(\partial E)$ with $\varphi \in \mathcal{C}^2(\partial E)$.

Slide 35

Exact support recovery
Assumptions
• $u_0 = \sum_{i=1}^N a_i 1_{E_i}$ with $E_i$ simple and $\partial E_i \cap \partial E_j = \emptyset$ for every $i \neq j$
• non-degenerate source cond. + injectivity cond.

Slide 36

Exact support recovery
Assumptions
• $u_0 = \sum_{i=1}^N a_i 1_{E_i}$ with $E_i$ simple and $\partial E_i \cap \partial E_j = \emptyset$ for every $i \neq j$
• non-degenerate source cond. + injectivity cond.
(Figures: $u_0$ and $u_{\lambda,w}$.)

Slide 37

Exact support recovery
Assumptions
• $u_0 = \sum_{i=1}^N a_i 1_{E_i}$ with $E_i$ simple and $\partial E_i \cap \partial E_j = \emptyset$ for every $i \neq j$
• non-degenerate source cond. + injectivity cond.
(Figures: $u_0$ and $u_{\lambda,w}$.)
Theorem: there exist $\alpha, \lambda_0 \in \mathbb{R}_+^*$ s.t., if $\lambda \leq \lambda_0$ and $\|w\|/\lambda \leq \alpha$, then
• $u_{\lambda,w} = \sum_{i=1}^N b_i 1_{F_i}$ with $\partial F_i = (\mathrm{Id} + \varphi_i \, \nu_{E_i})(\partial E_i)$
• $b_i \to a_i$ and $\|\varphi_i\|_{\mathcal{C}^2(\partial E_i)} \to 0$ when $\lambda, w \to 0$

Slide 38

Numerical verif. of the non-degenerate source cond.
$\Phi u = h * u$ with $h(x) = \exp(-\|x\|^2/(2\sigma^2))$
github.com/rpetit/2023-support-recovery-tv

Slide 39

Numerical verif. of the non-degenerate source cond.
$\Phi u = h * u$ with $h(x) = \exp(-\|x\|^2/(2\sigma^2))$
Deconvolution of a disk: $u_0 = a \, 1_{B(0,R)}$. Condition satisfied if $\sigma \leq \sigma_0(R)$.
github.com/rpetit/2023-support-recovery-tv

Slide 40

Numerical verif. of the non-degenerate source cond.
$\Phi u = h * u$ with $h(x) = \exp(-\|x\|^2/(2\sigma^2))$
Deconvolution of a disk: $u_0 = a \, 1_{B(0,R)}$. Condition satisfied if $\sigma \leq \sigma_0(R)$.
Deconvolution of radial images: $u_0 = a_1 \, 1_{B(0,R_1)} + a_2 \, 1_{B(0,R_2)}$
github.com/rpetit/2023-support-recovery-tv

Slide 41

Numerical verif. of the non-degenerate source cond.
$\Phi u = h * u$ with $h(x) = \exp(-\|x\|^2/(2\sigma^2))$
Deconvolution of a disk: $u_0 = a \, 1_{B(0,R)}$. Condition satisfied if $\sigma \leq \sigma_0(R)$.
Deconvolution of radial images: $u_0 = a_1 \, 1_{B(0,R_1)} + a_2 \, 1_{B(0,R_2)}$
• If $\mathrm{sign}(a_1) = -\mathrm{sign}(a_2)$: need $|R_1 - R_2| > \Delta$
github.com/rpetit/2023-support-recovery-tv

Slide 42

Numerical verif. of the non-degenerate source cond.
$\Phi u = h * u$ with $h(x) = \exp(-\|x\|^2/(2\sigma^2))$
Deconvolution of a disk: $u_0 = a \, 1_{B(0,R)}$. Condition satisfied if $\sigma \leq \sigma_0(R)$.
Deconvolution of radial images: $u_0 = a_1 \, 1_{B(0,R_1)} + a_2 \, 1_{B(0,R_2)}$
• If $\mathrm{sign}(a_1) = -\mathrm{sign}(a_2)$: need $|R_1 - R_2| > \Delta$
• If $\mathrm{sign}(a_1) = \mathrm{sign}(a_2)$: super-resolution
github.com/rpetit/2023-support-recovery-tv
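
For orientation, a grid discretization of this forward operator (and of the disk experiment) is easy to set up; the grid size, radius, amplitude, and noise level below are illustrative assumptions, not the values used in the talk:

    import numpy as np
    from scipy.signal import fftconvolve

    n = 256
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")

    R, a = 0.4, 1.0                                  # disk radius and amplitude (illustrative)
    u0 = a * ((X**2 + Y**2) <= R**2)                 # u0 = a * 1_{B(0, R)}

    sigma = 0.1
    h = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    h /= h.sum()                                     # normalize the discrete kernel

    y0 = fftconvolve(u0, h, mode="same")             # y0 = Phi u0 (discretized convolution)
    rng = np.random.default_rng(0)
    y = y0 + 0.01 * rng.standard_normal(y0.shape)    # noisy obs. y = y0 + w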

Slide 43

Numerical resolution: a grid-free approach

Slide 44

Numerical resolution of $(\mathcal{P}_\lambda(y))$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$

Slide 45

Numerical resolution of $(\mathcal{P}_\lambda(y))$
Solve
$\min_{u \in L^2(\mathbb{R}^2)} \frac{1}{2} \|\Phi u - y\|^2 + \lambda \, \mathrm{TV}(u) \qquad (\mathcal{P}_\lambda(y))$
Fixed grid approximation: $u = \sum_i \sum_j u_{ij} \, 1_{C_{ij}}$

Slide 46

Discretizations of the total variation (images: [Tabti et al., 2018])
Anisotropic:
• $\sum_{ij} |(D_x u)_{ij}| + |(D_y u)_{ij}|$
• sharp edges, grid bias
Isotropic:
• $\sum_{ij} \sqrt{(D_x u)_{ij}^2 + (D_y u)_{ij}^2}$
• blur
State-of-the-art review: [Chambolle and Pock, 2021]
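
In code, the two discretizations differ only in how the two finite-difference components are pooled; a minimal sketch, with forward differences as one standard choice of $(D_x, D_y)$ among several:

    import numpy as np

    def forward_diff(u):
        # (D_x u, D_y u): forward differences, zero at the boundary (Neumann).
        dx = np.zeros_like(u)
        dy = np.zeros_like(u)
        dx[:-1, :] = u[1:, :] - u[:-1, :]
        dy[:, :-1] = u[:, 1:] - u[:, :-1]
        return dx, dy

    def tv_anisotropic(u):
        dx, dy = forward_diff(u)
        return np.abs(dx).sum() + np.abs(dy).sum()   # sharp edges, but grid bias

    def tv_isotropic(u):
        dx, dy = forward_diff(u)
        return np.sqrt(dx**2 + dy**2).sum()          # closer to rotation invariance, blurs edges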

Slide 47

Numerical representation of simple images
Fixed grid:
• $O(1/h^2)$ pixels
• $O(1/h)$ “relevant” pixels
• $u \mapsto \mathrm{TV}(u)$ convex

Slide 48

Numerical representation of simple images
Fixed grid:
• $O(1/h^2)$ pixels
• $O(1/h)$ “relevant” pixels
• $u \mapsto \mathrm{TV}(u)$ convex
Boundary discretization:
• more efficient for simple images
• numerically more involved
• $E \mapsto \mathrm{TV}(1_E)$ “non-convex”

Slide 49

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]

Slide 50

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$

Slide 51

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$
• $E_{k+1} \in \mathrm{Argmax}_{E \text{ simple}} \ \frac{1}{P(E)} \int_E \eta_k$

Slide 52

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$
• $E_{k+1} \in \mathrm{Argmax}_{E \text{ simple}} \ \frac{1}{P(E)} \int_E \eta_k$
Generalized Cheeger pb.:
$\max_{E \subset \mathbb{R}^2} \ \frac{1}{P(E)} \int_E \eta$ s.t. $|E| < +\infty$, $0 < P(E) < +\infty$

Slide 55

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$
• $E_{k+1} \in \mathrm{Argmax}_{E \text{ simple}} \ \frac{1}{P(E)} \int_E \eta_k$
• $u_{k+1} = \alpha_k u_k + \beta_k 1_{E_{k+1}}$
Generalized Cheeger pb.:
$\max_{E \subset \mathbb{R}^2} \ \frac{1}{P(E)} \int_E \eta$ s.t. $|E| < +\infty$, $0 < P(E) < +\infty$

Slide 56

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$
• $E_{k+1} \in \mathrm{Argmax}_{E \text{ simple}} \ \frac{1}{P(E)} \int_E \eta_k$
• $u_{k+1} = \alpha_k u_k + \beta_k 1_{E_{k+1}}$
Generalized Cheeger pb.:
$\max_{E \subset \mathbb{R}^2} \ \frac{1}{P(E)} \int_E \eta$ s.t. $|E| < +\infty$, $0 < P(E) < +\infty$
Iterates are linear combinations of indicator functions of simple sets.

Slide 57

Frank-Wolfe based algorithm
Algorithm [Bredies and Pikkarainen, 2013], [Boyd et al., 2017, Denoyelle et al., 2019]:
• $\eta_k = -\frac{1}{\lambda} \Phi^*(\Phi u_k - y)$
• $E_{k+1} \in \mathrm{Argmax}_{E \text{ simple}} \ \frac{1}{P(E)} \int_E \eta_k$
• $u_{k+1} = \alpha_k u_k + \beta_k 1_{E_{k+1}}$
• local optimization (sliding) of $(a, E) \mapsto F\left(\sum_i a_i 1_{E_i}\right)$, $F$ being the objective of $(\mathcal{P}_\lambda(y))$
Generalized Cheeger pb.:
$\max_{E \subset \mathbb{R}^2} \ \frac{1}{P(E)} \int_E \eta$ s.t. $|E| < +\infty$, $0 < P(E) < +\infty$
Iterates are linear combinations of indicator functions of simple sets (a structural code sketch follows below).
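
Putting the four steps together, the loop has the following shape. This is a hedged structural sketch, not the tvsfw implementation: apply_Phi_to_indicator, apply_Phi_adjoint, cheeger_oracle, update_amplitudes, and sliding_step are hypothetical placeholders for the corresponding (non-trivial) routines.

    import numpy as np

    def sliding_frank_wolfe(y, lam, apply_Phi_to_indicator, apply_Phi_adjoint,
                            cheeger_oracle, update_amplitudes, sliding_step,
                            n_iter=10):
        # Structural sketch of the algorithm above; the iterate is stored as
        # u_k = sum_i amps[i] * 1_{sets[i]}.
        sets, amps = [], []
        for k in range(n_iter):
            # Residual Phi u_k - y, using the linearity of Phi.
            if sets:
                residual = sum(a * apply_Phi_to_indicator(E)
                               for a, E in zip(amps, sets)) - y
            else:
                residual = -y
            # eta_k = -(1/lam) * Phi^*(Phi u_k - y), seen as a function on R^2.
            eta_k = lambda x, r=residual: -apply_Phi_adjoint(r, x) / lam
            # New atom: (approximate) generalized Cheeger set of eta_k.
            sets = sets + [cheeger_oracle(eta_k)]
            # Re-fit the amplitudes (a finite-dimensional, lasso-type problem),
            # then locally deform the boundaries (sliding step).
            amps = update_amplitudes(sets, y, lam)
            sets, amps = sliding_step(sets, amps, y, lam)
        return sets, amps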

Slide 58

Numerical results
github.com/rpetit/tvsfw
(Figures: unknown image $u_0$ and observations $y = \Phi u_0 + w$.)

Slide 59

Numerical results
github.com/rpetit/tvsfw
(Figures, for $k = 1, 2, 3$: iterate $u^{[k]}$ before sliding, $u^{[k]}$ after sliding, error $u_0 - u^{[k]}$, and $\mathrm{Supp}(Du^{[k]})$ vs. $\mathrm{Supp}(Du_0)$.)

Slide 60

Numerical results
github.com/rpetit/tvsfw
Left to right: observations, signal, ours, isotropic TV, “Condat's” TV

Slide 61

Numerical results
github.com/rpetit/tvsfw
Left to right: observations, signal, ours, isotropic TV, “Condat's” TV
(Figures: signal $u_0$, isotropic TV, ours $\hat{u}_{\lambda,w}$, observations, “Condat's” TV, and $\mathrm{Supp}(Du_0)$ vs. $\mathrm{Supp}(D\hat{u}_{\lambda,w})$.)

Slide 62

Perspectives (i)
(Figures: $u_0$, $\Phi u_0 + w$, $\hat{u}_{\lambda,w}$, $\hat{u}_{\lambda,w} - u_0$, and $Du_0$, $D\hat{u}_{\lambda,w}$.)

Slide 63

Perspectives (i)
(Figures: $u_0$, $\Phi u_0 + w$, $\hat{u}_{\lambda,w}$, $\hat{u}_{\lambda,w} - u_0$, and $Du_0$, $D\hat{u}_{\lambda,w}$.)
Recovery guarantees:
• non-degenerate source cond.
• implicit bias of TV reg.

Slide 64

Perspectives (i)
(Figures: $u_0$, $\Phi u_0 + w$, $\hat{u}_{\lambda,w}$, $\hat{u}_{\lambda,w} - u_0$, and $Du_0$, $D\hat{u}_{\lambda,w}$.)
Recovery guarantees:
• non-degenerate source cond.
• implicit bias of TV reg.
Numerical resolution:
• robust sliding step (topology changes)

Slide 65

Perspectives (i)
(Figures: $u_0$, $\Phi u_0 + w$, $\hat{u}_{\lambda,w}$, $\hat{u}_{\lambda,w} - u_0$, and $Du_0$, $D\hat{u}_{\lambda,w}$.)
Recovery guarantees:
• non-degenerate source cond.
• implicit bias of TV reg.
Numerical resolution:
• robust sliding step (topology changes)
• “continuous” TV reg. benchmark

Slide 66

Perspectives (i)
(Figures: $u_0$, $\Phi u_0 + w$, $\hat{u}_{\lambda,w}$, $\hat{u}_{\lambda,w} - u_0$, and $Du_0$, $D\hat{u}_{\lambda,w}$.)
Recovery guarantees:
• non-degenerate source cond.
• implicit bias of TV reg.
Numerical resolution:
• robust sliding step (topology changes)
• “continuous” TV reg. benchmark
• applications (cell imaging, piecewise homogeneous textures)

Slide 67

Perspectives (ii)
OT reg. for dynamic inverse problems [Bredies et al., 2022]
• Recover $t \mapsto \sum_i a_i(t) \, \delta_{x_i(t)}$
• Support stability, implicit bias, fast reconstruction?
(Plots: $w^0_t$ at times $t = 0.0, 0.5, 1.0$.)

Slide 68

References i

Boyd, N., Schiebinger, G., and Recht, B. (2017). The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems. SIAM Journal on Optimization, 27(2):616–639.

Boyer, C., Chambolle, A., De Castro, Y., Duval, V., de Gournay, F., and Weiss, P. (2019). On Representer Theorems and Convex Regularization. SIAM Journal on Optimization, 29(2):1260–1281.

Bredies, K. and Carioni, M. (2019). Sparsity of solutions for variational inverse problems with finite-dimensional data. Calculus of Variations and Partial Differential Equations, 59(1):14.

Slide 69

References ii

Bredies, K., Carioni, M., Fanzon, S., and Romero, F. (2022). A Generalized Conditional Gradient Method for Dynamic Inverse Problems with Optimal Transport Regularization. Foundations of Computational Mathematics.

Bredies, K. and Pikkarainen, H. K. (2013). Inverse problems in spaces of measures. ESAIM: Control, Optimisation and Calculus of Variations, 19(1):190–218.

Carlier, G., Comte, M., and Peyré, G. (2009). Approximation of maximal Cheeger sets by projection. ESAIM: Mathematical Modelling and Numerical Analysis, 43(1):139–150.

Slide 70

References iii

Chambolle, A., Duval, V., Peyré, G., and Poon, C. (2016). Geometric properties of solutions to the total variation denoising problem. Inverse Problems, 33(1):015002.

Chambolle, A. and Lions, P.-L. (1997). Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188.

Chambolle, A. and Pock, T. (2021). Approximating the total variation with finite differences or finite elements. In Bonito, A. and Nochetto, R. H., editors, Handbook of Numerical Analysis, volume 22 of Geometric Partial Differential Equations - Part II, pages 383–417. Elsevier.

Slide 71

References iv

Denoyelle, Q., Duval, V., Peyré, G., and Soubies, E. (2019). The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy. Inverse Problems.

Fleming, W. H. (1957). Functions with generalized gradient and generalized surfaces. Annali di Matematica Pura ed Applicata, 44(1):93–103.

Iglesias, J. A., Mercier, G., and Scherzer, O. (2018). A note on convergence of solutions of total variation regularized linear inverse problems. Inverse Problems, 34(5):055011.

Rudin, L. I., Osher, S., and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268.

Slide 72

References v

Tabti, S., Rabin, J., and Elmoataz, A. (2018). Symmetric Upwind Scheme for Discrete Weighted Total Variation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1827–1831.

Slide 73

Two-step approximation of generalized Cheeger sets
Fixed grid initialization [Carlier et al., 2009]: solve
$\min_{u \in E_h} \langle \eta_h, u \rangle$ s.t. $\mathrm{TV}_h(u) \leq 1$
Shape gradient algorithm (a toy polygonal version is sketched below):
• $\theta_n \in \mathrm{Argmax}_{\theta \in \Theta} \ \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \left[ J((\mathrm{Id} + \varepsilon \theta)(E_n)) - J(E_n) \right]$
• $E_{n+1} = (\mathrm{Id} + \varepsilon_n \theta_n)(E_n)$
Implementation: github.com/rpetit/PyCheeger
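
A deliberately naive prototype of the second step: represent $E_n$ by polygon vertices and ascend $J(E) = \frac{1}{P(E)} \int_E \eta$ with a finite-difference shape gradient. The fan triangulation (which assumes the polygon stays star-shaped around its centroid), the midpoint quadrature, and the Gaussian weight $\eta$ are all illustrative assumptions; this is not the PyCheeger implementation:

    import numpy as np

    def eta(p):
        # Illustrative weight: Gaussian bump centered at the origin (an assumption).
        return np.exp(-np.dot(p, p) / (2.0 * 0.3**2))

    def polygon_area_integral(V, f):
        # Integral of f over the polygon V (n, 2): fan triangulation from the
        # centroid, midpoint rule (f evaluated at each triangle's centroid).
        c = V.mean(axis=0)
        total = 0.0
        n = V.shape[0]
        for i in range(n):
            a, b = V[i], V[(i + 1) % n]
            area = 0.5 * abs((a[0] - c[0]) * (b[1] - c[1]) - (b[0] - c[0]) * (a[1] - c[1]))
            total += area * f((a + b + c) / 3.0)
        return total

    def objective(V):
        # J(E) = (integral_E eta) / P(E), with P(E) the polygon's perimeter.
        per = np.linalg.norm(np.roll(V, -1, axis=0) - V, axis=1).sum()
        return polygon_area_integral(V, eta) / per

    # Initialize E_0 as a small regular polygon and run normalized gradient ascent.
    t = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    V = 0.3 * np.stack([np.cos(t), np.sin(t)], axis=1)
    eps, step = 1e-6, 0.02
    for _ in range(200):
        J0 = objective(V)
        G = np.zeros_like(V)
        for i in range(V.shape[0]):           # finite-difference shape gradient
            for d in range(2):
                Vp = V.copy()
                Vp[i, d] += eps
                G[i, d] = (objective(Vp) - J0) / eps
        V = V + step * G / (np.linalg.norm(G) + 1e-12)   # E_{n+1} = (Id + eps_n theta_n)(E_n)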

Slide 74

Topology changes during the local descent