
Jean-François Giovannelli

(Professor at the University of Bordeaux, IMS Lab)

https://s3-seminar.github.io/seminars/jean-francois-giovannelli

Title — Segmentation-deconvolution of textured images: handling uncertainty through a hierarchical Bayesian approach and stochastic sampling

Abstract — The presentation concerns joint deconvolution-segmentation for images exhibiting oriented textures. The images consist of regions containing textured patches belonging to a set of K predefined classes. Each class is modelled by a Gaussian random field driven by a parametric power spectral density with unknown parameters. The class labels are modelled by a Potts field whose parameter is also unknown. The method relies on a hierarchical description and a strategy of joint estimation of the labels, the K textured images, and the hyperparameters: the noise and image levels as well as the texture and Potts-field parameters. This strategy makes it possible to define estimators that are optimal in the sense of a joint risk: posterior maximizer or posterior mean, depending on the parameter. They are evaluated numerically from samples of the posterior distribution, themselves obtained by a block Gibbs algorithm. Two of the steps are delicate: (1) drawing the textured images, which are high-dimensional Gaussian variables, is carried out by a Perturbation-Optimization algorithm [a], and (2) drawing the parameters of the textured images is obtained by a Fisher Metropolis-Hastings step [b]. Several numerical illustrations will be given, notably in terms of uncertainty quantification. The work is published in [c].

[a] F. Orieux, O. Féron, and J.-F. Giovannelli, "Sampling high-dimensional Gaussian distributions for general linear inverse problems", IEEE Signal Processing Letters, May 2012.
[b] C. Vacar, J.-F. Giovannelli, and Y. Berthoumieu, "Langevin and Hessian with Fisher approximation stochastic sampling for parameter estimation of structured covariance", ICASSP 2011.
[b'] M. Girolami and B. Calderhead, "Riemannian manifold Hamiltonian Monte Carlo", Journal of the Royal Statistical Society, 2011.
[c] C. Vacar and J.-F. Giovannelli, "Unsupervised joint deconvolution and segmentation method for textured images: A Bayesian approach and an advanced sampling algorithm", EURASIP Journal on Advances in Signal Processing, 2019.

Biography — Jean-François Giovannelli was born in Béziers, France, in 1966. He received the Dipl. Ing. degree from the École Nationale Supérieure de l'Électronique et de ses Applications, Cergy, France, in 1990, and the Ph.D. and H.D.R. degrees in signal-image processing from the Université Paris-Sud, Orsay, France, in 1995 and 2005, respectively. From 1997 to 2008, he was an Assistant Professor with the Université Paris-Sud and a Researcher with the Laboratoire des Signaux et Systèmes, Groupe Problèmes Inverses. He is currently a Professor with the Université de Bordeaux, France, and a Researcher with the Laboratoire de l'Intégration du Matériau au Système, Groupe Signal-Image, France. His research focuses on inverse problems in signal and image processing, mainly unsupervised and myopic problems. From a methodological standpoint, the developed regularization methods are both deterministic (penalty, constraints, ...) and Bayesian. Regarding the numerical algorithms, the work relies on optimization and stochastic sampling. His application fields essentially concern astronomical, medical, proteomic, radar, and geophysical imaging.


S³ Seminar

July 09, 2019

Transcript

  1. Unsupervised deconvolution-segmentation of textured images. Bayesian approach: optimal strategy and stochastic sampling
     Inversion-segmentation "à la Djafari"
     Jean-François Giovannelli, joint work with Cornelia Vacar
     Groupe Signal – Image, Laboratoire de l'Intégration du Matériau au Système, Université de Bordeaux – CNRS – BINP
     Acknowledgment: GIE CentraleSupélec for financial support
     Djafariales 2019, 9th of July
  2. Inversion: generalities
     (Block diagram: x → H → + ← ε, output y; direct model / inverse problem)
     Direct model — Do degradations: noise, blur, mixing, ...
     $y = H(x) + \varepsilon = Hx + \varepsilon = h \star x + \varepsilon$
     Inverse problem — Undo: denoising, deblurring, unmixing, ...
     $\widehat{x} = \varphi(y)$
  3. Inversion: standard question
     $y = H(x) + \varepsilon = Hx + \varepsilon = h \star x + \varepsilon$, $\widehat{x} = \varphi(y)$
     Restoration, deconvolution, inter-/extra-polation
     Issue: inverse problems
     Difficulty: ill-posed problems, i.e., lack of information
     Methodology: regularisation, i.e., information compensation
     Specificity of the inversion methods
     Compromise and tuning parameters
  4. Inversion: advanced questions
     $y = H(x) + \varepsilon = Hx + \varepsilon = h \star x + \varepsilon$
     (Block diagram: $x, \ell, \gamma$ → $H, \eta$ → + ← $\varepsilon, \gamma_\varepsilon$, output y), $[\widehat{x}, \widehat{\ell}, \widehat{\gamma}, \widehat{\eta}] = \varphi(y)$
     Additional estimation problems:
     Hyperparameters: unsupervised, self-adaptive, self-tuned
     Instrument parameters: myopic, self-calibrated (also: blind)
     Hidden variables: edges, regions / labels, peaks, ...: augmented
     Different models for image, noise, response, ...: model selection
  5. Various fields, modalities, problems, ...
     Fields:
     Medical: diagnosis, prognosis, theranostics, ...
     Astronomy, geology, hydrology, ...
     Thermodynamics, fluid mechanics, transport phenomena, ...
     Remote sensing, airborne imaging, ...
     Surveillance, security, ...
     Non-destructive evaluation, control, ...
     Computer vision, under bad conditions, ...
     Photography, games, recreational activities, leisure, ...
     Knowledge, health, leisure, ...; aerospace, aeronautics, transport, energy, industry, ...
  6. Various fields, modalities, problems, ...
     Modalities:
     Magnetic resonance imaging
     Tomography (X-ray, optical wavelength, tera-Hertz, ...)
     Thermography, ...
     Echography, Doppler echography, ...
     Ultrasonic imaging, sound, ...
     Microscopy, atomic force microscopy
     Interferometry (radio, optical, coherent, ...)
     Multi-spectral and hyper-spectral, ...
     Holography
     Polarimetry: optical and other
     Synthetic aperture radars
     Essentially "wave ↔ matter" interaction
  7. Various fields, modalities, problems, ...
     "Signal – Image" problems:
     Denoising
     Deconvolution
     Inverse Radon
     Fourier synthesis
     Resolution enhancement, super-resolution
     Inter-/extra-polation, inpainting / outpainting
     Component unmixing / source separation
     And also: segmentation, labels and contours; detection of impulses, salient points, ...; classification, clustering, ...
     And self-calibration, self-adaptivity, ...; model selection ...
  8. Some historical landmarks
     Fitting: Gauss ∼ 1800
     Quadratic approaches and linear filtering ∼ 1960: Phillips, Twomey, Tikhonov; Kalman; Hunt (and Wiener ∼ 1940)
     Extension: discrete hidden variables ∼ 1980: Kormylo & Mendel (impulses, peaks, ...); Geman & Geman (lines, contours, edges, ...); Besag, Graffigne, Descombes (regions, labels, ...)
     Convex penalties (also hidden variables, ...) ∼ 1990: L2–L1, Huber, hyperbolic: Sauer, Blanc-Féraud, Idier, ...; L1: Alliney-Ruzinsky, Taylor ∼ 1979, Yarlagadda ∼ 1985 ... and the L1-boom ∼ 2005
     Back to more complex models ∼ 2000
     Problems: unsupervised, semi-blind / blind, latent / hidden variables
     Models: stochastic and hierarchical models
     Methodology: Bayesian approaches and optimality
     Algorithms: stochastic sampling (MCMC, Metropolis-Hastings, ...)
  9. Background and application here
     Imaging of materials, of agricultural areas, ...
     Oriented, periodic textures
     Inversion, deconvolution-segmentation, ...
     Self-tuned, unsupervised development, ...
     Uncertainty quantification [PhD thesis of Olivier Regniers, 2014]
  10. Addressed problem in this talk
     $[\widehat{\ell}, \widehat{x}_k, \widehat{\theta}_k, \widehat{\gamma}_\varepsilon] = \varphi(y)$
     (Block diagram: $\ell, x_k, \theta_k$ → $T, H$ → + ← $\varepsilon, \gamma_\varepsilon$, output y)
     Image specificities:
     Piecewise homogeneous: labels $\ell$
     Textured images: parameters $\theta_k$ for $k = 1, \ldots, K$
     Observation: triple complication: (1) convolution, (2) missing data, truncation, mask, (3) noise
  11. Outline
     Image model: textured images, orientation; piecewise homogeneous images, labels
     Observation system model: convolution and missing data; noise
     Hierarchical model: conditional dependencies / independencies; joint distribution
     Estimation / decision strategy and computations: cost, risk and optimality; posterior distribution and estimation; convergent computations: stochastic sampler ⊕ empirical estimates; Gibbs loop; inverse cumulative distribution function; Metropolis-Hastings
     First numerical assessment: behaviour, convergence, ...; labels, texture parameters and hyperparameters; quantification of errors
  12. Texture model: stationary Gaussian random field
     Original image $x \sim \mathcal{N}(0, R_x)$, in $\mathbb{C}^P$
     Parametric covariance $R_x = R_x(\gamma_x, \theta)$; natural parametrization: $R_x(\gamma_x, \theta) = \gamma_x^{-1} P_x^{-1}(\theta)$
     Parameters: scale $\gamma_x$ and shape $\theta$
     $f(x \,|\, \theta, \gamma_x) = (2\pi)^{-P}\, \gamma_x^{P}\, \det P_x(\theta)\, \exp\left[-\gamma_x\, x^\dagger P_x(\theta)\, x\right]$
     Whittle (circulant) case: matrix $P_x(\theta) \longleftrightarrow$ eigenvalues $\lambda_p(\theta) \longleftrightarrow$ field inverse PSD
     $f(x \,|\, \theta, \gamma_x) = \prod_p (2\pi)^{-1}\, \gamma_x \lambda_p(\theta)\, \exp\left[-\gamma_x \lambda_p(\theta)\, |\mathring{x}_p|^2\right]$
     Separability w.r.t. the Fourier coefficients $\mathring{x}_p$; precision parameter of the Fourier coefficient $\mathring{x}_p$: $\gamma_x \lambda_p(\theta)$
     Any PSD, e.g., Gaussian, Laplacian, Lorentzian, ..., more complex, ...
     ... and K such models (PSD): $x_k$ for $k = 1, \ldots, K$
  13. Examples: power spectral density and texture
     Laplacian PSD, $\theta = [(\nu^0_x, \nu^0_y), (\omega_x, \omega_y)]$: central frequency and widths
     $\lambda^{-1}(\nu_x, \nu_y, \theta) = \exp\left[-\left(\frac{|\nu_x - \nu^0_x|}{\omega_x} + \frac{|\nu_y - \nu^0_y|}{\omega_y}\right)\right]$
     (A code sketch of this texture model follows below.)
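
A minimal sketch of slides 12-13 in Python, assuming a square n × n grid and the Laplacian inverse PSD above; the function names (`laplacian_inverse_psd`, `sample_texture`) and the example parameter values are illustrative, not taken from the talk. In the Whittle (circulant) approximation the Fourier coefficients are independent, so a texture realization is obtained by scaling white noise in the Fourier domain and transforming back.

```python
import numpy as np

def laplacian_inverse_psd(nu_x, nu_y, nu0, widths):
    # PSD profile lambda^{-1}(nu; theta) of slide 13, peaking at the central frequency nu0
    psd = np.exp(-(np.abs(nu_x - nu0[0]) / widths[0]
                   + np.abs(nu_y - nu0[1]) / widths[1]))
    return 1.0 / psd          # lambda_p(theta): the inverse PSD, i.e. the precision profile

def sample_texture(n, theta, gamma_x, rng=np.random.default_rng()):
    # Whittle (circulant) case of slide 12: Fourier coefficients are independent,
    # each with precision gamma_x * lambda_p(theta).
    nu = np.fft.fftfreq(n)
    nu_x, nu_y = np.meshgrid(nu, nu, indexing="ij")
    lam = laplacian_inverse_psd(nu_x, nu_y, theta["nu0"], theta["widths"])
    std = 1.0 / np.sqrt(gamma_x * lam)                     # std of each Fourier coefficient
    coeffs = std * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    # Taking the real part enforces a real-valued image (normalization convention aside).
    return np.real(np.fft.ifft2(coeffs)) * n

# Example: one 256x256 realization with an oriented texture (hypothetical parameters)
x = sample_texture(256, {"nu0": (0.1, 0.25), "widths": (0.02, 0.02)}, gamma_x=1.0)
```
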
  14. Labels: a Markov field
     Usual Potts model: favours large homogeneous regions
     Piecewise homogeneous image: $P$ pixels in $K$ classes ($K$ is given)
     Labels $\ell_p$ for $p = 1, \ldots, P$ with discrete values in $\{1, \ldots, K\}$
     Count pairs of identical neighbours, "parsimony of a gradient": $\nu(\ell) = \sum_{p \sim q} \delta(\ell_p ; \ell_q)$, i.e. the number of zeros of the label "gradient"
     $\sim$: four-nearest-neighbour relation; $\delta$: Kronecker function
     Probability law (exponential family): $\Pr[\ell \,|\, \beta] = C(\beta)^{-1} \exp[\beta \nu(\ell)]$
     $\beta$: "correlation" parameter (mean number / size of the regions); $C(\beta)$: normalization constant
     Various extensions: neighbourhood, interaction
     (A code sketch of the neighbour count follows below.)
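
A small companion sketch of the neighbour count $\nu(\ell)$ of slide 14, for labels stored as a 2-D integer array; the function names are illustrative.

```python
import numpy as np

def potts_neighbour_count(labels):
    # nu(l) = number of identical 4-nearest-neighbour pairs (slide 14),
    # counted once per pair via the vertical and horizontal label "gradients".
    same_v = (labels[1:, :] == labels[:-1, :]).sum()   # vertical neighbour pairs
    same_h = (labels[:, 1:] == labels[:, :-1]).sum()   # horizontal neighbour pairs
    return int(same_v + same_h)

def potts_unnormalized_logprob(labels, beta):
    # log Pr[l | beta] up to the log-partition term -log C(beta)
    return beta * potts_neighbour_count(labels)
```
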
  15. Labels: a Potts field
     Example of realizations, Ising case ($K = 2$): $\beta = 0, 0.3, 0.6, 0.7, 0.8, 0.9, 1$
     Example of realizations for $K = 3$: $\beta = 0, 0.8, 1.1$
  16. Labels: a Potts field
     Partition function: $\Pr[\ell \,|\, \beta] = C(\beta)^{-1} \exp[\beta \nu(\ell)]$
     Partition function (normalizing coefficient): $C(\beta) = \sum_{\ell \in \{1,\ldots,K\}^P} \exp[\beta \nu(\ell)]$, $\bar{C}(\beta) = \log C(\beta)$
     Crucial in order to estimate $\beta$
     No closed-form expression (except for $K = 2$, $P = +\infty$)
     Enormous summation over the $K^P$ configurations
  17. Partition: an expectation computed as an empirical mean
     Distribution and partition: $\Pr[\ell \,|\, \beta] = C(\beta)^{-1}\exp[\beta\nu(\ell)]$ with $C(\beta) = \sum_\ell \exp[\beta\nu(\ell)]$
     A well-known result [MacKay] for exponential families: $C'(\beta) = \sum_\ell \nu(\ell)\exp[\beta\nu(\ell)]$
     Yields the log-partition derivative: $\bar{C}'(\beta) = \sum_\ell \nu(\ell)\, C(\beta)^{-1}\exp[\beta\nu(\ell)] = \mathbb{E}[\nu(\ell)]$
     Approximated by an empirical mean: $\bar{C}'(\beta) \simeq \frac{1}{Q}\sum_q \nu(\ell^{(q)})$, where the $\ell^{(q)}$ are realizations of the field (given $\beta$)
     Only a few weeks of computation ... but once and for all!
     (A code sketch of this tabulation follows below.)
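
A deliberately simple sketch of the empirical-mean approximation of $\bar{C}'(\beta)$ on slide 17. It reuses `potts_neighbour_count` from the sketch above and draws the Potts realizations with a single-site Gibbs sweep; the talk does not specify which sampler the authors used, so this is only one possible choice, and the sweep counts are arbitrary.

```python
import numpy as np

def estimate_log_partition_derivative(beta, shape, K, n_burnin=200, n_samples=50,
                                      rng=np.random.default_rng()):
    # bar{C}'(beta) = E[nu(l)] under Pr[l | beta], estimated by an empirical mean
    # over realizations drawn with a single-site Gibbs sampler (slide 17).
    labels = rng.integers(0, K, size=shape)
    counts = []
    for sweep in range(n_burnin + n_samples):
        for p in np.ndindex(shape):
            nbh = np.zeros(K)                      # neighbours of pixel p per class
            for dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                q = (p[0] + dq[0], p[1] + dq[1])
                if 0 <= q[0] < shape[0] and 0 <= q[1] < shape[1]:
                    nbh[labels[q]] += 1
            prob = np.exp(beta * nbh)
            labels[p] = rng.choice(K, p=prob / prob.sum())
        if sweep >= n_burnin:                      # keep post-burn-in sweeps
            counts.append(potts_neighbour_count(labels))   # sketch above (slide 14)
    return np.mean(counts)

# Tabulate bar{C}'(beta) on a grid of beta, then integrate numerically to get bar{C}(beta).
```
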
  18. Partition
     (Plots: log-partition $\bar{C}(\beta)$, its first derivative, and its second derivative, as functions of the parameter $\beta$.)
  19. Image formation model
     Image $x$ writes $x = \sum_{k=1}^{K} S_k(\ell)\, x_k$
     $x_k$ for $k = 1, \ldots, K$: textured images (previous models)
     $S_k(\ell)$ for $k = 1, \ldots, K$: binary diagonal indicator of region $k$, $S_k(\ell) = \mathrm{diag}[s_k(\ell_1), \ldots, s_k(\ell_P)]$
     $s_k(\ell_p) = \delta(\ell_p ; k) = 1$ if the pixel $p$ is in class $k$, $0$ if not
     (Illustration: textures $x_1, x_2, x_3$, masked textures $S_1(\ell)x_1, S_2(\ell)x_2, S_3(\ell)x_3$, and the composite image $x$.)
     (A code sketch of this composition follows below.)
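
A one-function sketch of the composition $x = \sum_k S_k(\ell)\, x_k$ of slide 19, with labels coded in $\{0, \ldots, K-1\}$; names are illustrative.

```python
import numpy as np

def compose_image(labels, textures):
    # x = sum_k S_k(l) x_k (slide 19): each pixel takes the value of the
    # textured image of its own class.
    x = np.zeros_like(textures[0])
    for k, x_k in enumerate(textures):
        x[labels == k] = x_k[labels == k]
    return x

# Example with K = 3 textures sampled as on slide 13 (hypothetical parameters):
# textures = [sample_texture(256, theta_k, 1.0) for theta_k in theta_list]
# x = compose_image(labels, textures)
```
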
  20. Outline
     Image model: textured images, orientation; piecewise homogeneous images
     Observation system model: convolution and missing data; noise
     Hierarchical model: conditional dependencies / independencies; joint distribution
     Estimation / decision strategy and computations: cost, risk and optimality; posterior distribution and estimation; convergent computations: stochastic sampler ⊕ empirical estimates; Gibbs loop; inverse cumulative distribution function; Metropolis-Hastings
     First numerical assessment: behaviour, convergence, ...; labels, texture parameters and hyperparameters; quantification of errors
  21. Observation model
     Observation: triple complication
     Convolution: low-pass filter $H$
     Missing data: truncation matrix $T$, size $M \times P$
     Noise: $\varepsilon$ accounts for measurement and model errors
     (Block diagram: $\ell, x_k, \theta_k$ → $T, H$ → + ← $\varepsilon, \gamma_\varepsilon$, output y)
     $y = T H x + \varepsilon$, i.e. $y_m = [Hx]_m + \varepsilon_m$ for observed pixels, nothing for missing pixels
  22. Noise
     Usual model: Gaussian, zero-mean, white and homogeneous, precision $\gamma_\varepsilon$
     $f(\varepsilon \,|\, \gamma_\varepsilon) = \mathcal{N}(\varepsilon; 0, \gamma_\varepsilon^{-1} I_M) = \pi^{-M}\gamma_\varepsilon^{M}\exp\left[-\gamma_\varepsilon \|\varepsilon\|^2\right]$
     Possible advanced models: non-Gaussian (e.g., Cauchy), Poisson, correlated, but ...
  23. Hyperparameters
     Correlation parameters (Potts and textures): model poorly informative, no simple conjugate prior
     Potts: uniform prior on $[0, B_0]$, e.g., $B_0 = 3$: $f(\beta) = \mathcal{U}_{[0, B_0]}(\beta)$
     Textures: uniform prior on $[\theta_k^m, \theta_k^M]$: $f(\theta_k) = \mathcal{U}_{[\theta_k^m, \theta_k^M]}(\theta_k)$
     Precision parameters: model poorly informative, conjugate prior: Gamma with parameters $a_0, b_0$
     Nominal value (expected value) $\gamma = 1$ and very large variance
     $f(\gamma) = \mathcal{G}(\gamma; a_0, b_0) = \frac{b_0^{a_0}}{\Gamma(a_0)}\, \gamma^{a_0-1} \exp[-b_0\gamma]\, \mathbb{1}_{+}(\gamma)$
  24. Outline
     Image model: textured images, orientation; piecewise homogeneous images
     Observation system model: convolution and missing data; noise
     Hierarchical model: conditional dependencies / independencies; joint distribution
     Estimation / decision strategy and computations: cost, risk and optimality; posterior distribution and estimation; convergent computations: stochastic sampler ⊕ empirical estimates; Gibbs loop; inverse cumulative distribution function; Metropolis-Hastings
     First numerical assessment: behaviour, convergence, ...; labels, texture parameters and hyperparameters; quantification of errors
  25. Hierarchy and distributions
     (Hierarchical diagram: $y$ depends on $\ell$ (with $\beta$, $B_0$), $x_1 (\theta_1, \gamma_1)$, ..., $x_K (\theta_K, \gamma_K)$ and $\gamma_\varepsilon$; hyperpriors $a_0, b_0$, ...)
     Total joint distribution:
     $f(y, \ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta) = f(y \,|\, \ell, x_{1..K}, \gamma_\varepsilon)\, \Pr[\ell \,|\, \beta]\, \prod_k f(x_k \,|\, \theta_k, \gamma_k)\, f(\gamma_\varepsilon)\, f(\beta)\, \prod_k f(\theta_k)\, f(\gamma_k)$
     And then: "total joint distribution", likelihood, marginal distributions, posterior and conditional posteriors
  26. Optimal estimation / decision function
     Usual Bayesian strategy: cost, risk, optimum
     Estimation / decision function: $\varphi : \mathbb{R}^M \to \mathcal{P}$ (with $\mathcal{P} = \mathbb{R}$, $\mathbb{C}$, or $\mathbb{K}$), $y \mapsto \varphi(y) = \widehat{p}$
     Cost function: $C : \mathcal{P} \times \mathcal{P} \to \mathbb{R}$, $(p, \widehat{p}) \mapsto C[p, \widehat{p}]$
     Risk as a mean cost under the joint law: $R(\varphi) = \mathbb{E}_{Y,P}\{ C(P, \varphi(Y)) \}$
     Optimal estimation / decision function: $\varphi_{\mathrm{opt}} = \arg\min_\varphi R(\varphi)$
  27. Optimal estimation / decision function
     Continuous parameters: estimation
     Quadratic cost: $C[p, \widehat{p}] = \|p - \widehat{p}\|^2$
     Optimal estimation function ≡ posterior mean: $\varphi(y) = \widehat{p} = \mathbb{E}_{P|Y}\{P\} = \int_p p\, \pi(p \,|\, y)\, \mathrm{d}p$
     Discrete parameters: decision
     Binary cost: $C[p, \widehat{p}] = 1 - \delta(p, \widehat{p})$, i.e. 0 for a correct decision, 1 for an erroneous decision
     Optimal decision function ≡ posterior maximizer: $\varphi(y) = \widehat{p} = \arg\max_p \pi(p \,|\, y)$
  28. Posterior estimate / decision and computations
     Numerical computations (convergent):
     1 – For $n = 1, 2, \ldots, N$, sample $[\ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta]^{(n)}$ under $\pi(\ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta \,|\, y)$
     2 – Compute ...
     2-a ... empirical mean: $[\widehat{x}_{1..K}, \widehat{\theta}_{1..K}, \widehat{\gamma}_{1..K}, \widehat{\gamma}_\varepsilon, \widehat{\beta}] \simeq \frac{1}{N}\sum_n [x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta]^{(n)}$
     2-b ... empirical marginal maximiser: $\widehat{\ell}_p \simeq \arg\max_k \frac{1}{N}\sum_n \delta(\ell_p^{(n)}, k)$
     As a bonus: exploration and knowledge of the posterior, posterior variances / probabilities and uncertainties, marginal distributions ... and model selection
     (A code sketch of these empirical estimates follows below.)
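
A short sketch of steps 2-a and 2-b of slide 28: the posterior-mean estimate for continuous quantities and the per-pixel maximizer of the empirical marginal posterior for the labels. Function names are illustrative.

```python
import numpy as np

def posterior_mean(samples):
    # 2-a: empirical mean of posterior samples of a continuous quantity
    return np.mean(samples, axis=0)

def marginal_map_labels(label_samples, K):
    # 2-b: per-pixel maximizer of the empirical marginal posterior.
    # label_samples: array of shape (N, H, W) with values in {0, ..., K-1}.
    marginal_probs = np.stack([(label_samples == k).mean(axis=0) for k in range(K)])
    return marginal_probs.argmax(axis=0), marginal_probs

# marginal_probs also quantifies uncertainty: a low winning probability at a
# pixel flags a likely misclassification (cf. slide 41).
```
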
  29. Posterior sampling
     Sampling $\pi(\ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta \,|\, y)$: impossible directly
     Gibbs algorithm: sub-problems, handled by standard sampling, inverse cumulative distribution function, or Metropolis-Hastings
     Gibbs loop: draw iteratively
     $\gamma_\varepsilon$ under $\pi(\gamma_\varepsilon \,|\, y, \ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \beta)$
     $\gamma_k$ under $\pi(\gamma_k \,|\, y, \ell, x_{1..K}, \theta_{1..K}, \gamma_l, l \neq k, \gamma_\varepsilon, \beta)$ for $k = 1, \ldots, K$
     $\ell$ under $\pi(\ell \,|\, y, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta)$
     $x_k$ under $\pi(x_k \,|\, y, \ell, x_l, l \neq k, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon, \beta)$ for $k = 1, \ldots, K$
     $\theta_k$ under $\pi(\theta_k \,|\, y, \ell, x_{1..K}, \theta_l, l \neq k, \gamma_{1..K}, \gamma_\varepsilon, \beta)$ for $k = 1, \ldots, K$
     $\beta$ under $\pi(\beta \,|\, y, \ell, x_{1..K}, \theta_{1..K}, \gamma_{1..K}, \gamma_\varepsilon)$
     (A code skeleton of this loop follows below.)
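
A skeleton of the Gibbs loop of slide 29, written with a dictionary of conditional samplers so that it stays self-contained; the keys and signatures are placeholders, not the authors' code.

```python
import copy

def gibbs_loop(y, state, samplers, K, n_iter):
    # One pass per iteration over the conditionals of slide 29.
    samples = []
    for _ in range(n_iter):
        state["gamma_eps"] = samplers["gamma_eps"](y, state)       # slide 30
        for k in range(K):
            state["gamma"][k] = samplers["gamma_k"](y, state, k)   # slide 31
        state["labels"] = samplers["labels"](y, state)             # slide 32
        for k in range(K):
            state["x"][k] = samplers["x_k"](y, state, k)           # slides 33-34, Perturbation-Optimization
            state["theta"][k] = samplers["theta_k"](y, state, k)   # slides 35-36, Metropolis-Hastings
        state["beta"] = samplers["beta"](state)                    # slide 37, inverse cdf
        samples.append(copy.deepcopy(state))                       # keep the whole chain
    return samples
```
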
  30. Sampling the noise parameter $\gamma_\varepsilon$
     (Recall of the total joint distribution of slide 25.)
     Conditional density for $\gamma_\varepsilon$:
     $\pi(\gamma_\varepsilon \,|\, \cdot) \propto f(y \,|\, \ell, x_{1..K}, \gamma_\varepsilon)\, f(\gamma_\varepsilon) = \gamma_\varepsilon^{M}\exp\left[-\gamma_\varepsilon \|y - THx\|^2\right]\, \gamma_\varepsilon^{a_0-1}\exp[-b_0\gamma_\varepsilon]\, \mathbb{1}_{+}(\gamma_\varepsilon)$
     $= \gamma_\varepsilon^{a_0+M-1}\exp\left[-\gamma_\varepsilon\left(b_0 + \|y - THx\|^2\right)\right]\, \mathbb{1}_{+}(\gamma_\varepsilon)$
     It is a Gamma distribution: $\gamma_\varepsilon \sim \mathcal{G}(a, b)$ with $a = a_0 + M$ and $b = b_0 + \|y - THx\|^2$
  31. Sampling the texture scale parameters $\gamma_k$
     (Recall of the total joint distribution of slide 25.)
     Conditional density for $\gamma_k$:
     $\pi(\gamma_k \,|\, \cdot) \propto f(x_k \,|\, \theta_k, \gamma_k)\, f(\gamma_k) = \gamma_k^{P}\exp\left[-\gamma_k\, x_k^\dagger P_x(\theta_k)\, x_k\right]\, \gamma_k^{a_0-1}\exp[-b_0\gamma_k]\, \mathbb{1}_{+}(\gamma_k)$
     $= \gamma_k^{a_0+P-1}\exp\left[-\gamma_k\left(b_0 + x_k^\dagger P_x(\theta_k)\, x_k\right)\right]\, \mathbb{1}_{+}(\gamma_k)$
     It is also a Gamma distribution: $\gamma_k \sim \mathcal{G}(a, b)$ with $a = a_0 + P$ and $b = b_0 + x_k^\dagger P_x(\theta_k)\, x_k = b_0 + \sum_p \lambda_p(\theta_k)\, |\mathring{x}_{k,p}|^2$
     (A code sketch of these two Gamma draws follows below.)
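
A sketch of the two conjugate Gamma draws of slides 30-31. The hyperparameter defaults (a0 = b0 = 1e-3) are one possible choice giving prior mean 1 and a very large variance, as suggested on slide 23; signatures are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def sample_gamma_eps(y, TH_x, a0=1e-3, b0=1e-3):
    # gamma_eps ~ Gamma(a0 + M, b0 + ||y - THx||^2)  (slide 30)
    a = a0 + y.size
    b = b0 + np.sum(np.abs(y - TH_x) ** 2)
    return rng.gamma(shape=a, scale=1.0 / b)    # NumPy uses scale = 1/rate

def sample_gamma_k(xk_fourier, lam_k, a0=1e-3, b0=1e-3):
    # gamma_k ~ Gamma(a0 + P, b0 + sum_p lambda_p(theta_k) |x_k,p|^2)  (slide 31)
    a = a0 + xk_fourier.size
    b = b0 + np.sum(lam_k * np.abs(xk_fourier) ** 2)
    return rng.gamma(shape=a, scale=1.0 / b)
```
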
  32. Sampling the labels
     (Recall of the total joint distribution of slide 25.)
     Conditional probability for the set of labels:
     $\Pr[\ell \,|\, \cdot] \propto f(y \,|\, \ell, x_{1..K}, \gamma_\varepsilon)\, \Pr[\ell \,|\, \beta] \propto \exp\left[-\gamma_\varepsilon \left\|y - T H \sum_k S_k(\ell)\, x_k\right\|^2\right] \exp[\beta\nu(\ell)]$
     Conditional categorical probability for one label $\ell_p$:
     $\pi[\ell_p = k \,|\, \cdot] \propto \exp\left[-\gamma_\varepsilon |y_p - \cdots|^2 + \beta N_{p,k}\right]$ for an observed pixel, $\exp[\beta N_{p,k}]$ for an unobserved pixel
     Joint structure (no-convolution case): conditional independence, parallel sampling (two subsets: ebony and ivory)
     (A code sketch of this per-pixel draw follows below.)
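
A sketch of the per-pixel categorical draw of slide 32 in the no-convolution case. For brevity it sweeps the pixels sequentially instead of using the parallel ebony / ivory (checkerboard) scheme mentioned on the slide; names and the label coding in {0, ..., K-1} are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def sample_labels_no_convolution(y, textures, labels, gamma_eps, beta, observed):
    # textures: list of K images x_k; observed: boolean mask of measured pixels.
    K = len(textures)
    H, W = labels.shape
    for p in np.ndindex(H, W):
        log_prob = np.zeros(K)
        for k in range(K):
            # N_{p,k}: number of 4-neighbours of pixel p currently in class k
            n_pk = sum(labels[q] == k
                       for q in ((p[0]-1, p[1]), (p[0]+1, p[1]),
                                 (p[0], p[1]-1), (p[0], p[1]+1))
                       if 0 <= q[0] < H and 0 <= q[1] < W)
            log_prob[k] = beta * n_pk
            if observed[p]:
                log_prob[k] -= gamma_eps * np.abs(y[p] - textures[k][p]) ** 2
        prob = np.exp(log_prob - log_prob.max())    # stabilize before normalizing
        labels[p] = rng.choice(K, p=prob / prob.sum())
    return labels
```
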
  33. Sampling the textured images $x_k$ (1)
     (Recall of the total joint distribution of slide 25.)
     Conditional density for the textured image $x_k$:
     $\pi(x_k \,|\, \cdot) \propto f(y \,|\, \ell, x_{1..K}, \gamma_\varepsilon)\, f(x_k \,|\, \theta_k, \gamma_k) \propto \exp\left[-\gamma_\varepsilon \|\bar{y}_k - T H S_k(\ell)\, x_k\|^2\right] \exp\left[-\gamma_k\, x_k^\dagger P_x(\theta_k)\, x_k\right]$
     Gaussian distribution:
     $C_k^{-1} = \gamma_\varepsilon S_k(\ell) H^\dagger T^t T H S_k(\ell) + \gamma_k P_x(\theta_k)$
     $m_k = \gamma_\varepsilon C_k S_k(\ell) H^\dagger T^t \bar{y}_k$, with $\bar{y}_k = y - T H \sum_{k' \neq k} S_{k'}(\ell)\, x_{k'}$
  34. Sampling the textured images $x_k$ (2)
     Gaussian distribution: $C_k^{-1} = \gamma_\varepsilon S_k(\ell) H^\dagger T^t T H S_k(\ell) + \gamma_k P_x(\theta_k)$, $m_k = \gamma_\varepsilon C_k S_k(\ell) H^\dagger T^t \bar{y}_k$
     Standard approaches: covariance factorization $C = LL^t$; precision factorization $C^{-1} = LL^t$; diagonalization $C = P\Delta P^t$ and $C^{-1} = P\Delta^{-1}P^t$; parallel Gibbs sampling
     Large dimension: linear system solvers; optimization: quadratic criterion minimization
     Perturbation – Optimization:
     1. P: produce an adequately perturbed criterion
     2. O: minimize the perturbed criterion
     (A code sketch of the Perturbation-Optimization step follows below.)
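
A dense-matrix sketch of the Perturbation-Optimization principle of slide 34 for the Gaussian conditional of slide 33, with A standing for T H S_k(ℓ) and real-valued matrices for simplicity; the actual algorithm [a] solves the optimization step iteratively (e.g., by conjugate gradient) in high dimension, so this is only an illustration of the principle.

```python
import numpy as np

def perturbation_optimization_sample(A, P_prec, y_bar, gamma_eps, gamma_k,
                                     rng=np.random.default_rng()):
    # Draw one sample of the Gaussian with
    #   C^{-1} = gamma_eps A^t A + gamma_k P,   m = gamma_eps C A^t y_bar,
    # by perturbing the data and the prior, then minimizing the perturbed criterion.
    M, N = A.shape
    # 1 - Perturbation: noisy data and a zero-mean prior perturbation
    y_pert = y_bar + rng.standard_normal(M) / np.sqrt(gamma_eps)
    L = np.linalg.cholesky(np.linalg.inv(P_prec))        # prior covariance factor
    mu_pert = L @ rng.standard_normal(N) / np.sqrt(gamma_k)
    # 2 - Optimization: minimize
    #     J(x) = gamma_eps ||y_pert - A x||^2 + gamma_k (x - mu_pert)^t P (x - mu_pert),
    # whose minimizer is exactly one sample of the target Gaussian.
    Q = gamma_eps * A.T @ A + gamma_k * P_prec           # = C^{-1}
    rhs = gamma_eps * A.T @ y_pert + gamma_k * P_prec @ mu_pert
    return np.linalg.solve(Q, rhs)
```
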
  35. Sampling the texture parameters $\theta_k$ (1)
     (Recall of the total joint distribution of slide 25.)
     Conditional density for the texture parameters $\theta_k$:
     $\pi(\theta_k \,|\, \cdot) \propto f(x_k \,|\, \theta_k, \gamma_k)\, f(\theta_k) \propto \exp\left[-\gamma_k\, x_k^\dagger P_x(\theta_k)\, x_k\right]\, \mathcal{U}_{[\theta_k^m, \theta_k^M]}(\theta_k) \propto \prod_p \lambda_p(\theta_k) \exp\left[-\gamma_k \lambda_p(\theta_k)\, |\mathring{x}_{k,p}|^2\right]\, \mathcal{U}_{[\theta_k^m, \theta_k^M]}(\theta_k)$
     Metropolis-Hastings: propose and accept or not
     Independent or not, e.g., random walk; Metropolis-adjusted Langevin algorithm; directional algorithms; gradient, Hessian, Fisher matrix, ...
  36. Sampling the texture parameters $\theta_k$ (2)
     Principle of independent Metropolis-Hastings: simulate $\theta$ under $f$ ... by simulating $\theta$ under $g$
     Iterative algorithm producing the $\theta^{(n)}$:
     Initialize; iterate, for $n = 1, 2, \ldots$:
     Simulate $\theta_p$ under the proposal law $g(\theta)$
     Compute the probability $p = \min\left(1 \,;\; \frac{f(\theta_p)}{f(\theta^{(n-1)})}\, \frac{g(\theta^{(n-1)})}{g(\theta_p)}\right)$
     Acceptance / conservation: $\theta^{(n)} = \theta_p$ accepted with probability $p$; $\theta^{(n)} = \theta^{(n-1)}$ kept with probability $1 - p$
     Choice of the proposal density: independent, random walk, ...; Langevin, Hamiltonian, ...; derivatives, Hessian, Fisher
     (A code sketch of one such step follows below.)
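
A generic sketch of one Metropolis-Hastings step as described on slide 36, written with log densities for numerical safety. Only the plain independent proposal is shown here; in the talk the proposal is refined with gradient / Fisher information (Fisher Metropolis-Hastings [b]).

```python
import numpy as np

rng = np.random.default_rng()

def metropolis_hastings_step(theta_prev, log_f, proposal_sample, proposal_logpdf):
    # log_f: (unnormalized) log target; proposal_sample() draws theta_p ~ g;
    # proposal_logpdf evaluates log g at a point.
    theta_prop = proposal_sample()
    log_ratio = (log_f(theta_prop) - log_f(theta_prev)
                 + proposal_logpdf(theta_prev) - proposal_logpdf(theta_prop))
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return theta_prop, True      # accepted
    return theta_prev, False         # previous value kept
```
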
  37. Sampling the correlation parameter $\beta$
     (Recall of the total joint distribution of slide 25.)
     Conditional density for the correlation parameter $\beta$:
     $\pi(\beta \,|\, \cdot) \propto \Pr[\ell \,|\, \beta]\, f(\beta) \propto C(\beta)^{-1}\exp[\beta\nu(\ell)]\, \mathcal{U}_{[0, B_0]}(\beta)$
     Sampling itself:
     Partition function $C(\beta)$ pre-computed (previous part)
     Conditional cdf $F(\beta)$ through numerical integration / interpolation
     Invert the cdf to generate a sample: sample $u \sim \mathcal{U}_{[0,1]}(u)$, compute $\beta = F^{-1}(u)$
     (A code sketch of this inverse-cdf draw follows below.)
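
A sketch of the inverse-cdf draw of β on slide 37, assuming the log-partition C̄(β) has been tabulated on a grid beforehand (slide 17); names are illustrative.

```python
import numpy as np

def sample_beta(nu_l, beta_grid, log_C_bar, B0=3.0, rng=np.random.default_rng()):
    # Unnormalized log conditional: beta * nu(l) - bar{C}(beta) on [0, B0].
    mask = beta_grid <= B0
    grid, log_c = beta_grid[mask], log_C_bar[mask]
    log_pdf = grid * nu_l - log_c
    pdf = np.exp(log_pdf - log_pdf.max())          # avoid overflow
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                                 # normalized conditional cdf F(beta)
    u = rng.uniform()
    return np.interp(u, cdf, grid)                 # F^{-1}(u) by interpolation
```
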
  38. Outline
     Image model: textured images, orientation; piecewise homogeneous images
     Observation system model: convolution and missing data; noise
     Hierarchical model: conditional dependencies / independencies; joint distribution
     Estimation / decision strategy and computations: cost, risk and optimality; posterior distribution and estimation; convergent computations: stochastic sampler ⊕ empirical estimates; Gibbs loop; inverse cumulative distribution function; Metropolis-Hastings
     First numerical assessment: behaviour, convergence, ...; labels, texture parameters and hyperparameters; quantification of errors
  39. Numerical illustration: problem
     A first toy example (figures: true label, true image $x$, observation $y$)
     Parameters: $P = 256 \times 256$, $K = 3$
     No convolution here
     Missing data: 20 %
     Noise level: $\gamma_\varepsilon = 10$ (standard deviation: 0.3, SNR: 10 dB)
  40. Numerical results: parameters
     Simulated chains (plots: noise parameter $\gamma_\varepsilon$, Potts parameter $\beta$)
     Quantitative assessment:
       Parameter   | γε   | β
       True value  | 10.0 | −
       Estimate    | 10.2 | 1.19
     Computation time: one minute
  41. Numerical results: classification
     (Figures: true label, estimated label, misclassification map, probability map)
     Performances: correct classification, including unobserved pixels; only about 150 misclassifications, i.e., less than 1%
     Remark: maximizers of the marginal posteriors
     Quantification of errors: probabilities (marginal), indication / warning of misclassification
  42. Numerical results: restored image
     (Figures: true $x$, observation $y$, estimated $x$)
     Performances: correct restoration of textures; correct restoration of edges (thanks to correct classification); including interpolation of missing pixels
     Quantification of errors ... ongoing work ... posterior standard deviation, credibility intervals
  43. Conclusion
     Synthesis: addressed problem: segmentation of piecewise textured images; triple difficulty: missing data + noise + convolution; including all hyperparameter estimation; Bayesian approach; optimal estimation / decision; convergent computation; numerical evaluation
     Perspectives: ongoing: inversion-segmentation (e.g., convolution, Radon, ...); non-Gaussian noise: latent variables (e.g., Cauchy), Poisson, ...; correlated, structured, textured noise; myopic problem: estimation of instrument parameters; model selection, choice of K; application to real data