Slide 1

Leveraging Data-driven Low-dimensional Signal Representations to Solve Inverse Problems
Ricardo Borsoi
CNRS, University of Lorraine, Nancy
Seminaire S3 - The Paris-Saclay Signal Seminar

Slide 2

I. Inverse problems and model-based solutions
II. Data-driven approaches
III. Blind inverse problems and application to hyperspectral unmixing
IV. Conclusions

Slide 3

I. Inverse problems and model-based solutions

Slide 4

Data acquisition
Inverse problems (IPs) consist in recovering signals of interest x from measurements y generated according to a forward model

    y = A_Θ(x),    (1)

§ A_Θ is the forward operator [Arridge et al., 2019].
§ Θ defines how the measurements are generated.
§ y may contain noise.
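As a toy illustration of the forward model (1), the sketch below simulates a linear operator A_Θ with additive Gaussian noise; all dimensions and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, M = 64, 32                     # signal dimension D, number of measurements M
x = rng.standard_normal(D)        # unknown signal of interest
A = rng.standard_normal((M, D))   # linear forward operator (Theta = entries of A)
noise = 0.01 * rng.standard_normal(M)

# Measurements generated by the forward model y = A_Theta(x) + noise.
y = A @ x + noise

print(y.shape)  # (32,) -- fewer observations than unknowns: underdetermined
```

With M < D the map from x to y loses information, which is exactly why such IPs are ill-posed and need regularization.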

Slide 5

Two categories: IPs
§ "standard" IPs, in which the parameters Θ are known:

    Standard IP: given y and Θ, recover x.

Recover the desired signals from (possibly noisy) measurements.

Slide 6

Two categories: BIPs
§ "blind" IPs, in which the parameters Θ also have to be recovered:

    Blind IP: given y, recover both x and Θ.

Recover a physically meaningful reduced representation of the data.

Slide 7

Examples of IPs
§ Image deconvolution [Asim et al., 2020]
§ Compressed sensing [Candes et al., 2006]

Slide 8

Examples of blind IPs
§ Hyperspectral unmixing [Borsoi et al., 2021]
§ Spectrum cartography [Shrestha et al., 2022]

Slide 9

Are IPs easy to solve?
The IPs we encounter are typically ill-posed:
§ The solution is not unique.
§ The solution is unstable to small changes in the data y.

Slide 10

How are IPs solved? The classical way
Regularization is used to obtain a stable and accurate solution by penalizing "less desirable" solutions:

    min_x ||y - A_Θ(x)||_2^2 + R(x),    (2)

where the first term enforces data fidelity and R(x) is the regularization.
Probabilistic interpretation:

    max_x log p(y | x; Θ) + log p(x),    (3)

where log p(y | x; Θ) is the likelihood and log p(x) the prior.

Slide 11

The role of the regularization
How can we design the regularization R(x) to better exploit prior information about the solutions?
§ Classical regularization approaches were mainly concerned with the stability of the solution.
§ Modern approaches use it to favor more desirable solutions.
§ Recent work exploits data-driven approaches.

Slide 12

Tikhonov regularization
Tikhonov regularization penalizes solutions that have a large norm, with a weight λ:

    x* = arg min_x ||y - A_Θ(x)||_2^2 + λ||x||_2^2.    (4)

§ The solution x* is stable as y is perturbed.
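For a linear forward operator A, problem (4) has the well-known closed-form solution x* = (AᵀA + λI)⁻¹Aᵀy. A minimal numpy sketch, where the operator, data, and λ values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 32, 64
A = rng.standard_normal((M, D))    # linear forward operator
x_true = rng.standard_normal(D)
y = A @ x_true + 0.01 * rng.standard_normal(M)

lam = 0.1  # regularization weight lambda

# Closed-form Tikhonov solution: x* = (A^T A + lam*I)^{-1} A^T y.
# Note A^T A alone is singular here (M < D); the lam*I term fixes that.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(D), A.T @ y)

# A heavier weight shrinks the solution norm further,
# trading data fidelity for stability.
x_heavy = np.linalg.solve(A.T @ A + 10.0 * np.eye(D), A.T @ y)
print(np.linalg.norm(x_star), np.linalg.norm(x_heavy))
```

The norm of the solution decreases monotonically as λ grows, which is the stability-versus-fidelity trade-off the slide describes.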

Slide 13

Leveraging sparsity
Many natural signals (e.g., images) are sparse when represented in an appropriate basis: penalize solutions that are dense.

    min_x ||y - A_Θ(x)||_2^2 + λ||x||_sp.    (5)

§ ||x||_sp is a sparsity-promoting penalty, such as the ℓ0 pseudo-norm or its convex ℓ1 relaxation.
§ Sufficiently sparse x can be recovered for suitable operators A_Θ and sufficiently many measurements y [Candes et al., 2006].
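With the ℓ1 penalty and a linear operator, problem (5) is commonly solved by proximal gradient descent (ISTA). A minimal sketch; the dimensions, sparsity level, λ, and iteration count are all illustrative choices, not values from the talk:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
M, D, K = 40, 100, 5
A = rng.standard_normal((M, D)) / np.sqrt(M)   # Gaussian sensing matrix
x_true = np.zeros(D)
x_true[rng.choice(D, K, replace=False)] = rng.standard_normal(K)  # K-sparse
y = A @ x_true                                  # noiseless measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient

x = np.zeros(D)
for _ in range(1000):
    grad = A.T @ (A @ x - y)                    # gradient of the data-fidelity term
    x = soft_threshold(x - step * grad, step * lam)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Even though M < D, the K-sparse signal is recovered accurately, which is the compressed-sensing phenomenon cited on the slide.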

Slide 14

II. Data-driven approaches

Slide 15

Machine learning (ML) for inverse problems
Recent work has explored ML to obtain data-driven solutions to IPs:
§ State of the art in many applications (image denoising, super-resolution, ...).
§ Learn a function f(y) ≈ x from training examples.

Slide 16

Many DL-based solutions have limitations
§ It is hard to understand when such models can recover the solution.
§ If not carefully designed, a lack of stability can lead to "realistic" artifacts with non-negligible probability [Antun et al., 2020].

Slide 17

Promising approaches
Recent approaches have explored solutions that exploit DL models as regularizations:
§ Plug & Play [Venkatakrishnan et al., 2013]: apply pretrained denoisers to other IPs.
§ Deep image prior [Ulyanov et al., 2018]: use untrained neural nets as priors.
§ Deep generative models [Bora et al., 2017]: the prior is explicitly modeled.

Slide 18

Deep Generative Models (DGMs)
How can we estimate the PDF p(x) of a random variable x ∈ R^D from a set of realizations?
§ Define a latent random variable z ∈ R^d with a known PDF p(z) (e.g., standard Gaussian) in a low-dimensional space (d ≪ D).
§ Learn a function g : z ↦ x̂ ∈ R^D such that the PDF of x̂ = g(z) is close to p(x).
Examples of DGMs:
§ Variational autoencoders
§ Generative adversarial networks
§ Diffusion models
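The sampling pipeline z ~ p(z), x̂ = g(z) can be sketched as follows. The two-layer "decoder" g below is purely illustrative (random weights, not a trained model); only the structure matters here:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 100             # latent dimension d << data dimension D

# Toy decoder g: z -> x_hat, a random two-layer network with ReLU.
W1 = rng.standard_normal((64, d))
W2 = rng.standard_normal((D, 64))

def g(z):
    return W2 @ np.maximum(W1 @ z, 0.0)   # x_hat = W2 ReLU(W1 z)

# Sampling from the model: draw z from the known prior, then decode.
z = rng.standard_normal(d)    # z ~ N(0, I_d)
x_hat = g(z)                  # a sample in the high-dimensional space

print(x_hat.shape)
```

Training (by variational inference, adversarial losses, or score matching, depending on the DGM family) is what makes the distribution of g(z) match p(x); it is omitted here.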

Slide 19

Leveraging DGMs to solve IPs
Main idea: constrain the solution to the range of a (pretrained) DGM as x = g(z).
Find the latent representation z that best fits the data [Bora et al., 2017]:

    min_z ||y - A_Θ(g(z))||_2^2.    (6)

§ Mitigates ill-posedness by reducing the size of the solution space.
§ Can address different measurement models A_Θ at test time.

Slide 20

Solving the optimization problem
Using projected gradient descent [Shah and Hegde, 2018]:

    x* = arg min_{x ∈ range(g)} ||y - A_Θ(x)||_2^2    (7)

§ Take a gradient step on the cost function:

    x̃ = x^(i) - η ∇_x ||y - A_Θ(x)||_2^2 |_{x = x^(i)}    (8)

§ Then project the result onto the range of the DGM:

    x^(i+1) = g( arg min_{z ∈ R^d} ||x̃ - g(z)||_2^2 )    (9)

Other algorithms have also been proposed, e.g., plain gradient descent [Hand and Voroninski, 2019] or Langevin dynamics [Nguyen et al., 2022].
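The two steps (8)-(9) can be sketched in a few lines. To keep the projection (9) in closed form, a toy *linear* generator g(z) = Gz stands in for a trained DGM; G, A, the step size, and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, M = 50, 4, 20
# Toy linear generator g(z) = G z, used in place of a trained DGM so that
# the projection onto range(g) is a small least-squares problem.
G = rng.standard_normal((D, d))
A = rng.standard_normal((M, D)) / np.sqrt(M)   # Gaussian measurement matrix

z_true = rng.standard_normal(d)
x_true = G @ z_true               # the true signal lies in range(g)
y = A @ x_true                    # noiseless measurements, M < D

eta = 0.5 / np.linalg.norm(A, 2) ** 2   # step size: 2*eta*||A||_2^2 = 1 (stable)
x = np.zeros(D)
for _ in range(1000):
    # Gradient step on ||y - A x||^2, cf. (8).
    x_tilde = x - eta * 2 * A.T @ (A @ x - y)
    # Projection onto range(g), cf. (9): fit z by least squares, then decode.
    z, *_ = np.linalg.lstsq(G, x_tilde, rcond=None)
    x = G @ z

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With a deep nonlinear g, the projection step itself becomes a non-convex problem and is typically handled by inner gradient iterations, which is precisely the "inverting DGMs" question discussed later.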

Slide 21

A Bayesian formulation is also possible
DGMs can also be used in a probabilistic formulation [Holden et al., 2022]:

    max_x log p(y | x; Θ) + log p(x),    (10)

where the likelihood is log p(y | x; Θ) and the prior p(x) is (implicitly) defined by the DGM. However, computing p(x) from the DGM g(·) can be difficult.

Slide 22

Recovery guarantees for CS
Strong theoretical guarantees have been obtained for compressive sensing in [Bora et al., 2017]:

    Assume g is L-Lipschitz, A_Θ is a Gaussian random matrix, and the dimension of y is on the order of K log(rL/δ). Then, with high probability,

    ||g(ẑ) - x_true||_2 ≤ 6 ( min_{||z|| ≤ r} ||g(z) - x_true|| + ||noise|| + δ ).

Slide 23

Performance
The method performs well in experiments compared to classical compressive sensing, e.g., on the MNIST dataset [Bora et al., 2017].

Slide 24

Example: image super-resolution
The PULSE algorithm [Menon et al., 2020] searches the range of a DGM for a high-resolution image x that matches the low-resolution one (y) when degraded.

Slide 25

Challenges
This approach is not without challenges:
§ Theoretical recovery results for more general forward operators?
§ Lack of training data in many applications (a source of bias).
(Figure: low-resolution input, ground truth, and reconstruction [Dahl et al., 2017].)

Slide 26

III. Blind inverse problems and application to hyperspectral unmixing

Slide 27

BIPs: source separation
A BIP of interest is source separation, e.g., [Comon and Jutten, 2010]:

    y_n = Σ_{k=1}^{K} a_{k,n} x_{k,n},  n = 1, ..., N.    (11)

Recover the sources x_{k,n} and the mixing coefficients Θ = {a_{k,n}} from the set of N measurements {y_n}.
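A minimal simulation of the mixing model (11), where each sample n has its own source realizations; all sizes and distributions below are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 100, 50     # K sources, N measurements, L entries per measurement

# Hypothetical ground truth: per-sample source signals and mixing weights.
x = rng.random((K, N, L))                 # sources x_{k,n} (vary with n)
a = rng.dirichlet(np.ones(K), size=N).T   # coefficients a_{k,n} >= 0 that sum
                                          # to one over k (abundance-like)

# Forward model (11): each measurement is a weighted sum of its K sources.
y = np.einsum('kn,knl->nl', a, x)

print(y.shape)
```

The blind problem is to undo this sum: given only y, recover both the weights a and the sources x, which is far more underdetermined than a standard IP.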

Slide 28

Hyperspectral imaging
§ High spectral resolution: easy to identify materials (hundreds of wavelengths).
§ Low spatial resolution: different materials become mixed within some pixels.

Slide 29

Hyperspectral unmixing
This problem can be overcome using hyperspectral unmixing.
§ Pixels can be expressed as y_n = Σ_{k=1}^{K} a_{k,n} x_{k,n}, n = 1, ..., N.
§ Separate the n-th pixel y_n into:
    § pure material spectra x_{k,n};
    § abundance proportions a_{k,n}.

Slide 30

Challenges
§ The variability of the material spectra x_{k,n} across pixels can be significant.
(Figures: samples of Alunite from the USGS library; real sample signatures of the "red painted roof" material.)

Slide 31

Inter-sample variability of the sources
The general problem includes inter-sample variability of the sources:

    y_n = Σ_{k=1}^{K} a_{k,n} x_{k,n},  n = 1, ..., N.

§ This makes the problem highly underdetermined.

Slide 32

DGMs to solve BIPs
Idea: parametrize each source x_{k,n}, k = 1, ..., K, by one DGM g_k as x_{k,n} = g_k(z_{k,n}) [Borsoi et al., 2019]:

    y_n = Σ_{k=1}^{K} a_{k,n} g_k(z_{k,n}),  n = 1, ..., N.    (12)

Slide 33

DGMs to solve BIPs
§ Reduces the number of unknowns to be recovered.
§ Can address inter-sample source variability.
§ Hyperspectral unmixing is a key application [Borsoi et al., 2019, Shi et al., 2021], but it has also been used in other problems [Shrestha et al., 2022].

Slide 34

How to train the DGMs
§ Finding data to train the DGMs g_1(·), ..., g_K(·) for the many source separation problems of interest is challenging.
§ We consider a self-supervised approach: extract training samples from the image itself.

Slide 35

Extracting training data from the measurements
§ Many pixels are composed of a single material, indexed by k' ∈ {1, ..., K}. For such a pixel n', the model y_{n'} = Σ_{k=1}^{K} a_{k,n'} x_{k,n'} particularizes to y_{n'} = x_{k',n'}.

Slide 36

Extracting training data from the measurements
We exploit this property to extract different samples of each material and use them to train the DGMs.

Slide 37

A two-step procedure
1. Extract training samples from the measured image and train one DGM g_k for each material.
2. Solve an optimization problem to perform unmixing, using a block coordinate descent approach.

Slide 38

Solving the inverse problem
Given the pretrained generative models g_k, a block coordinate descent-based optimization can be used:

    min_{a_{k,n}, z_{k,n}} Σ_{n=1}^{N} || y_n - Σ_{k=1}^{K} a_{k,n} g_k(z_{k,n}) ||_2^2 + R_a({a_{k,n}}) + R_z({z_{k,n}}),

where R_a({a_{k,n}}) and R_z({z_{k,n}}) denote regularizations.
§ Fixing z_{k,n}, minimize w.r.t. a_{k,n}.
§ Fixing a_{k,n}, minimize w.r.t. z_{k,n}.
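The alternating scheme above can be sketched with toy *linear* generators g_k(z) = G_k z, for which both subproblems reduce to small least-squares solves. This is only a structural sketch: the regularizations R_a and R_z are omitted, and all sizes, weights, and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L, d = 2, 30, 20, 3
G = [rng.standard_normal((L, d)) for _ in range(K)]   # toy linear generators

# Synthetic data from the model y_n = sum_k a_{k,n} g_k(z_{k,n}).
z_true = rng.standard_normal((K, N, d))
a_true = rng.random((K, N)) + 0.1
Y = np.stack([sum(a_true[k, n] * G[k] @ z_true[k, n] for k in range(K))
              for n in range(N)])                     # N x L measurements

a = np.ones((K, N))                 # initial mixing coefficients
z = rng.standard_normal((K, N, d))  # initial latent codes

for _ in range(50):
    for n in range(N):
        # a-step: fix z, solve the linear least-squares problem in a_{.,n}.
        X = np.stack([G[k] @ z[k, n] for k in range(K)], axis=1)  # L x K
        a[:, n], *_ = np.linalg.lstsq(X, Y[n], rcond=None)
        # z-step: fix a, update each z_{k,n} by fitting the residual left
        # by the other sources (closed form for a linear generator).
        for k in range(K):
            r = Y[n] - sum(a[j, n] * G[j] @ z[j, n]
                           for j in range(K) if j != k)
            z[k, n], *_ = np.linalg.lstsq(a[k, n] * G[k], r, rcond=None)

recon = np.stack([sum(a[k, n] * G[k] @ z[k, n] for k in range(K))
                  for n in range(N)])
print(np.linalg.norm(recon - Y) / np.linalg.norm(Y))
```

Each block update solves its subproblem exactly, so the data-fit term decreases monotonically; with nonlinear DGMs the z-step instead uses descent iterations, as the next slides detail.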

Slide 39

Solving the inverse problem w.r.t. a_{k,n}
§ Fixing z_{k,n} = z_{k,n}^(i) from the previous iteration, minimize w.r.t. a_{k,n}:

    {a_{k,n}^(i+1)} = arg min_{a_{k,n}} Σ_{n=1}^{N} || y_n - Σ_{k=1}^{K} a_{k,n} x_{k,n}^(i) ||_2^2 + R_a({a_{k,n}}),    (13)

where x_{k,n}^(i) = g_k(z_{k,n}^(i)). This subproblem becomes straightforward for typical choices of R_a (nonnegativity, sparsity, total variation, etc.).

Slide 40

Solving the inverse problem w.r.t. z_{k,n}
§ Fixing a_{k,n} = a_{k,n}^(i+1) from the previous iteration, minimize w.r.t. z_{k,n}, sequentially over k and n. Ignoring the regularization R_z for simplicity:

    z_{k,n}^(i) = arg min_{z_{k,n}} || y_n - Σ_{k=1}^{K} a_{k,n} g_k(z_{k,n}) ||_2^2
                = arg min_{z_{k,n}} || ỹ_n - a_{k,n} g_k(z_{k,n}) ||_2^2,    (14)

where ỹ_n = y_n - Σ_{k'≠k} a_{k',n} g_{k'}(z_{k',n}). This is (up to the scaling a_{k,n}) the projection of ỹ_n onto the range of g_k. Descent algorithms can be used (gradient descent or quasi-Newton methods).

Slide 41

Inverting DGMs
The loss function of z*_{k,n} = arg min_{z_{k,n}} || ỹ_n - g_k(z_{k,n}) ||_2 can have a favorable optimization landscape, as shown in [Hand and Voroninski, 2019]:

    Assume ỹ_n belongs to the range of g_k, and that g_k is a q-layer neural network with ReLU activations whose weight matrices have zero-mean, i.i.d. Gaussian entries. Then, with high probability, the only stationary points of the loss function are z*_{k,n} and -ρ z*_{k,n} for a constant ρ > 0.

This result requires very strong assumptions on the parameters of g_k.

Slide 42

A few results
We consider an example with an image containing Vegetation, Water, and Soil as materials.

Slide 43

A few results
§ The abundances and spectra of each material can be well estimated.

Slide 44

IV. Conclusions

Slide 45

Discussion and conclusions
§ Approaches leveraging DGMs are interesting due to:
    § their close connection to model-based approaches;
    § their flexibility to deal with different forward operators.
§ Theoretical guarantees have been obtained under some hypotheses.
§ Good experimental performance has been demonstrated in "standard" and "blind" IPs.
§ The self-supervised approach for learning the DGMs in hyperspectral unmixing mitigates bias in the training dataset.

Slide 46

Remaining challenges
§ Self-supervised approaches: can they be theoretically understood?
§ Small or non-representative training datasets: how can possible biases be addressed in more general IPs?
§ Can we theoretically show that DGMs improve recoverability in BIPs?

Thank you for your time!

Slide 47

References
Vegard Antun, Francesco Renna, Clarice Poon, Ben Adcock, and Anders C. Hansen. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proceedings of the National Academy of Sciences, 117(48):30088–30095, 2020.
Simon Arridge, Peter Maass, Ozan Öktem, and Carola-Bibiane Schönlieb. Solving inverse problems using data-driven models. Acta Numerica, 28:1–174, 2019.
Muhammad Asim, Fahad Shamshad, and Ali Ahmed. Blind image deconvolution using deep generative priors. IEEE Transactions on Computational Imaging, 6:1493–1506, 2020.
Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. In International Conference on Machine Learning, pages 537–546. PMLR, 2017.
Ricardo Augusto Borsoi, Tales Imbiriba, and José Carlos Moreira Bermudez. Deep generative endmember modeling: An application to unsupervised spectral unmixing. IEEE Transactions on Computational Imaging, 6:374–384, 2019.
Ricardo Augusto Borsoi, Tales Imbiriba, José Carlos Moreira Bermudez, Cédric Richard, Jocelyn Chanussot, Lucas Drumetz, Jean-Yves Tourneret, Alina Zare, and Christian Jutten. Spectral variability in hyperspectral data unmixing: A comprehensive review. IEEE Geoscience and Remote Sensing Magazine, 9(4):223–270, 2021.
Emmanuel J. Candes, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
Pierre Comon and Christian Jutten. Handbook of Blind Source Separation: Independent Component Analysis and Applications. Academic Press, 2010.
Ryan Dahl, Mohammad Norouzi, and Jonathon Shlens. Pixel recursive super resolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 5439–5448, 2017.
Paul Hand and Vladislav Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. IEEE Transactions on Information Theory, 66(1):401–418, 2019.

Slide 48

Matthew Holden, Marcelo Pereyra, and Konstantinos C. Zygalakis. Bayesian imaging with data-driven priors encoded by neural networks. SIAM Journal on Imaging Sciences, 15(2):892–924, 2022.
Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, and Cynthia Rudin. PULSE: Self-supervised photo upsampling via latent space exploration of generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2437–2445, 2020.
Thanh V. Nguyen, Gauri Jagatap, and Chinmay Hegde. Provable compressed sensing with generative priors via Langevin dynamics. IEEE Transactions on Information Theory, 68(11):7410–7422, 2022.
Viraj Shah and Chinmay Hegde. Solving linear inverse problems using GAN priors: An algorithm with provable guarantees. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4609–4613. IEEE, 2018.
Shuaikai Shi, Min Zhao, Lijun Zhang, Yoann Altmann, and Jie Chen. Probabilistic generative model for hyperspectral unmixing accounting for endmember variability. IEEE Transactions on Geoscience and Remote Sensing, 60:1–15, 2021.
Sagar Shrestha, Xiao Fu, and Mingyi Hong. Deep spectrum cartography: Completing radio map tensors using learned neural models. IEEE Transactions on Signal Processing, 70:1170–1184, 2022.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446–9454, 2018.
Singanallur V. Venkatakrishnan, Charles A. Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In Proc. IEEE Global Conference on Signal and Information Processing, pages 945–948. IEEE, 2013.