Slide 1

Slide 1 text

S3 SEMINAR
HYPERSPECTRAL DATA FUSION AND SOURCE SEPARATION FOR X-RAY ASTROPHYSICS
Julia Lascar, Jérôme Bobin, Fabio Acero
Julia Lascar | [email protected] | https://github.com/JMLascar/

Slide 2

Slide 2 text

OUTLINE
• Context: X-ray astrophysics
• Source separation with spectral variabilities
• Hyperspectral fusion
• Future work and perspectives

Slide 3

Slide 3 text

X-RAY ASTROPHYSICS

Slide 4

Slide 4 text

CONTEXT
• Supernova remnants: stars that exploded hundreds of years ago
• Type Ia (thermonuclear explosion of a white dwarf in a binary), e.g. Tycho
• Type II (core collapse), e.g. Cassiopeia A

Slide 5

Slide 5 text

CONTEXT
• Hyperspectral images: 2 spatial dimensions + 1 energy dimension
• For each pixel, a spectrum
• Entangled physical components
[Figure: data cube with two spatial axes and one spectral axis]
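In code, such a data cube is simply a 3-D array, and indexing one spatial position returns its spectrum. A minimal sketch (sizes are hypothetical, not those of a real instrument):

```python
import numpy as np

# Hypothetical sizes: 128x128 pixels, 500 energy channels.
cube = np.zeros((128, 128, 500))  # axes: (x, y, energy)

# "For each pixel, a spectrum": indexing a spatial position
# returns the 1-D energy spectrum recorded at that pixel.
spectrum = cube[42, 17, :]
print(spectrum.shape)  # (500,)
```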

Slide 6

Slide 6 text

CHALLENGES IN X-RAY ASTROPHYSICS
• Poisson noise
• Low signal-to-noise ratio, noise variabilities
• High spectral variability
• Non-analytical physical model
→ Need tools that account for these specific challenges

Slide 7

Slide 7 text

SUSHI: SEMI-BLIND UNMIXING WITH SPARSITY FOR HYPERSPECTRAL IMAGES
SOURCE SEPARATION WITH SPECTRAL VARIABILITIES

Slide 8

Slide 8 text

UNMIXING: A NON-STATIONARY MODEL
• Stationary model: Data = Spectrum_1 × Image_1 + Spectrum_2 × Image_2 + …
• A more realistic model includes spatial variability: Data = Cube_1 + Cube_2 + …
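The two models can be written as plain array operations; a minimal numerical sketch (all arrays are synthetic, shapes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, ne, n_comp = 8, 8, 32, 2

# Stationary model: one fixed spectrum per component, scaled by an
# amplitude image -> Data = Spectrum_1 x Image_1 + Spectrum_2 x Image_2.
spectra = rng.random((n_comp, ne))         # S_c[e]
images = rng.random((n_comp, nx, ny))      # A_c[x, y]
data_stationary = np.einsum("cxy,ce->xye", images, spectra)

# Non-stationary model: each component is a full cube whose spectrum
# changes from pixel to pixel -> Data = Cube_1 + Cube_2.
cubes = rng.random((n_comp, nx, ny, ne))   # S_c[x, y, e]
data_nonstationary = cubes.sum(axis=0)

print(data_stationary.shape, data_nonstationary.shape)
```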

Slide 9

Slide 9 text

SUSHI: AN OVERVIEW
• Plugs in a learnt model for spectral variation
• Applies a spatial regularisation on the parameter maps of the learnt model
• Obtains, for each component, a cube that varies spatially and spectrally
[Figure: SUSHI output = Component 1 + Component 2 + …]

Slide 10

Slide 10 text

SUSHI: OVERVIEW
(A, θ) = argmin_{A,θ} [Data fidelity(A, Learnt model(θ)) + Spatial reg(θ)]
where A is the amplitude and θ the latent parameters of the learnt model.

Slide 11

Slide 11 text

LEARNT MODEL: INTERPOLATORY AUTO-ENCODER (IAE)
• Learns to interpolate between anchor points
(Bobin, J., Gertosio, R., Thiam, C., Bobin, C., 2021)

Slide 12

Slide 12 text

LEARNT MODEL: INTERPOLATORY AUTO-ENCODER (IAE)
[Figure: spectral space → latent space → spectral space]
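The real IAE interpolates in a learnt latent space; purely as an illustration of the idea, here is a linear stand-in where the "decoder" forms a barycentric combination of anchor spectra (all names and data are hypothetical, not the actual IAE):

```python
import numpy as np

rng = np.random.default_rng(1)
n_energy, n_anchors = 64, 3
anchors = rng.random((n_anchors, n_energy))  # anchor spectra, "learnt" offline

def decode(latent_weights):
    """Toy decoder: barycentric interpolation between anchor spectra."""
    w = np.asarray(latent_weights, dtype=float)
    w = w / w.sum()               # project weights onto the simplex
    return w @ anchors

spectrum = decode([0.2, 0.5, 0.3])
print(spectrum.shape)  # (64,)
```

With a one-hot weight vector the toy decoder returns the corresponding anchor exactly; intermediate weights interpolate between anchors, which is the behaviour the IAE learns in its latent space.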

Slide 13

Slide 13 text

THE DECODER AS A GENERATIVE MODEL
• D is differentiable
• Not costly to call upon
• Latent parameters vary smoothly with physical parameters
→ Spatial regularization makes sense

Slide 14

Slide 14 text

SPATIAL REGULARIZATION OF LATENT PARAMETERS
• Undecimated isotropic wavelet transform: starlet transform (useful in astronomy)
• Minimize the L1 norm of the wavelet coefficients
• Solved by Proximal Alternating Linearized Minimization (PALM, Bolte 2014)
(A, θ) = argmin_{A,θ} [Likelihood(A, Learnt model(θ)) + Spatial reg(θ)]
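A sketch of the two ingredients, assuming a single starlet scale (the real transform uses several scales; names and the threshold value are illustrative): the detail coefficients of a parameter map are soft-thresholded, which is the proximal step enforcing the L1 penalty in PALM.

```python
import numpy as np
from scipy.ndimage import convolve

def starlet_one_scale(image):
    """One scale of the isotropic undecimated (starlet) transform:
    smooth with the separable B3-spline kernel; detail = image - smooth."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    smooth = convolve(image, h[:, None], mode="mirror")
    smooth = convolve(smooth, h[None, :], mode="mirror")
    return image - smooth, smooth

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm (the sparsity step in PALM)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
param_map = rng.normal(size=(16, 16))        # one latent-parameter map
detail, coarse = starlet_one_scale(param_map)
regularized = soft_threshold(detail, 0.5) + coarse  # sparsify details only
```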

Slide 15

Slide 15 text

SUSHI RESULTS

Slide 16

Slide 16 text

SIMULATED TOY MODEL
• Built from real images + numerical simulations of Cas A
• Thermal component: varying redshift, temperature
• Synchrotron component: constant photon index
[Figure panels: thermal amplitude, synchrotron amplitude, photon index, velocity redshift (z), temperature (keV)]

Slide 17

Slide 17 text

EXAMPLE OF SPECTRA FROM INDIVIDUAL PIXELS
• Varying levels of noise / count statistics
[Figure: flux vs energy (keV)]

Slide 18

Slide 18 text

AMPLITUDE MAP COMPARISONS
[Figure: SUSHI compared with a pixel-per-pixel 1D fit]

Slide 19

Slide 19 text

INDIVIDUAL PIXEL COMPARISONS

Slide 20

Slide 20 text

INDIVIDUAL PIXEL COMPARISONS

Slide 21

Slide 21 text


Slide 22

Slide 22 text

PARAMETER RESIDUAL HISTOGRAMS

Slide 23

Slide 23 text

SUSHI SUMMARY
• SUSHI is a method to unmix hyperspectral images with spectral variations, based on a physical model (one for each endmember).
• As a surrogate model, SUSHI uses the decoder of an Interpolatory Auto-Encoder.
• It applies a spatial sparsity regularization on the surrogate model's parameter maps.
• General framework applicable to hyperspectral images with spectral variabilities where a spectral model can be learnt.
https://github.com/JMLascar/SUSHI

Slide 24

Slide 24 text

HIFReD: HYPERSPECTRAL IMAGE FUSION VIA REGULARIZED DECONVOLUTION

Slide 25

Slide 25 text

FUSION
• XMM-Newton (1999), Chandra (1999): ✓ spatial, ✗ spectral
• XRISM (2023): ✗ spatial, ✓ spectral
• Athena X-IFU (2037): ✓ spatial, ✓ spectral

Slide 26

Slide 26 text

FUSION FORWARD MODEL
• Source cube Z observed by two instruments (XMM and XRISM)
• For each instrument i: Y_i = Poisson(R_i(A_i × (Z ⊛ spatial 2D convolution kernel ⊛ spectral 1D convolution kernel)))
where R_i is a rebinning operator and A_i the effective area.
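A numerical sketch of one instrument branch of this forward model, with Gaussian stand-ins for the responses (kernel widths, areas, and bin factors below are made up, not the instruments' real values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(3)
nx, ny, ne = 16, 16, 40
Z = rng.random((nx, ny, ne)) * 50.0        # source cube (synthetic)

def observe(Z, sigma_spatial, sigma_spectral, area, bin_factor):
    """Spatial PSF, spectral response, effective area,
    spectral rebinning R, then Poisson noise."""
    blurred = gaussian_filter(Z, sigma=(sigma_spatial, sigma_spatial, 0.0))
    blurred = gaussian_filter1d(blurred, sigma=sigma_spectral, axis=-1)
    flux = blurred * area
    rebinned = flux.reshape(nx, ny, -1, bin_factor).sum(axis=-1)
    return rng.poisson(rebinned)

# First branch: fine spatial, coarse spectral (rebinned); second: the reverse.
Y1 = observe(Z, sigma_spatial=1.0, sigma_spectral=3.0, area=1.0, bin_factor=4)
Y2 = observe(Z, sigma_spatial=3.0, sigma_spectral=0.5, area=0.8, bin_factor=1)
```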

Slide 27

Slide 27 text

ESTIMATOR
[Figure: forward operator built from the spatial response, spectral response, rebinning, and effective area]

Slide 28

Slide 28 text

REGULARIZATION
Three methods:
• L1 norm of wavelet 2D-1D coefficients
• Low-rank approximation (using PCA) with Sobolev regularization (inspired by Guilloteau 2020's work on JWST)
• Low-rank approximation with 2D wavelet regularization
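A sketch of the low-rank (PCA) step shared by the last two methods: project the pixel spectra onto a small spectral subspace, so the regularization only has to act on a few coefficient maps rather than the full cube (sizes and rank are arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny, ne, rank = 16, 16, 40, 4
cube = rng.random((nx, ny, ne))

# PCA on the pixel spectra: keep `rank` spectral principal components.
flat = cube.reshape(-1, ne)
mean = flat.mean(axis=0)
U, s, Vt = np.linalg.svd(flat - mean, full_matrices=False)
basis = Vt[:rank]                                   # (rank, ne) spectral subspace
coeff_maps = ((flat - mean) @ basis.T).reshape(nx, ny, rank)

# Sobolev or 2D-wavelet regularization would then be applied to these
# `rank` coefficient maps instead of all `ne` channels.
approx = (coeff_maps.reshape(-1, rank) @ basis + mean).reshape(nx, ny, ne)
```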

Slide 29

Slide 29 text

ALGORITHM
• Negative Poisson log-likelihood
• Most of the cost function is differentiable (except at 0)
• Non-differentiable but convex constraints
• Proximal gradient descent / ISTA
• The FFT of the kernels is calculated at the start, then saved.
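The last point can be sketched as follows: the kernel's FFT is computed once and reused at every iteration, so each gradient step only costs two transforms on the current estimate (kernel and image here are toy examples):

```python
import numpy as np

rng = np.random.default_rng(5)
image = rng.random((32, 32))
kernel = np.zeros((32, 32))
kernel[:3, :3] = 1.0 / 9.0                 # toy 3x3 box PSF

# Computed once at the start of the algorithm, then saved.
kernel_fft = np.fft.rfft2(kernel)

def convolve_cached(x):
    """Circular convolution with the precomputed kernel FFT."""
    return np.fft.irfft2(np.fft.rfft2(x) * kernel_fft, s=x.shape)

blurred = convolve_cached(image)
```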

Slide 30

Slide 30 text

FUSION RESULTS

Slide 31

Slide 31 text

SPATIAL RESPONSES
[Figure: spatial responses with σ = 1 px and σ = 3 px]

Slide 32

Slide 32 text

SPECTRAL RESPONSES
[Figure: spectral responses with σ = 21.0 eV, 2.52 eV, 51.0 eV, and 1.38 eV]

Slide 33

Slide 33 text

FOUR TOY MODELS
• Gaussian: 1 keV, 6 keV
• Gaussian with rebinning: 1 keV
• Realistic: 1 keV

Slide 34

Slide 34 text

SPECTRAL VARIABILITY OF EACH MODEL

Slide 35

Slide 35 text

GAUSSIAN TOY MODEL

Slide 36

Slide 36 text

GAUSSIAN TOY MODEL

Slide 37

Slide 37 text

GAUSSIAN WITH REBINNING TOY MODEL

Slide 38

Slide 38 text

REALISTIC TOY MODEL
(Note: version with more noise in progress)

Slide 39

Slide 39 text

GAUSSIAN (1 KEV)

Slide 40

Slide 40 text

GAUSSIAN (1 KEV)

Slide 41

Slide 41 text

GAUSSIAN (6 KEV)

Slide 42

Slide 42 text

GAUSSIAN (6 KEV)

Slide 43

Slide 43 text

GAUSSIAN (WITH REBIN, 1 KEV)

Slide 44

Slide 44 text

GAUSSIAN (WITH REBIN, 1 KEV)

Slide 45

Slide 45 text

REALISTIC (1 KEV)

Slide 46

Slide 46 text

REALISTIC (1 KEV)

Slide 47

Slide 47 text

SUMMARY
[Table: metrics for the Gaussian 1 keV, Gaussian 6 keV, Realistic 1 keV, and Gaussian with rebinning 1 keV models]
ASAM: average spectral angle map; PSNR: peak signal-to-noise ratio; acSSIM: averaged complement of the structural similarity index

Slide 48

Slide 48 text

FUSION SUMMARY
• Proposed a new algorithm for hyperspectral fusion adapted to X-ray imaging
• Studied regularization in different regimes:
• The methods perform similarly at low spectral variability
• Low rank is faster thanks to dimension reduction
• W2D1D works best at high spectral variability
• Though both can be biased
• Need a method that reduces dimension while preserving variabilities → include physical information

Slide 49

Slide 49 text

FUTURE WORK
• Fusion + source separation:
• Keeps the acceleration obtained by dimension reduction
• Includes physical information
• Performs source separation and fusion at the same time

Slide 50

Slide 50 text

CONCLUSION
• X-ray hyperspectral images are complex to analyse because of Poisson noise and high spectral variability
• SUSHI is a method to unmix hyperspectral images with spectral variations, based on a physical model (one for each endmember)
• HIFReD is a fusion method which combines two hyperspectral images to obtain the best spatial and spectral resolutions
[email protected] | https://github.com/JMLascar/SUSHI

Slide 51

Slide 51 text

EXTRA SLIDES

Slide 52

Slide 52 text

DECODER FUNCTION AS A GENERATIVE MODEL
• D is differentiable
• Not costly to call upon
• Keeps a notion of neighborhood
→ Spatial regularization makes sense

Slide 53

Slide 53 text

ALGORITHM OUTLINE
Until the stopping criterion is met, for each component C:
1. Update the latent parameters for C, keeping all else fixed:
• Gradient descent on the latent parameters
• Soft thresholding of the parameter maps in the wavelet domain
2. Update the amplitude for C, keeping all else fixed:
• Gradient descent on the amplitude
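A skeletal numerical version of this alternating loop, with placeholder gradients standing in for the real data-fidelity terms (everything below is illustrative, not the SUSHI implementation):

```python
import numpy as np

rng = np.random.default_rng(6)
n_comp, n_pix = 2, 10
amplitude = np.ones((n_comp, n_pix))
latent = rng.normal(size=(n_comp, n_pix))
latent_start = latent.copy()
step, lam, n_iter = 0.1, 1e-2, 5

def soft(x, t):
    """Soft thresholding: proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

for _ in range(n_iter):                  # stopping criterion: fixed count here
    for c in range(n_comp):
        # 1. latent parameters of component c, all else fixed
        latent[c] -= step * 0.01 * latent[c]      # placeholder gradient step
        latent[c] = soft(latent[c], lam)          # (wavelet domain in SUSHI)
        # 2. amplitude of component c, all else fixed
        amplitude[c] -= step * 0.01 * amplitude[c]  # placeholder gradient step
```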

Slide 54

Slide 54 text

CLASSIC METHOD
• Physical model fit pixel by pixel with multiple variables
• Treats pixels individually:
• Performs poorly on low signal-to-noise pixels
• Ignores correlations between pixels
• Costly to call upon
• Model is not differentiable
Improvement: spatial regularization on the parameters of a learnt spectral model

Slide 55

Slide 55 text

STATE OF THE ART
• First panchromatic/multispectral fusion, then multispectral/hyperspectral
• Subspace projection:
• Unmixing, e.g. Yokoya et al. 2012, Prévost et al. 2022
• Low-rank approximation, e.g. Simões et al. 2015
• Spatial regularization:
• TV, sparse dictionary (Wei et al. 2015), deep learning (Uezato et al. 2020)
• Astrophysics, mainly for JWST:
• Guilloteau et al. 2020: low-rank approximation with Sobolev regularization
• Pineau et al. 2023: exact solution for fusion with unmixing