
SunGAN

Mehdi
July 20, 2023

Transcript

  1. SunGAN: Towards understanding of heliophysical processes with GANs. 20 Apr 2021. Dr. Mehdi Cherti. Joint work with: Dr. Frederic Effenberger (Ruhr-University Bochum), Dr. Ruggero Vasile (GFZ-Potsdam), Dr. Jenia Jitsev (FZJ), Dr. Stefan Kesselheim (FZJ)
  2. Heliophysics + Auroras + Sunspots + Solar Wind + Magnetic Storms + Space Weather + Satellite Disturbances + Power Outages
  3. Solar Dynamics Observatory (SDO). SDO is a 3-ton satellite in geosynchronous orbit. Instruments: - Helioseismic and Magnetic Imager (HMI): magnetic activity - Extreme Ultraviolet Variability Experiment (EVE): extreme ultraviolet irradiance - Atmospheric Imaging Assembly (AIA): images in visible light, 2 UV and 7 EUV wavelengths, 4096x4096 resolution, taken every 12 seconds - Started in 2010, estimated mission end: 2030
  4. Objectives - Generative models for understanding the factors of variation in the solar data - Using (controllable) generative models to generate rare and interesting solar events, and to use them for data augmentation in forecasting
  5. Solar data - We collected a subsample of ~40K images from SDO (each raw image is ~12 MB in a FITS file) - 10 different wavelengths (channels), but we used only 193 Angstrom - Max resolution: 4096x4096, but we trained on 1024x1024 due to memory constraints - Intensity range from 0 to 16383 (see the FITS-loading sketch after the transcript)
  6. Training: architectures - Different architectures trained: StyleGAN2, StyleALAE, BigGAN - StyleGAN2 provided the best results - BigGAN consistently mode-collapsed - StyleALAE gave blurrier images than StyleGAN2 and was slower to train (progressive training) [Figure: StyleGAN2 architecture]
  7. Training: preprocessing - Contrary to natural images, the histogram of pixel intensities of solar data is very skewed - To make the histogram less skewed, we apply a log transform - We found it much easier to learn generative models on log(intensities), but it is still an open question whether we can learn from the raw data (see the preprocessing sketch after the transcript) [Figures: cumulative distribution function of natural images from ImageNet (grayscale) vs. cumulative distribution function of solar data]
  8. Solving mode collapse - Tuning the learning rate of the discriminator relative to the generator helped - More importantly, differentiable augmentation [1] successfully prevented mode collapse - Translation and cutout augmentation operations were used (see the simplified augmentation sketch after the transcript) [1] https://arxiv.org/abs/2006.10738
  9. Evaluation - While not perfect for the task, the Fréchet Inception Distance (FID) was still helpful to detect mode collapse, to check training evolution (learning curves), and to find well-performing models (see the FID sketch after the transcript) - Human evaluation is still needed, especially to make sure fine-scale details are well modeled
  10. Evaluation
      Model                    Resolution   FID
      No log transform         512x512      collapse from the beginning
      +Log transform           512x512      108
      +Tune LR                 512x512      84
      +DiffAug                 512x512      35
      +Double resolution       1024x1024    18
      +Relativistic loss [1]   1024x1024    12
      FID ranges from ~200 to 0; lower is better. [1] https://arxiv.org/abs/1807.00734
  11. Latent space control - Most techniques for latent space control need labels (e.g., a smile or gender predictor), but we do not have labels - There are recent works that deal with unsupervised latent space control - GANSpace: Discovering Interpretable GAN Controls
  12. Latent space control - GANSpace idea: apply PCA to the empirical distribution of W - No labels needed - Each PCA component can be used to modify a given W; the image is then generated and visualized (see the GANSpace-style sketch after the transcript)
  13. Next steps - Investigate more closely the semantics of the latent space components - Improve fine-scale detail using more sophisticated multi-scale architectures - Train at higher resolutions (2048x2048, 4096x4096): exploit recent DeepSpeed features such as ZeRO-Offload and model-parallel training to deal with the GPU memory bottleneck (see the configuration sketch after the transcript) - Train on more wavelengths (channels); they offer richer and complementary information
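
Code sketches

The data handling described on slide 5 (4096x4096 FITS files at 193 Angstrom, intensities from 0 to 16383, downsampled to 1024x1024) could look roughly like the following. This is a minimal sketch, assuming astropy and scikit-image are available; the HDU index and the file name are placeholders, not the actual SunGAN pipeline.

    # Minimal sketch: load one SDO/AIA 193 Angstrom FITS file and downsample it.
    import numpy as np
    from astropy.io import fits
    from skimage.transform import resize

    def load_aia_image(path, target_size=1024):
        with fits.open(path) as hdul:
            # AIA files typically store the image in a compressed extension HDU;
            # fall back to the primary HDU otherwise (an assumption, not the
            # pipeline actually used for SunGAN).
            data = hdul[1].data if len(hdul) > 1 else hdul[0].data
        data = np.asarray(data, dtype=np.float32)
        # Clip to the intensity range quoted on slide 5 (0..16383).
        data = np.clip(data, 0, 16383)
        # Downsample 4096x4096 -> 1024x1024 to fit GPU memory.
        return resize(data, (target_size, target_size), anti_aliasing=True)

    img = load_aia_image("aia_193_example.fits")  # placeholder file name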
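
Slide 7's preprocessing can be sketched as below. The slides only state that a log transform is applied; the use of log1p and the mapping to [-1, 1] are assumptions about the normalization details.

    # Sketch of a log-transform normalization for the skewed solar intensities.
    import numpy as np

    def log_normalize(img, max_intensity=16383.0):
        """Map skewed raw intensities in [0, max_intensity] to roughly [-1, 1]."""
        img = np.clip(img, 0.0, max_intensity)
        x = np.log1p(img) / np.log1p(max_intensity)  # compress dynamic range to [0, 1]
        return 2.0 * x - 1.0                         # GAN-friendly [-1, 1] range

    def inverse_log_normalize(x, max_intensity=16383.0):
        """Invert the transform to recover approximate raw intensities."""
        x = (x + 1.0) / 2.0
        return np.expm1(x * np.log1p(max_intensity))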
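
Slide 8 uses the differentiable augmentation of Zhao et al. (https://arxiv.org/abs/2006.10738) with translation and cutout. The sketch below is a simplified re-implementation of that idea, not the official DiffAugment code: the same differentiable transforms are applied to real and generated batches right before the discriminator, so gradients still flow to the generator.

    # Simplified differentiable translation + cutout augmentation (PyTorch).
    import torch
    import torch.nn.functional as F

    def rand_translate(x, ratio=0.125):
        """Shift each image by up to `ratio` of its size, zero-padding the borders."""
        b, _, h, w = x.shape
        sh, sw = int(h * ratio), int(w * ratio)
        dy = torch.randint(-sh, sh + 1, (b,)).tolist()
        dx = torch.randint(-sw, sw + 1, (b,)).tolist()
        x = F.pad(x, (sw, sw, sh, sh))
        return torch.stack([
            x[i, :, sh + dy[i]: sh + dy[i] + h, sw + dx[i]: sw + dx[i] + w]
            for i in range(b)
        ])

    def rand_cutout(x, ratio=0.5):
        """Zero out a random square patch per image; the mask keeps the op differentiable."""
        b, _, h, w = x.shape
        ch, cw = int(h * ratio), int(w * ratio)
        cy = torch.randint(0, h - ch + 1, (b,)).tolist()
        cx = torch.randint(0, w - cw + 1, (b,)).tolist()
        mask = torch.ones_like(x)
        for i in range(b):
            mask[i, :, cy[i]: cy[i] + ch, cx[i]: cx[i] + cw] = 0
        return x * mask

    def diff_augment(x):
        return rand_cutout(rand_translate(x))

    # Usage in the discriminator step (sketch):
    # d_real = D(diff_augment(real_images))
    # d_fake = D(diff_augment(G(z)))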
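
The FID values on slides 9-10 compare Inception features of real and generated images. Below is a sketch of the standard FID formula; real_feats and fake_feats are assumed to be NxD arrays of Inception-v3 pool features extracted elsewhere, which is not shown here.

    # Frechet Inception Distance between two sets of feature vectors.
    import numpy as np
    from scipy import linalg

    def fid(real_feats, fake_feats, eps=1e-6):
        mu1, mu2 = real_feats.mean(axis=0), fake_feats.mean(axis=0)
        sigma1 = np.cov(real_feats, rowvar=False)
        sigma2 = np.cov(fake_feats, rowvar=False)
        diff = mu1 - mu2
        # Matrix square root of sigma1 @ sigma2; a small jitter keeps it stable.
        eye = eps * np.eye(len(mu1))
        covmean, _ = linalg.sqrtm((sigma1 + eye) @ (sigma2 + eye), disp=False)
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
        return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))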
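
The GANSpace procedure on slide 12 (PCA on the empirical distribution of W, then edits along principal components) can be sketched as follows. G.mapping and G.synthesis mirror StyleGAN2 naming conventions but are assumptions about the model wrapper, not the SunGAN code itself.

    # GANSpace-style latent exploration: PCA on sampled W vectors.
    import torch
    from sklearn.decomposition import PCA

    @torch.no_grad()
    def fit_w_pca(G, n_samples=10000, z_dim=512, n_components=20, device="cuda"):
        z = torch.randn(n_samples, z_dim, device=device)
        w = G.mapping(z)                      # (n_samples, w_dim); assumed API
        pca = PCA(n_components=n_components)
        pca.fit(w.cpu().numpy())
        return torch.as_tensor(pca.components_, device=device, dtype=w.dtype)

    @torch.no_grad()
    def edit_along_component(G, w, components, k=0, alpha=3.0):
        """Shift w along PCA component k by alpha and decode the edited image."""
        w_edited = w + alpha * components[k]
        return G.synthesis(w_edited)          # assumed API for the generator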
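
For the higher-resolution training planned on slide 13, DeepSpeed's ZeRO-Offload moves optimizer state to CPU memory. The sketch below shows the general shape of such a configuration; the concrete values (micro-batch size, ZeRO stage, fp16) are placeholders, not settings reported in the slides.

    # Sketch of a DeepSpeed configuration enabling ZeRO-Offload.
    import json

    ds_config = {
        "train_micro_batch_size_per_gpu": 1,   # 2048x2048+ images leave room for tiny batches only
        "gradient_accumulation_steps": 8,
        "fp16": {"enabled": True},
        "zero_optimization": {
            "stage": 2,                        # partition optimizer state and gradients
            "offload_optimizer": {             # ZeRO-Offload: keep optimizer state in CPU memory
                "device": "cpu",
                "pin_memory": True,
            },
        },
    }

    with open("ds_config.json", "w") as f:
        json.dump(ds_config, f, indent=2)
    # The resulting file is then passed to the deepspeed launcher / deepspeed.initialize.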