DL4DS - Deep Learning for empirical DownScaling

Virtual poster at Climate Informatics 2022

Carlos Alberto Gómez Gonzalez, Earth Sciences department, Barcelona Supercomputing Center, Spain ([email protected])

Overview

• Empirical downscaling of gridded climate data is a task closely related to super-resolution, considering that both aim to learn a mapping between low-resolution and high-resolution gridded data.
• Most of the Deep Learning (DL)-based methods proposed so far have in common the use of convolutions.
• DL4DS is a Python library that draws from recent developments in the field of computer vision for tasks such as image-to-image translation and super-resolution.
• The mapping between the low- and high-resolution data is learned by training either a supervised or a conditional generative adversarial DL model (see Fig. 1); an illustrative sketch of such a supervised mapping is given at the end of this transcript.

Fig. 1: The general architecture of DL4DS. Training is possible in either MOS or PerfectProg fashion.
Fig. 2: Main blocks and layers of DL4DS.

CNN-based model architectures for empirical downscaling

• A wide choice of blocks can be arranged into different backbones (see Fig. 3) to model spatial and spatiotemporal samples.
• A localized convolutional block (LCB) is included in the output module to learn location-specific information (see the second sketch at the end of this transcript).

Fig. 3: Supervised DL models, as well as generators, are composed of a backbone section (examples in panels (A), (B), (C) and (D)) and an output module (E).

Experimental results

• We showcase DL4DS using Copernicus Atmosphere Monitoring Service (CAMS) reanalysis data (see Fig. 4).
• We include predictor atmospheric variables from the ECMWF ERA5 reanalysis (TAS and SfcWind) and static variables (high-resolution topography and urban fraction data).
• We showcase eight models, without the intention of a full exploration of possible architectures and learning strategies; see the table on the right for details.
• We find that turning on the LCB benefits model performance. For this task, MOS training is better than PerfectProg, and a ResNet backbone outperforms the other architectures (see Fig. 6 and the table on the right).
• DL4DS repository: https://github.com/carlos-gg/dl4ds

Fig. 4: Panel (A) shows a reference NO2 surface concentration field from the low-resolution CAMS global reanalysis. Panel (B) shows the corresponding high-resolution field from the CAMS regional reanalysis.
Fig. 5: Examples of downscaled products obtained with DL4DS, corresponding to the reference grid shown in panel (A) of Fig. 4.
Fig. 6: Pixel-wise RMSE for each model, computed for the holdout year (2018). The models are detailed in the table below.

Acknowledgements: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement H2020-MSCA-COFUND-2016-754433.
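Sketch 1: a minimal Keras illustration of the supervised low-resolution to high-resolution mapping described in the Overview, with a ResNet-style backbone, sub-pixel upsampling and a small output module. This is not the DL4DS implementation: the grid sizes, filter counts, number of blocks and upsampling factor below are placeholder assumptions, and DL4DS itself provides the configurable blocks, backbones and output modules shown in Figs. 2 and 3.

    import tensorflow as tf
    from tensorflow.keras import layers

    def residual_block(x, filters=64):
        # Two 3x3 convolutions with a skip connection (ResNet-style block)
        skip = x
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same")(x)
        return layers.Add()([skip, x])

    def build_toy_downscaler(lr_shape=(25, 25, 1), scale=5, n_blocks=8, filters=64):
        # lr_shape: low-resolution input field (H, W, channels); placeholder values
        # scale:    upsampling factor between the low- and high-resolution grids
        inputs = layers.Input(shape=lr_shape)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
        for _ in range(n_blocks):               # ResNet-style backbone
            x = residual_block(x, filters)
        # Sub-pixel (pixel-shuffle) upsampling to the high-resolution grid
        x = layers.Conv2D(filters * scale ** 2, 3, padding="same")(x)
        x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
        # Output module: project to the single target variable
        outputs = layers.Conv2D(1, 3, padding="same")(x)
        return tf.keras.Model(inputs, outputs, name="toy_downscaler")

    model = build_toy_downscaler()
    model.compile(optimizer="adam", loss="mae")
    # model.fit(lowres_fields, highres_fields, ...)  # paired low-/high-res samples

This MOS-style setup trains on paired low- and high-resolution fields; DL4DS additionally supports conditional adversarial training and PerfectProg-style setups, as noted above.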
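Sketch 2: the localized convolutional block can be thought of as a layer whose weights are not shared across grid points, so that each location learns its own correction on the high-resolution grid. The simplified stand-in below uses a 1x1 receptive field (one weight and one bias per grid point and channel); the class name and exact form of the block are assumptions for illustration, not the DL4DS code. In DL4DS the block sits in the output module.

    import tensorflow as tf
    from tensorflow.keras import layers

    class LocalizedConvBlock(layers.Layer):
        # Location-specific linear transform: every grid point gets its own
        # weight and bias instead of sharing a single convolution kernel.
        def build(self, input_shape):
            _, h, w, c = input_shape
            self.w = self.add_weight(shape=(1, h, w, c), initializer="ones", name="w")
            self.b = self.add_weight(shape=(1, h, w, c), initializer="zeros", name="b")

        def call(self, x):
            return x * self.w + self.b

    # Example: append the block to the end of a downscaler's output module
    hr_field = layers.Input(shape=(125, 125, 1))
    out = LocalizedConvBlock()(hr_field)
    demo = tf.keras.Model(hr_field, out)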