[Diagram: input X flows through successive layers (1…N) to predictions; a loss function compares the predictions with the labels Y, and the optimizer uses the loss score to update the layers]
Deep Neural Network: a succession of simple linear data transformations interleaved with simple non-linearities
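The idea above can be sketched in a few lines of NumPy: a stack of linear transformations (`x @ W + b`) interleaved with a simple non-linearity (ReLU), with a loss function scoring the predictions against the labels Y. This is a minimal illustration, not the models used later; the layer sizes and random data are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Simple non-linearity interleaved between the linear transformations.
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    # Succession of simple linear data transformations (x @ W + b),
    # each followed by a non-linearity except the final output layer.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

def mse_loss(pred, y):
    # Loss function comparing predictions with labels Y; its score is
    # what an optimizer (e.g. SGD) would minimize by updating W and b.
    return float(np.mean((pred - y) ** 2))

# Toy data: 8 samples, 4 input features, scalar target (arbitrary sizes).
X = rng.normal(size=(8, 4))
Y = rng.normal(size=(8, 1))
weights = [rng.normal(size=(4, 16)) * 0.1, rng.normal(size=(16, 1)) * 0.1]
biases = [np.zeros(16), np.zeros(1)]

loss = mse_loss(forward(X, weights, biases), Y)
```

An optimizer would repeat this forward pass, compute gradients of the loss score, and adjust the weights; only the scoring step is shown here.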
Image-to-image (I2I) translation: learning the mapping (transfer function) between an input image and an output image (or between data modalities)
• U-Net (Ronneberger et al. 2015)
• pix2pix (Isola et al. 2017) ... and other generative models proposed in the last few years
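The defining ingredient of U-Net-style I2I architectures is the encoder-decoder path with skip connections: the input field is compressed to a coarse representation and then upsampled back, while the skip connection re-injects fine-scale detail. A minimal NumPy sketch of that pattern (with pooling in place of the learned convolutions a real network would use):

```python
import numpy as np

def downsample(x):
    # 2x average pooling: halve the spatial resolution (encoder step).
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    # Nearest-neighbour 2x upsampling (decoder step).
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_like(x):
    # Encoder: compress the input field to a coarse representation.
    skip = x                    # kept for the skip connection
    coarse = downsample(x)
    # (a real U-Net applies learned convolutions at each resolution)
    # Decoder: recover the input resolution and fuse with the skip,
    # so fine-scale detail from the encoder reaches the output.
    up = upsample(coarse)
    return 0.5 * (up + skip)

field = np.arange(16.0).reshape(4, 4)   # toy 4x4 input "image"
out = unet_like(field)                  # same shape as the input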
• ERA5 variables (at 1.4 deg, 1-hourly, resampled to daily)
• E-OBS daily gridded precipitation (regridded to 1.4 deg)
• Predicting the ERA5 precipitation is a rather methodological exercise
• Data from 1979 to 2018 (~14.6 k samples)
• Implementation of various models, including deep neural networks, for learning transfer functions
• Comparison in terms of MSE and Pearson correlation
[Diagram: ERA5 variables → transfer function → E-OBS precipitation]
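The two evaluation metrics named above are straightforward to compute on gridded fields; a small NumPy sketch on synthetic data (the gamma-distributed "precipitation" and noise level are illustrative assumptions, not the study's data):

```python
import numpy as np

def mse(pred, obs):
    # Mean squared error over all samples and grid points.
    return float(np.mean((pred - obs) ** 2))

def pearson(pred, obs):
    # Pearson correlation between the flattened fields.
    p, o = pred.ravel(), obs.ravel()
    p = p - p.mean()
    o = o - o.mean()
    return float(np.sum(p * o) / np.sqrt(np.sum(p ** 2) * np.sum(o ** 2)))

rng = np.random.default_rng(1)
# Synthetic daily precipitation: 10 days on a 32x32 grid (hypothetical).
obs = rng.gamma(2.0, 1.0, size=(10, 32, 32))
# A near-perfect "prediction": observations plus small noise.
pred = obs + rng.normal(0.0, 0.1, size=obs.shape)

err = mse(pred, obs)    # small, since the noise is small
r = pearson(pred, obs)  # close to 1 for this near-perfect prediction
```

Computing both metrics matters because they are complementary: MSE penalizes amplitude errors, while the Pearson correlation only measures how well the spatial/temporal pattern is reproduced.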
Model                       MSE        Pearson r   Parameters
Random forest regression    1.45e-03   0.58        —
All-convolutional network   1.10e-03   0.72        ~320 k
U-Net                       1.04e-03   0.71        ~500 k
V-Net                       9.73e-04   0.72        ~1.4 M
• Deep neural networks (in a supervised context) yield impressive results on I2I tasks using NWP fields
• Same experiments with 0.25 deg E-OBS precipitation and ERA5 variables
• Different strategies for exploiting multiple variables more independently
• Compare current results with generative models (conditional GANs)
• Validation with external observational precipitation data
• Downscaling
  • ERA5 at 1.4 deg -> E-OBS original 0.25 deg resolution (Baño-Medina et al. 2019)
  • Use the sparse station measurements
• Forecasting
  • Use lead time to forecast future states (almost for free)
• Global precipitation data?