
OpenTalks.AI - Виктор Лемпицкий, Deep image prior: Why Convolutional Networks Are So Good at Generating Realistic Images?

OpenTalks.AI

March 01, 2018

Transcript

  1. ConvNets: how they end up being used
     • Image recognition (all kinds of tasks)
     • Speech recognition
     • Catching up with Recurrent nets in Natural Language Processing
     • Understanding positions in Atari, Go, Chess
     • Image processing
     • Image generation

  2. ConvNets: how they end up being used
     • Image recognition (all kinds of tasks)
     • Speech recognition
     • Catching up with Recurrent nets in Natural Language Processing
     • Understanding positions in Go, Chess…
     • Image processing
     • Image generation

  3. Why do generative ConvNets work? It must be because of a lot of learning (and high network capacity)! Right? Indeed:
     • ConvNets are data hungry and have millions of parameters
     • Big GANs are trained for days on tens of thousands of images (or more)
     • The training process is often tricky, with lots of caveats; it is important to get it right

  4. Image restoration: standard approach
     • x – recovered image
     • x₀ – corrupted image (observed)
     "Classical" MAP approach to inverse problems: minimize a task-specific data term plus a regularization / prior term, for tasks such as:
     • Denoising
     • Inpainting
     • Super-Resolution
     • Feature inversion

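The formulas on this slide were lost in the transcript extraction. In the notation of the Deep Image Prior paper, which this talk follows, the objective and the task-specific data terms are typically written as below; the symbols E, R, m, d and Φ are reconstructed, not copied from the slide:

```latex
% Requires amsmath. "Classical" MAP formulation:
% task-specific data term E plus prior / regularizer R.
\begin{align*}
  x^* &= \arg\min_x \; E(x; x_0) + R(x) \\
  \text{Denoising:}\quad    E(x; x_0) &= \| x - x_0 \|^2 \\
  \text{Inpainting:}\quad   E(x; x_0) &= \| (x - x_0) \odot m \|^2   && \text{($m$: mask of known pixels)} \\
  \text{Super-Res.:}\quad   E(x; x_0) &= \| d(x) - x_0 \|^2          && \text{($d$: downsampling operator)} \\
  \text{Feature inv.:}\quad E(x; x_0) &= \| \Phi(x) - \Phi(x_0) \|^2 && \text{($\Phi$: fixed feature extractor)}
\end{align*}
```
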
  5. Image restoration with Deep Image Prior
     • x – recovered image
     • x₀ – corrupted image (observed)
     Take the "classical" MAP approach to inverse problems, but replace the explicit prior term with a convolutional network f_θ (parameters θ) applied to a fixed input z: consider all images obtained from a random signal z via a convolutional network with a certain architecture. Perform the reconstruction by solving the minimization over θ (optionally: use a fixed number of iterations), i.e. "find the most likely image that can be generated by a ConvNet from z".

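Written out, again in the paper's notation rather than verbatim from the slide, the handcrafted prior R(x) disappears and the optimization moves from the image x to the network weights θ, with z held fixed and E the task-specific data term from the previous slide:

```latex
% Deep Image Prior: the architecture of f_theta acts as the prior;
% the image is never optimized directly.
\begin{align*}
  \theta^* &= \arg\min_{\theta} \; E\bigl(f_{\theta}(z);\, x_0\bigr), \\
  x^*      &= f_{\theta^*}(z).
\end{align*}
```
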
  6. Deep Image Prior step by step
     • x₀ – corrupted image (observed)
     • Initialize the fixed input z
       ◦ For example, fill it with uniform noise
     • Solve the minimization over the network parameters θ
       ◦ With your favorite gradient-based method
     • Get the solution x* = f_θ*(z)

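As a concrete illustration of these steps, here is a minimal PyTorch-style sketch, not the authors' code: `build_convnet` (and its keyword arguments) is a hypothetical placeholder for whatever encoder-decoder ConvNet you choose, since in Deep Image Prior the architecture itself plays the role of the prior, and the plain denoising data term is used as the example loss.

```python
# Minimal Deep Image Prior sketch (assumes PyTorch; `build_convnet` is hypothetical).
import torch

def deep_image_prior(x0, build_convnet, num_steps=3000, lr=0.01):
    """Restore a corrupted image x0 (tensor of shape 1 x C x H x W) by
    fitting a randomly initialized ConvNet to it for a fixed number of
    iterations."""
    # Step 1: initialize the fixed input z, e.g. with uniform noise.
    z = torch.rand(1, 32, x0.shape[2], x0.shape[3]) * 0.1

    # Step 2: a randomly initialized network f_theta; only theta is optimized,
    # z stays fixed throughout.
    net = build_convnet(in_channels=32, out_channels=x0.shape[1])
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)

    # Step 3: minimize the data term with a gradient-based method.
    # Here the denoising term E(x; x0) = ||x - x0||^2 is used;
    # for inpainting or super-resolution only this loss changes.
    for _ in range(num_steps):
        optimizer.zero_grad()
        x = net(z)
        loss = ((x - x0) ** 2).mean()
        loss.backward()
        optimizer.step()

    # Step 4: the restored image is the network output at the final iterate.
    with torch.no_grad():
        return net(z)
```

The fixed iteration count is not incidental: run the fit for too long and the network eventually reproduces the corruption as well, which is why the previous slide mentions optionally stopping after a fixed number of iterations.
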
  7. Recap
     • Deep Image Prior: a "Swiss army knife" for image processing (but a very slow one)
     • Convolutional network architectures impose natural image priors
     • Intuition: generative ConvNets capture the "hierarchical self-similarity" of natural images by design
     • Learning is important, but so is the prior
     • Outlook: further convergence with computer graphics
     • Outlook: convergence with recognition networks
     Thank you!