Super Resolution with CoreML @ try! Swift Tokyo 2018

The 'super resolution' technique converts a low-resolution image into a high-resolution one, which reduces the amount of image data that needs to be transferred. In this talk, I'd like to show you an implementation of super resolution with CoreML and Swift, and compare the results with conventional methods.


Kentaro Matsumae

March 02, 2018


  1. Super Resolution with CoreML / Kentaro Matsumae @DeNA / #tryswiftconf 2018.03.02

  2. Super Resolution (SR)

  3. Super Resolution (SR)

  4. SR Method • SRCNN • An SR method with Deep Learning technology • Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, 'Image Super-Resolution Using Deep Convolutional Networks' (2015)
  5. (image-only slide)
  6. Reduce Manga image data size with SRCNN + CoreML

  7. Demo

  8. Overview: Server resizes the original 800x1200 WebP (200 KB) down to 400x600 (50 KB). Client downloads the small image, restores it to 800x1200 with SR on CoreML, and displays it. Data size: 1/4
  9. How to get the MLModel?

  10. How to get MLModel (A) Use an open source MLModel (B) Train your own model
  11. Tried waifu2x-ios (the iOS version of Waifu2x), but…

  12. #1. Illegible Serif characters waifu2x-ios Original

  13. #2. Lost screen tone texture waifu2x-ios Original

  14. Waifu2x is trained on Anime-style images, not Manga-style ones.

  15. How to prepare MLModel file (A) Use public MLModel (B) Train your own model (with Manga images)
  16. Training

  17. Implement SRCNN
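The paper cited on slide 4 defines SRCNN as three stacked convolutions (9x9, 1x1, 5x5 kernels with 64 and 32 intermediate filters). A minimal NumPy sketch of that forward pass, with illustrative random weights rather than the talk's trained model:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'valid' 2D convolution: x is (H, W, C_in), w is (k, k, C_in, C_out)."""
    k = w.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = x[i:i + k, j:j + k, :]                 # (k, k, C_in)
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return out

def srcnn_forward(y, weights):
    """SRCNN: patch extraction -> non-linear mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = weights
    h = np.maximum(conv2d(y, w1, b1), 0)   # 9x9 conv, 64 filters, ReLU
    h = np.maximum(conv2d(h, w2, b2), 0)   # 1x1 conv, 32 filters, ReLU
    return conv2d(h, w3, b3)               # 5x5 conv, 1 output channel

rng = np.random.default_rng(0)
weights = [
    (rng.standard_normal((9, 9, 1, 64)) * 0.01, np.zeros(64)),
    (rng.standard_normal((1, 1, 64, 32)) * 0.01, np.zeros(32)),
    (rng.standard_normal((5, 5, 32, 1)) * 0.01, np.zeros(1)),
]
patch = rng.random((33, 33, 1))            # a 33x33 grayscale input patch
out = srcnn_forward(patch, weights)
print(out.shape)                           # (21, 21, 1): 'valid' convs shrink 33 -> 21
```

In practice the network is defined and trained in Keras (as the conversion slide later shows); this sketch only makes the layer structure concrete.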

  18. Training Environment • Training data: Manga image files • 340,000 patch images • AWS EC2 GPU instance (p3.2xlarge)
  19. About 24 hours later… ($25) PSNR: 45.9 dB
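The 45.9 dB on the slide is PSNR (peak signal-to-noise ratio), the standard quality metric for super resolution. As a quick illustration (not the talk's actual evaluation script), it can be computed like this:

```python
import math

def psnr(orig, recon, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    n = len(orig)
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / n
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by 16 grey levels -> MSE = 256
original = [100, 120, 140, 160]
restored = [v + 16 for v in original]
print(round(psnr(original, restored), 2))   # 24.05
```

Higher is better: the closer the reconstruction is to the original, the smaller the MSE and the larger the PSNR.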

  20. Results

  21. Waifu2x-ios Our model Original Illegible Serif characters

  22. Waifu2x-ios Our model Original Lost screen tone texture

  23. Import a trained model into your app

  24. Convert to MLModel

      from coremltools.converters.keras import convert
      model = convert('model.h5', …)
      model.save('SRCNN.mlmodel')

      SRCNN.mlmodel: 400 KB
  25. Run Super Resolution process

      let model = SRCNN()
      for patch in patches {
          let res = try! model.prediction(image: patch.buff)
          outs.append(res)
      }
  26. Performance

      Patch size | Device   | Time
      32x32      | iPhone X | 10.89 sec
      112x112    | iPhone X |  2.39 sec
      200x200    | iPhone X |  1.04 sec
      200x200    | iPhone 7 |  1.21 sec
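Slides 25-26 process the image patch by patch rather than in one pass. The tiling idea can be sketched as follows (function names are illustrative, not part of SRCNNKit; real code would also pad edges that don't divide evenly):

```python
import numpy as np

def split_into_patches(img, size):
    """Split an (H, W) image into non-overlapping size x size tiles,
    keeping each tile's offset so it can be put back in place.
    Assumes H and W are multiples of `size`."""
    H, W = img.shape
    return [((i, j), img[i:i + size, j:j + size])
            for i in range(0, H, size)
            for j in range(0, W, size)]

def reassemble(patches, shape):
    """Write each processed tile back at its original offset."""
    out = np.zeros(shape, dtype=patches[0][1].dtype)
    for (i, j), p in patches:
        out[i:i + p.shape[0], j:j + p.shape[1]] = p
    return out

img = np.arange(64, dtype=np.float32).reshape(8, 8)
tiles = split_into_patches(img, 4)
print(len(tiles))                        # 4 tiles of 4x4
# each tile would go through model.prediction(...) here (identity in this sketch)
restored = reassemble(tiles, img.shape)
print(np.array_equal(restored, img))     # True
```

The performance table shows why patch size matters: larger patches mean fewer prediction calls, so 200x200 tiles run an order of magnitude faster than 32x32 tiles on the same device.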
  27. This is useful for any type of image, not only Manga.

  28. Open Source

  29. let imageView: UIImageView = …
      let image: UIImage = …
      imageView.setSRImage(image) // Super Resolution
  30. SRCNNKit • UIImageView+SRCNN extension • SRCNNConverter (UIImage to UIImage) • Includes pre-trained model • Includes Python script to train your own model
  31. SRCNNKit • UIImageView+SRCNN extension • SRCNNConverter (UIImage to UIImage) • Includes pre-trained model • Includes Python script to train your own model — Coming Soon
  32. Recap • Reduced image file size with CoreML + SRCNN • You only need Swift skills (if you have a model) • CoreML is a good building block for apps • I feel CoreML has great potential for the future
  33. Thank You! Twitter / GitHub: @kenmaz