Slide 1

Slide 1 text

WWDC19 RECAP OF ML @kagemiku (Akira Fukunaga) / Recap of WWDC19 at Mercari

Slide 2

Slide 2 text

ABOUT ME ▸ kagemiku (Akira Fukunaga) ▸ GitHub: @kagemiku ▸ Twitter: @kagemiku_en ▸ iOS Engineer (’19 new grad) at Mercari JP ▸ First time participating in WWDC!!! (and also, my first time applying)

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

ML??

Slide 5

Slide 5 text

SESSIONS ▸ 209: What’s New in Machine Learning ▸ 704: Core ML 3 Framework ▸ 406: Create ML for Object Detection and Sound Classification ▸ 222: Understanding Images in Vision Framework ▸ 228: Creating Great Apps Using Core ML and ARKit ▸ 407: Create ML for Activity, Text, and Recommendations ▸ 232: Advances in Natural Language Framework ▸ 234: Text Recognition in Vision Framework ▸ 420: Drawing Classification and One-Shot Object Detection in Turi Create ▸ 803: Designing Great ML Experiences ▸ 614: Metal for Machine Learning

Slide 6

Slide 6 text

WHAT’S NEW IN ML ▸ Create ML ▸ Domain APIs ▸ Core ML

Slide 7

Slide 7 text

WHAT’S NEW IN ML ▸ Create ML ▸ Domain APIs ▸ Core ML

Slide 8

Slide 8 text

CREATE ML WHAT’S CREATE ML? ▸ A framework for creating ML models with Swift, introduced in Xcode 10

let data = try! MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/dataset.json"))
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)
let sentimentClassifier = try! MLTextClassifier(trainingData: trainingData,
                                                textColumn: "text",
                                                labelColumn: "label")

Slide 9

Slide 9 text

CREATE ML ▸ In Xcode 10, we could create ML models through a GUI in Playgrounds

Slide 10

Slide 10 text

CREATE ML NEW APPLICATION ▸ Now the feature has been split out into a standalone GUI app

Slide 11

Slide 11 text

CREATE ML ▸ 9 templates were described in the session ▸ Image Classifier ▸ Sound Classifier ▸ Activity Classifier ▸ Tabular Classifier ▸ and so on… ▸ But for now, only 2 templates are available in the first beta seed

Slide 12

Slide 12 text

CREATE ML ▸ 9 templates were described in the session ▸ Image Classifier ▸ Sound Classifier ▸ Activity Classifier ▸ Tabular Classifier ▸ and so on… ▸ But for now, only 2 templates are available in the first beta seed

Slide 13

Slide 13 text

CREATE ML ▸ Dataset notes ▸ Balance quantity across labels ▸ Bad: 10, 100, 1000 samples ▸ Good: 100, 100, 100 samples ▸ At least 10 samples for each label ▸ Images at least 299 × 299 px
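The dataset guidelines above feed straight into training. A minimal sketch of training an image classifier with Create ML (the dataset path and directory layout are assumptions; with labeled image folders, Create ML infers labels from directory names):

```swift
import CreateML
import Foundation

// Hypothetical dataset layout: /path/to/TrainingImages/<label>/<image files>
let datasetURL = URL(fileURLWithPath: "/path/to/TrainingImages")
let trainingData = MLImageClassifier.DataSource.labeledDirectories(at: datasetURL)

// Train an image classifier; Create ML handles feature extraction internally
let classifier = try MLImageClassifier(trainingData: trainingData)

// Export the trained model for use with Core ML in an app
try classifier.write(to: URL(fileURLWithPath: "/path/to/MyClassifier.mlmodel"))
```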

Slide 14

Slide 14 text

CREATE ML DEMO ▸ (I can’t record the screen on the macOS Catalina beta…)

Slide 15

Slide 15 text

WHAT’S NEW IN ML ▸ Create ML ▸ Domain APIs ▸ Core ML

Slide 16

Slide 16 text

DOMAIN APIS WHAT ARE DOMAIN APIS? ▸ Ready-to-use ML models provided by Apple ▸ We don’t have to collect data or build models ourselves ▸ Main frameworks ▸ Computer Vision (Vision framework) ▸ Natural Language Processing (NaturalLanguage framework)

Slide 17

Slide 17 text

DOMAIN APIS NEW DOMAIN APIS ▸ Many new APIs have appeared ▸ Image Saliency ▸ Image Similarity ▸ Sentiment Analysis ▸ Text Recognition ▸ NL Transfer Learning ▸ and so on…

Slide 18

Slide 18 text

DOMAIN APIS NEW DOMAIN APIS ▸ Many new APIs have appeared ▸ Image Saliency ▸ Image Similarity ▸ Sentiment Analysis ▸ Text Recognition ▸ NL Transfer Learning ▸ and so on…
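As a taste of one of these new APIs, Image Similarity can be sketched with Vision's feature-print request. This is a hedged sketch, not the session's exact code; `cgImageA` and `cgImageB` are assumed to be already-loaded CGImages:

```swift
import Vision

// Compute a Vision "feature print" (a compact descriptor) for an image
func featurePrint(for cgImage: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Compare two images by the distance between their feature prints
if let printA = try? featurePrint(for: cgImageA),
   let printB = try? featurePrint(for: cgImageB) {
    var distance = Float(0)
    try? printA.computeDistance(&distance, to: printB)
    // A smaller distance means more similar images
}
```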

Slide 19

Slide 19 text

DOMAIN APIS IMAGE SALIENCY ▸ Saliency: the salient points or features of something are the most important or most noticeable parts of it ▸ There are 2 types of saliency ▸ Attention based ▸ Objectness based

Slide 20

Slide 20 text

DOMAIN APIS IMAGE SALIENCY ▸ Attention based ▸ Training data: human eye movement ▸ App example: image cropping ▸ Objectness based ▸ Training data: foreground objects distinguished from the background ▸ App example: object tracking

Slide 21

Slide 21 text

DOMAIN APIS DEMO

Slide 22

Slide 22 text

DOMAIN APIS IMAGE SALIENCY ▸ code

// 1. Prepare the request and handler
let request: VNRequest = VNGenerateAttentionBasedSaliencyImageRequest()
let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])

// 2. Perform the request and get the results
try? requestHandler.perform([request])
let observation = request.results?.first as? VNSaliencyImageObservation

// 3. Do something using the results
if let salientObjects = observation?.salientObjects {
    for object in salientObjects {
        let boundingBox = object.boundingBox
        // do something
    }
}
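The `boundingBox` above is in Vision's normalized coordinate space (origin at the bottom-left, values in 0…1). A small follow-up sketch for converting it into pixel coordinates, assuming a known image size (the dimensions and box here are example values):

```swift
import Vision
import CoreGraphics

// Convert a normalized Vision bounding box into pixel coordinates for cropping
let imageWidth = 1920
let imageHeight = 1080
let normalizedBox = CGRect(x: 0.3, y: 0.4, width: 0.2, height: 0.25)
let pixelRect = VNImageRectForNormalizedRect(normalizedBox, imageWidth, imageHeight)
```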

Slide 23

Slide 23 text

DOMAIN APIS NEW DOMAIN APIS ▸ Many new APIs have appeared ▸ Image Saliency ▸ Image Similarity ▸ Sentiment Analysis ▸ Text Recognition ▸ NL Transfer Learning ▸ and so on…

Slide 24

Slide 24 text

DOMAIN APIS SENTIMENT ANALYSIS ▸ Analyzes the sentiment of text as positive or negative

Slide 25

Slide 25 text

DOMAIN APIS SENTIMENT ANALYSIS ▸ App example: ▸ Change review text color based on analysis

Slide 26

Slide 26 text

DOMAIN APIS DEMO

Slide 27

Slide 27 text

DOMAIN APIS SENTIMENT ANALYSIS ▸ code ▸ Supports 7 languages at the moment ▸ English/French/Italian/German/Spanish/Portuguese/Simplified Chinese ▸ Japanese is not supported yet

// 1. Prepare an NLTagger with the .sentimentScore scheme
let tagger = NLTagger(tagSchemes: [.sentimentScore])

// 2. Set the text you want to analyze
tagger.string = text

// 3. Get the result
let (sentiment, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
print(sentiment!.rawValue)

Slide 28

Slide 28 text

WHAT’S NEW IN ML ▸ Create ML ▸ Domain APIs ▸ Core ML

Slide 29

Slide 29 text

CORE ML WHAT’S CORE ML ▸ Multi-platform framework for ML ▸ Optimized for on-device performance ▸ Low memory footprint ▸ Low power consumption ▸ Protects security and privacy

Slide 30

Slide 30 text

CORE ML WHAT’S CORE ML ▸ Supports many model types ▸ Generalized Linear Models ▸ SVM ▸ CNN/RNN ▸ Tree Ensembles ▸ and so on…

Slide 31

Slide 31 text

CORE ML CORE ML 3 ▸ Model Flexibility ▸ Model Personalization

Slide 32

Slide 32 text

CORE ML MODEL FLEXIBILITY ▸ Supports 100+ neural network layer types

Slide 33

Slide 33 text

CORE ML MODEL FLEXIBILITY ▸ Model Gallery: you can start using these models immediately

Slide 34

Slide 34 text

CORE ML MODEL PERSONALIZATION ▸ You can fine-tune existing models on device

Slide 35

Slide 35 text

CORE ML MODEL PERSONALIZATION ▸ Supported models ▸ Neural Networks ▸ Nearest Neighbors ▸ Fine-tuning can run in a background process

Slide 36

Slide 36 text

CORE ML MODEL PERSONALIZATION ▸ App example: ▸ A user can fine-tune an existing model with their own handwriting, so the app learns to match drawings to stickers automatically

Slide 37

Slide 37 text

CORE ML MODEL PERSONALIZATION ▸ Code
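The code from this slide was not captured in the export. A minimal sketch of on-device personalization with `MLUpdateTask` might look like the following; the model URL and the feature names "drawing" and "label" are assumptions, since an updatable model defines its own input and label names:

```swift
import CoreML

// Hypothetical updatable drawing classifier and user-collected samples
let modelURL = URL(fileURLWithPath: "/path/to/UpdatableDrawingClassifier.mlmodelc")
let samples: [(drawing: MLFeatureValue, label: String)] = [] // filled with user data

// Wrap the samples in a batch provider for training
let providers: [MLFeatureProvider] = try samples.map { sample in
    try MLDictionaryFeatureProvider(dictionary: [
        "drawing": sample.drawing,                    // assumed input feature name
        "label": MLFeatureValue(string: sample.label) // assumed label feature name
    ])
}
let trainingData = MLArrayBatchProvider(array: providers)

// Run fine-tuning on device; the completion handler receives the updated model
let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                  trainingData: trainingData,
                                  configuration: nil,
                                  completionHandler: { context in
    // Persist the personalized model for future predictions
    let updatedURL = URL(fileURLWithPath: "/path/to/Personalized.mlmodelc")
    try? context.model.write(to: updatedURL)
})
updateTask.resume()
```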

Slide 38

Slide 38 text

SUMMARY ▸ Create ML ▸ A brand new app ▸ Domain APIs ▸ Significant expansion ▸ Core ML3 ▸ More flexible ▸ On-device personalization

Slide 39

Slide 39 text

REFERENCES ▸ Create ML - Apple ▸ Core ML - Apple