Push du Machine Learning dans ton app
When TensorFlow and ML Kit rule the world…
Sandra Dupre
July 23, 2018
Transcript
Push du Machine Learning dans ton app … When TensorFlow and ML Kit rule the world
Machine Learning
Machine learning?
Supervised: decision trees, logistic regression, boosting, neural networks, …
Unsupervised: clustering, k-means, …
Reinforcement: an autonomous agent able to learn from its own mistakes
A neuron: inputs 1 … n go through a linear operation, then an activation function (the "filter"), producing the output
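A minimal Kotlin sketch of that single neuron, assuming a ReLU as the "filter"; the weights, bias and activation choice below are illustrative, not from the talk:

// Illustrative single neuron: weighted sum of the inputs (the linear operation)
// followed by an activation, here ReLU as an assumed "filter".
fun neuron(inputs: FloatArray, weights: FloatArray, bias: Float): Float {
    var linear = bias
    for (i in inputs.indices) linear += inputs[i] * weights[i]
    return maxOf(0f, linear) // ReLU: negative activations are filtered out
}

// Example: neuron(floatArrayOf(0.5f, -1f, 2f), floatArrayOf(0.8f, 0.1f, 0.3f), 0.05f) ≈ 0.95f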
Convolutional neural network (with a reshape step)
TensorFlow: high-performance numerical computing toolkit; neural networks via deep learning; comes in two mobile versions; open source; made by Google Brain
Pre-trained models
Inception V3, MobileNet, Smart Reply
Inception V3 and MobileNet are trained on ImageNet
Inception V3: accuracy ++, weight -. MobileNet: accuracy +, weight ++.
→ Retrained MobileNet
Classifying images
python retrain.py \
  --image_dir monkey \
  --output_graph model/graph.pb \
  --output_labels model/label.txt \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_050_224/quantops/feature_vector/1

retrain.py: https://www.tensorflow.org/tutorials/image_retraining
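For context, retrain.py expects --image_dir to contain one sub-folder per class, each filled with example photos; the folder name becomes the label. An illustrative layout for the monkey dataset (folder and file names below are hypothetical):

monkey/
  mantled_howler/   001.jpg 002.jpg ...
  patas_monkey/     ...
  bald_uakari/      ...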
Except… the model it produces does not work. Solution? Use the retrain.py from the codelab.
python codeLab/tensorflow-for-poets-2/scripts/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=model/ \
  --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \
  --output_graph=model/graph.pb \
  --output_labels=model/label.txt \
  --architecture=mobilenet_0.50_224 \
  --image_dir=monkey

retrain.py: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
[Diagram: the retrained TF model plus label.txt, predicting a label such as "person"]
TensorFlow Mobile
TensorFlow Lite
TensorFlow Lite? A lightweight solution; uses models stored as FlatBuffers; optimized for mobile; supports a subset of TensorFlow operations; still considered a contribution to TensorFlow
Optimizations: Quantization: FLOAT32 → 8-bit (BYTE8). Freeze: prune the branches that are not needed for prediction.
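As a rough sketch of what quantization means (an illustration, not TOCO's exact algorithm): an affine mapping of floats in a known range onto 8-bit values and back. The 0..6 range matches the --default_ranges_min/--default_ranges_max flags used later.

// Illustrative affine quantization: map floats in [min, max] onto 0..255.
// TOCO derives the range from the model (or from --default_ranges_*); this is only a sketch.
fun quantize(x: Float, min: Float, max: Float): Int {
    val scale = (max - min) / 255f
    return Math.round((x - min) / scale).coerceIn(0, 255)
}

fun dequantize(q: Int, min: Float, max: Float): Float =
    min + q * ((max - min) / 255f)

// Example: quantize(3.0f, 0f, 6f) == 128; dequantize(128, 0f, 6f) ≈ 3.01f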
TOCO: TensorFlow Lite Optimizing Converter. SavedModel or frozen graph → FlatBuffer
TOCO

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --input_data_type=FLOAT
TOCO (quantized model)

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=Placeholder \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
Android integration: FlatBuffer model + labels.txt in the Android assets
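The Gradle side is not shown on the slides; a plausible setup (Kotlin DSL, version number illustrative) adds the TensorFlow Lite dependency and keeps the .tflite asset uncompressed so it can be memory-mapped:

android {
    aaptOptions {
        noCompress("tflite") // keep the FlatBuffer uncompressed in the APK
    }
}

dependencies {
    implementation("org.tensorflow:tensorflow-lite:1.13.1") // version is illustrative
}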
Image → ByteBuffer

private fun fromBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    // 4 bytes per float, 3 channels per pixel
    val imgData = ByteBuffer.allocateDirect(4 * IMG_SIZE * IMG_SIZE * 3).apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    val pixels = IntArray(IMG_SIZE * IMG_SIZE)
    Bitmap.createScaledBitmap(bitmap, IMG_SIZE, IMG_SIZE, false).apply {
        getPixels(pixels, 0, width, 0, 0, width, height)
    }
    // Extract R, G, B from each packed pixel and normalize
    pixels.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}
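The constants referenced above are not defined on the slides; plausible values for the float MobileNet 0.50/224 model produced earlier (the names come from the deck, the values are assumptions):

companion object {
    const val MODEL_NAME = "graph.tflite" // FlatBuffer copied into assets/ (assumed file name)
    const val IMG_SIZE = 224              // MobileNet 0.50/224 input resolution
    const val MEAN = 128f                 // normalization used by the tensorflow-for-poets codelab
    const val STD = 128f
}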
Interpreter

// Memory-map the model from the assets, then create the TensorFlow Lite interpreter
val fileInputStream = context.assets.openFd(MODEL_NAME).let {
    FileInputStream(it.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, it.startOffset, it.declaredLength
    )
}
val interpreter = Interpreter(fileInputStream)
val labels = context.assets.open("labels.txt").bufferedReader().readLines()
Run!

fun recognizeMonkey(bitmap: Bitmap) {
    val imgData = fromBitmapToByteBuffer(bitmap)
    val outputs = Array(1) { FloatArray(labels.size) }
    interpreter.run(imgData, outputs)
    // Pick the label with the highest score
    val monkey = labels
        .mapIndexed { index, label -> Pair(label, outputs[0][index]) }
        .sortedByDescending { it.second }
        .first()
    view?.displayMonkey(monkey.first, monkey.second * 100)
}
ML KIT
ML Kit, the toolbox: Mobile Vision + Google Cloud API + TensorFlow Lite
OCR, face detection, barcode scanning, image labeling, landmark recognition, Smart Reply
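For instance, on-device image labeling follows the same pattern as the face detector shown next; here is a sketch using the API names of the 2018 ML Kit beta (they have since been renamed), with a hypothetical displayLabels view method:

// Assumed sketch: on-device image labeling with the 2018 ML Kit beta API
val labelDetector = FirebaseVision.getInstance().visionLabelDetector

fun labelPicture(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    labelDetector.detectInImage(image)
        .addOnSuccessListener { labels ->
            // Each FirebaseVisionLabel carries a text label and a confidence score
            view.displayLabels(labels.map { it.label to it.confidence }) // hypothetical view method
        }
        .addOnFailureListener { view.displayFail() }
}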
Example: face detection

init {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
}
Example: face detection

fun recognizePicture(bitmap: Bitmap) {
    val firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap)
    detector.detectInImage(firebaseVisionImage)
        .addOnSuccessListener { faces ->
            try {
                // smilingProbability is available thanks to ALL_CLASSIFICATIONS above
                if (faces.first().smilingProbability > 0.70) {
                    view.displaySmile()
                } else {
                    view.displaySad()
                }
            } catch (e: NoSuchElementException) {
                view.displayFail() // no face detected
            }
        }
        .addOnFailureListener { view.displayFail() }
}
CUSTOM MODEL with TensorFlow Lite
ML Kit Custom: Android + iOS
Model:
- on-device
- hosted remotely
- or both!
Initialization

// labels is read first so dataOptions can use labels.size
val labels = context.assets.open("labels.txt").bufferedReader().readLines()

val dataOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, IMG_SIZE, IMG_SIZE, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, labels.size))
    .build()
Interpreter initialization: local source

val localSource = FirebaseLocalModelSource.Builder(ASSET)
    .setAssetFilePath("$MODEL_NAME.tflite")
    .build()
Interpreter initialization: cloud source

val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()

val cloudSource = FirebaseCloudModelSource.Builder(MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()
Interpreter initialization

FirebaseModelManager.getInstance().apply {
    registerLocalModelSource(localSource)
    registerCloudModelSource(cloudSource)
}

val interpreter = FirebaseModelInterpreter.getInstance(
    FirebaseModelOptions.Builder()
        .setCloudModelName(MODEL_NAME)
        .setLocalModelName(ASSET)
        .build()
)
Run!

val inputs = FirebaseModelInputs.Builder()
    .add(fromBitmapToByteBuffer(bitmap))
    .build()

interpreter?.run(inputs, dataOptions)
    ?.addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        // Pick the label with the highest score
        val monkey = labels
            .mapIndexed { index, label -> Pair(label, output[0][index]) }
            .sortedByDescending { it.second }
            .first()
        view?.displayMonkey(monkey.first, monkey.second * 100)
    }
    ?.addOnFailureListener { view?.displayError() }
But:
- Downloading the model is slow and unpredictable
- No indication of the model's download progress
- What about TensorFlow Lite bugs?
- TOCO, quantized models and other sources of confusion
- Thin documentation
- Examples that are hard to follow (and whose code is fairly messy)
- The API side is expensive
Thank you!
References:
https://firebase.google.com/docs/ml-kit/
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/
https://codelabs.developers.google.com/codelabs/mlkit-android/
Dataset: https://www.kaggle.com/slothkong/10-monkey-species/version/1
@SandraDdev @sandra.dupre