Push du Machine Learning dans ton app
When TensorFlow and ML Kit rule the world...
Sandra Dupre
July 23, 2018
Transcript
Push du Machine Learning dans ton app … When TensorFlow
and ML Kit rule the world
Machine Learning
Machine learning? Supervised: decision trees, logistic regression, boosting, neural networks… Unsupervised: clustering, k-means… Reinforcement: an autonomous agent able to learn from its own mistakes
A neuron: inputs 1…n → linear operation → "filter" (activation) function → output
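To make the slide concrete, here is a minimal Kotlin sketch of a single neuron; the ReLU activation is just one possible choice of "filter" function, not something specified on the slide:

fun relu(x: Float) = maxOf(0f, x)

// One artificial neuron: a linear operation (weighted sum + bias)
// followed by a non-linear "filter" (activation) function.
fun neuron(inputs: FloatArray, weights: FloatArray, bias: Float): Float {
    var sum = bias
    for (i in inputs.indices) sum += inputs[i] * weights[i]
    return relu(sum)
}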
Convolutional neural network — RESHAPE
TensorFlow: high-performance numerical computation toolkit. Neural networks via deep learning. Has two mobile versions. Open source. Made by Google Brain
Pre-trained models
Inception V3 MobileNet Smart Reply
Inception V3 and MobileNet are trained on ImageNet. Inception V3: higher accuracy, but heavier. MobileNet: lighter, but slightly less accurate
→ Retrained MobileNet
Classify the images
python retrain.py \
  --image_dir monkey \
  --output_graph model/graph.pb \
  --output_labels model/label.txt \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_050_224/quantops/feature_vector/1

retrain.py: https://www.tensorflow.org/tutorials/image_retraining
Except that… the model produced this way doesn't work. Solution? Use the retrain.py from the codelab
python codeLab/tensorflow-for-poets-2/scripts/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=model/ \
  --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \
  --output_graph=model/graph.pb \
  --output_labels=model/label.txt \
  --architecture=mobilenet_0.50_224 \
  --image_dir=monkey

retrain.py: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
Diagram: TF model + label.txt (example label: "person")
TensorFlow Mobile
TensorFlow Lite
TensorFlow Lite? A lightweight solution. Uses FlatBuffers models. Optimized for mobile. Supports a subset of TensorFlow operations. Still considered a contribution to TensorFlow
Optimizations: Quantization: FLOAT32 → 8-bit (BYTE8). Freeze: cut away the branches that are not needed for prediction
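As an illustration of what quantization means, here is a sketch of the usual affine FLOAT32 → 8-bit mapping (not code from the talk); the scale and zeroPoint parameters are derived per tensor from its min/max range, which is what the --default_ranges_min/max flags shown later supply when the graph has no recorded ranges:

import kotlin.math.roundToInt

// real value ≈ scale * (quantized - zeroPoint)
fun quantize(value: Float, scale: Float, zeroPoint: Int): Int =
    ((value / scale).roundToInt() + zeroPoint).coerceIn(0, 255)

fun dequantize(q: Int, scale: Float, zeroPoint: Int): Float =
    (q - zeroPoint) * scale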
TOCO: TensorFlow Lite Optimizing Converter. SavedModel or frozen graph → FlatBuffer
TOCO

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --input_data_type=FLOAT
TOCO (Quantized Model)

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=Placeholder \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
Android integration: FlatBuffer model + labels.txt in the Android assets
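The slides skip the Gradle side; the TensorFlow Lite Android docs recommend keeping the .tflite asset uncompressed so it can be memory-mapped. A minimal sketch, assuming the Kotlin DSL (the exact block syntax may differ between Android Gradle plugin versions):

android {
    aaptOptions {
        // keep the FlatBuffer model uncompressed so it can be memory-mapped at runtime
        noCompress("tflite")
    }
}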
Image → ByteBuffer

private fun fromBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    // 4 bytes per float, 3 channels (RGB)
    val imgData = ByteBuffer.allocateDirect(4 * IMG_SIZE * IMG_SIZE * 3).apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    val pixels = IntArray(IMG_SIZE * IMG_SIZE)
    Bitmap.createScaledBitmap(bitmap, IMG_SIZE, IMG_SIZE, false).apply {
        getPixels(pixels, 0, width, 0, 0, width, height)
    }
    // Normalize each RGB channel and write it as a float
    pixels.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}
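The constants used above (IMG_SIZE, MEAN, STD) are not shown on the slides; plausible values for a float MobileNet *_224 model would be (the exact normalization values are an assumption):

companion object {
    const val IMG_SIZE = 224   // MobileNet *_224 variants expect 224x224 input
    const val MEAN = 127.5f    // assumed normalization to roughly [-1, 1]
    const val STD = 127.5f
}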
Interpreter

// Memory-map the .tflite model from the assets folder
val fileInputStream = context.assets.openFd(MODEL_NAME).let {
    FileInputStream(it.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, it.startOffset, it.declaredLength
    )
}
val interpreter = Interpreter(fileInputStream)
val labels = context.assets.open("labels.txt").bufferedReader().readLines()
Run!

fun recognizeMonkey(bitmap: Bitmap) {
    val imgData = fromBitmapToByteBuffer(bitmap)
    // One row per image in the batch, one float per label
    val outputs = Array(1) { FloatArray(labels.size) }
    interpreter.run(imgData, outputs)
    val monkey = labels
        .mapIndexed { index, label -> Pair(label, outputs[0][index]) }
        .sortedByDescending { it.second }
        .first()
    view?.displayMonkey(monkey.first, monkey.second * 100)
}
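As a usage sketch (not on the slides), the classifier can be fed any Bitmap, for instance one decoded from a bundled test image (the file name here is hypothetical):

val bitmap = context.assets.open("test_monkey.jpg").use { stream ->
    BitmapFactory.decodeStream(stream)   // android.graphics.BitmapFactory
}
recognizeMonkey(bitmap)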
ML KIT
ML Kit: the toolbox. Mobile Vision + Google Cloud API + TensorFlow Lite
OCR, face detection, barcode scanning, image labeling, landmark recognition, Smart Reply
Example: Face Detection

init {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
}
Example: Face Detection

fun recognizePicture(bitmap: Bitmap) {
    val firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap)
    detector.detectInImage(firebaseVisionImage)
        .addOnSuccessListener { faces ->
            try {
                // smilingProbability is available thanks to ALL_CLASSIFICATIONS above
                if (faces.first().smilingProbability > 0.70) {
                    view.displaySmile()
                } else {
                    view.displaySad()
                }
            } catch (e: NoSuchElementException) {
                view.displayFail()
            }
        }
        .addOnFailureListener { view.displayFail() }
}
CUSTOM MODEL with TensorFlow Lite
ML Kit Custom Android + iOS
Model: local, remote, or both!
Initialization

// labels must be read first, since dataOptions uses labels.size
val labels = context.assets.open("labels.txt").bufferedReader().readLines()

val dataOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, IMG_SIZE, IMG_SIZE, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, labels.size))
    .build()
Interpreter initialization: Local Source

val localSource = FirebaseLocalModelSource.Builder(ASSET)
    .setAssetFilePath("$MODEL_NAME.tflite")
    .build()
Interpreter initialization: Cloud Source

val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()

val cloudSource = FirebaseCloudModelSource.Builder(MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()
Interpreter initialization

FirebaseModelManager.getInstance().apply {
    registerLocalModelSource(localSource)
    registerCloudModelSource(cloudSource)
}

val interpreter = FirebaseModelInterpreter.getInstance(
    FirebaseModelOptions.Builder()
        .setCloudModelName(MODEL_NAME)
        .setLocalModelName(ASSET)
        .build()
)
Run!

val inputs = FirebaseModelInputs.Builder()
    .add(fromBitmapToByteBuffer(bitmap))
    .build()

interpreter?.run(inputs, dataOptions)
    ?.addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        val label = labels
            .mapIndexed { index, label -> Pair(label, output[0][index]) }
            .sortedByDescending { it.second }
            .first()
        view?.displayMonkey(label.first, label.second * 100)
    }
    ?.addOnFailureListener { view?.displayError() }
But: downloading the model is slow and unpredictable. No indication of the model's download progress. What about TensorFlow Lite bugs? TOCO, quantized models, and other sources of confusion. Thin documentation. Hard-to-follow examples (with fairly messy code). The Cloud API side is expensive
Thank you!
References:
https://firebase.google.com/docs/ml-kit/
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/
https://codelabs.developers.google.com/codelabs/mlkit-android/
Dataset: https://www.kaggle.com/slothkong/10-monkey-species/version/1
@SandraDdev @sandra.dupre