Push du Machine Learning dans ton app
When TensorFlow and ML Kit rule the world…
Sandra Dupre
July 23, 2018
Transcript
Push du Machine Learning dans ton app … When TensorFlow and ML Kit rule the world
Machine Learning
Machine learning?
Supervised: decision trees, logistic regression, boosting, neural networks…
Unsupervised: clustering, k-means…
Reinforcement: an autonomous agent that learns from its own mistakes
A neuron: a linear operation over its inputs (input 1 … input n), followed by a filter (activation) function that produces the output.
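To make the diagram concrete, here is a minimal Kotlin sketch of a single neuron: a weighted sum plus bias followed by a ReLU as the filter function. The weights and values are made up for illustration and are not from the talk.

fun neuron(inputs: FloatArray, weights: FloatArray, bias: Float): Float {
    // Linear operation: weighted sum of the inputs plus a bias
    val linear = inputs.zip(weights) { x, w -> x * w }.sum() + bias
    // "Filter" function: ReLU, used here as an example activation
    return maxOf(0f, linear)
}

fun main() {
    // input 1 … input n → output
    println(neuron(floatArrayOf(0.5f, -1.2f, 3.0f), floatArrayOf(0.8f, 0.1f, -0.4f), 0.2f))
}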
Convolutional neural network: RESHAPE
TensorFlow: a high-performance numerical computation toolkit. Neural networks via deep learning. Comes in two mobile versions. Open source. Made by Google Brain.
Pre-trained models
Inception V3, MobileNet, Smart Reply
Inception V3 and MobileNet are trained on ImageNet.
Inception V3: accuracy ++, weight -. MobileNet: accuracy +, weight ++.
→ Retrained MobileNet
Classify images
retrain.py (https://www.tensorflow.org/tutorials/image_retraining)

python retrain.py \
  --image_dir monkey \
  --output_graph model/graph.pb \
  --output_labels model/label.txt \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_050_224/quantops/feature_vector/1
Except that… the resulting model does not work. The solution? Use the retrain.py from the codelab.
retrain.py (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/)

python codeLab/tensorflow-for-poets-2/scripts/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=model/ \
  --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \
  --output_graph=model/graph.pb \
  --output_labels=model/label.txt \
  --architecture=mobilenet_0.50_224 \
  --image_dir=monkey
[Diagram: TF model + label.txt → "person"]
TensorFlow Mobile
TensorFlow Lite
TensorFlow Lite? A lightweight solution. Uses FlatBuffers models. Optimized for mobile. Supports a subset of TensorFlow operations. Still considered a contribution to TensorFlow.
Optimizations:
Quantization: FLOAT32 → BYTE8
Freeze: prune the branches that are not needed for prediction
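To give an intuition for what quantization does, here is a small Kotlin sketch of an affine FLOAT32 → 8-bit mapping; the simple min/max scheme is an assumption for illustration, not the exact algorithm TOCO applies.

// Illustrative 8-bit quantization: map a float in [min, max] onto 0..255 and back.
fun quantize(value: Float, min: Float, max: Float): Int {
    val scale = (max - min) / 255f
    return ((value - min) / scale).toInt().coerceIn(0, 255)
}

fun dequantize(q: Int, min: Float, max: Float): Float {
    val scale = (max - min) / 255f
    return q * scale + min
}

fun main() {
    // Ranges mirror the --default_ranges_min/--default_ranges_max flags used later with TOCO
    val q = quantize(0.42f, 0f, 6f)
    println("$q -> ${dequantize(q, 0f, 6f)}") // recovers roughly 0.42, at 8-bit precision
}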
TOCO: TensorFlow Lite Optimizing COnverter. SavedModel or frozen graph → FlatBuffer.
TOCO

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --input_data_type=FLOAT
TOCO (quantized model)

bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=Placeholder \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
Android integration: the FlatBuffer model + labels.txt go into the Android assets.
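One practical detail, taken from the standard TensorFlow Lite sample setup rather than from the slides: the .tflite asset should be excluded from APK compression so it can be opened with openFd() and memory-mapped, e.g. in a Kotlin DSL build script:

// build.gradle.kts (assumed setup; the Groovy equivalent is aaptOptions { noCompress "tflite" })
android {
    aaptOptions {
        // Keep the FlatBuffer model uncompressed in the APK
        noCompress("tflite")
    }
}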
Image → ByteBuffer

private fun fromBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    // 4 bytes per float, 3 channels (RGB) per pixel
    val imgData = ByteBuffer.allocateDirect(4 * IMG_SIZE * IMG_SIZE * 3).apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    val pixels = IntArray(IMG_SIZE * IMG_SIZE)
    Bitmap.createScaledBitmap(bitmap, IMG_SIZE, IMG_SIZE, false).apply {
        getPixels(pixels, 0, width, 0, 0, width, height)
    }
    // Normalize each RGB channel with the model's MEAN / STD
    pixels.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}
Interpreter

// Memory-map the FlatBuffer model from the app assets
val fileInputStream = context.assets.openFd(MODEL_NAME).let {
    FileInputStream(it.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, it.startOffset, it.declaredLength
    )
}
val interpreter = Interpreter(fileInputStream)
val labels = context.assets.open("labels.txt").bufferedReader().readLines()
Run!

fun recognizeMonkey(bitmap: Bitmap) {
    val imgData = fromBitmapToByteBuffer(bitmap)
    val outputs = Array(1) { FloatArray(labels.size) }
    interpreter.run(imgData, outputs)
    // Keep the label with the highest score
    val monkey = labels
        .mapIndexed { index, label -> Pair(label, outputs[0][index]) }
        .sortedByDescending { it.second }
        .first()
    view?.displayMonkey(monkey.first, monkey.second * 100)
}
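As a usage sketch, here is one hypothetical way to feed recognizeMonkey from a camera capture; the request code, Activity context and presenter wiring are assumptions and do not come from the talk.

// Hypothetical caller inside an Activity: capture a photo and classify the returned thumbnail.
private val REQUEST_IMAGE_CAPTURE = 1

fun takePicture() {
    startActivityForResult(Intent(MediaStore.ACTION_IMAGE_CAPTURE), REQUEST_IMAGE_CAPTURE)
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == Activity.RESULT_OK) {
        val bitmap = data?.extras?.get("data") as Bitmap // small thumbnail returned by the camera app
        presenter.recognizeMonkey(bitmap)                // presenter exposes the recognizeMonkey() above
    }
}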
ML KIT
ML Kit: the toolbox. Mobile Vision + Google Cloud APIs + TensorFlow Lite
OCR, face detection, barcode scanning, image labeling, landmark recognition, Smart Reply
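To show how compact these built-in APIs are, here is a sketch of on-device image labeling; the detector and field names are recalled from the 2018 firebase-ml-vision API and should be double-checked against the Firebase docs.

// Sketch of ML Kit on-device image labeling (API names assumed from the 2018 ML Kit release).
fun labelPicture(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    FirebaseVision.getInstance().visionLabelDetector
        .detectInImage(image)
        .addOnSuccessListener { labels ->
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { Log.e("MLKit", "labeling failed", it) }
}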
Example: face detection

init {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
}
fun recognizePicture(bitmap: Bitmap) {
    val firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap)
    detector.detectInImage(firebaseVisionImage)
        .addOnSuccessListener { faces ->
            try {
                if (faces.first().smilingProbability > 0.70) {
                    view.displaySmile()
                } else {
                    view.displaySad()
                }
            } catch (e: NoSuchElementException) {
                view.displayFail()
            }
        }
        .addOnFailureListener { view.displayFail() }
}
CUSTOM MODEL with TensorFlow Lite
ML Kit Custom: Android + iOS
Model: local, remote, or both!
Initialization

val labels = context.assets.open("labels.txt").bufferedReader().readLines()
val dataOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, IMG_SIZE, IMG_SIZE, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, labels.size))
    .build()
Interpreter initialization: local source

val localSource = FirebaseLocalModelSource.Builder(ASSET)
    .setAssetFilePath("$MODEL_NAME.tflite")
    .build()
Interpreter initialization: cloud source

val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
val cloudSource = FirebaseCloudModelSource.Builder(MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()
Interpreter initialization

FirebaseModelManager.getInstance().apply {
    registerLocalModelSource(localSource)
    registerCloudModelSource(cloudSource)
}
val interpreter = FirebaseModelInterpreter.getInstance(
    FirebaseModelOptions.Builder()
        .setCloudModelName(MODEL_NAME)
        .setLocalModelName(ASSET)
        .build()
)
Run!

val inputs = FirebaseModelInputs.Builder()
    .add(fromBitmapToByteBuffer(bitmap))
    .build()
interpreter?.run(inputs, dataOptions)
    ?.addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        val label = labels
            .mapIndexed { index, label -> Pair(label, output[0][index]) }
            .sortedByDescending { it.second }
            .first()
        view?.displayMonkey(label.first, label.second * 100)
    }
    ?.addOnFailureListener { view?.displayError() }
But: downloading the model is slow and unpredictable. There is no indication of the model's download progress. What about TensorFlow Lite bugs? TOCO, quantized models and other sources of confusion. Thin documentation. Examples that are hard to follow (and whose code is rather messy). The cloud API side is expensive.
Thank you!
References:
https://firebase.google.com/docs/ml-kit/
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/
https://codelabs.developers.google.com/codelabs/mlkit-android/
Dataset: https://www.kaggle.com/slothkong/10-monkey-species/version/1
@SandraDdev @sandra.dupre