Push some Machine Learning into your app
Sandra Dupre
July 23, 2018
When TensorFlow and ML Kit rule the world…
Transcript
Push some Machine Learning into your app … When TensorFlow and ML Kit rule the world
Machine Learning
Machine learning? Supervised: decision trees, logistic regression, boosting, neural networks… Unsupervised: clustering, k-means… Reinforcement: an autonomous agent able to learn from its own mistakes
A neuron: a linear operation over the inputs (input 1 … input n), followed by an activation function that filters the result into the output
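
As a rough illustration (not on the slides), a single neuron computes a weighted sum of its inputs plus a bias, then passes the result through an activation function. The sketch below uses a sigmoid; the weights, bias and function names are made up for the example.

import kotlin.math.exp

// One artificial neuron: linear operation (weighted sum + bias),
// then a non-linear filter (here a sigmoid activation).
fun sigmoid(x: Double): Double = 1.0 / (1.0 + exp(-x))

fun neuron(inputs: DoubleArray, weights: DoubleArray, bias: Double): Double {
    require(inputs.size == weights.size)
    val linear = inputs.indices.sumOf { inputs[it] * weights[it] } + bias
    return sigmoid(linear)
}

fun main() {
    // Two inputs with arbitrary weights: the output is a value in (0, 1)
    println(neuron(doubleArrayOf(0.5, -1.0), doubleArrayOf(0.8, 0.2), bias = 0.1))
}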
Convolutional neural network: reshape
TensorFlow: high-performance numerical computation tools; neural networks via deep learning; comes in two mobile versions; open source; made by Google Brain
Pre-trained models
Inception V3, MobileNet, Smart Reply. Inception V3 and MobileNet are both trained on ImageNet: Inception V3 gives better accuracy but a heavier model, MobileNet trades a little accuracy for a much lighter model.
→ Retrained MobileNet
Classify images
retrain.py (https://www.tensorflow.org/tutorials/image_retraining)
python retrain.py \
  --image_dir monkey \
  --output_graph model/graph.pb \
  --output_labels model/label.txt \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_050_224/quantops/feature_vector/1
Except that… the model produced this way does not work. Solution? Use the retrain.py from the codelab
retrain.py (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/)
python codeLab/tensorflow-for-poets-2/scripts/retrain.py \
  --how_many_training_steps=500 \
  --model_dir=model/ \
  --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \
  --output_graph=model/graph.pb \
  --output_labels=model/label.txt \
  --architecture=mobilenet_0.50_224 \
  --image_dir=monkey
Result: a TF model (graph.pb) + label.txt
TensorFlow Mobile
TensorFlow Lite
TensorFlow Lite? A lightweight solution; uses models in the FlatBuffers format; optimized for mobile; supports a subset of TensorFlow operations; still considered a contribution (contrib) to TensorFlow
Optimizations: Quantization: FLOAT32 → 8-bit (UINT8); Freeze: cut the branches of the graph that are not needed for prediction
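
As a rough illustration (not from the talk), quantization maps each FLOAT32 value onto an 8-bit integer using an affine scale derived from a min/max range; that range is what the --default_ranges_min/--default_ranges_max flags supply in the quantized TOCO command further down. The function names below are made up.

import kotlin.math.roundToInt

// Illustrative affine quantization: map a float in [min, max] onto 0..255.
fun quantize(x: Float, min: Float = 0f, max: Float = 6f): Int =
    ((x - min) * 255f / (max - min)).roundToInt().coerceIn(0, 255)

// And back: recover an approximate float from the 8-bit value.
fun dequantize(q: Int, min: Float = 0f, max: Float = 6f): Float =
    min + q * (max - min) / 255f

fun main() {
    println(quantize(3.0f))   // 128
    println(dequantize(128))  // ≈ 3.01: some precision is lost, storage is divided by 4
}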
TOCO: TensorFlow Lite Optimizing COnverter. Saved Model or Frozen Graph → FlatBuffer
TOCO
bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=FLOAT \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --input_data_type=FLOAT
TOCO (quantized model)
bazel run tensorflow/contrib/lite/toco:toco -- \
  --input_file=model/graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=model/graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=Placeholder \
  --output_array=final_result \
  --default_ranges_min=0 \
  --default_ranges_max=6
Integration on Android: the FlatBuffer model + labels.txt go into the Android assets
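
A minimal sketch (not on the slides) of the Gradle setup this assumes: the TensorFlow Lite dependency, plus disabling compression of the .tflite asset so it can be memory-mapped; the version number is illustrative.

// build.gradle.kts (module), illustrative sketch
android {
    aaptOptions {
        // Keep the model uncompressed so it can be memory-mapped from assets
        noCompress("tflite")
    }
}

dependencies {
    implementation("org.tensorflow:tensorflow-lite:1.13.1")
}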
Image → ByteBuffer
private fun fromBitmapToByteBuffer(bitmap: Bitmap): ByteBuffer {
    // 4 bytes per FLOAT32 value, 3 channels (RGB) per pixel
    val imgData = ByteBuffer.allocateDirect(4 * IMG_SIZE * IMG_SIZE * 3).apply {
        order(ByteOrder.nativeOrder())
        rewind()
    }
    // Scale the bitmap to the model input size and read its pixels
    val pixels = IntArray(IMG_SIZE * IMG_SIZE)
    Bitmap.createScaledBitmap(bitmap, IMG_SIZE, IMG_SIZE, false).apply {
        getPixels(pixels, 0, width, 0, 0, width, height)
    }
    // Extract the R, G, B channels and normalize them with the MEAN / STD constants
    pixels.forEach {
        imgData.putFloat(((it shr 16 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it shr 8 and 0xFF) - MEAN) / STD)
        imgData.putFloat(((it and 0xFF) - MEAN) / STD)
    }
    return imgData
}
Interpreter
// Memory-map the .tflite model from the assets
val fileInputStream = context.assets.openFd(MODEL_NAME).let {
    FileInputStream(it.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY, it.startOffset, it.declaredLength
    )
}
val interpreter = Interpreter(fileInputStream)
val labels = context.assets.open("labels.txt").bufferedReader().readLines()
Run!
fun recognizeMonkey(bitmap: Bitmap) {
    val imgData = fromBitmapToByteBuffer(bitmap)
    // One row per image, one score per label
    val outputs = Array(1) { FloatArray(labels.size) }
    interpreter.run(imgData, outputs)
    // Keep the label with the highest score
    val monkey = labels
        .mapIndexed { index, label -> Pair(label, outputs[0][index]) }
        .sortedByDescending { it.second }
        .first()
    view?.displayMonkey(monkey.first, monkey.second * 100)
}
ML KIT
ML Kit: the toolbox. Mobile Vision + Google Cloud API + TensorFlow Lite
OCR, face detection, barcode scanning, image labeling, landmark recognition, Smart Reply
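
As a sketch of how these ready-made detectors are called (not shown in the talk), on-device image labeling follows the same pattern as the face detection example below; the class and method names assume the 2018 ML Kit for Firebase API.

// Illustrative: on-device image labeling with ML Kit for Firebase (2018-era API)
val labelDetector = FirebaseVision.getInstance().visionLabelDetector

fun labelPicture(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    labelDetector.detectInImage(image)
        .addOnSuccessListener { labels ->
            // Each result carries a text label and a confidence score
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { view.displayFail() }
}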
Example: face detection
init {
    // Enable classification so that the smiling probability is computed
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
}
Example: face detection
fun recognizePicture(bitmap: Bitmap) {
    val firebaseVisionImage = FirebaseVisionImage.fromBitmap(bitmap)
    detector.detectInImage(firebaseVisionImage)
        .addOnSuccessListener { faces ->
            try {
                // Smiling or not? Look at the first detected face
                if (faces.first().smilingProbability > 0.70) {
                    view.displaySmile()
                } else {
                    view.displaySad()
                }
            } catch (e: NoSuchElementException) {
                view.displayFail()
            }
        }
        .addOnFailureListener { view.displayFail() }
}
CUSTOM MODEL with TensorFlow Lite
ML Kit Custom: Android + iOS
Model: - local - remote - or both!
Initialization
val labels = context.assets.open("labels.txt").bufferedReader().readLines()
val dataOptions = FirebaseModelInputOutputOptions.Builder()
    // Input: one IMG_SIZE x IMG_SIZE RGB image, as FLOAT32
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, IMG_SIZE, IMG_SIZE, 3))
    // Output: one FLOAT32 score per label
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, labels.size))
    .build()
Initialization, interpreter: local source
val localSource = FirebaseLocalModelSource.Builder(ASSET)
    .setAssetFilePath("$MODEL_NAME.tflite")
    .build()
Initialization, interpreter: cloud source
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
val cloudSource = FirebaseCloudModelSource.Builder(MODEL_NAME)
    .enableModelUpdates(true)
    .setInitialDownloadConditions(conditions)
    .setUpdatesDownloadConditions(conditions)
    .build()
Initialization, interpreter
FirebaseModelManager.getInstance().apply {
    registerLocalModelSource(localSource)
    registerCloudModelSource(cloudSource)
}
val interpreter = FirebaseModelInterpreter.getInstance(
    FirebaseModelOptions.Builder()
        .setCloudModelName(MODEL_NAME)
        .setLocalModelName(ASSET)
        .build()
)
Run!
val inputs = FirebaseModelInputs.Builder()
    .add(fromBitmapToByteBuffer(bitmap))
    .build()
interpreter?.run(inputs, dataOptions)
    ?.addOnSuccessListener {
        val output = it.getOutput<Array<FloatArray>>(0)
        // Keep the label with the highest score
        val label = labels
            .mapIndexed { index, label -> Pair(label, output[0][index]) }
            .sortedByDescending { it.second }
            .first()
        view?.displayMonkey(label.first, label.second * 100)
    }
    ?.addOnFailureListener { view?.displayError() }
But:
- downloading the model is slow and unpredictable
- no indication of the model's download progress
- what about TensorFlow Lite bugs?
- TOCO, quantized models and other sources of confusion
- light documentation
- examples that are hard to follow (and whose code is fairly messy)
- the Cloud API side is expensive
Thank you! References: https://firebase.google.com/docs/ml-kit/ https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/ https://codelabs.developers.google.com/codelabs/mlkit-android/ Dataset: https://www.kaggle.com/slothkong/10-monkey-species/version/1
@SandraDdev @sandra.dupre