
Machine Learning for Android Developers with TensorFlow - Droidcon London 2018

During this session, Attila will introduce the TensorFlow framework from an Android developer's point of view. When it comes to blending AI into mobile apps, there are a couple of tools at developers' disposal. You will explore some of these available options, then walk through the steps and challenges of integrating a TensorFlow model. In the end, you'll be left with a basic understanding of the process of working with TensorFlow on Android. Hopefully, these new learnings will spark your curiosity and encourage you to experiment and implement your crazy, creative AI ideas.

Blénesi Attila

October 26, 2018

Transcript

  1. Application Developer · ML Practitioner · Data Scientist
     Firebase ML Kit · Machine Learning APIs · TensorFlow · Cloud ML
     @ablenessy | @droidconUK | #droidconUK
  3. val image = FirebaseVisionImage.fromBitmap(selectedImage)

     val detector = FirebaseVision.getInstance()
         .getVisionTextDetector()
     detector.detectInImage(image)
         .addOnSuccessListener { texts -> processTextRecognitionResult(texts) }
         .addOnFailureListener(...)

     @ablenessy | @droidconUK | #droidconUK
  4. val image = FirebaseVisionImage.fromBitmap(selectedImage)

     val options = FirebaseVisionCloudDetectorOptions.Builder()
         .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
         .setMaxResults(15)
         .build()
     val detector = FirebaseVision.getInstance()
         .getVisionCloudDocumentTextDetector(options)
     detector.detectInImage(image)
         .addOnSuccessListener { texts -> processTextRecognitionResult(texts) }
         .addOnFailureListener(...)

     @ablenessy | @droidconUK | #droidconUK
  5. generic model = generic solution

     Do I even have to dig deeper?

     @ablenessy | @droidconUK | #droidconUK
  6. Model > Training > Deployment — Cloud ML

     gcloud ml-engine jobs submit training $JOB_NAME \
       --job-dir $JOB_DIR \
       --module-name trainer.pix2pix \
       --package-path ./trainer \
       --region $REGION \
       --config=trainer/cloudml-gpu.yaml \
       -- \
       --mode train \
       --input-dir gs://$BUCKET_NAME/train
  8. Model > Training > Deployment — Cloud ML

     trainingInput:
       scaleTier: CUSTOM
       masterType: standard_gpu  # 1 GPU
       pythonVersion: "3.5"
       runtimeVersion: "1.8"
  9. Model > Training > Deployment — Cloud ML

     trainingInput:
       scaleTier: CUSTOM
       masterType: complex_model_m_gpu  # 4 GPUs
       pythonVersion: "3.5"
       runtimeVersion: "1.8"
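The submit command above leans on several shell variables; a rough sketch of how its pieces compose (all concrete values below are hypothetical placeholders, not from the talk). The key detail is the bare `--` separator: everything after it is passed through to the `trainer.pix2pix` module rather than consumed by gcloud itself.

```python
# Sketch: assembling the Cloud ML training command from its parts.
# All concrete values are hypothetical placeholders.
job_name = "pix2pix_20181026"
job_dir = "gs://my-bucket/jobs/" + job_name
region = "europe-west1"
bucket_name = "my-bucket"

gcloud_args = [  # consumed by gcloud itself
    "gcloud", "ml-engine", "jobs", "submit", "training", job_name,
    "--job-dir", job_dir,
    "--module-name", "trainer.pix2pix",
    "--package-path", "./trainer",
    "--region", region,
    "--config=trainer/cloudml-gpu.yaml",
]
trainer_args = [  # everything after "--" goes to trainer.pix2pix
    "--mode", "train",
    "--input-dir", "gs://" + bucket_name + "/train",
]
command = gcloud_args + ["--"] + trainer_args
print(" ".join(command))
```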
  12. Model > Training > Deployment

      # We start a session using a fresh Graph
      with tf.Session(graph=tf.Graph()) as sess:
          # We import the meta graph
          saver = tf.train.import_meta_graph(input_checkpoint + '.meta', …)
          saver.restore(sess, input_checkpoint)  # We restore the weights
          output_graph_def = tf.graph_util.convert_variables_to_constants(
              sess,  # The session is used to retrieve the weights
              tf.get_default_graph().as_graph_def(),  # retrieve the nodes
              output_node_names  # select the useful nodes
          )
          with tf.gfile.GFile(output_graph, "wb") as f:
              f.write(output_graph_def.SerializeToString())
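Conceptually, `convert_variables_to_constants` "freezes" the graph: each variable node is replaced by a constant holding the weight values fetched from the session, so the result is one self-contained file with no checkpoint dependency. A toy illustration of that idea with plain dictionaries — this shows the concept only, not TensorFlow's actual implementation:

```python
# Toy sketch of "freezing" a graph: variables become constants.
graph = [
    {"name": "input_node", "op": "Placeholder"},
    {"name": "weights", "op": "Variable"},
    {"name": "output_node", "op": "MatMul"},
]
session_weights = {"weights": [0.1, 0.2, 0.3]}  # hypothetical trained values

def freeze(nodes, weights):
    """Replace every Variable node with a Const node carrying its weights."""
    frozen = []
    for node in nodes:
        if node["op"] == "Variable":
            frozen.append({"name": node["name"], "op": "Const",
                           "value": weights[node["name"]]})
        else:
            frozen.append(dict(node))
    return frozen

frozen_graph = freeze(graph, session_weights)
```

After this step the "graph" no longer refers to any external variable storage, which mirrors why the frozen `.pb` file can ship inside an APK on its own.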
  16. Model > Training > Deployment

      val inferenceInterface = TensorFlowInferenceInterface(
          assetManager,
          "file:///android_asset/frozen_model.pb"
      )
      inputValues = generateInput(inputDrawing)
      inferenceInterface.feed("input_node", inputValues, inputDimensions)
      inferenceInterface.run(arrayOf("output_node"), false)
      inferenceInterface.fetch("output_node", outputValues)
      val result = generateBitmapFromOutput(outputValues)
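`generateInput` and `generateBitmapFromOutput` are helpers from the demo, not part of TensorFlow's API; typically such a helper flattens the bitmap's pixels into a float array in the value range the model was trained on. A hypothetical sketch of that preprocessing step (the [-1, 1] normalization is an assumption — models differ):

```python
def generate_input(pixels):
    """Flatten 8-bit RGB pixels into floats normalized to [-1, 1].

    `pixels` is a list of (r, g, b) tuples, one per bitmap pixel.
    The exact normalization depends on how the model was trained;
    [-1, 1] is a common convention, assumed here for illustration.
    """
    values = []
    for r, g, b in pixels:
        for channel in (r, g, b):
            values.append(channel / 127.5 - 1.0)
    return values

# One pixel: black red channel, mid green, full blue
inputs = generate_input([(0, 127, 255)])
```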
  17. Application Developer · ML Practitioner · Data Scientist
      Firebase ML Kit · Machine Learning APIs · TensorFlow · Cloud ML

      Machine learning is NOT just for experts