
Streamline your Android ML pipeline

MOSDROID
December 02, 2017


Dmitrij Tichonov @ Splyt on MOSDROID #4 – watch video
http://bit.ly/2BzFdKG

In this talk I would like to glide through a streamlined pipeline for utilising machine learning in Android. You will get insights into how to build and test your predictive models, how to transfer them to your Android application, and where you might face bottlenecks. Hopefully, by the end of this talk you will take home a clear understanding of where to start and all the nuts and bolts required to streamline the process of integrating this exciting technology.


Transcript

  1. Who am I — Deep Learning Engineer and ex-CTO at Splyt; Quant Developer at J.P. Morgan; Crypto Analyst at Barclays; Mobile Developer at Accenture; Machine Learning advisor at PapaJobs
  2. General questions that arise — Why do we even need it? How do we use it on Android? How do we connect the dots? Online vs offline mode?
  3. Offline ML use cases — speech recognition, image recognition, object detection, gesture recognition, character recognition, translation, speech synthesis, face tracking, sensitive data
  4. Offline ML nuances — Retraining is not possible on a mobile device. Requires at least 1 MB of RAM. Know the exact input and output tensors and their dimensions. If the dimensions and names are kept, you can retrain and sync the model. Housekeeping of TensorFlow versions. Specifics of how you are supposed to store your trained model.
  5. Simply put — Placeholder(…) defines inputs; Variable(…) defines parameters; Constant(…) — we will come back to these. **You can name any of them with the name=… attribute.
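As a small sketch of these three building blocks (TF 1.x graph API, as used at the time of the talk; the tensor names and values here are illustrative, and on TF 2.x the same calls live under tf.compat.v1):

```python
import tensorflow as tf

# On TF 2.x the 1.x graph API lives under tf.compat.v1
if hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

# Placeholder(...) - a graph input, fed at run time
x = tf.placeholder(tf.float32, shape=[None, 1], name="model_input")
# Variable(...) - a trainable parameter
w = tf.Variable([[2.0]], name="weight")
# Constant(...) - a fixed value baked into the graph
b = tf.constant([[1.0]], name="bias")

y = tf.add(tf.matmul(x, w), b, name="model_output")

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    out = session.run(y, feed_dict={x: [[3.0]]})
    print(out)  # 2*3 + 1 = 7
```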
  6. How does it train? By minimising loss. Loss is a distance measure between Z_pred and Z_true.
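The idea can be shown without TensorFlow at all: a minimal gradient-descent loop (plain Python, with illustrative numbers) that shrinks a mean-squared-error loss between predictions and ground truth:

```python
# Fit z_pred = w * x to data by minimising the loss
# L(w) = mean((w*x - z_true)^2), a distance measure between
# predictions (Z_pred) and ground truth (Z_true).

xs = [1.0, 2.0, 3.0]
z_true = [2.0, 4.0, 6.0]   # generated by the "true" w = 2

def loss(w):
    return sum((w * x - z) ** 2 for x, z in zip(xs, z_true)) / len(xs)

def grad(w):
    # dL/dw = mean(2 * (w*x - z) * x)
    return sum(2 * (w * x - z) * x for x, z in zip(xs, z_true)) / len(xs)

w = 0.0
for _ in range(200):
    w -= 0.05 * grad(w)   # step against the gradient

print(round(w, 3), round(loss(w), 6))
```

Each step moves w against the gradient of the loss, so the distance between Z_pred and Z_true shrinks until w converges to 2.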
  7. Graph(…) and Session(…) — A model is built on a Graph(…); a Session(…) allows tensors to flow. If not specified directly, a Session(…) will use the default Graph(…). A Graph(…) contextualises a Session(…).
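A hedged sketch (TF 1.x style; values are illustrative) of how a Graph(…) contextualises a Session(…):

```python
import tensorflow as tf
if hasattr(tf.compat, "v1"):   # TF 2.x: fall back to the 1.x graph API
    tf = tf.compat.v1
    tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():           # ops defined here belong to g
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b

# The session is explicitly bound to g; without the graph=
# argument it would use the current default graph instead.
with tf.Session(graph=g) as session:
    result = session.run(c)

print(result)
```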
  8. Saving your model — Saver(…). Saving a Variable(…) requires a session. Because of the connection between a Graph(…) and a Session(…), the model structure is saved as well.
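A minimal sketch of Saver(…) (TF 1.x API; the variable and file names are illustrative):

```python
import os
import tempfile
import tensorflow as tf
if hasattr(tf.compat, "v1"):   # TF 2.x: fall back to the 1.x graph API
    tf = tf.compat.v1
    tf.disable_eager_execution()

v = tf.Variable([1.0, 2.0], name="weights")
saver = tf.train.Saver()

# Saving Variable(...) values needs a live Session, because the
# values only exist inside a session; the graph structure is
# written alongside as a .meta file.
ckpt_dir = tempfile.mkdtemp()
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    path = saver.save(session, os.path.join(ckpt_dir, "model.ckpt"))

print(path)
```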
  9. Freezing your model — A saved Graph still has Variable(…) nodes, so if it is loaded we can train further. We need to transform all Variable(…) nodes into Constant(…) nodes; this generates a single binary trained-model file. convert_variables_to_constants(…), write_graph(…)
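A sketch of the freezing step (TF 1.x API; the tiny graph and output path here are illustrative):

```python
import os
import tempfile
import tensorflow as tf
if hasattr(tf.compat, "v1"):   # TF 2.x: fall back to the 1.x graph API
    tf = tf.compat.v1
    tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 1], name="model_input")
w = tf.Variable([[3.0]], name="weight")
y = tf.matmul(x, w, name="model_output")

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # Bake the current Variable values into Const nodes
    frozen = tf.graph_util.convert_variables_to_constants(
        session, session.graph_def, ["model_output"])

# A single binary file holding both structure and weights
out_dir = tempfile.mkdtemp()
tf.train.write_graph(frozen, out_dir, "frozen_model.pb", as_text=False)
print(sorted({n.op for n in frozen.node}))
```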
  10. Common pitfalls — checking the model's operations and its input and output tensors:

      with tf.Session(graph=tf.Graph()) as session:
          g = tf.GraphDef()
          g.ParseFromString(open('./model/model.pb', 'rb').read())

      set([n.op for n in g.node])
      # {'Relu', 'MatMul', 'Identity', 'Const', 'Add', 'Placeholder'}
      [n.name for n in g.node]
      # ['model_input', 'Variable_1', 'Relu', … , 'MatMul_1', 'Add_1', 'model_output']
  11. Using your model on Android — TensorFlowInferenceInterface:

      // Init
      TensorFlowInferenceInterface tensorflow = new TensorFlowInferenceInterface();
      tensorflow.initializeTensorFlow(getAssets(), "file:///android_asset/model.pb");
      // Input
      tensorflow.fillNodeFloat(…);
      tensorflow.runInference(…);
      // Output
      tensorflow.readNodeFloat(…);

      **Tell Proguard not to obfuscate TensorFlow