

Few steps to make your app smarter with TensorFlow Lite

Volodia Chornenkyi

January 30, 2019

Transcript

  1. Few steps to make your app smarter with TensorFlow Lite

     Volodia Chornenkyi, Android Software Engineer @TechMagic @GoodCo
  2. TensorFlow Lite vs. TensorFlow Mobile

     Lite:
     • Lightweight
     • Limited set of ops
     • Custom op support
     • Optimized work with models
     • Hardware acceleration
     • No Swift support

     Mobile:
     • Training support
     • Java & Swift APIs are not up to date
     • Android NDK
     • Bazel
     • Higher accuracy
  3. Step 1. Model

     iOS:
     cp tf_files/optimized_graph.lite ios/tflite/data/graph.lite
     cp tf_files/retrained_labels.txt ios/tflite/data/labels.txt

     Android:
     cp tf_files/optimized_graph.lite android/tflite/app/src/main/assets/graph.lite
     cp tf_files/retrained_labels.txt android/tflite/app/src/main/assets/labels.txt

     https://www.tensorflow.org/lite/
  4. Step 2.1. Setup

     iOS (Podfile):
     platform :ios, '8.0'
     inhibit_all_warnings!
     target 'tflite_photos_example'
            pod 'TensorFlowLite'

     Android (build.gradle):
     repositories {
         maven { url 'https://google.bintray.com/tensorflow' }
     }
     dependencies {
         implementation 'org.tensorflow:tensorflow-lite:+'
     }

     https://www.tensorflow.org/lite/
  5. Step 2.2. Setup

     Android (build.gradle):
     android {
         aaptOptions {
             noCompress "tflite"
             noCompress "lite"
         }
     }

     https://www.tensorflow.org/lite/
  6. Step 3. Model

     iOS:
     NSString* graph_path = [[NSBundle mainBundle] pathForResource:name
                                                           ofType:extension];
     std::unique_ptr<tflite::FlatBufferModel> model =
         tflite::FlatBufferModel::BuildFromFile([graph_path UTF8String]);

     Android:
     AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
     FileInputStream inputStream =
         new FileInputStream(fileDescriptor.getFileDescriptor());
     FileChannel fileChannel = inputStream.getChannel();
     long startOffset = fileDescriptor.getStartOffset();
     long declaredLength = fileDescriptor.getDeclaredLength();
     MappedByteBuffer model = fileChannel.map(
         FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);

     https://www.tensorflow.org/lite/
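The Android snippet memory-maps the model through an `AssetFileDescriptor`, which only works inside an APK. The same technique can be sketched with plain `java.nio` against an ordinary file, so it runs off-device too (the helper name `loadModelFile` is an assumption, not part of the TFLite API):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;

public class ModelMapper {
    // Memory-map an entire model file read-only; the asset snippet does the
    // same, but with a start offset and length window into the APK.
    static MappedByteBuffer loadModelFile(File modelFile) throws IOException {
        try (FileInputStream inputStream = new FileInputStream(modelFile);
             FileChannel channel = inputStream.getChannel()) {
            // The mapping stays valid after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo with a temporary stand-in for graph.lite.
        File f = File.createTempFile("graph", ".lite");
        Files.write(f.toPath(), new byte[] {0x54, 0x46, 0x4C, 0x33}); // "TFL3"
        MappedByteBuffer model = loadModelFile(f);
        System.out.println(model.capacity()); // 4
        f.delete();
    }
}
```

Mapping instead of reading the bytes into the heap is what keeps model loading cheap: the OS pages the file in lazily, and the `MappedByteBuffer` can be handed straight to the TFLite `Interpreter` constructor.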
  7. Step 4. Labels on iOS

     std::vector<std::string> labels;
     NSString* labels_path = [[NSBundle mainBundle] pathForResource:name
                                                             ofType:extension];
     std::ifstream t;
     t.open([labels_path UTF8String]);
     std::string line;
     while (std::getline(t, line)) {
       if (line.length()) {
         labels.push_back(line);
       }
     }
     t.close();

     https://www.tensorflow.org/lite/
  8. Step 4. Labels on Android

     List<String> labelList = new ArrayList<String>();
     BufferedReader reader = new BufferedReader(
         new InputStreamReader(activity.getAssets().open(LABEL_PATH)));
     String line;
     while ((line = reader.readLine()) != null) {
       labelList.add(line);
     }
     reader.close();

     https://www.tensorflow.org/lite/
  9. Step 6. Convert image on iOS

     float* out = interpreter->typed_input_tensor<float>(0);
     uint8_t* in = image.data.data();
     for (int y = 0; y < wanted_input_height; ++y) {
       const int in_y = (y * image.height) / wanted_input_height;
       uint8_t* in_row = in + (in_y * image.width * image.channels);
       float* out_row = out + (y * wanted_input_width * wanted_input_channels);
       for (int x = 0; x < wanted_input_width; ++x) {
         const int in_x = (x * image.width) / wanted_input_width;
         uint8_t* in_pixel = in_row + (in_x * image.channels);
         float* out_pixel = out_row + (x * wanted_input_channels);
         for (int c = 0; c < wanted_input_channels; ++c) {
           out_pixel[c] = (in_pixel[c] - input_mean) / input_std;
         }
       }
     }

     https://www.tensorflow.org/lite/
  10. Step 6. Convert image on Android

     // ByteBuffer imgData
     bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0,
         bitmap.getWidth(), bitmap.getHeight());
     int pixel = 0;
     for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
       for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
         final int val = intValues[pixel++];
         imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
         imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
         imgData.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
       }
     }

     https://www.tensorflow.org/lite/
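The slide elides how `imgData` is allocated. A minimal sketch, assuming a float32 model with a 224x224 RGB input (the `DIM_*` sizes here are illustrative stand-ins for the slide's constants): TFLite input buffers must be direct and in native byte order, or `tflite.run(...)` cannot hand them to the native interpreter without copying.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferDemo {
    // Hypothetical sizes for a 224x224 RGB float model; the slide's
    // DIM_IMG_SIZE_X / DIM_IMG_SIZE_Y constants play the same role.
    static final int DIM_IMG_SIZE_X = 224;
    static final int DIM_IMG_SIZE_Y = 224;
    static final int DIM_PIXEL_SIZE = 3;   // R, G, B channels
    static final int BYTES_PER_FLOAT = 4;  // float32 model input

    // Direct buffer in native byte order, sized for one image.
    static ByteBuffer newInputBuffer() {
        ByteBuffer imgData = ByteBuffer.allocateDirect(
                DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE * BYTES_PER_FLOAT);
        imgData.order(ByteOrder.nativeOrder());
        return imgData;
    }

    public static void main(String[] args) {
        ByteBuffer imgData = newInputBuffer();
        System.out.println(imgData.capacity()); // 602112
        System.out.println(imgData.isDirect()); // true
    }
}
```

Remember to call `imgData.rewind()` before refilling the buffer for the next frame, otherwise `putFloat` overflows on the second image.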
  11. Step 7. Run model

     iOS:
     interpreter->Invoke();
     float* output = interpreter->typed_output_tensor<float>(0);

     Android:
     tflite.run(imgData, labelProbArray);

     https://www.tensorflow.org/lite/
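After `tflite.run(imgData, labelProbArray)` returns, `labelProbArray` holds one probability per entry of the label list from Step 4, and the highest-scoring index maps back to a label. A minimal argmax sketch (the names `labelProbArray`/`labelList` and the `[1][numLabels]` shape follow the slides; `topLabel` itself is an assumed helper):

```java
import java.util.Arrays;
import java.util.List;

public class TopLabel {
    // Return the label with the highest probability. The [1][numLabels]
    // shape matches the output array passed to tflite.run(...).
    static String topLabel(float[][] labelProbArray, List<String> labelList) {
        float[] probs = labelProbArray[0];
        int best = 0;
        for (int i = 1; i < probs.length; i++) {
            if (probs[i] > probs[best]) {
                best = i;
            }
        }
        return labelList.get(best);
    }

    public static void main(String[] args) {
        // Toy output for a three-class flower model.
        List<String> labels = Arrays.asList("daisy", "rose", "tulip");
        float[][] probs = {{0.1f, 0.7f, 0.2f}};
        System.out.println(topLabel(probs, labels)); // rose
    }
}
```

The row order of `labelProbArray` matches the line order of `labels.txt`, which is why Step 4 must read the file without skipping blank lines differently on the two platforms.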
  12. Use TF Lite if:

     • A lightweight binary is important
     • A limited set of ops is fine
     • Speed matters more than accuracy
     • Hardware acceleration is a welcome bonus