
Softpia Japan Seminar 20190724

Presentation slides from the "Artificial Intelligence Seminar: Machine Learning in the Cloud, on Mobile, and at the Edge," held at Softopia Japan on July 24, 2019.

The talk covers "A Case Study of Mobile App Development Using Machine Learning Models."

Artificial Intelligence Seminar: Machine Learning in the Cloud, on Mobile, and at the Edge
https://www.softopia.or.jp/events/20190724jinzai/

ARIYAMA Keiji

July 24, 2019



Transcript

  1. Architecture diagram (Megane Co.): images are collected and labeled through a dataset management app (playground.megane.ai) backed by a dataset management server; the labeled dataset is transferred as TFRecord to a group of training servers; the trained model is then retrieved and deployed to an evaluation server, which receives uploaded images and returns classification results.
  2. Exporting the model (TensorFlow Lite)

     import os
     from tensorflow.lite import TFLiteConverter  # TensorFlow 1.x API

     def _export_graph(sess, input_tensors, output_tensors, output_dir):
         # Convert the trained graph held in `sess` into a TensorFlow Lite flat buffer.
         output_path = os.path.join(output_dir, 'model.tflite')
         converter = TFLiteConverter.from_session(sess, input_tensors, output_tensors)
         # converter.post_training_quantize = True
         tflite_model = converter.convert()
         with open(output_path, "wb") as f:
             f.write(tflite_model)
  3. Running the model (ImageRecognizer.kt, io.keiji.foodgallery)

     // TensorFlow Lite interpreter built from the bundled model and interpreter options.
     val tfInference = Interpreter(model, options)

     // Raw pixel bytes of the resized bitmap.
     val resizedImageBuffer = ByteBuffer
         .allocateDirect(IMAGE_BYTES_LENGTH)
         .order(ByteOrder.nativeOrder())

     // Model input: one float (4 bytes) per pixel byte.
     val inputBuffer = ByteBuffer
         .allocateDirect(IMAGE_BYTES_LENGTH * 4)
         .order(ByteOrder.nativeOrder())

     // Model output: a single float confidence value.
     val resultBuffer = ByteBuffer
         .allocateDirect(4)
         .order(ByteOrder.nativeOrder())
  4. Running the model (ImageRecognizer.kt, io.keiji.foodgallery)

     // Scale the input bitmap to the resolution the model expects.
     val scaledBitmap = Bitmap.createScaledBitmap(bitmap, IMAGE_WIDTH, IMAGE_HEIGHT, false)

     resizedImageBuffer.rewind()
     scaledBitmap.copyPixelsToBuffer(resizedImageBuffer)

     // Convert each pixel byte (0..255) to a float in the model's input buffer.
     inputBuffer.rewind()
     for (index in 0 until IMAGE_BYTES_LENGTH) {
         inputBuffer.putFloat(resizedImageBuffer[index].toInt().and(0xFF).toFloat())
     }

     // Run inference and read back the confidence value.
     inputBuffer.rewind()
     resultBuffer.rewind()
     tfInference.run(inputBuffer, resultBuffer)

     resultBuffer.rewind()
     val confidence = resultBuffer.getFloat()
  5. Post-training quantization: setting post_training_quantize on the converter quantizes the trained weights during export, reducing the size of the TensorFlow Lite model.

     def _export_graph(sess, input_tensors, output_tensors, output_dir):
         output_path = os.path.join(output_dir, 'model.tflite')
         converter = TFLiteConverter.from_session(sess, input_tensors, output_tensors)
         # Quantize the weights while converting (post-training quantization).
         converter.post_training_quantize = True
         tflite_model = converter.convert()
         with open(output_path, "wb") as f:
             f.write(tflite_model)
  6. Comparison of inference speed

     Device          | With NN API  | Without NN API
     Essential PH-1  | 556,323 ns   | 185,372,624 ns
     Pixel 2         | 450,807 ns   | 187,395,464 ns
     Pixel 3         | 477,489 ns   | 129,994,563 ns

     https://github.com/keiji/food_gallery_with_tensorflow/releases/tag/tflite_nnapi
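The "With NN API" column corresponds to running the TensorFlow Lite interpreter with the Android Neural Networks API enabled. The deck does not show that setup code; below is a minimal sketch of one way to toggle it through Interpreter.Options. The loadModel helper, the createInterpreter function, and the modelPath parameter are assumptions for illustration, not taken from the slides.

     import org.tensorflow.lite.Interpreter
     import java.io.File
     import java.nio.ByteBuffer
     import java.nio.ByteOrder

     // Hypothetical helper: load a TensorFlow Lite model file into a direct ByteBuffer,
     // which is what the Interpreter constructor expects.
     fun loadModel(path: String): ByteBuffer {
         val bytes = File(path).readBytes()
         return ByteBuffer.allocateDirect(bytes.size)
             .order(ByteOrder.nativeOrder())
             .apply {
                 put(bytes)
                 rewind()
             }
     }

     // Hypothetical factory: build an interpreter with or without NN API acceleration.
     fun createInterpreter(modelPath: String, useNnApi: Boolean): Interpreter {
         val options = Interpreter.Options().apply {
             // Route supported operations through the Android Neural Networks API
             // so they can run on hardware accelerators where available.
             setUseNNAPI(useNnApi)
         }
         return Interpreter(loadModel(modelPath), options)
     }

Measuring the same model with useNnApi set to true and false is one way to reproduce the kind of comparison shown in the table above.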