
ML on Microcontrollers with TFLITE

Dipika Baad
September 23, 2021


As part of our internal company deep edge talks, I gave a demo session on building a neural network model and running inference with it on a Raspberry Pi and a NodeMCU.


Transcript

1. Dipika Baad
• Consultant at Netlight
• Masters in Data Science & Entrepreneurship
• Aalto University & TUE, Netherlands
2. Schedule
• How we see ML these days
• Build the model
• Show it running on a computer & Raspberry Pi
• Converting the model for tiny devices
• Loading it onto a microcontroller
• Trade-offs & application fits
• Questions & Discussion
3. OK Google - running locally on a small device
• A 14 KB model that can run on devices with DSPs, continuously listening for "OK Google". On the phone, this program needs to consume as little battery power as possible.
https://learning.oreilly.com/library/view/tinyml/9781492052036/ch01.html
4. Let’s start with one use case
• Predicting traffic volume using a neural network model
• Smart traffic lights which can know the future volume and adjust the timing of lights for lanes
• Currently there are local sensors in the road to know the current situation, but they do not predict: inductive-loop sensors, infrared sensors, microwave sensors, video sensors
https://elteccorp.com/news/other/are-there-sensors-at-traffic-lights/
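The deck does not include the model itself; a minimal Keras regression sketch for the traffic-volume use case, assuming a hypothetical four-feature input (hour of day, day of week, weather code, current volume), might look like:

```python
import tensorflow as tf

# Hypothetical input features: hour of day, day of week,
# weather code, current traffic volume.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted future traffic volume
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

The feature set and layer sizes are illustrative only; the talk's actual model may differ.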
5. Raspberry Pi 3B
• Quad-core 1.2 GHz 64-bit CPU
• 1 GB RAM
• BCM43438 wireless LAN & BLE
• 100 Base Ethernet
• 40-pin extended GPIO
• 4 USB 2.0 ports
• Full-size HDMI
• CSI camera port
• DSI display port for connecting a Raspberry Pi touchscreen display
6. Tensorflow & Tensorflow Lite
• Tensorflow has a collection of workflows to implement and train models in different programming languages. It is an open-source framework.
7. Tensorflow Lite
• Tensorflow Lite is a set of tools that enables on-device ML by helping developers run their models on mobile, embedded, and IoT devices.
• Optimized for on-device ML (reduced model and binary size)
• Multi-platform support: Android, iOS, embedded Linux, and microcontrollers
• Hardware acceleration is done using delegates in TFLite by leveraging on-device accelerators such as the GPU and DSP (Digital Signal Processor).
• Model optimization makes it easy to optimize the model for smaller storage size, smaller download size, and less memory usage at runtime. Quantization can be used to reduce the size of the models and reduce latency for model inference.
Ref: https://www.tensorflow.org/lite/guide
https://www.tensorflow.org/lite/performance/delegates
https://www.tensorflow.org/lite/performance/model_optimization
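The conversion and quantization step described on this slide can be sketched with the standard TFLite converter API; the tiny stand-in model below is a placeholder for the trained traffic model:

```python
import tensorflow as tf

# Stand-in model; in practice this would be the trained traffic-volume model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization: shrinks weights and can reduce inference latency.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # a FlatBuffer as raw bytes

with open("traffic_model.tflite", "wb") as f:
    f.write(tflite_model)
```

`Optimize.DEFAULT` applies the converter's standard post-training quantization; full integer quantization additionally needs a representative dataset, which is omitted here.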
8. NodeMCU ESP8266 Microcontroller
• Microcontroller: Tensilica 32-bit RISC CPU Xtensa LX106
• Operating Voltage: 3.3V
• Input Voltage: 7-12V
• Digital I/O Pins (DIO): 16
• Analog Input Pins (ADC): 1
• Flash Memory: 4 MB
• SRAM: 64 KB
• Clock Speed: 80 MHz
9. What it means to run ML on microcontrollers
• Running AI/ML models on devices having tens of KBs of RAM and flash memory.
• Running efficiently so it doesn't take a lot of power.
• This means you can use the sensor data locally and make predictions or choose actions on the device itself.
• Making smart, new products that fit into a small space, use minimal energy, RAM, and CPU, and use the data locally instead of forwarding it to servers.
• Fun to work on, as this field is quite young :)
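The loading step from the schedule ("Loading it onto microcontroller") is not shown in the deck. Because microcontrollers like the ESP8266 usually have no filesystem, the .tflite FlatBuffer is typically compiled into the firmware as a C byte array (what `xxd -i model.tflite` produces). A stdlib-only Python equivalent, assuming a hypothetical array name `g_model`:

```python
def bytes_to_c_array(data: bytes, name: str = "g_model") -> str:
    """Render raw bytes as a C source snippet, like `xxd -i`."""
    lines = [f"const unsigned char {name}[] = {{"]
    for i in range(0, len(data), 12):
        row = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {row},")
    lines.append("};")
    lines.append(f"const unsigned int {name}_len = {len(data)};")
    return "\n".join(lines)

# In practice:
#   header = bytes_to_c_array(open("traffic_model.tflite", "rb").read())
header = bytes_to_c_array(b"\x00\x01\x02\x03")
print(header)
```

The generated array is then referenced from the microcontroller sketch and passed to the TFLite Micro interpreter at startup.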
10. Trade-offs & applications of Tiny ML
• Model size reduction damages the accuracy of the model a little bit.
• More secure and less power-consuming applications, as the data stays local and is not sent to servers. Gives more privacy to users.
• ML is no longer limited to running in the cloud or on big servers with a complex pipeline to serve requests.
• Predictive maintenance, Magic Wand, safety monitoring with audio recognition, etc.