Slide 1

Introduction to tinyML -- Uday Kiran

Slide 2

What is tinyML?
• Machine learning + embedded systems = intelligent IoT devices
• It explores what kinds of models you can run on small, low-power devices such as microcontrollers.
• Summarizing and analyzing data at the edge on low-power devices

Slide 3

Why tinyML?
• Instant response
• It enables low-latency, low-power (1000x), low-bandwidth model inference on edge devices.
• Privacy
• You can collect more data, which helps to build better products.

Slide 4

Applications
• Personal assistants like Google Assistant, Alexa, and Siri
• Intelligent industrial sensors
• Wildlife tracking
• Detecting crop diseases
• Predicting wildfires
• Smart game controllers
• Ocean life conservation
• Etc.

Slide 5

TensorFlow Lite
• A tool to convert your models and run them on mobile and embedded systems (see the sketch below)
• It supports Android, iOS, Arduino, etc.
• Not only Python: you can also use C, C++, and Java
• Pretrained models are available
• Similar tools: Core ML, PyTorch Mobile
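
For comparison with the microcontroller flow later in the deck, here is a minimal sketch of running a converted model with the regular TensorFlow Lite C++ interpreter API. The file name model.tflite and the single float input/output are assumptions; adapt them to your own model.

    // Minimal TensorFlow Lite (not yet the micro variant) inference sketch.
    // Assumes a converted flatbuffer "model.tflite" with one float input and
    // one float output.
    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // Load the .tflite flatbuffer produced by the TensorFlow Lite converter.
      auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
      if (!model) return 1;

      // Build an interpreter with the built-in operator implementations.
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);
      interpreter->AllocateTensors();

      // Fill the first input tensor, run inference, and read the first output.
      interpreter->typed_input_tensor<float>(0)[0] = 1.0f;
      interpreter->Invoke();
      std::printf("output: %f\n", interpreter->typed_output_tensor<float>(0)[0]);
      return 0;
    }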

Slide 6

TensorFlow Lite for Microcontrollers
• Specifically designed to run machine learning models on microcontrollers and other devices with only a few kilobytes of memory.
• TensorFlow Lite for Microcontrollers is written in C++11 and requires a 32-bit platform.
• The framework is available as an Arduino library (see the sketch below).
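
A minimal Arduino-style inference sketch with the TFLM C++ API, in the spirit of the library's hello_world example. It assumes the model has been compiled in as a byte array g_model (see the workflow slide) and takes and returns a single float; the arena size is illustrative, and exact headers and constructor arguments (for example the error reporter) vary between library versions.

    // Minimal TensorFlow Lite for Microcontrollers sketch (.ino style).
    #include <cstdint>

    #include "tensorflow/lite/micro/all_ops_resolver.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    #include "model.h"  // defines g_model, the model as a C byte array

    namespace {
    // All tensors live in this statically allocated arena: this is the
    // "manual memory management" mentioned in the limitations slide.
    constexpr int kTensorArenaSize = 2 * 1024;
    uint8_t tensor_arena[kTensorArenaSize];

    tflite::MicroErrorReporter error_reporter;
    tflite::AllOpsResolver resolver;
    tflite::MicroInterpreter* interpreter = nullptr;
    }  // namespace

    void setup() {
      // Map the byte array to a model and build the interpreter once.
      const tflite::Model* model = tflite::GetModel(g_model);
      static tflite::MicroInterpreter static_interpreter(
          model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
      interpreter = &static_interpreter;
      interpreter->AllocateTensors();
    }

    void loop() {
      // Feed one float value, run the model, and read one float back.
      interpreter->input(0)->data.f[0] = 0.5f;
      interpreter->Invoke();
      float y = interpreter->output(0)->data.f[0];
      (void)y;  // use the result, e.g. drive an LED or print it
    }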

Slide 7

Supported devices
• Arduino Nano 33 BLE Sense
• SparkFun Edge
• STM32F746 Discovery kit
• Adafruit EdgeBadge
• Adafruit TensorFlow Lite for Microcontrollers Kit
• Adafruit Circuit Playground Bluefruit
• Espressif ESP32-DevKitC
• Espressif ESP-EYE
• Wio Terminal: ATSAMD51
• Himax WE-I Plus EVB Endpoint AI Development Board
• Synopsys DesignWare ARC EM Software Development Platform

Slide 8

Workflow
• Train your machine learning model.
• Convert it to a TensorFlow Lite model using the TensorFlow Lite converter.
• Convert it to a C byte array using standard tools so it can be stored in read-only program memory on the device (see the example below).
• Run inference on the device using the C++ library and process the results.
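
To illustrate the C byte array step: the .tflite file is usually dumped with a standard tool such as xxd, giving a source file along these lines. The file and symbol names follow the g_model convention used in the TFLM examples, and the byte values and length here are placeholders.

    // model.cc -- roughly what the generated file looks like, e.g. after
    //   xxd -i model.tflite > model.cc
    // xxd derives the symbol names from the file name; the TFLM examples
    // usually rename them to g_model / g_model_len and add const + alignment.
    alignas(8) const unsigned char g_model[] = {
        0x20, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // placeholder bytes
        // ... remaining bytes of the serialized model ...
    };
    const int g_model_len = 2488;  // placeholder length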

Slide 9

Limitations of TF Lite for Microcontrollers
• Support for a limited subset of TensorFlow operations
• Support for a limited set of devices
• Low-level C++ API requiring manual memory management
• On-device training is not supported

Slide 10

Complete project flow
• Define the problem
• Set up your hardware
• Set up your software
• Build and prepare the binary
• Get ready to flash the binary
• Flash the binary
• Generate the output

Slide 11

Resources
• HarvardX's Tiny Machine Learning (TinyML)

Slide 12

The Future of Machine Learning is Tiny and Bright

Slide 13

Thank you - Ask your questions