
Serving TensorFlow Models in Django with FastAPI

Wesley Kambale
October 21, 2024

Session about serving machine learning models in Django with the FastAPI framework

Transcript

  1. Profile: Wesley Kambale
     Experience: • Machine Learning Engineer • Community Builder • Explore ML Facilitator with Crowdsource by Google • Consultant at The Innovation Village • Google Dev Library Contributor
     Interests: • Research in TinyML, TTS and LLM
  2. Profile: Kayongo Johnson Brian
     Experience: • Machine Learning Engineer • Python Kigali Community Organizer • Team Lead at InversePay
     Interests: • On Device Training, ML and AI in Medicine
  3. The agenda • Introduction & Overview • Setting Up the Django App • Building the FastAPI API • Integrating Django and FastAPI • Q&A
  4. Introduction to ML Model Serving What is Model Serving? •

    Model serving is how machine learning models are deployed to make predictions in production. • We’ll use Django for the app framework and FastAPI for serving the model via APIs. Goal: Build an app that detects mango damage using a TensorFlow model with Django and FastAPI.
  5. Tools and Frameworks What tools and frameworks can we use?

    Django: A high-level Python web framework. FastAPI: A fast web framework to serve APIs, especially for machine learning models. TensorFlow: Our deep learning framework for building and serving the model. Uvicorn: ASGI server to run FastAPI apps. Lightweight, fast for async requests.
  6. Libraries What libraries to use? Uvicorn: ASGI server for FastAPI.

    Lightweight, fast for async requests. Pillow: Image processing (resize, format conversion). Handles JPEG, PNG. Numpy: Used for preprocessing images. Python-Multipart: Handles file uploads in FastAPI.
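     A minimal install sketch for the stack above (assuming the usual PyPI package names; pin versions for your own project):
     # Bash
     pip install django fastapi uvicorn tensorflow pillow numpy python-multipart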
  7. Why FastAPI over Django? Why use FastAPI? FastAPI is designed for asynchronous programming and is built on ASGI, which lets it handle many concurrent requests. Django is synchronous and built on WSGI, so it isn't as efficient for real-time or high-throughput tasks without extra tooling such as Django Channels for async support. FastAPI is better suited for serving machine learning models, especially with frameworks like TensorFlow or PyTorch, because it handles large payloads with fast response times.
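     As a minimal sketch of what ASGI buys you (the endpoint name and one-second sleep are illustrative, not from the deck), an async FastAPI route can await slow work without blocking other requests:
     import asyncio
     from fastapi import FastAPI

     app = FastAPI()

     @app.get("/slow/")
     async def slow():
         # While this coroutine awaits, the event loop keeps serving other requests
         await asyncio.sleep(1)
         return {"status": "done"}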
  8. Project Directory Layout
     ├── mango_app
     │   ├── __init__.py
     │   ├── views.py
     ├── fastapi_app
     │   ├── __init__.py
     │   ├── api.py
     ├── model
     │   ├── mango_model.h5
     ├── main/templates
     │   ├── index.html
     └── manage.py
  9. Setting Up Django Web App How do I set up a Django application? Initialize a Django project and create an app, then set up views and templates to handle user interaction.
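     A minimal sketch of that setup (the project name mango_project is assumed; the app name matches the layout on slide 8):
     # Bash
     django-admin startproject mango_project .
     python manage.py startapp mango_app
     python manage.py runserver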
  10. # views.py
      from django.shortcuts import render

      def index(request):
          return render(request, 'index.html')

      # index.html
      <form action="/predict/" method="POST" enctype="multipart/form-data">
          {% csrf_token %}
          <input type="file" name="mango_image" />
          <button type="submit">Detect</button>
      </form>
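      For the view and form above to resolve, the project also needs URL routing, which the deck does not show; a minimal sketch:
      # urls.py
      from django.urls import path
      from mango_app import views

      urlpatterns = [
          path('', views.index, name='index'),
          path('predict/', views.detect, name='detect'),  # detect is defined on slide 14
      ]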
 

  11. FastAPI API Setup How do I set up FastAPI? Set up FastAPI to load the model and serve predictions via an endpoint. FastAPI handles asynchronous requests well, making it ideal for serving models.
  12. from fastapi import FastAPI, File, UploadFile
      import tensorflow as tf
      from PIL import Image
      import numpy as np

      app = FastAPI()

      # Load the trained Keras model once at startup
      model = tf.keras.models.load_model('model/mango_model.h5')

      @app.post("/predict/")
      async def predict(file: UploadFile = File(...)):
          # Ensure 3 channels, resize to the model's input size, and scale to [0, 1]
          image = Image.open(file.file).convert('RGB')
          img_array = np.array(image.resize((224, 224))) / 255.0
          predictions = model.predict(np.expand_dims(img_array, 0))
          return {"predictions": predictions.tolist()}

  13. Integrating Django with FastAPI How do I integrate FastAPI and

    Django? Django handles the web interface, while FastAPI serves the model. Use Uvicorn to run the FastAPI server and route requests from Django. # Bash
 uvicorn fastapi_app.api:app --reload --port 8001
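     In development, the Django app runs as a separate process alongside the FastAPI server (Django's default port 8000 assumed):
     # Bash
     python manage.py runserver 8000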
 
 

  14. # Call FastAPI from Django
      import requests
      from django.shortcuts import render

      def detect(request):
          if request.method == 'POST':
              image = request.FILES['mango_image']
              # Forward the uploaded file to the FastAPI prediction endpoint
              response = requests.post('http://127.0.0.1:8001/predict/',
                                       files={'file': image})
              predictions = response.json()['predictions']
              return render(request, 'index.html', {'predictions': predictions})
          # Plain GET just shows the upload form
          return render(request, 'index.html')
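      To surface the result, index.html can render the context variable passed above; a minimal sketch of a part of the template the deck does not show:
      {% if predictions %}
          <p>Model output: {{ predictions }}</p>
      {% endif %}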