Build LLM-powered Web Apps with Django and Gemini API

Wesley Kambale

October 21, 2024

Transcript

  1. Profile, Interests, Experience • Machine Learning Engineer • Community Builder • Explore ML Facilitator with Crowdsource by Google • Consultant at The Innovation Village • Google Dev Library Contributor • Research in TinyML, TTS and LLMs
  2. The agenda to agend… • Introduction to LLMs & Gemini API • Setting up Django • Integrating LLMs with Django using Gemini • Building a Simple LLM-powered Web App • Q&A • Resources & Takeaways
  3. Introduction to LLMs & Gemini API. What are LLMs, and what is the Gemini API? • Large Language Models (LLMs): machine learning models designed to understand and generate human-like text. • Gemini API: an API that allows developers to interact with Google's Gemini LLMs using natural language instructions.
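As a minimal sketch of what calling the Gemini API from Python looks like (assuming the google-generativeai client library and the placeholder API key used later in this deck; the prompt is illustrative):

import google.generativeai as genai

# Configure the client with your API key (placeholder; use your own key)
genai.configure(api_key="YourAPI-Key")

# Pick a Gemini model and send it a natural-language prompt
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain what a Large Language Model is in one sentence.")

print(response.text)  # the generated text is available on response.text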
  4. Django Overview. What is Django? A high-level Python web framework for building web apps quickly and efficiently. Robust, scalable, and perfect for rapid prototyping. Ideal for integrating machine learning and AI features.
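Since the views later in this deck read and write a ChatBot model, here is a minimal sketch of what that model could look like; the field names text_input, gemini_output, and user come from the view code, while the timestamp field and __str__ method are assumptions added for illustration:

# models.py -- illustrative sketch of the model the later views assume
from django.contrib.auth.models import User
from django.db import models


class ChatBot(models.Model):
    text_input = models.TextField()     # prompt sent by the user
    gemini_output = models.TextField()  # text returned by Gemini
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)  # assumed, not shown on the slides

    def __str__(self):
        return self.text_input[:50]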
  5. Overview of Gemini API. What is the Gemini API? The Gemini API is an interface for natural language processing: it connects your app to Gemini LLMs so they can handle queries, generate text, or perform tasks from natural language prompts. Choose the right Gemini model for your use case.
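To help with "choose the right Gemini model", one way to see which models your API key can use is the sketch below (assuming the google-generativeai client; list_models enumerates the models available to your key):

import google.generativeai as genai

genai.configure(api_key="YourAPI-Key")

# Print the models that support text generation via generateContent
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)  # e.g. models/gemini-pro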
  6. from django.shortcuts import render, reverse
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseRedirect, JsonResponse
from .models import ChatBot
import google.generativeai as genai

# Configure the Gemini client (replace with your own API key)
genai.configure(api_key="YourAPI-Key")

@login_required
def ask_question(request):
    if request.method == "POST":
        text = request.POST.get("text")
        model = genai.GenerativeModel("gemini-pro")
        chat = model.start_chat()
        response = chat.send_message(text)
        user = request.user
        # Persist the prompt and the generated reply for this user
        ChatBot.objects.create(text_input=text, gemini_output=response.text, user=user)

  7. # Still inside ask_question: extract the data to return for the POST request
        response_data = {
            "text": response.text,  # response.text holds the generated reply
            # Add other relevant data from the response if needed
        }
        return JsonResponse({"data": response_data})
    else:
        # Redirect to the chat page for GET requests
        return HttpResponseRedirect(reverse("chat"))


@login_required
def chat(request):
    user = request.user
    chats = ChatBot.objects.filter(user=user)
    return render(request, "chat_bot.html", {"chats": chats})
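The views above rely on a URL named "chat" (via reverse("chat")); a possible urls.py wiring both views might look like the sketch below, where the path strings and the ask_question route name are assumptions rather than something shown on the slides:

# urls.py -- illustrative; only the "chat" name is implied by reverse("chat")
from django.urls import path

from . import views

urlpatterns = [
    path("chat/", views.chat, name="chat"),
    path("ask/", views.ask_question, name="ask_question"),
]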

  8. <body>
  <form id="promptForm">
    <input type="text" name="prompt" placeholder="Enter your prompt">
    <button type="submit">Generate</button>
  </form>
  <div id="response"></div>

  <script>
    document.getElementById('promptForm').onsubmit = function(e) {
      e.preventDefault();
      const prompt = document.querySelector('[name="prompt"]').value;

      // Encode the prompt so special characters survive the query string
      fetch(`/generate/?prompt=${encodeURIComponent(prompt)}`)
        .then(response => response.json())
        .then(data => {
          document.getElementById('response').innerText = data.result;
        });
    };
  </script>
</body>
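Note that this template fetches /generate/ with a GET query string and reads data.result, while the ask_question view shown earlier expects a POST and returns its payload under "data". If you keep the template as is, a small GET endpoint matching what it expects could look like this sketch (the view name, route, and response shape are inferred from the template, not from the slides):

# views.py -- illustrative GET endpoint matching the template's fetch call
from django.http import JsonResponse
import google.generativeai as genai


def generate(request):
    prompt = request.GET.get("prompt", "")
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(prompt)
    # The template reads data.result, so return the generated text under that key
    return JsonResponse({"result": response.text})

It would be wired with something like path("generate/", views.generate, name="generate") in urls.py.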