Slide 1

Slide 1 text

#MachineLearning
Deploying ML Models in Google Cloud with Vertex AI
Olayinka Peter Oluwafemi, Snr ML, Youverify
@olayinkapeter_ · July 2023

Slide 2

Slide 2 text

$whoami ● ML engineering @ Youverify ● ML GDE ● PaLM API ● ❤ TensorFlow, anime & peanut butter ● Coordinates: coffeewithpeter.com

Slide 3

Slide 3 text

Presenting this with the help of my friend, Lekan Raheem (@lekanraheem_)

Slide 4

Slide 4 text

Why do we build ML models?

Slide 5

Slide 5 text

Why do we build ML models? [diagram: Data → Data]

Slide 6

Slide 6 text

Why do we build ML models? [diagram: Data → Neural Network → Data; Neural Network diagram by Lia Koltyrina]

Slide 7

Slide 7 text

Why do we build ML models? [diagram: Data → Neural Network → Data] Perfect Model (100% Accuracy)

Slide 8

Slide 8 text

Why do we build ML models? [diagram: Data → Neural Network → Data] Perfect Model (100% Accuracy). But what’s the use of a Perfect Model that’s not in Production?

Slide 9

Slide 9 text

Popular Model Deployment Tools: some of the most famous tools used to deploy machine learning models. [Eiffel Tower photo by Travel-Fr]

Slide 10

Slide 10 text

Pros and Cons of Our Favorites: TensorFlow Serving and TorchServe

Slide 11

Slide 11 text

Pros and Cons of Our Favorites: TensorFlow Serving and TorchServe. Each of them needs some other tool to achieve an end-to-end workflow.

Slide 12

Slide 12 text

Enter Vertex AI

Slide 13

Slide 13 text

Enter Vertex AI Vertex AI combines data engineering, data science, and ML engineering workflows, enabling your teams to collaborate using a common toolset.

Slide 14

Slide 14 text

Vertex AI One AI platform, every ML tool you need 🔥🚀

Slide 15

Slide 15 text

Vertex AI: Introduction. Vertex AI is a one-stop machine learning platform that provides tools for every step of the ML workflow, across all stages of development. It supports dataset preparation (gathering, preprocessing and version control), training (AutoML and custom), and deploying ML models and AI applications, all with the benefits of Google Cloud.
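Not from the slides, but as a hedged illustration of what this looks like in code: a minimal sketch of initializing the Vertex AI Python SDK (google-cloud-aiplatform) and registering a managed dataset. The project ID, region, bucket and CSV path are hypothetical placeholders.

from google.cloud import aiplatform

# Point the SDK at a project and region (all values below are placeholders).
aiplatform.init(
    project="my-gcp-project",
    location="us-central1",
    staging_bucket="gs://my-ml-bucket",
)

# Dataset preparation: register a managed tabular dataset from a CSV in Cloud Storage.
dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    gcs_source="gs://my-ml-bucket/churn.csv",
)
print(dataset.resource_name)  # the managed, versionable dataset resource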

Slide 16

Slide 16 text

A-Z ML Workflow Simplified: a single platform to manage the end-to-end ML lifecycle. It reduces the complexity of managing separate components and services, making development more efficient. Pros #1

Slide 17

Slide 17 text

Automated Model Deployment: makes deploying ML models at scale seamless. Packaging, versioning, A/B testing and updates roll out with ease and without manual errors. Pros #2
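As a hedged sketch (not the speaker’s code) of what this deployment step can look like with the Vertex AI Python SDK: the model path, prebuilt container tag and names below are illustrative assumptions.

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Packaging and versioning: register a trained model artifact (e.g. a TensorFlow
# SavedModel in Cloud Storage) together with a prebuilt serving container.
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-ml-bucket/models/churn/v2/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
)

# Deploy to an endpoint with autoscaling. When rolling a new version onto an
# endpoint that already serves an older model, lower traffic_percentage
# (e.g. to 20) to run a simple A/B split before shifting all traffic over.
endpoint = aiplatform.Endpoint.create(display_name="churn-endpoint")
model.deploy(
    endpoint=endpoint,
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
    traffic_percentage=100,
)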

Slide 18

Slide 18 text

Integrated Experiments and Pipelines: Google Cloud services such as Cloud Build, Kubeflow Pipelines, etc. are integrated into unified pipelines, allowing for reproducibility, monitoring and efficiency in the development process. Pros #3
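A minimal, hedged sketch of what such a pipeline can look like with the Kubeflow Pipelines (KFP) v2 SDK running on Vertex AI Pipelines; the component logic, bucket and names are placeholders.

from kfp import dsl, compiler
from google.cloud import aiplatform


@dsl.component(base_image="python:3.10")
def validate_data(rows: int) -> str:
    # Stand-in for a real data-validation step.
    return "ok" if rows > 0 else "empty"


@dsl.pipeline(name="demo-pipeline")
def demo_pipeline(rows: int = 1000):
    validate_data(rows=rows)


# Compile the pipeline and run it serverlessly on Vertex AI Pipelines;
# run artifacts and lineage are tracked in Vertex ML Metadata.
compiler.Compiler().compile(demo_pipeline, "demo_pipeline.json")

aiplatform.init(project="my-gcp-project", location="us-central1")
job = aiplatform.PipelineJob(
    display_name="demo-pipeline",
    template_path="demo_pipeline.json",
    pipeline_root="gs://my-ml-bucket/pipeline-root",
)
job.run()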

Slide 19

Slide 19 text

Built-in Monitoring and Management: provides detailed metrics, logs and performance insights, letting you track and evaluate performance, behavior and usage patterns in real time. Pros #4

Slide 20

Slide 20 text

Scalability and Performance: handles large-scale workloads on Google Cloud infrastructure, letting you deploy and serve ML models that cope with heavy traffic and concurrent requests. Pros #5

Slide 21

Slide 21 text

Collaboration: facilitates teamwork by providing shared project spaces, allowing multiple users and teams to work on the same project simultaneously and manage it collectively. Pros #6

Slide 22

Slide 22 text

Pre-trained Models and AutoML Integration: offers AutoML capabilities and access to pre-trained models, letting you leverage existing models and reduce extensive manual model development. Pros #7
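For illustration only, a sketch of kicking off an AutoML training run from the SDK; the dataset, target column and training budget are hypothetical.

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# AutoML tabular classification: Vertex AI handles feature engineering,
# architecture search and tuning within the given budget.
dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    gcs_source="gs://my-ml-bucket/churn.csv",
)
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-automl",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="churned",          # hypothetical label column
    budget_milli_node_hours=1000,     # one node hour
)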

Slide 23

Slide 23 text

Integrated Services: integrates with services like Cloud Storage, BigQuery and Dataflow for data ingestion, preprocessing and storage, plus advanced features like data encryption, access control, compliance certifications and data governance. Pros #8

Slide 24

Slide 24 text

Vertex AI Walkthrough
● Model Garden: discover, test, customize and deploy Vertex AI and select open-source models (some pre-trained) and assets.
● Workbench: Jupyter Notebook-based development environment that integrates Cloud Storage and BigQuery so you can access and process data faster.
● Pipelines: build and monitor pipelines that automate, monitor and govern ML systems by orchestrating the ML workflow in a serverless manner, storing the workflow’s artifacts with Vertex ML Metadata.
● Generative AI Studio: create and experiment with generative AI models; test and customize Google’s LLMs.
● Data: where all data preparation takes place; you can label, annotate and do a lot more with the data.
● Model Development: train ML models using either AutoML or custom training. After training, assess the model, optimize it and even understand the signals behind its predictions with Explainable AI.
● Deploy and Use: deploy a model to an endpoint to serve online predictions via the API or the console; this includes all the physical resources and scalable hardware needed to serve the model with low latency. An undeployed model can also be used for batch predictions via the CLI, the console UI, the SDK or the APIs. Each model can have multiple endpoints. (A code sketch follows below.)
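A hedged sketch of the online and batch prediction paths described above; the resource IDs, instance fields and Cloud Storage paths are placeholders.

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Online predictions: call a model that is deployed to an endpoint.
endpoint = aiplatform.Endpoint("projects/123/locations/us-central1/endpoints/456")
response = endpoint.predict(instances=[{"tenure": 12, "monthly_charges": 45.0}])
print(response.predictions)

# Batch predictions: run against the model itself, no endpoint required.
model = aiplatform.Model("projects/123/locations/us-central1/models/789")
batch_job = model.batch_predict(
    job_display_name="churn-batch",
    gcs_source="gs://my-ml-bucket/batch/input.jsonl",
    gcs_destination_prefix="gs://my-ml-bucket/batch/output/",
    machine_type="n1-standard-4",
    sync=True,  # block until the batch job finishes
)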

Slide 25

Slide 25 text

Let’s dive in

Slide 26

Slide 26 text

Thank You Olayinka Peter Oluwafemi he/him @olayinkapeter_