
Driving MLOps on Azure: A Strategic Blueprint for Scaling ML for African Enterprise Success

Sam Ayo

October 28, 2023

Transcript

  1. Driving MLOps on Azure: A Strategic Blueprint for Scaling ML for African Enterprise Success. Sam Ayo, @officialsamayo
  2. Meet Sam Ayo - AI/ML Engineer, Data Scientist, and Head of Engineering - Extensive experience in software, data, and AI consulting - Providing insights into AI's transformative power. Sam Ayo, @officialsamayo
  3. Agenda • Introduction to MLOps • Elements of an MLOps solution • Orchestrating Azure CI/CD Workflow • Principles for scalable MLOps architecture
  4. What is ML? Model = Algorithm + Training Data. MLOps = ML Model + Software System. mlops = MLOps("let's get started")
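
    A minimal sketch of the "Model = Algorithm + Training Data" formula, assuming scikit-learn and a synthetic dataset (both are illustrative choices, not part of the original deck):

        # Model = Algorithm + Training Data, in code form (illustrative).
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=500, n_features=10, random_state=42)  # training data
        algorithm = LogisticRegression(max_iter=1000)                              # algorithm
        model = algorithm.fit(X, y)                                                # model
        print(model.predict(X[:5]))
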
  5. Machine Learning Operations (MLOps) is the practice that combines software engineering, DevOps, and machine learning to design, develop, deploy, and manage production-level machine learning models. • MLOps is the extension of DevOps to ML as a first-class citizen • MLOps is the collaboration of infrastructure and tooling to productionize ML • The happy marriage of AI and the traditional DevOps model mlops.whatis()
  6. The goal of MLOps is to reduce technical friction so that a model gets from idea to production in the shortest possible time, with as little risk as possible.
  7. FACT: According to the MIT Sloan and BCG 2019 survey, 7 in 10 companies report little or no impact from their use of AI, and 40% of organizations with significant investments in AI report no benefits. Only 22% of companies using ML have successfully deployed an ML model into production, and 87% of data science projects never make it into production. The main challenges people face when developing ML capabilities are scale, version control, model reproducibility, and aligning stakeholders. The reality is: • AI is a source of opportunities and advantages • Implementing AI is a risk • Implementing AI correctly is difficult
  8. MLOps levels: • Level 0 – Manual process • Level 1 – ML pipeline automation • Level 2 – CI/CD pipeline automation • Level 3 – Full CI/CD pipeline automation and retraining mlops.elements()
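
    A hedged sketch of the step from Level 0 to Level 1: the manual steps are wrapped into one repeatable pipeline with a quality gate. The step functions, dataset, and threshold below are simplified stand-ins, not a prescribed implementation:

        # Level 1 in miniature: manual steps wrapped into one automated run.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        def run_pipeline(quality_gate: float = 0.8):
            X, y = make_classification(n_samples=500, n_features=10, random_state=0)
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # training step
            score = accuracy_score(y_te, model.predict(X_te))           # validation step
            if score < quality_gate:                                    # gate before release
                raise RuntimeError(f"accuracy {score:.2f} below the quality gate")
            return model                                                # ready to register

        model = run_pipeline()

    At Levels 2 and 3, a CI/CD system would trigger a run like this automatically on code or data changes rather than waiting for a human.
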
  9. Technical Concepts • Iterative-incremental development • Automation • CT/CI/CD • Versioning • Testing • Reproducibility • Monitoring Technical Components • Source/version control • Experiment tracking • Test & build services • Automatic deployment services • Model/code registry • Feature store • ML metadata store • Model monitoring • Model & data performance assessment
  10. MLOps Setup (function: tools)
     • Experiment design/development: Jupyter Notebook (Python, pandas)
     • Experiment tracking: Comet, MLflow
     • Source/version control: Git, DVC, GitHub
     • Test & build services: PyTest, Make
     • Model & dataset registry: Blob Storage, PostgreSQL
     • Feature store: Feast, PostgreSQL, Blob Storage
     • Model serving: FastAPI
     • Model monitoring: Evidently AI
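
    A minimal experiment-tracking sketch with MLflow, one of the tracking tools listed above; the experiment name, parameters, and metric value are illustrative:

        # Log one run to a local MLflow tracking store (./mlruns by default).
        import mlflow

        mlflow.set_experiment("churn-baseline")          # hypothetical experiment name
        with mlflow.start_run():
            mlflow.log_param("model_type", "logistic_regression")
            mlflow.log_param("max_iter", 1000)
            mlflow.log_metric("accuracy", 0.87)          # value from an evaluation step

    Runs logged this way can then be compared side by side in the MLflow UI, which is what makes experiments reproducible and auditable.
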
  11. The Critical Questions • How will the predictions be served? • How will the model be served? • How will ML meet the software system? mlops.principles()
  12. Integrating ML? 1. Serving model predictions • Batch inference • Real-time inference • Streaming inference • Edge inference
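
    As an example of the real-time pattern, a hedged sketch of a FastAPI prediction endpoint (FastAPI is the serving tool from the stack above); the feature schema and the stubbed prediction are illustrative:

        # Real-time inference: one HTTP request in, one prediction out.
        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()

        class Features(BaseModel):   # illustrative input schema
            age: float
            income: float

        @app.post("/predict")
        def predict(features: Features):
            # stand-in for model.predict(...) on a loaded model artifact
            score = 0.5
            return {"prediction": score}

        # run with: uvicorn main:app
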
  13. Integrating ML? 2. Design model experiments • Experimentation is at the heart of the machine learning profession. • We progress because we experiment, and it begins in a notebook.
  14. Notebook Practices • Create sectionally headlined workflows • Maintain a linear flow of execution • Set parameters at the top of the notebook
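
    A sketch of the "parameters at the top" practice: every tunable lives in one leading cell, so a rerun (or a parameterization tool such as papermill) can override them without edits scattered through the notebook. The names and values are illustrative:

        # --- Parameters (first notebook cell) ---
        DATA_PATH = "data/train.csv"        # illustrative path
        TEST_SIZE = 0.2
        RANDOM_SEED = 42
        MODEL_PARAMS = {"max_iter": 1000}
        # --- All later cells read only these names ---
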
  16. Integrating ML? 3. Model meets the software system • Monolithic integration • Single-service integration • Microservice integration
  17. Monolithic integration: The ML service code base is integrated within the rest of the backend code base. The entire system is slowed down by the ML service, and the model size and computation requirements usually add extra load on the backend servers. Usually considered only if the inference process is very light to run.
     Single-service integration: The ML service code base is deployed on a single server, with elastic load balancing for scaling. The model can be complex without putting load pressure on the rest of the infrastructure. This is typically the easiest way to deploy a model while ensuring scalability, maintainability, and reliability.
     Microservice integration: The ML service code base is deployed such that components get their own services. This relieves the rest of the codebase and ensures the different components of the ML system can be reused for different purposes. For example, the ML inference manager at RadioAdSpread (www.radioadspread.com).
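
    To make the microservice option concrete, a hedged sketch of a backend calling a separately deployed ML service over HTTP; the URL and payload shape are hypothetical:

        # Microservice integration: the backend treats the model as a
        # separate HTTP service instead of importing it into its own code base.
        import requests

        ML_SERVICE_URL = "http://ml-service:8000/predict"   # hypothetical internal endpoint

        def get_prediction(features: dict, timeout: float = 2.0) -> dict:
            response = requests.post(ML_SERVICE_URL, json=features, timeout=timeout)
            response.raise_for_status()   # fail fast if the ML service is unhealthy
            return response.json()

    Because the model sits behind its own endpoint, it can be scaled, redeployed, and reused by other consumers independently of the backend.
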