Build and deploy PyTorch models with Azure Machine Learning

With machine learning becoming more and more an engineering discipline, the need to track experiments, collaborate, and easily deploy ML models with integrated CI/CD tooling is more relevant than ever.

In this session we take a deep dive into Azure Machine Learning, a cloud service that you can use to track your work as you build, train, deploy, and manage models. We zoom in on the building blocks it provides and show, through some demos, how to use them.

At the end of this session you will have a good grasp of the technological building blocks of Azure Machine Learning, ready to be used in your own projects.

Henk Boelman

April 22, 2020

Transcript

  1. Henk Boelman, Cloud Advocate @ Microsoft. Build and deploy PyTorch models with Azure Machine Learning. HenkBoelman.com | @hboelman
  2. Machine Learning on Azure. Sophisticated pretrained models to simplify solution development: Cognitive Services (Vision, Speech, Language, Azure Search, …). Popular frameworks to build advanced deep learning solutions: TensorFlow, Keras, PyTorch, ONNX. Productive services to empower data science and development teams: Azure Machine Learning, Azure Databricks. Powerful infrastructure to accelerate deep learning: Machine Learning VMs. Flexible deployment to deploy and manage models on intelligent cloud and edge: on-premises, cloud, edge.
  3. The data science process: ask a sharp question, collect the data, prepare the data, select the algorithm, train the model, use the answer.
  4. Ask a sharp question: How much / how many? Is it this or that? Is it weird? Which group? Which action?
  5. Prepare the data. Every column in your dataset has to be: relevant, independent, simple, clean.
  6. Azure Machine Learning Studio: a fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions.
  7. Machine Learning on Azure. Sophisticated pretrained models to simplify solution development: Cognitive Services (Vision, Speech, Language, Azure Search, …). Popular frameworks to build advanced deep learning solutions: TensorFlow, Keras, PyTorch, ONNX. Productive services to empower data science and development teams: Azure Machine Learning, Azure Databricks. Powerful infrastructure to accelerate deep learning: Machine Learning VMs. Flexible deployment to deploy and manage models on intelligent cloud and edge: on-premises, cloud, edge.
  8. Create a workspace

     from azureml.core import Workspace

     # Create the workspace (one-time setup)
     ws = Workspace.create(name='<NAME>',
                           subscription_id='<SUBSCRIPTION ID>',
                           resource_group='<RESOURCE GROUP>',
                           location='westeurope')
     ws.write_config()

     # Later sessions reload it from the saved config file
     ws = Workspace.from_config()
  9. Azure Machine Learning service building blocks: Datasets (registered, known data sets), Experiments (training runs), Models (registered, versioned models), Endpoints (real-time endpoints for deployed models, pipeline endpoints for training workflows), Compute (managed compute), Datastores (connections to data).
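     A minimal sketch of how these building blocks surface in the Python SDK (this cell is not in the deck; it assumes the `ws` workspace from slide 8):

     from azureml.core import Workspace, Model

     ws = Workspace.from_config()

     print(list(ws.experiments))   # Experiments: training runs
     print(Model.list(ws))         # Models: registered, versioned models
     print(ws.datastores)          # Datastores: connections to data
     print(ws.compute_targets)     # Compute: managed compute targets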
  10. Create compute

      from azureml.core.compute import AmlCompute, ComputeTarget

      # Provision a GPU cluster that scales between 1 and 6 nodes
      cfg = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                  min_nodes=1,
                                                  max_nodes=6)
      cc = ComputeTarget.create(ws, '<NAME>', cfg)
  11. Create an estimator

      from azureml.train.dnn import TensorFlow

      # Mount the default datastore so the training script can read the data
      params = {'--data-folder': ws.get_default_datastore().as_mount()}

      estimator = TensorFlow(source_directory=script_folder,
                             script_params=params,
                             compute_target=computeCluster,
                             entry_script='train.py',
                             use_gpu=True,
                             conda_packages=['scikit-learn', 'keras', 'opencv'],
                             framework_version='1.10')
  12. Submit the experiment to the cluster

      from azureml.core import Experiment
      from azureml.widgets import RunDetails

      exp = Experiment(workspace=ws, name='<NAME>')
      run = exp.submit(estimator)
      RunDetails(run).show()   # live widget with logs and metrics
  13. Demo: Creating and running an experiment
  14. How a run executes (Azure Notebook, Compute Target, Experiment, Docker Image, Data store): 1. Snapshot the folder and send it to the experiment; 2. Create the Docker image; 3. Deploy the Docker image and snapshot to compute; 4. Mount the datastore to compute; 5. Launch the script; 6. Stream stdout, logs and metrics; 7. Copy over the outputs.
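      From the notebook side, this lifecycle can be followed on the `run` object returned by `exp.submit` on slide 12; a short sketch, not shown in the deck:

      # Step 6: stream stdout, logs and metrics while the run executes
      run.wait_for_completion(show_output=True)

      # Metrics logged by train.py during training
      print(run.get_metrics())

      # Step 7: files the run copied back to its outputs folder
      print(run.get_file_names())
      run.download_files(prefix='outputs')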
  15. Register the model

      # Register the trained model from the run's outputs folder,
      # so it is versioned and available for deployment
      model = run.register_model(model_name='SimpsonsAI',
                                 model_path='outputs')
  16. Demo: Register and test the model
  17. Score.py

      %%writefile score.py
      import json
      import cv2
      from keras.models import model_from_json
      from azureml.core.model import Model

      def init():
          global loaded_model
          model_root = Model.get_model_path('MyModel')
          # Load the Keras architecture and weights shipped with the
          # registered model (the file names are elided on the slide)
          loaded_model = model_from_json(loaded_model_json)
          loaded_model.load_weights(model_file_h5)

      def run(raw_data):
          url = json.loads(raw_data)['url']
          # Download and preprocess the image (steps elided on the slide)
          image_data = cv2.resize(image_data, (96, 96))
          predicted_labels = loaded_model.predict(data1)
          return json.dumps(predicted_labels)
  18. Environment file

      from azureml.core.runconfig import CondaDependencies

      cd = CondaDependencies.create()
      cd.add_conda_package('keras==2.2.2')
      cd.add_conda_package('opencv')
      cd.add_tensorflow_conda_package()
      cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')
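      The `inference_config` used on slides 19 and 20 ties the scoring script (slide 17) and this environment file together. The deck does not show that cell, but a minimal sketch could look like:

      from azureml.core.model import InferenceConfig

      # Combine the scoring script with the conda environment
      inference_config = InferenceConfig(entry_script='score.py',
                                         runtime='python',
                                         conda_file='myenv.yml')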
  19. Deploy to ACI

      from azureml.core.webservice import AciWebservice

      # Azure Container Instances: good for dev/test deployments
      aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                                     memory_gb=2)
      service = Model.deploy(workspace=ws,
                             name='simpsons-aci',
                             models=[model],
                             inference_config=inference_config,
                             deployment_config=aciconfig)
  20. Deploy to AKS

      from azureml.core.compute import AksCompute
      from azureml.core.webservice import AksWebservice

      # Azure Kubernetes Service: for scalable production deployments
      aks_target = AksCompute(ws, "AI-AKS-DEMO")
      deployment_config = AksWebservice.deploy_configuration(cpu_cores=1,
                                                             memory_gb=1)
      service = Model.deploy(workspace=ws,
                             name="simpsons-ailive",
                             models=[model],
                             inference_config=inference_config,
                             deployment_config=deployment_config,
                             deployment_target=aks_target)
      service.wait_for_deployment(show_output=True)
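      Once deployed, the service can be called through the SDK or over REST; a hedged sketch (the payload shape follows `run()` in score.py, and the image URL is illustrative):

      import json
      import requests

      payload = json.dumps({'url': 'https://example.com/homer-simpson.png'})

      # Via the SDK
      print(service.run(payload))

      # Via the REST endpoint; AKS deployments also need an Authorization
      # header with a key from service.get_keys()
      headers = {'Content-Type': 'application/json'}
      response = requests.post(service.scoring_uri, data=payload, headers=headers)
      print(response.json())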
  21. Azure Machine Learning Pipelines: workflows of steps that can use data sources, datasets and compute targets. Unattended runs, reusability, tracking and versioning.
  22. Azure Pipelines: orchestration for continuous integration and continuous delivery. Gates, tasks and processes for quality; integration with other services; triggers on code and non-code events.
  23. Create a pipeline step: runs a script on a compute target in a Docker container, with inputs, outputs and parameters (see the sketch after slide 24).
  24. Create a pipeline: Dataset of Simpsons images (blob storage account) → Prepare data → processed dataset → Train the model with a PyTorch estimator → model → Register the model (model management).
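      A minimal sketch of how this pipeline could be wired up with the SDK (the deck does not show this code; step names, script names and `pytorch_estimator` are illustrative):

      from azureml.core import Experiment
      from azureml.pipeline.core import Pipeline, PipelineData
      from azureml.pipeline.steps import PythonScriptStep, EstimatorStep

      # Intermediate storage for the processed dataset
      processed_data = PipelineData('processed_data',
                                    datastore=ws.get_default_datastore())

      # Step 1: prepare the raw Simpsons images
      prep_step = PythonScriptStep(name='Prepare data',
                                   script_name='prep.py',
                                   arguments=['--output', processed_data],
                                   outputs=[processed_data],
                                   compute_target=computeCluster,
                                   source_directory='prep')

      # Step 2: train with a PyTorch estimator
      # (pytorch_estimator: an azureml.train.dnn.PyTorch estimator, not shown here)
      train_step = EstimatorStep(name='Train the model',
                                 estimator=pytorch_estimator,
                                 estimator_entry_script_arguments=['--data', processed_data],
                                 inputs=[processed_data],
                                 compute_target=computeCluster)

      pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
      run = Experiment(ws, 'simpsons-pipeline').submit(pipeline)

      # Publishing gives the pipeline a REST endpoint for unattended,
      # repeatable runs (slide 21)
      published = pipeline.publish(name='simpsons-training', version='1.0')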
  25. “The future we invent is a choice we make, not something that just happens.” Satya Nadella