AI milestones:
- Speech recognition: human parity
- 2018: Reading comprehension human parity
- 2018: Machine translation human parity
- 2018: Speech synthesis near-human parity
- 2019: General Language Understanding human parity
- 2020: Document summarization at human parity
Tested at scale in Microsoft solutions:
- 80M personalized experiences delivered daily
- Machine translation human parity
- Object detection human parity
- Speech recognition human parity (benchmarks: Switchboard, Switchboard cellular, Meeting speech, IBM Switchboard, Broadcast speech)
- Conversational Q&A human parity
- First FPGA deployed in a datacenter
Azure:
- 58 regions
- 90+ compliance offerings
- $1B security investment per year
- 95% of the Fortune 500 use Azure

Azure AI:
- ML platform: Azure Machine Learning
- Customizable models: Cognitive Services (Vision, Speech, Language, Decision)
- Scenario-specific services: Cognitive Search, Bot Service, Form Recognizer, Video Indexer
- Apps and agents built on the data platform, app dev platform & tools, and compute
Responsible AI:
- Principles: Fairness, Reliability, Privacy, Inclusivity, Accountability, Transparency
- Guidelines: Guidelines for Human-AI Design, Guidelines for Conversational AI
- Tools: InterpretML, Differential Privacy, Homomorphic Encryption, Secure MPC, Data Drift detection
An AI system can behave unfairly:
- A voice recognition system might fail to work as well for women as it does for men.
- A model for screening loan or job applications might be much better at picking good candidates among white men than among other groups.
Fairness here means avoiding negative outcomes of AI systems for different groups of people.
The fairness toolkit provides group fairness metrics and an interactive dashboard to assess which groups of people may be negatively impacted.
- Model formats: Python models using the scikit-learn predict convention; scikit-learn, TensorFlow, PyTorch, Keras
- Metrics: 15+ common group fairness metrics
- Model types: classification, regression
- Fairness mitigation: state-of-the-art algorithms to mitigate unfairness in your classification and regression models
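A minimal sketch of this assess-then-mitigate workflow, assuming the open-source Fairlearn package (the synthetic data and group labels are placeholders):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate
    from fairlearn.reductions import ExponentiatedGradient, DemographicParity

    # Synthetic placeholder data: features, labels, and a sensitive attribute.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
    sensitive = rng.choice(["groupA", "groupB"], size=200)

    # Assess: per-group accuracy and selection rate for a plain classifier.
    clf = LogisticRegression().fit(X, y)
    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y,
        y_pred=clf.predict(X),
        sensitive_features=sensitive,
    )
    print(mf.by_group)      # metric values per group
    print(mf.difference())  # largest gap between groups, per metric

    # Mitigate: retrain under a demographic-parity constraint.
    mitigator = ExponentiatedGradient(LogisticRegression(),
                                      constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=sensitive)
    y_fair = mitigator.predict(X)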
InterpretML repositories (https://github.com/interpretml):
- Interpret: interpretability methods for tabular data
- Interpret-community: additional interpretability techniques for tabular data
- Interpret-text: interpretability methods for text data
- DiCE: Diverse Counterfactual Explanations
- azureml-interpret: AzureML SDK wrapper for Interpret and Interpret-community

Blackbox models:
- Model formats: Python models using the scikit-learn predict convention; scikit-learn, TensorFlow, PyTorch, Keras
- Explainers: SHAP, LIME, Global Surrogate, Feature Permutation

Glassbox models:
- Model types: Linear Models, Decision Trees, Decision Rules, Explainable Boosting Machines
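A short sketch of the glassbox path, assuming the interpret package from the repository above (the breast-cancer dataset stands in for your own tabular data):

    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    # A standard tabular dataset stands in for your own data.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Glassbox model: an Explainable Boosting Machine is accurate
    # yet fully interpretable.
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X_train, y_train)

    show(ebm.explain_global())                       # per-feature shape functions
    show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations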
Differential privacy toolkit:
- Language support: C, C++, Python, R
- Validator: automatically stress tests DP algorithms
- Data source connectivity: data lakes, SQL Server, Postgres, Apache Spark, Apache Presto, and CSV files
- Privacy budget: controls queries by users
Query flow:
1. The analyst submits a query.
2. The system checks the privacy budget and the credentials to access the data.
3. Private compute runs the query over the private dataset.
4. The mechanism adds noise.
5. The analyst receives a differentially private report.

https://github.com/opendifferentialprivacy
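A toy sketch of the core idea in step 4, not the toolkit's actual API: the Laplace mechanism answers a counting query (sensitivity 1) with noise scaled to 1/epsilon, so the released number satisfies epsilon-differential privacy. The dataset and predicate are hypothetical.

    import numpy as np

    def laplace_count(data, predicate, epsilon, rng=None):
        """Differentially private count: a counting query has sensitivity 1,
        so Laplace noise with scale 1/epsilon gives epsilon-DP."""
        rng = rng or np.random.default_rng()
        true_count = sum(predicate(row) for row in data)
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical dataset and query: how many records have age > 40?
    data = [{"age": a} for a in (23, 45, 51, 38, 62, 29)]
    print(laplace_count(data, lambda r: r["age"] > 40, epsilon=0.5))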
Homomorphic encryption allows certain computations to be done on encrypted data, without requiring any decryption in the process. This is different from classical encryption like AES or RSA:

Decrypt(Encrypt(A) * Encrypt(B)) = A * B
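To make the equation concrete, here is a toy, insecure illustration of a multiplicative homomorphism using textbook RSA. Production systems such as Microsoft SEAL use lattice-based schemes (BFV/CKKS), not RSA; this only demonstrates the property itself.

    # Tiny textbook-RSA key (n = 61 * 53 = 3233, e = 17, d = 2753).
    # Insecure; for illustration only.
    n, e, d = 3233, 17, 2753

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    A, B = 7, 6
    c = (encrypt(A) * encrypt(B)) % n   # multiply ciphertexts only
    assert decrypt(c) == (A * B) % n    # decrypts to the product: 42
    print(decrypt(c))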
Microsoft SEAL:
- Recently released v3.5, available at GitHub.com/Microsoft/SEAL
- Supports Windows, Linux, macOS, Android, FreeBSD
- Written in C++; includes .NET Standard wrappers for the public API
- From the open source community: PyHeal (Python wrappers, from Accenture), node-seal (JavaScript wrappers), nGraph HE Transformer (from Intel)
Given a trained model, private prediction enables inferencing on encrypted data without revealing the content of the data to anyone. Microsoft SEAL can be deployed in a variety of applications to protect users' personal and private data, for example medical prediction, where the patient's data stays behind a cryptographic privacy barrier.
- DeepSpeed: enables training models 15x bigger, 10x faster on the same infrastructure
- ONNX Runtime: cross-platform runtime for high-performance inference with ONNX models
- A central AI group coordinates bringing the best of research into products
- All available on Azure and GitHub for everyone!
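A minimal sketch of serving a model with ONNX Runtime; "model.onnx" is a placeholder path, and the input shape is an assumption for an image model.

    import numpy as np
    import onnxruntime as ort

    # "model.onnx" is a placeholder; any exported ONNX model works the same way.
    session = ort.InferenceSession("model.onnx")
    input_name = session.get_inputs()[0].name

    # Example image-shaped batch; match your model's expected shape and dtype.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {input_name: batch})
    print(outputs[0].shape)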