ChatGPT, Bing, GitHub Copilot, Google Bard, Adobe Firefly, and many more. Innovation around AI is no longer incremental; it is radical and disruptive. With change come outcomes, both positive and negative.
• Bias can affect results: A loan-approval model discriminates by gender due to bias in the data with which it was trained.
• Errors may cause harm: An autonomous vehicle experiences a system failure and causes a collision.
• Data could be exposed: A medical diagnostic bot is trained using sensitive patient data, which is stored insecurely.
• Solutions may not work for everyone: A home automation assistant provides no audio output for visually impaired users.
• Users must trust a complex system: An AI-based financial tool makes investment recommendations. What are they based on?
• Who's liable for AI-driven decisions? An innocent person is convicted of a crime based on evidence from facial recognition. Who's responsible?
deployment scenario is considered a “sensitive use” if it falls into one or more of the following categories: • Denial of consequential services: The scenario involves the use of AI in a way that may directly result in the denial of consequential services or support to an individual (for example, financial, housing, insurance, education, employment, healthcare services, etc.). • Risk of harm: The scenario involves the use of AI in a way that may create a significant risk of physical, emotional, or psychological harm to an individual (for example, life or death decisions in military, safety-critical manufacturing environments, healthcare contexts, almost any scenario involving children or other vulnerable people, etc.). • Infringement on human rights: The scenario involves the use of AI in a way that may result in a significant restriction of personal freedom, opinion or expression, assembly or association, privacy, etc. (for example, in law enforcement or policing).
falls into one of these three categories, they report it via a central submission tool, and it's routed to their local Responsible AI Champ: an individual who is responsible for driving awareness and understanding of the company's responsible AI policies, standards, and guidance.
of Responsible AI and the Microsoft team involved in the use case, investigates the case to gather the relevant facts, follows a guided process to assess the impact of the proposed system on individuals and society, and reviews past cases to determine if guidance already exists for a similar scenario.
we established six key principles to guide our development and use of AI, which are outlined in a book we published in 2018, The Future Computed: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.3 With these foundational principles in place, we began developing more scenario-specific guidelines.
for correct predictions.

| Resource type | Details |
| --- | --- |
| Guidelines | Consider using guidelines like Microsoft's Securing the Future of Artificial Intelligence and Machine Learning to formulate your own policies.10 These AI security guidelines provide findings and guidance materials that can help you protect your AI services. |
| Technology tools | Research Microsoft SEAL, a set of libraries powered by homomorphic encryption that allow computations to be performed directly on encrypted data.11 Counterfit is an open-source tool that helps organizations assess AI security risks, allowing developers to ensure that their algorithms are robust, reliable, and trustworthy.12 SmartNoise is a differential privacy tool that adds a carefully tuned amount of statistical noise to sensitive data, helping to protect data used in AI systems by preventing reidentification.13 Presidio is an open-source library for data protection and anonymization for text and images.14 Azure confidential computing provides data security using trusted execution environments or encryption, protecting sensitive data across the machine learning life cycle.15 Look into other technologies like multi-party computation (MPC), homomorphic encryption, differential privacy, and secure execution environments to see if they're right for your use case. |
| Third-party tools | Take advantage of third-party tools like the Private Data Sharing Interface (PSI), which allows researchers to explore private datasets securely using differential privacy.16 |
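To make the idea behind differential-privacy tools like SmartNoise concrete, here is a minimal pure-Python sketch of the Laplace mechanism, the basic building block of differential privacy. This is our own illustration, not the SmartNoise API; the function name and parameters are hypothetical.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise calibrated to sensitivity/epsilon.

    sensitivity: how much one individual's record can change the query result.
    epsilon: the privacy budget; smaller epsilon means stronger privacy, more noise.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverting its CDF on a uniform draw.
    u = random.random() - 0.5
    noise = -scale * (1.0 if u >= 0 else -1.0) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person's
# record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=412, sensitivity=1.0, epsilon=0.5)
```

The released count is perturbed by noise on the order of sensitivity/epsilon, which is what makes it hard to infer whether any one individual's record is in the data.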
subsets of the population based on ethnicity, gender, age, or other factors.

| Resource type | Details |
| --- | --- |
| Guidelines | Use the AI Fairness Checklist to provide structure for improving ad-hoc processes and empowering advocates. For more information on how to assess the fairness of AI models, watch the NIPS keynote address from Kate Crawford, Principal Researcher at Microsoft and co-founder of the AI Now Institute at NYU.20 To understand the unique challenges regarding fairness in machine learning, watch a free Microsoft webinar on Machine Learning and Fairness.21 |
| Technology tools | Use Fairlearn to assess your AI systems and mitigate any negative impacts on groups of people; a Python package for the Fairlearn approach is available on GitHub.22 Leverage the methodology for reducing bias in word embeddings. |
| Third-party tools | Learn how to avoid five key "traps" of fair-ML work in the ACM conference paper Fairness and Abstraction in Sociotechnical Systems,23 or read the Counterfactual Fairness paper from Cornell University.24 Check out the Aequitas open-source toolkit. |
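To make the fairness assessment concrete, here is a small pure-Python sketch of demographic parity difference, one of the disparity metrics a toolkit like Fairlearn reports. The function names here are our own for illustration; Fairlearn's actual API differs.

```python
def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions for each sensitive group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# A loan-approval model that approves 75% of group "a" but only 25% of group "b".
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A gap near 0 means the model selects all groups at similar rates; a large gap is a signal to investigate the training data and model for bias like the loan-approval example above.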
the circumstances. AI systems can become unreliable or inaccurate if their development and testing environment is not the same as the real world, or if the system is not maintained properly.

| Resource type | Details |
| --- | --- |
| Technology tools | Explore how to monitor data drift and adapt models to maintain accuracy in Azure Machine Learning. Use InterpretML to train interpretable glassbox machine learning models. Use Error Analysis to identify cohorts with higher error rates. Research the Pandora debugging framework and Microsoft AirSim. |
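The kind of data-drift monitoring described above can be illustrated with the Population Stability Index (PSI), a common drift statistic that compares a feature's training distribution with its live distribution. This pure-Python sketch is our own illustration, not the Azure Machine Learning API.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a training (expected) and live (actual) feature sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # A small floor avoids log(0) and division by zero for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training_sample = [1, 2, 3, 4, 5] * 20
live_sample     = [3, 4, 5, 6, 7] * 20  # the live data has shifted upward
psi = population_stability_index(training_sample, live_sample)
```

A common rule of thumb treats PSI above roughly 0.25 as significant drift, a signal that the model may need retraining on fresher data.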
and potentially harmful. It is critical that people understand how AI decisions are made.

| Resource type | Details |
| --- | --- |
| Management tool | Datasheets for Datasets: consider these questions to help prioritize transparency by creating datasheets for the datasets involved in your AI systems.32 |
| Technology tools | Access several powerful transparency methods through the InterpretML open-source package. Explore a variety of tools that support model transparency in Azure Machine Learning, including the model interpretability feature. |
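One model-agnostic transparency technique of the kind packages like InterpretML offer is permutation importance: shuffle a single input feature and measure how much the model's score drops. A minimal pure-Python sketch (our own illustration, not the InterpretML API):

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, trials=5, seed=0):
    """Average drop in metric after shuffling one feature column.

    model:  a callable mapping a list of feature rows to predictions.
    metric: a callable scoring predictions against true labels (higher = better).
    """
    rng = random.Random(seed)
    baseline = metric(model(X), y)
    drops = []
    for _ in range(trials):
        shuffled = [row[:] for row in X]                 # copy each row
        column = [row[feature_idx] for row in shuffled]
        rng.shuffle(column)                              # break the feature's link to y
        for row, value in zip(shuffled, column):
            row[feature_idx] = value
        drops.append(baseline - metric(model(shuffled), y))
    return sum(drops) / trials
```

Features whose shuffling causes a large score drop are the ones the model actually relies on, which helps answer the question above of what a recommendation is based on.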
control. The people who design and deploy AI systems must be accountable for how their systems operate.

| Resource type | Details |
| --- | --- |
| HAX Toolkit guidelines | HAX Workbook: supports early planning and collaboration between UX, AI, PM, and engineering disciplines and helps drive alignment on product requirements across teams. HAX Interaction Guidelines: synthesize 20 years of research into 18 recommended guidelines for designing AI systems across the user interaction and solution life cycle. HAX Design Patterns: provide common ways of implementing the HAX Guidelines. The patterns are UI-independent and can be implemented in various systems and interfaces. |
| Management tools | Read the paper about datasheets for datasets to learn more about the benefits of this approach. HAX Playbook: an interactive tool for generating interaction scenarios to test when designing user-facing AI systems, before building out a fully functional system. |
| Technology tools | Document and manage the entire model development process in one place with MLOps in Azure Machine Learning. |
imperative that organizations have processes in place to ensure it's used responsibly. • Explain how Microsoft provides accountability for responsible AI through a governance structure. • Describe how Microsoft risk management processes are used to identify, assess, and mitigate risks. • Establish responsible design principles within your own organization.
about our perspective on responsible AI and the impact of AI on our future: • Download PDF of Understanding AI governance at Microsoft. • Download PDF of Governance in action at Microsoft. • Download PDF of Establishing responsible design principles in AI engineering to share with others. • Download PDF of Engaging externally: AI for Good to share with others. • Download PDF of Putting principles into practice: how we approach responsible AI at Microsoft.
states many companies have been focusing their upskilling and retraining efforts on those people who already have higher skills and value to the company. • Developer-focused AI School, which provides online videos and other assets that help build professional AI skills. • The Skillful Initiative, a partnership with the Markle Foundation in the US, helps match people with employers and fill high-demand jobs.
we established six key principles to guide our development and use of AI, which are outlined in The Future Computed: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. • Design bots based on ethical principles by reviewing these 10 guidelines. • Join Partnership on AI (PAI), a group of researchers, non-profits, non-governmental organizations (NGOs), and companies dedicated to ensuring that AI is developed and used in a responsible manner. • When working with Facial Recognition, understand current and future regulation, follow a principled approach, and understand the design scenarios and limitations.