
AI Ethics

AI has become an integral part of human life. There is no domain that does not use AI tools. It is embedded in business, the health sector, education, politics, etc. If used efficiently, it can lead to numerous gains for the individual and society.
Ethics is a set of moral principles that help us discern between right and wrong. AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence. Only by adopting an ethical approach to artificial intelligence can we gain from it. Along with its numerous benefits, AI can also be harmful if not used with care.

Amanat Amrit Kaur

April 15, 2022

Transcript

  1. Table of Contents Introduction to AI Ethics Human-Centered Design for AI

    Identifying Bias in AI Types of Bias AI Fairness Model Cards References
  2. INTRODUCTION TO AI ETHICS AI has become an integral part

    of human life. There is no domain that does not use AI tools. It is embedded in business, the health sector, education, politics, etc. If used efficiently, it can lead to numerous gains for the individual and society. Ethics is a set of moral principles that help us discern between right and wrong. AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence. Only by adopting an ethical approach to artificial intelligence can we gain from it. Along with its numerous benefits, AI can also be harmful if not used with care.
  3. Human-Centered Design for AI Human-centered design (HCD) is an approach

    to designing systems that serve people’s needs. HCD involves people in every step of the design process. It is also a subjective approach as it depends on people and what different industries demand. Broadly, there can be six steps involved in applying HCD in AI design.
  4. Identifying Bias in AI Machine learning (ML) has the potential

    to improve lives, but it can also be a source of harm. ML applications have discriminated against individuals on the basis of race, sex, religion, socioeconomic status, and other categories. This is what we call bias. Bias can arise from flawed data, but it is not constrained to data: it can also arise from the way ML models are defined. Biases can be classified into six types, which are discussed in the following slides.
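One data-driven source of harm named above, representation bias, can be checked for directly. The sketch below (all data and the tolerance threshold are hypothetical) compares each group's share of a training set against its share of the population the model will serve; large gaps suggest the dataset under-represents some groups.

```python
# Minimal sketch of a representation-bias check: compare group shares
# in the training data against shares in the served population.
from collections import Counter

def group_shares(groups):
    """Return each group's share of the total as a dict."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

# Hypothetical: group label for each training example, and the shares
# of each group in the population the model will serve.
training_groups = ["A"] * 80 + ["B"] * 20
population_shares = {"A": 0.5, "B": 0.5}

shares = group_shares(training_groups)
for group, expected in population_shares.items():
    observed = shares.get(group, 0.0)
    if abs(observed - expected) > 0.1:  # hypothetical tolerance
        print(f"Group {group}: {observed:.0%} of training data "
              f"vs {expected:.0%} of population")
```

A real audit would use the actual service population and a tolerance chosen for the application, but the comparison itself is this simple.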
  5. Types of Bias 01 Historical bias occurs when the state

    of the world in which the data was generated is flawed. 02 Measurement bias occurs when the accuracy of the data varies across groups. This can happen when working with proxy variables (variables that take the place of a variable that cannot be directly measured), if the quality of the proxy varies in different groups. 03 Representation bias occurs when building datasets for training a model, if those datasets poorly represent the people that the model will serve.
  6. Types of Bias 04 Aggregation bias occurs when groups are

    inappropriately combined, resulting in a model that does not perform well for any group, or only performs well for the majority group. 05 Deployment bias occurs when the problem the model is intended to solve is different from the way it is actually used. If the end users don’t use the model in the way it is intended, there is no guarantee that the model will perform well. 06 Evaluation bias occurs when evaluating a model, if the benchmark data (used to compare the model to other models that perform similar tasks) does not represent the population that the model will serve.
  7. AI Fairness There are four criteria for an AI

    model to be fair. 1. Demographic parity / statistical parity: the model is fair if the composition of people who are selected by the model matches the group membership percentages of the applicants. 2. Equal opportunity: the proportion of people who should be selected by the model ("positives") that are correctly selected by the model is the same for each group. We refer to this proportion as the true positive rate (TPR) or sensitivity of the model. 3. Equal accuracy: the percentage of correct classifications (people who should be denied and are denied, and people who should be approved and are approved) is the same for each group. If the model is 98% accurate for individuals in one group, it should be 98% accurate for other groups. 4. Group unaware: remove all group membership information from the dataset. For instance, we can remove gender data to try to make the model fair to different gender groups.
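The first three criteria above are per-group statistics that can be computed from a model's predictions. Below is a minimal sketch (with hypothetical labels and predictions) that reports the selection rate (for demographic parity), the true positive rate (for equal opportunity), and the accuracy (for equal accuracy) for each group. Group unawareness is a training-time choice (dropping the group column), so it is not computed here.

```python
# Minimal sketch of per-group fairness statistics.
# y_true/y_pred use 1 = approve, 0 = deny; `groups` holds group labels.
def fairness_report(y_true, y_pred, groups):
    report = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        # Demographic parity compares selection rates across groups.
        selection_rate = sum(p) / len(p)
        # Equal opportunity compares true positive rates across groups.
        positives = [i for i in range(len(t)) if t[i] == 1]
        tpr = (sum(p[i] for i in positives) / len(positives)) if positives else None
        # Equal accuracy compares overall accuracy across groups.
        accuracy = sum(ti == pi for ti, pi in zip(t, p)) / len(t)
        report[g] = {"selection_rate": selection_rate, "tpr": tpr,
                     "accuracy": accuracy}
    return report

# Hypothetical data for two groups of four applicants each.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for g, metrics in sorted(fairness_report(y_true, y_pred, groups).items()):
    print(g, metrics)
```

A model can satisfy one criterion while violating another (here both groups have the same selection rate and accuracy, but different TPRs), which is why the choice of fairness criterion matters.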
  8. Model Cards A model card is a short document that

    provides key information about a machine learning model. Model cards increase transparency by communicating information about trained models to broad audiences. Though AI systems are playing increasingly important roles in every industry, few people understand how these systems work. AI researchers are exploring many ways to communicate key information about models to inform people who use AI systems, people who are affected by AI systems, and others. A model card should strike a balance between being easy to understand and communicating important technical information. When writing a model card, you should consider your audience: the groups of people who are most likely to read your model card. These groups will vary according to the AI system’s purpose. A model card should ideally consist of model details, intended use, factors, metrics, evaluation data, training data, quantitative analysis, and ethical considerations.
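The eight sections listed above can be captured in a simple template. The sketch below represents a model card as a plain dictionary with those sections; every field value is a hypothetical placeholder, and the rendering helper is just one possible way to turn it into a readable document.

```python
# Minimal sketch of a model card with the eight sections named above.
# All field values are hypothetical placeholders.
model_card = {
    "model_details": "Hypothetical loan-approval classifier, v1.0",
    "intended_use": "Ranking applications for human review, not automated decisions",
    "factors": "Performance may vary across demographic groups and regions",
    "metrics": "Accuracy and true positive rate, reported per group",
    "evaluation_data": "Held-out sample intended to reflect the served population",
    "training_data": "Historical application records (hypothetical)",
    "quantitative_analysis": "Per-group metrics compared against overall metrics",
    "ethical_considerations": "Risk of historical bias in the approval labels",
}

def render_model_card(card):
    """Render the card as a short human-readable document."""
    return "\n".join(f"{section.replace('_', ' ').title()}: {text}"
                     for section, text in card.items())

print(render_model_card(model_card))
```

Keeping the card this short is deliberate: the audience for a model card often includes non-experts affected by the system, not just its developers.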
  9. Thank you References Cook, A. & Shankar, V. Intro to

    AI Ethics. Kaggle Learn Course. https://www.kaggle.com/learn/intro-to-ai-ethics