
Introduction to AI Ethics

rohitburman
April 15, 2022

Transcript

  1. Introduction
     • AI increasingly has an impact on everything from social media to healthcare.
     • AI is used to make credit card decisions, to conduct video surveillance in airports, and to inform military operations.
     • These technologies have the potential to harm or to help the people they serve. By applying an ethical lens, we can work toward identifying the harms they can cause.
  2. Human-Centered Design for AI
     • Before choosing data and training models, think about the human needs an AI system should address, and whether it should be built at all.
     • Human-centered design (HCD) is an approach to designing systems that serve people's needs.
     • HCD involves people in every step of the design process, ideally from the moment you begin to entertain the possibility of building an AI system.
     • What HCD entails for you will depend on your industry, resources, organisation, and the people you want to help.
  3. The following six steps are intended to help you get started with applying HCD to the design of AI systems:
     I. Understand people's needs to define the problem.
     II. Ask whether AI adds value to any potential solution.
     III. Consider the potential harms that the AI system could cause.
     IV. Prototype, starting with non-AI solutions.
     V. Provide ways for people to challenge the system.
     VI. Build in safety measures.
  4. Identifying Bias in AI
     • Machine learning (ML) has the potential to improve lives, but it can also be a source of harm.
     • ML applications have discriminated against individuals on the basis of race, sex, religion, socioeconomic status, and other categories.
     • Many ML practitioners are familiar with "biased data" and the concept of "garbage in, garbage out": for example, flawed data can result in representation bias if a group is underrepresented in the training data.
     • Bias in data is complex, and it is not only biased data that can lead to unfair ML applications: bias can also result from the way the ML model is defined, and from the way the model is compared to other models.
  5. There are six types of bias that are especially common in ML projects:
     I. Historical bias: occurs when the state of the world in which the data was generated is flawed.
     II. Representation bias: occurs when the datasets created for training a model do not accurately represent the individuals the model will serve (see the sketch after this list).
     III. Measurement bias: occurs when the accuracy of the data differs across groups. When working with proxy variables (variables that stand in for a variable that can't be directly measured), this can arise if the quality of the proxy varies between groups.
  6. IV. Aggregation bias: occurs when groups are inappropriately combined, resulting in a model that does not perform well for any group, or performs well only for the majority group.
     V. Evaluation bias: occurs when the benchmark data used to evaluate a model (to compare it to other models that perform similar tasks) does not represent the population the model will serve.
     VI. Deployment bias: occurs when the problem the model is intended to solve differs from the way the model is actually used. If end users don't use the model in the way it is intended, there is no guarantee that it will perform well.
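To make representation bias concrete, here is a minimal Python sketch (not from the slides; the group names and counts are hypothetical) that compares each group's share of a training set against its share of a reference population:

# A minimal representation-bias check: compare each group's share of the
# training data against its share of a reference population.
# Group labels and counts below are hypothetical.

def representation_gaps(train_counts, population_counts):
    """Return each group's training-data share minus its population share
    (a negative gap means the group is underrepresented)."""
    train_total = sum(train_counts.values())
    pop_total = sum(population_counts.values())
    return {
        group: train_counts.get(group, 0) / train_total
               - population_counts[group] / pop_total
        for group in population_counts
    }

# Hypothetical example: group_b is heavily underrepresented in training.
train = {"group_a": 8000, "group_b": 500}
population = {"group_a": 70000, "group_b": 30000}
print(representation_gaps(train, population))
# {'group_a': 0.24..., 'group_b': -0.24...}

A check like this only catches gaps relative to a chosen reference population; picking that reference is itself a design decision that HCD (slide 2) asks you to make with the affected people in mind.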
  7. AI Fairness
     • There are many different ways of defining what we might look for in a fair machine learning model. Four fairness criteria are useful as a starting point:
     I. Demographic parity / statistical parity: the model is fair if the composition of the people it selects matches the group membership percentages of the applicants.
     II. Equal opportunity: the proportion of people who should be selected by the model ("positives") who are correctly selected is the same for each group.
  8. III. Equal accuracy: the percentage of correct classifications (people who should be denied and are denied, and people who should be approved and are approved) is the same for each group.
     IV. Group unaware / "fairness through unawareness": removes all group membership information from the dataset. The sketch below compares the first three criteria on a toy example.
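As a rough illustration of how the first three criteria differ, here is a minimal Python sketch (not from the slides; the function name and data are hypothetical) that computes each group's selection rate (demographic parity), true positive rate (equal opportunity), and accuracy (equal accuracy) from binary labels and decisions. Group unaware fairness is a data-preparation choice rather than a metric, so it is omitted:

# Per-group fairness report from binary labels and model decisions.
# 1 = approved / should be approved; all data below is hypothetical.

def fairness_report(y_true, y_pred, groups):
    """Selection rate, true positive rate, and accuracy for each group."""
    report = {}
    for g in set(groups):
        idx = [i for i, gr in enumerate(groups) if gr == g]
        positives = [i for i in idx if y_true[i] == 1]
        report[g] = {
            # Demographic parity compares these selection rates.
            "selection_rate": sum(y_pred[i] for i in idx) / len(idx),
            # Equal opportunity compares these true positive rates.
            "true_positive_rate": (
                sum(y_pred[i] for i in positives) / len(positives)
                if positives else float("nan")
            ),
            # Equal accuracy compares these accuracies.
            "accuracy": sum(y_true[i] == y_pred[i] for i in idx) / len(idx),
        }
    return report

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fairness_report(y_true, y_pred, groups))

Note that a model can satisfy one criterion while violating another on the same data (here both groups have equal selection rates and accuracy, but different true positive rates), which is why choosing a criterion is itself an ethical decision.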
  9. Model Card
     • A model card is a short document that provides key information about a machine learning model.
     • Model cards increase transparency by communicating information about trained models to broad audiences.
     • A model card should strike a balance between being easy to understand and communicating important technical information.
     • A model card should have the following nine sections:
     I. Model details: background information about the model.
     II. Intended use: the model's scope and its intended users.
     III. Factors: the factors that affect the model's performance.
  10. IV. Metrics: the metrics used to measure the model's performance.
      V. Evaluation data: the datasets used to evaluate model performance.
      VI. Training data: the dataset on which the model was trained.
      VII. Quantitative analyses: how the model performs on the chosen metrics.
      VIII. Ethical considerations: ethical challenges and risks associated with the model.
      IX. Caveats and recommendations: anything important that the rest of the card does not cover. (A minimal template sketch follows below.)
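As a minimal sketch of what such a card might look like in practice, here it is written as a plain Python dictionary so it can be version-controlled alongside the model; the field values are placeholders, not text prescribed by the slides:

# A minimal model card skeleton following the nine sections above.
# All values are hypothetical placeholders to be filled in per model.
model_card = {
    "model_details": "Background information: authors, version, date, license.",
    "intended_use": "The model's scope and its intended users.",
    "factors": "Factors that affect performance (e.g., demographic groups).",
    "metrics": "Metrics used to measure performance.",
    "evaluation_data": "Datasets used to evaluate model performance.",
    "training_data": "Dataset on which the model was trained.",
    "quantitative_analyses": "Performance on the chosen metrics, by factor.",
    "ethical_considerations": "Ethical challenges and risks of the model.",
    "caveats_and_recommendations": "Anything the sections above do not cover.",
}

# Render the card as plain text for non-technical readers.
for section, content in model_card.items():
    print(f"{section.replace('_', ' ').title()}: {content}")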