
Diversity Bias in AI

Ankit Sirmorya
June 08, 2024

Transcript

1. Introduction ▸ Far-reaching applications of AI in everyday life ▸ AI systems ▹ Only as good as humans building them ▹ Human biases infiltrate them ▹ Amplify existing prejudices ▹ Represent people who build them, not people they serve
2. Examples of Bias in AI. Figure: Visual semantic role labeling system identifies a person cooking as female. Image taken from a report by University of Washington and University of Virginia.
3. Sources of Bias in AI ▸ Can be introduced purposefully or inadvertently ▸ Can emerge as the AI is used in an application ▸ Preferences or exclusions in training data (a quick representation check is sketched below) ▸ Data sourcing strategies ▸ Algorithm design ▸ Interpretation of outputs
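
The "preferences or exclusions in training data" point can be made concrete with a quick representation check: compare how often each demographic group appears in the training set against a reference population. This is a minimal sketch, assuming a pandas DataFrame with a hypothetical `gender` column; the column name, counts, and reference shares are illustrative, not taken from the slides.

```python
import pandas as pd

# Hypothetical training data; in practice this would be loaded from the real dataset.
train = pd.DataFrame({
    "gender": ["male"] * 700 + ["female"] * 295 + ["non-binary"] * 5,
})

# Share of each group actually present in the training data.
observed = train["gender"].value_counts(normalize=True)

# Illustrative reference shares (e.g. census data or the user population being served).
reference = pd.Series({"male": 0.49, "female": 0.50, "non-binary": 0.01})

# Ratio well below 1 flags a group the model will see too rarely to learn reliable patterns for.
comparison = pd.DataFrame({"observed": observed, "reference": reference}).fillna(0.0)
comparison["ratio"] = comparison["observed"] / comparison["reference"]
print(comparison.sort_values("ratio"))
```
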
4. “Diversity is the engine of invention. It generates creativity that enriches the world.” – Justin Trudeau, Prime Minister of Canada
5. Dimensions of Diversity ▸ Diverse AI workforce ▹ Different races, genders, ethnicities and ages ▹ Decreases the likelihood of racial, gender, ethnic and age discrimination by artificially intelligent systems ▸ Dimensions of diversity in AI ▹ Gender diversity in AI ▹ Racial & ethnic diversity in AI
6. Gender Diversity in AI Researchers ▸ Female graduates of AI PhD programs in North America account for less than 18% of all PhD graduates on average – CRA ▸ Female faculty make up just 16% of all tenure-track CS faculty at several universities around the world.
7. Reasons for AI Bias ▸ Insufficient Training Data ▸ Humans Are Biased – And So Is The Data That AI Is Trained On ▸ De-Biasing Data Is Exceptionally Hard To Do ▸ De-Biasing AI Models Is Very Difficult Too ▸ Diversity Amongst AI Professionals Is Not As High As It Should Be ▸ Fairness Comes At A Cost (That Companies May Not Be Willing To Pay) ▸ External Audits Could Help – If Privacy Were Not An Issue ▸ Fairness Is Hard To Define (competing fairness criteria can disagree, as sketched below) ▸ What Was Fair Yesterday Can Be Biased Tomorrow
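
A small illustration of why "Fairness Is Hard To Define": the same set of predictions can satisfy one fairness criterion (demographic parity, i.e. equal selection rates) while violating another (equal opportunity, i.e. equal true-positive rates). The numbers below are invented purely to show the conflict; they are not from the slides.

```python
import numpy as np

# Hand-crafted toy data: two groups of 10 applicants each, with true labels and model decisions.
y_true_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred_a = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])

y_true_b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 0, 1, 1, 1, 1, 0, 0, 0, 0])

def selection_rate(y_pred):
    """Fraction of the group that receives a positive decision."""
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that receive a positive decision."""
    return y_pred[y_true == 1].mean()

# Demographic parity holds: selection rates are identical (0.5 vs 0.5) ...
print(selection_rate(y_pred_a), selection_rate(y_pred_b))

# ... yet equal opportunity is violated: TPR 0.8 for group A vs 0.5 for group B.
print(true_positive_rate(y_true_a, y_pred_a), true_positive_rate(y_true_b, y_pred_b))
```

Choosing which criterion matters for a given application is a policy decision, not a purely technical one, which is exactly why fairness is hard to pin down.
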
8. Fixing AI bias: Step 1. Fathom the algorithm and data to assess where the risk of unfairness is high. ▸ Examine the training dataset ▸ Conduct subpopulation analysis (see the sketch below) ▸ Monitor the model over time against biases
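
Subpopulation analysis means breaking model performance down by demographic group instead of looking only at aggregate metrics. This is a minimal sketch, assuming a pandas DataFrame with hypothetical `group`, `label`, and `prediction` columns; the column names are placeholders for whatever the real evaluation data uses.

```python
import pandas as pd

# Hypothetical held-out evaluation results; in practice, load the model's per-example outputs.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   0,   1,   0,   1,   0],
    "prediction": [1,   0,   1,   0,   0,   0,   1,   1],
})

# Aggregate accuracy can hide large gaps that only appear per subgroup.
for name, df in results.groupby("group"):
    accuracy = (df["label"] == df["prediction"]).mean()
    selection_rate = df["prediction"].mean()
    print(f"group={name}: n={len(df)} accuracy={accuracy:.2f} selection_rate={selection_rate:.2f}")
```

In this toy example the overall accuracy looks reasonable, while group B's accuracy is far lower than group A's, which is the kind of gap this step is meant to surface.
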
9. Fixing AI bias: Step 2. Establish a debiasing strategy within the overall AI strategy. ▸ Technical strategy: use tools to help identify bias and its traits (a minimal metric check is sketched below) ▸ Operational strategy: improve data collection processes ▸ Organizational strategy: establish transparency in the workplace
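
For the technical strategy, libraries such as Fairlearn and AIF360 package bias metrics, but the core check can be written in a few lines. Below is a minimal sketch of the "four-fifths rule" (disparate impact ratio) over hypothetical decision data; the dataset, column names, and the 0.8 threshold (a conventional rule of thumb) are illustrative assumptions, not part of the slides.

```python
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-decision rate to the highest group's."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical loan decisions (1 = approved) for two demographic groups.
decisions = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "approved": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.36 / 0.60 = 0.60

if ratio < 0.8:  # conventional four-fifths rule of thumb
    print("Potential adverse impact: investigate the model and the data.")
```

Running such a check as part of the regular evaluation pipeline is one way to turn the "technical strategy" bullet into a concrete, repeatable control.
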
10. Fixing AI bias: Step 4. Decide on use cases where automated decision making should be preferred and when humans should be involved.
11. Fixing AI bias: Step 5. Follow a multidisciplinary approach. ▸ Build a team with ethicists, social scientists and domain experts
12. Fixing AI bias: Step 6. Diversify your organization. ▸ The people who first notice bias are usually users from the affected minority community.
13. Success Stories • Review the AI Training Data • "AI has made our business processes smarter and more efficient due to its data-driven results," said Dror Zaifman, director of digital marketing for iCASH. "We make sure that AI bias doesn't exist by understanding our training data. The academic and the commercial datasets are the major cause of bias in AI algorithms. We have a team of dedicated data scientists who cross-train employees in different departments to understand how AI bias works and the best way to combat the problem." • Get Direct Input From Your Customers • "We do a good bit to eliminate bias in our AI algorithms," said Baruch Labunski, founder of Rank Secure. "You have to look at the limitations of your data and then look at the customer's experiences. We do that by actually talking to customers from time to time to collect a sampling of their personal experiences with AI. That means we personally contact them by email or phone and ask about their experience. We go through the AI experience with our vendor to understand what the customer is experiencing. Once we experience it for ourselves, we can find issues that need correcting. That is how you find bias."
14. Success Stories • Use Constant Monitoring to Prevent AI Bias • Prevision.io uses a five-part framework for ethical decision-making in data and machine learning projects, said Nicolas Gaude, co-founder and chief technology officer. "We organize it to align with the five distinct phases of a data project: initiation, planning, execution, monitoring, and closing. That way we are constantly monitoring that there are not biases present in our AI." • Check and Recheck AI's Decisioning • In the past, with manual lead scoring models it was somewhat easy to inspect the models for scoring elements that could be considered discriminatory; this can be harder to spot in AI models, which require more specialized skills to understand, said Christian Wettre, senior vice president and general manager, Sugar Platform, for SugarCRM. "A best practice is to enable the AI to be prescriptive but always transparent, to enable business users to review the application of the AI, so that it can always be corroborated by the business."
15. Will AI ever be completely unbiased? ▸ Technically, yes. ▸ An AI system can only be as good as the quality of its input data. ▸ If the training dataset can be cleaned of conscious and unconscious assumptions about race, gender, or other ideological concepts, it becomes possible to build an AI system that makes unbiased, data-driven decisions.
16. Will AI ever be completely unbiased? ▸ However, in the real world, not any time soon. ▸ There are numerous human biases, and the ongoing identification of new ones constantly increases the total. ▸ Therefore, just as a completely unbiased human mind may not be possible, a completely unbiased AI system may not be either. ▸ After all, humans create the biased data, while humans and human-made algorithms check that data to identify and remove biases.