
The Limitations and Dangers of Facial Recognition

Slides for my PyCon 2020 talk on Facial Recognition.

Manojit Nandi

April 19, 2020

Transcript

  1. DISCLAIMER: Opinions and views reflected in this talk are my own and do not reflect the opinions or views of my employer. All research for this talk was performed on personal time, not on behalf of my employer.
  2. Face Recognition and Algorithmic Bias: Fighting The Coded Gaze. “We have the Pale Male datasets being used as something that is universal when that isn’t actually the case when it comes to representing the full sepia of humanity” - Joy Buolamwini
  3. Gender Shades Project • In the Gender Shades (2018) project, Buolamwini audited multiple face analytics systems. • Taking an intersectional approach to auditing these systems, she found that these classifiers misidentify darker-skinned women at a higher rate (20%-35%) than other groups; a sketch of this kind of disaggregated evaluation follows below. Source: Buolamwini & Gebru (2018)
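    A minimal sketch of the disaggregated-evaluation idea behind Gender Shades, in Python. Instead of reporting a single aggregate accuracy, error rates are computed per intersectional subgroup (here, skin type × gender). The record schema and function name are illustrative assumptions, not the paper's actual code.

      from collections import defaultdict

      def disaggregated_error_rates(records):
          # records: dicts with hypothetical keys 'skin_type', 'gender',
          # 'true_label', and 'predicted_label'
          totals = defaultdict(int)
          errors = defaultdict(int)
          for r in records:
              group = (r["skin_type"], r["gender"])  # intersectional subgroup
              totals[group] += 1
              if r["predicted_label"] != r["true_label"]:
                  errors[group] += 1
          return {g: errors[g] / totals[g] for g in totals}

    An audit in this style is what surfaces a 20%-35% error rate for darker-skinned women hiding behind an aggregate accuracy that looks acceptable.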
  4. Gender Shades - Impact • Following Gender Shades, IBM released a Diversity in Faces dataset and associated white paper. • Multiple federal bills (Algorithmic Accountability Act, No Biometric Barriers to Housing Act) and legislative bans on facial recognition (Oakland) directly reference Gender Shades. • The Atlantic Plaza Towers complaint opposing the installation of face recognition in Brooklyn apartment buildings lists Gender Shades as a reason for concern. Source: https://www.nytimes.com/2019/01/24/technology/amazon-facial-technology-study.html
  5. Auditing Facial Recognition Systems • With the rising popularity of algorithmic audits of technical systems, Raji et al. (2020) explore the considerations that arise when performing facial recognition audits. • Highlighting tensions between privacy, fairness, and intersectionality, these researchers aim to provide guidance for other researchers who wish to audit face recognition systems.
  6. “Common” Emotions & Micro-Expressions • American psychologist Paul Ekman stated there are six “common” emotions that all persons display, irrespective of culture or upbringing. • Ekman also identified micro-expressions: short, involuntary facial expressions. Source: Ekman (1978)
  7. Facial Action Coding System • The Facial Action Coding System (FACS) provides a schema for defining emotions based on facial expressions. • Facial action units characterize individual aspects of a facial expression (e.g., eyes glare), and an emotion can be described as the sum of its action units (Anger = Lips Narrow + Eyes Glare + Eyebrows Furrowed); a toy encoding of this schema follows below. Source: Lie To Me
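    A toy Python encoding of this schema, assuming the standard FACS action-unit numbering (e.g., AU4 = brow lowerer, AU12 = lip corner puller); the emotion "recipes" below are simplified illustrations, not a validated mapping.

      # Prototype expressions as sets of facial action units (AUs).
      ANGER = frozenset({4, 5, 7, 23})  # brow lowerer, upper lid raiser,
                                        # lid tightener, lip tightener
      HAPPINESS = frozenset({6, 12})    # cheek raiser, lip corner puller

      def matches_emotion(detected_aus, emotion_aus):
          # An expression "matches" when every AU in the recipe is active.
          return emotion_aus <= set(detected_aus)

      print(matches_emotion([4, 5, 7, 23, 25], ANGER))  # True

    Note that the next slide's critique applies directly to this kind of lookup: it presumes a unique, context-free mapping from facial configurations to emotions.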
  8. Why Emotion Recognition Won’t Scale • In Barrett et al. (2019), a group of psychologists concluded that facial expressions do not properly represent emotional states, for three reasons: 1. Limited Reliability: people express the same emotion in different ways. 2. Lack of Specificity: there is no unique mapping between a facial configuration and a specific emotion. 3. Limited Generalizability: context is excluded from this methodology.
  9. The Misgendering Machine • In Keyes (2018), the author conducts a literature review of automatic gender recognition (AGR) in the field of human-computer interaction. • Keyes found that many of these systems define gender as a binary variable determined by physical appearance. • Keyes argues these systems inherently exclude trans and non-binary individuals. Source: https://www.fastcompany.com/90216258/uber-face-recognition-tool-has-locked-out-some-transgender-drivers
  10. Computer Vision and Gender • In Scheuerman, Paul, & Brubaker (2019), the authors compile a dataset of 2,450 Instagram images reflecting seven gender identities (Agender, Genderqueer, Man, Non-Binary, Trans Man, Trans Woman, Woman). • In their analysis, they found that facial analysis services consistently misgender images tagged #transman and #transwoman at a higher rate (10%-30% higher error rate) than cisgender identities; a sketch of this comparison follows below. Source: Scheuerman, Paul, & Brubaker (2019)
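    A rough Python sketch of the per-identity comparison this kind of study reports; the tuple schema is an assumption for illustration, not the authors' pipeline.

      from collections import Counter

      def misgendering_rates(labeled_images):
          # labeled_images: (hashtag, self_identified_gender, predicted_gender)
          seen, wrong = Counter(), Counter()
          for tag, actual, predicted in labeled_images:
              seen[tag] += 1
              if predicted != actual:
                  wrong[tag] += 1
          return {tag: wrong[tag] / seen[tag] for tag in seen}

    Comparing the resulting rates across hashtags is what exposes the 10%-30% error-rate gap between trans and cisgender groups.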
  11. Face Recognition Technology and Governance “These tools are DANGEROUS when they fail and HARMFUL when they work.” - Kate Crawford
  12. Misuse of Face Recognition • NYPD officers upload celebrity photos to the Facial Identification Section to generate “leads” for identifying potential suspects. • In China, face recognition is used to track and detain members of the Uyghur population and other minority groups. • Since there is little to no regulation on how facial recognition technology (FRT) and other biometric data are used, institutions can easily abuse this technology with no accountability. Source: Garbage In, Garbage Out; https://www.flawedfacedata.com/
  13. Technology and Power • Algorithms exist in the context of human systems. It’s important to build technology that does not reinforce oppressive power structures! • Technologists who build these types of systems are rarely the ones who experience algorithmic harms. • One important question we must ask ourselves: “Should we be building these systems?” Source: https://www.nytimes.com/2019/10/02/magazine/ice-surveillance-deportation
  14. When Not to Build, Design, or Deploy • One emerging topic in Responsible AI design is empowering developers with a right to refusal. • Drawing inspiration from governance and regulation, management strategy, and DevOps, researchers hope to create frameworks that enable tech workers to protest internal projects. Source Credit: Varoon Mathur and Genevieve Fried, AI Now Institute, AI in 2019: A Year in Review
  15. Works Referenced
    1. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Buolamwini, Joy; Gebru, Timnit. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
    2. Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. Raji, Inioluwa D.; Gebru, Timnit; Mitchell, Margaret; Buolamwini, Joy; Lee, Joonseok; Denton, Emily. https://www.aies-conference.com/2020/wp-content/papers/134.pdf
    3. Gender Recognition or Gender Reductionism? The Social Implications of Automatic Gender Recognition Systems. Hamidi, Foad; Scheuerman, Morgan K.; Branham, Stacy M. https://docs.wixstatic.com/ugd/eb2cd9_ff211774807946099e7e1dcd0023497d.pdf
    4. Emotional Expressions Reconsidered: Challenges to Inferring Emotion from Human Facial Movement. Barrett, Lisa F.; Adolphs, Ralph; Marsella, Stacy; Martinez, Aleix M.; Pollak, Seth D. https://journals.sagepub.com/doi/pdf/10.1177/1529100619832930
    5. How Computers See Gender: An Evaluation of Gender in Commercial Face Analysis and Image Labeling Services. Scheuerman, Morgan K.; Paul, Jacob M.; Brubaker, Jed R. https://docs.wixstatic.com/ugd/eb2cd9_963fbde2284f4a72b33ea2ad295fa6d3.pdf
    6. The Misgendering Machine: Trans/HCI Implications of Automatic Gender Recognition. Keyes, Os. https://ironholds.org/resources/papers/agr_paper.pdf
    7. Concrete Problems in AI Safety. Amodei, Dario; Olah, Chris; Steinhardt, Jacob; Christiano, Paul; Schulman, John; Mané, Dan. https://arxiv.org/pdf/1606.06565.pdf
    8. Garbage In, Garbage Out: Face Recognition on Flawed Data. Garvie, Clare. https://www.flawedfacedata.com/