Smelling ethics smells: when it looks ethical but something smells off...

Originally prepared for the one-credit topics course "[Big] Data Ethics," taught at the University of Wisconsin-Madison Information School, summer 2019. It has since been edited for clarity and to add new ethics smells and examples.

Dorothea Salo

June 12, 2019

Transcript

  1. Smelling “ethics smells”
    when it looks ethical, but something
    smells off…
    Dorothea Salo

  2. Programmers and “code
    smells”
    • Sometimes a given chunk of programming code works, but
    appears highly likely to cause a problem down the road.
    • Missing edge cases
    • Lots of cut-and-pasted code (unmaintainable after a certain point)
    • Cuts corners on security
    • Etc etc etc—ask a dozen programmers, get fifty problems cited
    • These hints at poor/unstable/unmaintainable/hackable
    code are called “code smells.” (A small example follows this slide.)
    • I ruthlessly appropriated this idea into “ethics smells.”
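
    A minimal, hypothetical sketch of a “code smell” in practice (the function names and the cleanup below are my own illustration, not from the deck): the code works, but the cut-and-pasted duplication and the missing edge case are exactly the “works now, hurts later” signals described above.

    ```python
    # Hypothetical illustration only. Both functions "work" on typical input,
    # yet they smell: the logic is duplicated, and the empty-list case is missing.

    def average_scores(scores):
        # Smell: no guard for an empty list; this raises ZeroDivisionError on [].
        return sum(scores) / len(scores)

    def average_weights(weights):
        # Smell: cut-and-pasted from average_scores; the two copies will drift apart.
        return sum(weights) / len(weights)

    # A less smelly version: one shared function, edge case handled explicitly.
    def average(values):
        if not values:
            return 0.0
        return sum(values) / len(values)
    ```

    The fix itself is not the point; the point is that the smell is visible before anything actually breaks, which is the analogy the “ethics smells” framing borrows.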

  3. What’s an ethics smell?
    • A trope, rhetorical flourish, or line of reasoning that may look
    reasonable at first glance, but hints at ethical flaws.
    • Not everyone guilty of an ethics smell is a Bad Person doing
    Bad Things. It may not be that simple.
    • Sometimes ethics smells become part of prevailing discourse! Such
    that omitting or questioning them feels weird!
    • Sometimes people just haven’t done the work yet, or are seduced by
    whatever the latest shiny thing is.
    • Sometimes the ethical issues are legitimately hard to understand!
    • But yes, sometimes people do use these tropes etc. to
    forestall or deflect legitimate ethical critique. That’s not okay.
    It is, in fact, unethical.

  4. Empty platitudes
    • E.g. “Your privacy and security are very important to us.”
    • Who even believes this now? We’re all like “prove it,” for good reason.
    • How can you tell a pleasant-seeming ethics platitude is empty?
    • Is any information given about actual actions in support of the ethics? If
    not, be suspicious.
    • If what they’re saying about the ethics issue confuses you… that may be
    intentional. Be suspicious… especially where they’re using way more
    words, or way more jargon, than seems warranted.
    • When in doubt, check into the person or organization’s track record. Is
    there reason to believe they will act (un)ethically? Are they self-serving?
    • Equitywashing is another common example.
    • “Of course my thing will be fair to and accessible by everybody!”

  5. One dumpster fire of bewildering bafflegab later…
    Higher-ed’s
    Surveillance
    Central!

  6. “We’re good people! It’ll be fine!
    It’s fine because it’s us!”
    • Good is not a thing you can be. It’s only a thing you can DO.
    • When you stop doing good—especially when you start
    doing bad—you fall right off the high road.
    • This one is psychologically understandable; we all
    desperately want to believe we’re good people.
    • Ironically, insisting that we are good people no matter what
    we DO raises the chances we stop governing ourselves
    aright.
    • LIBRARIANS, WE ARE PRONE TO THIS ONE.
    • “To serve our patrons!” can become the door to evil. I’ve seen it.

  7. Bluntly:
    no author on that paper should ever
    be allowed to work with data
    from living human beings again,
    especially not people in crisis!
    How. DARE. They.

  8. Exact words
    • (see TVTropes.org on this, also “Insistent Terminology”)
    • Often used to justify ethically dubious behavior because
    fifty pages of legalese supposedly spelled it out, and/or the
    victims “consented” to it.
    • “if you read the Exact Words of our novel-length privacy policy…”
    • Sometimes a distraction tactic; see e.g. Facebook: “the
    Cambridge Analytica thing wasn’t actually a breach!”
    • Technically true—Cambridge Analytica was allowed access to the
    data they abused by Facebook itself; nobody’s security was broken—
    but so what?
    • How is “Facebook allowed CA to use Facebook data to try to throw
    elections!” supposed to be an ethical improvement here?

  9. “But my thing is neutral!”
    • Librarians: we are SO GUILTY of this one. So very, very guilty.
    • It’s also rife in computer science. I don’t know why (and I doubt it’s a
    legacy from LIS), but I know from experience and reading that it’s so.
    • Often the subtext here is “how dare you judge me?” or “I
    shouldn’t have to think about the ethics of what I do!”
    • Often an attempt to evade ethical questions around power
    and oppression.
    • If it’s “neutral,” it can’t be racist or sexist or ableist or ageist or
    homophobic or transphobic OR GENOCIDAL (Facebook!!!), can it?!
    • Decades of STS, LIS, and ethics work give the lie to “neutrality.” I
    don’t have time to recount it all (ask me if you want readings).

  10. The “neutral platform”
    Story from The Atlantic.

  11. Own the good, ignore the bad
    • aka “because it helps [some] people, it must be in the clear ethically!”
    • This can be nakedly self-serving, of course.
    • And it fails consequentialist ethics forever!
    • Can also be power and privilege: “this thing helps some people
    (often ‘people just like me’), so all the people it hurts don’t matter!”
    • Sometimes it’s naïve optimism at work.
    • You may be able to tell the difference based on how long the responsible
    entity has been up to whatever the thing is. Are they still under the spell of
    the shiny? Or should they really know better by now?
    • I say this because you’ll want to tailor your counters.
    • Take the self-serving and privilege-leveraging down. No mercy.
    • Ask the optimists questions about how they’re handling their thing’s ethical
    issues. They don’t have to be gentle questions, necessarily!

  12. This guy should know better.
    Remedial history/sociology-of-medicine class time!
    Story: Misha Angrist, “Do You Belong To You?” 2 January 2018.
    http://genomemag.com/do-you-belong-to-you/
    Fair use asserted.

  13. But what do we do with this?

  14. Variants on “ignore the bad”
    • FOMOngering
    • “Everybody’s doing this!” (I have yet to see this be true when said aloud.
    It is invariably bullying or rationalization, not fact.)
    • “If you’re not doing this, you’re WrongBadBehind and you’ll die!
    Unethical-Thing is your only hope!” This is bullying.
    • Risks, what risks? Ethics, what ethics?
    • Definitely look for a self-serving (or at least self-justifying) motive here.
    • “We have an obligation to do this! How can we not?”
    • Often accompanied by a heartstring-tugging case study
    • Conspicuous by its absence: any discussion of risks or harms, except
    possibly for an Empty Platitude or two
    • (This is so common in learning analytics. So very, very common.)

  15. Variant: Scaremongering
    • “If you don’t do this dubiously-ethical thing, All The Bad Stuff
    will happen and it’ll be YOUR FAULT!”
    • So nakedly self-serving that you’d think people would see
    through it instantly… but if the Bad Stuff is bad enough and
    people are desperate enough to avoid it, we land here.
    • Sometimes the desire to Do Something, or Be Seen To Be
    Doing Something, overcomes common sense.
    • (Comes up frequently to justify surveillance because of
    crime-avoidance/safety.)
    • Also common in innovation-race discourse, e.g. AI.
    • “You can’t regulate our thing! We’ll lose the race to [enemy country]!”

  16. “We can’t tell them what we do!
    Or let them say no to it!
    They just wouldn’t understand!”
    • This is invariably code for “we’re doing a Bad Thing and we
    don’t want people to know about it because they would
    object to it.”
    • I don’t love notice-and-consent. I will never love notice-and-
    consent. (See “Exact Words” slide for why.)
    • But keeping dodgy practices secret is even worse.

  17. “Those pesky ethics, always
    getting in the way of progress”
    • I don’t think I’ve ever seen this be so much as well-
    intended, never mind ethics-minded. It exists to
    delegitimize ethical concerns and those who hold them.
    • Seeing it? Question “progress,” loudly and often.
    • “Progress” is not always the word that will be used, of course;
    “innovation” is another common culprit.
    • Medicine: Primum non nocere = first, do no harm.
    • Not the same as a conflicting-ethics dilemma—that’s
    normal, to be expected, and absolutely a legitimate thing to
    think through.

  18. When people tell you who they
    are, believe them!
    Oakleaf, Megan. 2018. “Library integration in institutional learning analytics.”
    https://library.educause.edu/-/media/files/library/2018/11/liila.pdf

  19. Variant: “Ethics are out of scope.”
    • Technology standardistas and startups are fond of this
    one.
    • I got into such an argument on Twitter once about whether OAIS
    should require digital archives to pay attention to risks to individuals
    or groups represented in a dataset, equivalently to “designated
    community” of data end-users…
    • The fight ended with an OAIS booster declaring repeatedly that this
    question was out of scope for OAIS. If so, then OAIS needs to fix its
    scope, in my not-entirely-humble but trying-to-be-ethical opinion.
    • The IETF finally, finally, FINALLY repudiated this in 2019:
    https://datatracker.ietf.org/doc/draft-iab-for-the-users/
    • “The IAB encourages the IETF to explicitly consider user impacts and
    points of view in any IETF work.” Whoa. That’s a change for the better.

  20. “This problematic thing
    is a fact of life now.”
    • Zuckerberg is full of these. “Only connect.”
    • Ethics rule of thumb: If you start to sound like The Zuck, stop.
    • So many Big Data news stories and explainers start with this!
    • So much that I think it’s now one of those part-of-the-discourse
    things…
    • Sometimes, as with The Zuck, this is a naked attempt to set
    the terms of discourse to evade ethical responsibility.
    • In my experience, though, this one is especially likely to occur
    in the writing and work of people who are basically decent.
    • If that’s you, my advice is, do not cede ground without a fight.
    • Is the problematic thing really a fact of life? Is it truly unsubstitutable,
    unstoppable? Or is it just convenient for some people if we all think so?

  21. Big assumption!

  22. “Nobody told me I shouldn’t!”
    • the flip side of “I was ordered to” perhaps? Since Nuremberg at least, we’ve
    known that one’s out of bounds!
    • see also TVTropes “Ain’t No Rule” (often “Ain’t No Law/Regulation”)
    • So ethically lazy I can’t even. OWN YOUR ACTIONS. But I’ve seen it.
    • Assumes law/ethics systems are perfect and omniscient, such that
    if someone were really doing something bad, they’d be stopped.
    • This is nonsense, of course. Law lags bad behavior even more than ethics!
    • Not the same as “the ethics system here has gaps/loopholes;
    here’s how I navigated them, though I may have gotten it wrong.”
    • Where you see this, it is typically a thoughtful analysis that understands that
    where ethical guidance is not clear or there is no applicable guidance, we
    have a duty to think the ethics through on our own.

  23. About that “gayface” study…
    From Metcalf, Jacob. 2017. “‘The study has been approved by the IRB’: Gayface AI, research hype
    and the pervasive data ethics gap.”
    https://medium.com/pervade-team/the-study-has-been-approved-by-the-irb-gayface-ai-
    research-hype-and-the-pervasive-data-ethics-ed76171b882c

  24. “Who could have known
    this would be bad?”
    • Often a sign of someone who didn’t do their historical or
    ethical homework. Or said “talk to the hand!” to anyone who
    DID know.
    • Also common where the responsible entity is not inclusive.
    • See e.g. Google, which screwed this up repeatedly with search, image
    auto-tagging, Buzz, and Google+ (look up the “nymwars”).
    • Sometimes a sign of a new, naïvely optimistic person or field.
    • Information security labors under a lot of technology standards and
    infrastructures that were poorly designed because nobody thought
    anybody would ever hack or misuse them. (See e.g.: email, BGP.)
    • I can grudgingly forgive this… for a little while. (For most technosocial
    phenomena… it’s way, way too late for any more of this.)

  25. Story from The Atlantic.

  26. Variant: “Who could have
    known this wouldn’t work?”
    • self-archiving in institutional repositories, MOOCs, AI in
    radiology (Geoffrey Hinton), cryptocurrency, NFTs, DAOs…
    • Usually someone knew it wasn’t going to work, and
    explained pretty clearly why not.
    • self-archiving: the major boosters were Cliff Lynch, Raym Crow, and
    Stevan Harnad; the one who knew was me, actually
    • (Salo 2009, “Innkeeper at the Roach Motel.” See also an angrier-and-
    wiser Salo 2013, “How to Scuttle a Scholarly Communication Initiative.”)
    • Not coincidentally, the hypesters tend to be cis white men,
    and those who know it won’t work… aren’t.
    • Self-archiving did have one white female booster: Alma Swan. The
    rest were cis white men.

  27. Variant: “It’s too early to tell if this
    thing I did will be bad. It might
    even be good!”
    • This variant often comes from academic researchers,
    particularly of the “tech is neutral!” variety.
    • Often code: “I don’t want to think about ethics! Don’t make
    me!”
    • If you see a lot of security/privacy experts and/or historians
    and/or non-cis-white-men head-desking over something…
    examine what they’re saying carefully.
    • The head-deskers will usually be vilified — as Luddites, as anti-
    progress, as resentful, as Generally Bad People, whatever.
    • But my experience, for what it’s worth, is that these folks aren’t
    alarmists, just people who have seen this before and understand the
    patterns behind it.

  28. Variant: “Now I know! It’s
    really bad! Listen to me!”
    • Not an apology, usually — expect a lot of fauxpologies and “how
    could I have known?” nonsense — but a bid for continued (or
    even greater) attention.
    • These guys (and yeah, they’re practically always cis white guys)
    never credit the people (often not cis white guys) who had it
    right all along.
    • Indeed, they proceed to suck up all the air in the critique room, leaving no
    space for other voices.
    • Media outlets let them do this. That’s not okay either. Ask them why they
    had to learn this the hard way when there was plenty of evidence already!
    • Examples: Geoffrey Hinton (AI), Tristan Harris (everything ever,
    basically)

  29. About them, without them
    • (riff on “Nihil de nobis sine nobis,” which has been around a long time)
    • “We’re going to use this questionably-ethical thing to fix all Their
    problems for Them!”
    • Whoever They are, They are either conspicuous by Their absence, or
    “represented” by one token person/case study viewed from a deficit model.
    • They are frequently in a less-powerful, less-privileged, less-voice-y position
    than the speaker. They may in fact have no option to refuse the intervention,
    which is questionably ethical all by itself.
    • So much wrong with this. So much.
    • Othering people is deeply uncool. So is disempowering people while ignoring
    their voices.
    • “Deficit model”—They have problems (not strengths!), and We Know Better Than
    They Do how to fix Them.
    • Often operating off stereotypes and superficialities—frequently blatantly
    incorrect ones—about Them.

  30. Variant: Problematic thing
    for thee, but not for me
    • Ethically-problematic thing is forced onto people with less
    social power, miraculously bypassing people with more.
    • Exceptionally common with surveillance and surveillance-
    based interventions
    • predictive analytics in government
    • surveillance in law enforcement
    • workplace surveillance (“bossware”)
    • learning analytics
    • the entire history of Facebook
    • Beware surveillance creep! Just because you’ve not yet
    been caught in a dragnet doesn’t mean you never will be.

  31. Variant: “I have nothing to hide!
    So you must not need privacy either!”
    • Missing the “from whom?” piece
    • Often conceptualized as “from law enforcement” or “from social media.”
    Privacy is much more complicated than that, or context collapse wouldn’t
    be so much of a problem.
    • Also a very narrow sense of context
    • My rejoinder is often “Oh? Cool. Hand over your wallet and unlocked phone,
    please, and the keys to your vehicle and dwelling.”
    • Also ignores benefits of privacy: even when not strictly necessary,
    it is often useful and beneficial.
    • Also SO PRIVILEGED I CAN’T EVEN—er, falsely universalized
    • You won’t be persecuted for your gender identity, religion (or lack thereof),
    race/ethnicity/ancestry, sexual preferences, entertainment choices? Lucky
    you. Many of us are not in that place, and we matter too.

  32. I’m sure there are more!
    • These are just the ethics smells that either have been
    itching me forever, or that I happened to notice recently.
    • Contribute your own!

  33. Thanks!
    This presentation copyright 2024 by Dorothea Salo.
    It is available under a Creative Commons Attribution
    4.0 International license.
