code works, but appears highly likely to cause a problem down the road.
• Missing edge cases
• Lots of cut-and-pasted code (unmaintainable after a certain point)
• Cuts corners on security
• Etc etc etc: ask a dozen programmers, get fifty problems cited
• These hints at poor/unstable/unmaintainable/hackable code are called "code smells."
• I ruthlessly appropriated this idea into "ethics smells."
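For readers who haven't met the programming term, here's a minimal sketch (my own toy example, not from any real codebase) of what a "code smell" looks like: the code works, but the cut-and-paste duplication and the missing edge case are trouble waiting to happen.

```python
def average_scores(scores):
    # Smell: works on the happy path, but an empty list raises
    # ZeroDivisionError down the road (missing edge case).
    return sum(scores) / len(scores)

def average_weights(weights):
    # Smell: cut-and-pasted from average_scores, so every bug fix
    # now has to happen in two places (and eventually won't).
    return sum(weights) / len(weights)

def average(values):
    # The deduplicated, edge-case-aware version: one function,
    # empty input handled explicitly instead of crashing.
    return sum(values) / len(values) if values else 0.0
```

Nothing here is *wrong* in the sense of failing a quick test; the smell is the hint that it will fail later, which is exactly the analogy to ethics smells.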
of reasoning that may look reasonable at first glance, but hints at ethical flaws.
• Not everyone guilty of an ethics smell is a Bad Person doing Bad Things. It may not be that simple.
• Sometimes ethics smells become part of prevailing discourse! Such that omitting or questioning them feels weird!
• Sometimes people just haven't done the work yet, or are seduced by whatever the latest shiny thing is.
• Sometimes the ethical issues are legitimately hard to understand!
• But yes, sometimes people do use these tropes etc. to forestall or deflect legitimate ethical critique. That's not okay.
to us."
• Who even believes this now? We're all like "prove it," for good reason.
• How can you tell a pleasant-seeming ethics platitude is empty?
• Is any information given about actual actions in support of the ethics? If not, be suspicious.
• If what they're saying about the ethics issue confuses you… that may be intentional. Be suspicious, especially where they're using way more words, or way more jargon, than seems warranted.
• When in doubt, check into the person or organization's track record. Is there reason to believe they will act (un)ethically? Are they self-serving?
thing you can be. It's only a thing you can DO.
• When you stop doing good, especially when you start doing bad, you can't claim the high road.
• This one is psychologically understandable; we all desperately want to believe we're good people.
• Ironically, insisting that we are good people no matter what raises the chances we stop governing ourselves aright.
• LIBRARIANS, WE ARE PRONE TO THIS ONE.
• "To serve our patrons!" can become the door to evil. I've seen it.
of this one. So very, very guilty.
• It's also rife in computer science. I don't entirely know why (and I doubt it's a legacy from LIS), but I know from experience and reading that it's so.
• Often the subtext here is "how dare you judge me?" or "I shouldn't have to think about the ethics of what I do!"
• Often an attempt to evade ethical questions around power and oppression.
• If it's "neutral," it can't be racist or sexist or ableist or ageist or homophobic or transphobic OR GENOCIDAL (Facebook!!!), can it?!
• Decades of STS, LIS, and ethics work give the lie to "neutrality." I don't have time to recount it all (ask me if you want readings).
[some] people, it must be in the clear ethically!"
• This can be nakedly self-serving, of course.
• And it fails consequentialist ethics forever!
• Can also be power and privilege: "this thing helps some people (often 'people just like me'), so all the people it hurts don't matter!"
• Sometimes it's naïve optimism at work.
• You may be able to tell the difference based on how long the responsible entity has been up to whatever the thing is. Are they still under the spell of the shiny? Or should they really know better by now?
• I say this because you'll want to tailor your counters.
• Take the self-serving and privilege-leveraging down. No mercy.
• Ask the optimists questions about how they're handling their thing's ethical issues. They don't have to be gentle questions, necessarily!
have yet to see this be true when said aloud. It is invariably bullying or rationalization, not fact.)
• "If you're not doing this, you're WrongBadBehind and you'll die! Unethical-Thing is your only hope!"
• Risks, what risks? Ethics, what ethics?
• Definitely look for a self-serving (or at least self-justifying) motive here.
• "We have an obligation to do this! How can we not?"
• Often accompanied by a heartstring-tugging case study.
• Conspicuous by its absence: any discussion of risks or harms, except possibly for an Empty Platitude or two.
• (This is so common in learning analytics. So very, very common.)
Bad Stuff will happen and it'll be YOUR FAULT!"
• So nakedly self-serving that you'd think people would see through it instantly… but if the Bad Stuff is bad enough and people are desperate enough to avoid it, we land here.
• Sometimes the desire to Do Something, or Be Seen To Be Doing Something, overcomes common sense.
• I don't think I've ever seen this be so much as well-intended, never mind ethics-minded. It exists to delegitimize ethical concerns and those who hold them.
• Seeing it? Question "progress," loudly and often.
• "Progress" is not always the word that will be used, of course; "innovation" is another common culprit.
• Medicine: Primum non nocere = first, do no harm.
• Not the same as a conflicting-ethics dilemma; that's normal, to be expected, and absolutely a legitimate thing to think through.
are fond of this one.
• Do not even get me started about the NISO-sponsored library-technology standard RA21. We will be here for weeks.
• I got into such an argument on Twitter once about whether OAIS should require digital archives to pay attention to risks to individuals or groups represented in a dataset, equivalently to the "designated community" of data end-users…
• The fight ended with an OAIS booster declaring repeatedly that this question was out of scope for OAIS. If so, then OAIS needs to fix its scope, in my not-entirely-humble but trying-to-be-ethical opinion.
• The IETF finally, finally, FINALLY repudiated this in 2019: https://datatracker.ietf.org/doc/draft-iab-for-the-users/
• "The IAB encourages the IETF to explicitly consider user impacts and points of view in any IETF work." Whoa. That's a change for the better.
is full of these. "Privacy is dead!" "Only connect."
• Ethics rule of thumb: if you start to sound like The Zuck, stop.
• So many Big Data news stories and explainers start with this!
• So much that I think it's now one of those part-of-the-discourse things…
• Sometimes, as with The Zuck, this is a naked attempt to set the terms of discourse to avoid ethical responsibility.
• In my experience, though, this one is especially likely to occur in the writing and work of people who are basically decent.
• If that's you, my advice is: do not cede ground without a fight.
• Is the problematic thing really a fact of life? Is it truly unsubstitutable, unstoppable? Or is it just convenient for some people if we all think so?
was ordered to" perhaps? Since Nuremberg at least, we've known that one's out of bounds!
• See also TVTropes' "Ain't No Rule" (often "Ain't No Law/Regulation").
• So ethically lazy I can't even. OWN YOUR ACTIONS. But I've seen it.
• Assumes law/ethics systems are perfect and omniscient, such that if someone were really doing something bad, they'd be stopped.
• This is nonsense, of course. Law lags bad behavior even more than ethics does!
• Not the same as "the ethics system here has gaps/loopholes; here's how I navigated them, though I may have gotten it wrong."
• Where you see that instead, it is typically a thoughtful analysis that understands that where ethical guidance is unclear or absent, we have a duty to think the ethics through on our own.
has been approved by the IRB': Gayface AI, research hype and the pervasive data ethics gap." https://medium.com/pervade-team/the-study-has-been-approved-by-the-irb-gayface-ai-research-hype-and-the-pervasive-data-ethics-ed76171b882c
sign of someone who didn't do their historical or ethical homework. Or said "talk to the hand!" to anyone who DID know.
• Also common where the responsible entity is not inclusive.
• See e.g. Google, which screwed this up repeatedly with search, image auto-tagging, Buzz, and Google+ (look up the "nymwars").
• Sometimes a sign of a new, naïvely optimistic person or field.
• Information security labors under a lot of technology standards and infrastructures that were poorly designed because nobody thought anybody would ever hack or misuse them. (See e.g. email, BGP.)
• I can grudgingly forgive this… for a little while. (For Big Data… it's way, way too late for any more of this.)
DID WILL BE BAD"
• This variant often comes from academic researchers, particularly of the "tech is neutral!" variety.
• Often code for: "I don't want to think about ethics! Don't make me!"
• If you see a lot of privacy experts and/or historians head-desking over something… look at their analogies carefully.
• Sure, don't accept anything you hear without checking; that's fine.
• But my experience, for what it's worth, is that these folks aren't alarmists, just people who have seen this before and understand the patterns behind it.
nobis," which has been around a long time)
• "We're going to use this questionably-ethical thing to fix all Their problems for Them!"
• They, whoever They are, are either conspicuous by Their total absence, or "represented" by a single token person/case study.
• They are frequently in a less-powerful, less-privileged, less-voice-y position than the speaker. They may in fact have no option to refuse the intervention, which is questionably ethical all by itself.
• So much wrong with this. So much.
• Othering people is deeply uncool. So is disempowering people while ignoring their voices.
• "Deficit model": They have problems (not strengths!), and We Know Better Than They Do how to fix Them.
• Often operating off stereotypes and superficialities (frequently blatantly incorrect ones) about Them.