Being Good: An Introduction to Robo- and Machine Ethics

Machines are all around us: recommending our TV shows, planning our car trips, and running our day-to-day lives via AI assistants like Siri and Alexa. We know humans should act ethically, but what does that mean when building computer systems? Are machines themselves—autonomous vehicles, neural networks that diagnose cancer, algorithms that identify trends in police data—capable of ethical behavior? In this talk, we'll examine the implications of artificial moral agents and their relationships with the humans who build them through the lens of multiple case studies.

Eric Weinstein

November 13, 2018

Transcript

  1. About Me: eric_weinstein = { employer: 'AUX', github: 'ericqweinstein', twitter: 'ericqweinstein', website: 'http://ericweinste.in' }
  2. Agenda: What does it mean to be good? Robo-ethics (three case studies). Machine ethics (three case studies). Questions?
  3. Please Note: This talk contains stories about real people, although no one who was injured or killed is specifically named. This talk does not contain images or descriptions of death or gore, but includes content (such as images of medical devices and explanations of deaths/injuries due to software or hardware malfunction) that may be upsetting to some audience members.
  4. Being Good: Utilitarianism: the most good for the most people. Deontological ethics: rules (e.g. Hammurabi, Kant). Casuistry: extract rules from specific cases.
  5. Being Good: For the purposes of this talk, “being good” means safeguarding the well-being of moral agents that interact with our software by deriving best practices from specific instances.
  6. Therac-25: Hardware interlocks replaced with software interlocks. Code was not independently reviewed. Failure modes were not thoroughly understood. Hardware + software not tested together until assembly. Arithmetic overflows, race conditions, cargo-culted code. (See the overflow sketch after the transcript.)
  7. Therac-25: Here, we see a parallel between medicine and writing software: when engineers deviate from the standard of care, we endanger the people who depend on us.
  8. Volkswagen: Speed, steering wheel position, air pressure, and other factors were measured to distinguish tests from real-world conditions. “Test mode” sounds innocuous to engineers. The moral hazard of “victimless crimes” that save money. (See the defeat-device sketch after the transcript.)
  9. Volkswagen: We must always ask what our code will be used for, and it is not only our right but our obligation to refuse to write programs that will harm those who interact with them.
  10. The DAO: Ethereum smart contracts are Turing-complete. State machines with invalid states are possible. A reentrancy bug allowed repeated withdrawal of funds. (See the reentrancy sketch after the transcript.)
  11. The DAO: With great power comes great responsibility (RIP, Stan Lee!). We are obligated to make programs powerful enough to serve their purpose, and no more powerful.
  12. Face ID: Ownership of biometric data & privacy invasion. Potential for identity theft. What does it mean for a machine to “recognize”?
  13. Face ID: When we entrust machines with the power to perform human acts—decide, recognize, permit, deny—we implicitly give their actions moral weight.
  14. Precrime: The machine cannot explain its decisions. Biased data result in biased machines. “It has to be true, the emotionless machine said so!”
  15. Precrime: Just as Conway’s Law tells us that organizations are constrained to produce software that mirrors their communication structures, machine learning models are constrained to mirror the biases of their data (in this case, human decisions). (See the feedback-loop sketch after the transcript.)
  16. Self-Driving Cars: The Trolley Problem: how does the machine decide? How do we teach our (robot) children well? Who is ultimately liable?
  17. Self-Driving Cars: When we enter this territory, machines’ actions are imbued not only with moral dimension but with everything that comes with it: the need for explanation and the capacity to accept blame.
  18. TL;DPA: We need a standard of care (best practices). We need the right to refuse (an oath). We must imbue artificial agents with sound moral bases. We need an organization to fight for all these things.