Being Good: An Introduction to Robo- and Machine Ethics

Machines are all around us: recommending our TV shows, planning our car trips, and running our day-to-day lives via AI assistants like Siri and Alexa. We know humans should act ethically, but what does that mean when building computer systems? Are machines themselves—autonomous vehicles, neural networks that diagnose cancer, algorithms that identify trends in police data—capable of ethical behavior? In this talk, we'll examine the implications of artificial moral agents and their relationships with the humans who build them through the lens of multiple case studies.

Eric Weinstein

November 13, 2018

Transcript

  1. Being Good: An Introduction to Robo- and Machine Ethics
    Eric Weinstein

  2. About Me
    eric_weinstein = { employer: 'AUX', github: 'ericqweinstein', twitter: 'ericqweinstein', website: 'http://ericweinste.in' }

  3. Agenda
    What does it mean to be good?
    Robo-ethics (three case studies)
    Machine ethics (three case studies)
    Questions?

  4. Please Note
    This talk contains stories about real people, although no one who was injured or killed is specifically named. This talk does not contain images or descriptions of death or gore, but includes content (such as images of medical devices and explanations of deaths/injuries due to software or hardware malfunction) that may be upsetting to some audience members.

  5. Being Good
    Utilitarianism: most good for the most people
    Deontological ethics: rules (e.g. Hammurabi, Kant)
    Casuistry: extract rules from specific cases

  6. Being Good
    For the purposes of this talk, “being good” means safeguarding the well-being of moral agents that interact with our software by deriving best practices from specific instances.

  7. Robo-Ethics
    Therac-25
    Volkswagen emissions scandal
    Ethereum DAO hack

  8. Therac-25 Image credit: hackaday.com

  9. Therac-25
    Hardware interlocks replaced with software interlocks
    Code was not independently reviewed
    Failure modes were not thoroughly understood
    Hardware + software not tested until assembly
    Arithmetic overflows, race conditions, cargo-culted code (see the sketch below)

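    To make the overflow failure mode concrete, here is a minimal, hypothetical Ruby sketch (the actual Therac-25 software was PDP-11 assembly, and the class and method names below are invented): a counter stored in a single byte wraps back to zero, and zero is indistinguishable from "no check required."

      # Hypothetical sketch, not the real Therac-25 code.
      class SetupChecker
        ONE_BYTE = 256

        def initialize
          @passes = 0 # the real flag lived in a single byte
        end

        def pass_completed
          @passes = (@passes + 1) % ONE_BYTE # simulate 8-bit overflow
        end

        def safety_check_required?
          # When the counter wraps to 0, the check is skipped: the overflow
          # looks exactly like "everything already verified."
          !@passes.zero?
        end
      end
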
  10. Therac-25
    Here, we see a parallel between medicine and writing software: when engineers deviate from the standard of care, we endanger the people who depend on us.

  11. Volkswagen Image credit: dw.com

  12. Volkswagen
    Speed, steering wheel position, air pressure, and other factors were measured to distinguish tests from real-world conditions (see the sketch below)
    “Test mode” sounds innocuous to engineers
    Moral hazard of “victimless crimes” that save money

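    As an illustration of how small the “test mode” branch can be, here is a hedged Ruby sketch; the sensor names, thresholds, and method names are invented for this writeup and are not Volkswagen's actual logic.

      # Illustrative only: an engine controller guessing it is on a dynamometer.
      def dyno_test?(speed:, steering_angle:, barometric_pressure:)
        # Wheels turning, steering wheel never moving, ambient pressure in a
        # narrow "test cell" band: looks like a test, not a road.
        speed.positive? && steering_angle.zero? && barometric_pressure.between?(95.0, 105.0)
      end

      def emissions_mode(sensors)
        # "Test mode" sounds innocuous, until it exists only to pass the test.
        dyno_test?(**sensors) ? :full_nox_scrubbing : :fuel_economy
      end

      emissions_mode(speed: 50, steering_angle: 0, barometric_pressure: 101.3)
      # => :full_nox_scrubbing
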
  13. Volkswagen
    We must always ask what our code will be used for, and it is not only our right but our obligation to refuse to write programs that will harm those who interact with them.

  14. The DAO Image credit: ccn.com

  15. The DAO
    Ethereum smart contracts are Turing-complete
    State machines with invalid states are possible
    Reentrancy bug allowed repeated withdrawal of funds (see the sketch below)

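    The reentrancy bug is easiest to see in code. Below is a minimal Ruby sketch of the pattern (the real DAO contract was Solidity; the class names and the re-entry depth limit are hypothetical): because the external call happens before the balance is zeroed, the recipient can call withdraw again and be paid repeatedly.

      # Hypothetical sketch of the reentrancy pattern, not the DAO's actual code.
      class VulnerableFund
        def initialize(balances)
          @balances = balances
        end

        def withdraw(account)
          amount = @balances[account]
          return if amount.zero?

          account.receive(amount, self) # 1. external call runs the attacker's code...
          @balances[account] = 0        # 2. ...and the balance is zeroed too late
        end
      end

      class Attacker
        attr_reader :drained

        def initialize
          @drained = 0
          @depth = 0
        end

        def receive(amount, fund)
          @drained += amount
          @depth += 1
          fund.withdraw(self) if @depth < 3 # re-enter while the balance is still nonzero
        end
      end

      attacker = Attacker.new
      fund = VulnerableFund.new(attacker => 100)
      fund.withdraw(attacker)
      attacker.drained # => 300, drained from a balance of 100
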
  16. The DAO
    With great power comes great responsibility (RIP, Stan Lee!). We are obligated to make programs powerful enough to serve their purpose, and no more powerful.

  17. Machine Ethics
    Facial recognition
    Police data
    Autonomous vehicles

  18. Face ID Image credit: theverge.com

  19. Face ID
    Ownership of biometric data & privacy invasion
    Potential for identity theft
    What does it mean for a machine to “recognize”?

  20. Face ID
    When we entrust machines with the power to perform human acts—decide, recognize, permit, deny—we implicitly give their actions moral weight.

  21. Precrime https://www.youtube.com/watch?v=2Av5n7ffe0M

  22. Precrime
    The machine cannot explain its decisions
    Biased data result in biased machines
    “It has to be true, the emotionless machine said so!”

  23. Precrime
    Just as Conway’s Law tells us that organizations are constrained to produce software that mirrors their communication structures, machine learning models are constrained to mirror the biases of their data (in this case, human decisions). See the toy sketch below.

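    As a toy illustration (with entirely made-up numbers), a “predictive policing” model that simply ranks neighborhoods by historical arrest counts sends officers back to wherever officers already were, reproducing the bias in its training data.

      # Made-up data; the point is the feedback loop, not the numbers.
      historical_arrests = {
        'Northside' => 120, # heavily patrolled in the past
        'Southside' => 15   # lightly patrolled, not necessarily less crime
      }

      def patrol_priority(arrest_counts)
        arrest_counts.max_by { |_area, count| count }.first
      end

      patrol_priority(historical_arrests) # => "Northside", closing the loop
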
  24. Self-Driving Cars Image credit: wired.com

  25. Self-Driving Cars
    The Trolley Problem: how does the machine decide?
    How do we teach our (robot) children well?
    Who is ultimately liable?

  26. Self-Driving Cars
    When we enter this territory, machines’ actions are imbued not only with moral dimension but with everything that comes with it: the need for explanation and the capacity to accept blame.

  27. TL;DPA
    We need a standard of care (best practices)
    We need the right to refuse (oath)
    We must imbue artificial agents with sound moral bases
    We need an organization to fight for all these things

  28. Thanks!

  29. Questions?
    eric_weinstein = { employer: 'AUX', github: 'ericqweinstein', twitter: 'ericqweinstein', website: 'http://ericweinste.in' }