Being Good: An Introduction to Robo- and Machine Ethics

Machines are all around us: recommending our TV shows, planning our car trips, and running our day-to-day lives via AI assistants like Siri and Alexa. We know humans should act ethically, but what does that mean when building computer systems? Are machines themselves—autonomous vehicles, neural networks that diagnose cancer, algorithms that identify trends in police data—capable of ethical behavior? In this talk, we'll examine the implications of artificial moral agents and their relationships with the humans who build them through the lens of multiple case studies.

Eric Weinstein

November 13, 2018

Transcript

  1. Being Good
    An Introduction to Robo- and Machine Ethics
    Eric Weinstein

  2. About Me
    eric_weinstein = {
      employer: 'AUX',
      github: 'ericqweinstein',
      twitter: 'ericqweinstein',
      website: 'http://ericweinste.in'
    }

  3. Agenda
    What does it mean to be good?
    Robo-ethics (three case studies)
    Machine ethics (three case studies)
    Questions?

  4. Please Note
    This talk contains stories about real people,
    although no one who was injured or killed is
    specifically named. This talk does not contain
    images or descriptions of death or gore, but
    includes content (such as images of medical
    devices and explanations of deaths/injuries due
    to software or hardware malfunction) that may
    be upsetting to some audience members.

  5. Being Good
    Utilitarianism: most good for the most people
    Deontological ethics: rules (e.g. Hammurabi, Kant)
    Casuistry: extract rules from specific cases

  6. Being Good
    For the purposes of this talk, “being good” means safeguarding
    the well-being of moral agents that interact with our software
    by deriving best practices from specific instances.

  7. Robo-Ethics
    Therac-25
    Volkswagen emissions scandal
    Ethereum DAO hack

  8. Therac-25
    Image credit: hackaday.com

  9. Therac-25
    Hardware interlocks replaced with software interlocks
    Code was not independently reviewed
    Failure modes were not thoroughly understood
    Hardware + software not tested until assembly
    Arithmetic overflows, race conditions, cargo-culted code
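
    One of the documented Therac-25 failures involved a one-byte flag that
    was incremented rather than simply set, so on every 256th pass it rolled
    over to zero and an upper-collimator check was silently skipped. Below is
    a minimal sketch in Ruby (the original was PDP-11 assembly; the class and
    method names here are invented) of how that kind of overflow can disable
    a safety check:

    # Illustrative only: a one-byte "check needed" flag wraps to zero,
    # silently disabling the verification it was meant to trigger.
    class SetupCheck
      MAX_BYTE = 256 # the flag lived in a single byte on the real machine

      def initialize
        @check_needed = 0
      end

      # Called on every pass through setup; nonzero means "verify the
      # collimator position before firing".
      def flag_check!
        @check_needed = (@check_needed + 1) % MAX_BYTE
      end

      def safe_to_fire?(collimator_in_position)
        return true if @check_needed.zero? # overflow makes this pass wrongly
        collimator_in_position
      end
    end

    check = SetupCheck.new
    256.times { check.flag_check! }  # the counter wraps back to 0
    puts check.safe_to_fire?(false)  # => true, even though the beam is unsafe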

  10. Therac-25
    Here, we see a parallel between medicine and writing
    software: when we as engineers deviate from the standard
    of care, we endanger the people who depend on us.

  11. Volkswagen
    Image credit: dw.com

  12. Volkswagen
    Speed, steering wheel position, air pressure, and other factors were measured to distinguish tests from real-world conditions
    “Test mode” sounds innocuous to engineers
    Moral hazard of “victimless crimes” that save money
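
    Part of what makes this failure mode insidious is how ordinary the code
    can look. A hedged sketch (the telemetry names and thresholds below are
    invented, not VW's actual implementation) of how a few innocuous-looking
    checks add up to a defeat device:

    # Illustrative sketch: on a dynamometer the wheels turn but the steering
    # wheel never moves, so "test mode" detection is just a few boolean checks.
    def dyno_test?(telemetry)
      telemetry[:steering_angle_variance] < 0.01 &&
        telemetry[:wheel_speed] > 0 &&
        telemetry[:ambient_pressure_stable]
    end

    def emissions_calibration(telemetry)
      dyno_test?(telemetry) ? :low_nox_test_mode : :full_power_road_mode
    end

    emissions_calibration({ steering_angle_variance: 0.0,
                            wheel_speed: 50,
                            ambient_pressure_stable: true })
    # => :low_nox_test_mode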

  13. Volkswagen
    We must always ask what our code will be used for; it is not only our
    right but our obligation to refuse to write programs that will harm
    those who interact with them.

  14. The DAO
    Image credit: ccn.com

  15. The DAO
    Ethereum smart contracts are Turing-complete
    State machines with invalid states are possible
    Reentrancy bug allowed repeated withdrawal of funds
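
    The DAO itself was a Solidity contract; the sketch below translates the
    vulnerable pattern into Ruby for illustration (all names invented). The
    essential mistake is performing the external transfer before updating
    internal state, so attacker-controlled code that runs during the transfer
    can call withdraw again:

    class VulnerableVault
      def initialize
        @balances = Hash.new(0)
        @pool = 0
      end

      def deposit(who, amount)
        @balances[who] += amount
        @pool += amount
      end

      # The bug: the external call (the block) runs before the balance is
      # zeroed, so the caller's recorded balance is still intact mid-call.
      def withdraw(who, &send_funds)
        amount = @balances[who]
        return if amount.zero? || @pool < amount
        @pool -= amount
        send_funds.call(amount) # attacker-controlled code runs here...
        @balances[who] = 0      # ...before this line ever executes
      end
    end

    vault = VulnerableVault.new
    vault.deposit(:honest, 90)
    vault.deposit(:attacker, 10)

    stolen = 0
    reenter = lambda do |amount|
      stolen += amount
      # Re-enter while funds remain; the attacker's balance is never zeroed.
      vault.withdraw(:attacker, &reenter) if stolen < 100
    end

    vault.withdraw(:attacker, &reenter)
    puts stolen # => 100, far more than the attacker's original 10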

  16. The DAO
    With great power comes great responsibility (RIP, Stan
    Lee!). We are obligated to make programs powerful
    enough to serve their purpose, and no more powerful.

  17. Machine Ethics
    Facial recognition
    Police data
    Autonomous vehicles

  18. Face ID
    Image credit: theverge.com

  19. Face ID
    Ownership of biometric data & privacy invasion
    Potential for identity theft
    What does it mean for a machine to “recognize”?
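
    It can help to see how thin machine "recognition" usually is: commonly a
    similarity score between numeric embeddings, compared against a threshold
    someone chose. The vectors and threshold below are made up for
    illustration:

    # "Recognition" reduced to a distance check between embeddings.
    def cosine_similarity(a, b)
      dot = a.zip(b).sum { |x, y| x * y }
      dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
    end

    ENROLLED_FACE = [0.12, 0.87, 0.33, 0.45] # embedding captured at enrollment
    THRESHOLD     = 0.95                     # chosen by the vendor, not the user

    def recognized?(candidate_embedding)
      cosine_similarity(ENROLLED_FACE, candidate_embedding) >= THRESHOLD
    end

    puts recognized?([0.11, 0.88, 0.32, 0.44]) # => true
    puts recognized?([0.90, 0.10, 0.05, 0.02]) # => false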

  20. Face ID
    When we entrust machines with the power to perform
    human acts—decide, recognize, permit, deny—we
    implicitly give their actions moral weight.

  21. Precrime
    https://www.youtube.com/watch?v=2Av5n7ffe0M

  22. Precrime
    The machine cannot explain its decisions
    Biased data result in biased machines
    “It has to be true, the emotionless machine said so!”

  23. Precrime
    Just as Conway’s Law tells us that organizations are
    constrained to produce software that mirrors their
    communication structures, machine learning models
    are constrained to mirror the biases of their data (in
    this case, human decisions).
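
    A toy model (all numbers invented) makes the feedback loop concrete:
    "predicted risk" is just historical arrest counts, and arrest counts
    largely reflect where patrols already were.

    # The "prediction" simply hands the bias in the training data back to us.
    historical_arrests = {
      'Northside' => 120, # heavily patrolled, so heavily recorded
      'Southside' => 15   # lightly patrolled, so lightly recorded
    }

    def patrol_recommendation(arrests_by_area)
      arrests_by_area.max_by { |_area, count| count }.first
    end

    puts patrol_recommendation(historical_arrests)
    # => "Northside": more patrols there produce more recorded arrests, which
    #    feed the next round of training data, and the loop closes.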

  24. Self-Driving Cars
    Image credit: wired.com

  25. Self-Driving Cars
    The Trolley Problem: how does the machine decide?
    How do we teach our (robot) children well?
    Who is ultimately liable?
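
    However the decision is implemented, someone ultimately has to encode a
    cost function. A sketch with arbitrary weights (which is the point: the
    weights are moral choices made by engineers, not facts the car discovers):

    # Hypothetical harm weights; changing them changes the "decision".
    HARM_WEIGHTS = {
      occupant_injury:   1.0,
      pedestrian_injury: 1.0,
      property_damage:   0.1
    }

    def expected_harm(outcome)
      outcome.sum { |kind, probability| HARM_WEIGHTS.fetch(kind, 0) * probability }
    end

    def choose_maneuver(maneuvers)
      maneuvers.min_by { |_name, outcome| expected_harm(outcome) }.first
    end

    maneuvers = {
      swerve: { occupant_injury: 0.3, property_damage: 0.9 },
      brake:  { pedestrian_injury: 0.2 }
    }
    puts choose_maneuver(maneuvers)
    # => brake under these weights; shift the weights and the answer shifts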

  26. Self-Driving Cars
    When we enter this territory, machines’ actions are imbued not only
    with a moral dimension but with everything that comes with it: the
    need to explain decisions and the capacity to accept blame.

  27. TL; DPA
    We need a standard of care (best practices)
    We need the right to refuse (oath)
    We must imbue artificial agents with sound moral bases
    We need an organization to fight for all these things

  28. Thanks!

  29. Questions?
    eric_weinstein = {
      employer: 'AUX',
      github: 'ericqweinstein',
      twitter: 'ericqweinstein',
      website: 'http://ericweinste.in'
    }
