Slide 1

Being Good: An Introduction to Robo- and Machine Ethics
Eric Weinstein

Slide 2

About Me
eric_weinstein = {
  employer: 'AUX',
  github:   'ericqweinstein',
  twitter:  'ericqweinstein',
  website:  'http://ericweinste.in'
}

Slide 3

Agenda
- What does it mean to be good?
- Robo-ethics (three case studies)
- Machine ethics (three case studies)
- Questions?

Slide 4

Please Note This talk contains stories about real people, although no one who was injured or killed is specifically named. This talk does not contain images or descriptions of death or gore, but includes content (such as images of medical devices and explanations of deaths/injuries due to software or hardware malfunction) that may be upsetting to some audience members.

Slide 5

Being Good
- Utilitarianism: most good for the most people
- Deontological ethics: rules (e.g. Hammurabi, Kant)
- Casuistry: extract rules from specific cases

Slide 6

Being Good
For the purposes of this talk, “being good” means safeguarding the well-being of moral agents that interact with our software by deriving best practices from specific instances.

Slide 7

Robo-Ethics
- Therac-25
- Volkswagen emissions scandal
- Ethereum DAO hack

Slide 8

Therac-25
Image credit: hackaday.com

Slide 9

Therac-25
- Hardware interlocks replaced with software interlocks
- Code was not independently reviewed
- Failure modes were not thoroughly understood
- Hardware + software not tested until assembly
- Arithmetic overflows, race conditions, cargo-culted code (see the sketch below)
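
To make the last failure mode concrete, here is a toy Ruby sketch of how an eight-bit counter can silently wrap to zero and skip a safety check. It is a hypothetical reconstruction for illustration only; the actual Therac-25 software was PDP-11 assembly, and the class and method names below are invented.

# Toy reconstruction (hypothetical): a safety check runs only while an
# 8-bit "setup incomplete" flag is nonzero. Because the housekeeping task
# increments the flag instead of setting it, the flag wraps to 0 on the
# 256th pass and the check is silently skipped.
class SetupMonitor
  def initialize
    @class3 = 1 # nonzero => setup incomplete; stored in a single byte
  end

  def housekeeping_pass
    @class3 = (@class3 + 1) & 0xFF # 8-bit arithmetic: 255 + 1 == 0
  end

  def safe_to_fire?(collimator_in_position)
    return true if @class3.zero? # wrapped to zero => check never runs
    collimator_in_position
  end
end

monitor = SetupMonitor.new
255.times { monitor.housekeeping_pass }
monitor.safe_to_fire?(false) # => true, despite the collimator being wrong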

Slide 10

Therac-25
Here, we see a parallel between medicine and writing software: when engineers deviate from the standard of care, we endanger the people who depend on us.

Slide 11

Volkswagen
Image credit: dw.com

Slide 12

Volkswagen
- Speed, steering wheel position, air pressure, and other factors were measured to distinguish tests from real-world conditions (see the sketch below)
- “Test mode” sounds innocuous to engineers
- Moral hazard of “victimless crimes” that save money
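
As an illustration of how innocuous this can look in code, here is a hypothetical Ruby sketch of a “test mode” check; the real defeat device lived in ECU firmware, and the signals and thresholds below are invented for the example.

# Hypothetical sketch, not the actual Volkswagen code: detect a dyno test
# by noticing that the wheels are turning while the steering wheel never
# moves. To the engineer writing it, this reads like harmless test support.
def dyno_test?(speed_kph:, steering_angle_deg:)
  speed_kph.positive? && steering_angle_deg.abs < 1.0
end

def emissions_mode(speed_kph:, steering_angle_deg:)
  if dyno_test?(speed_kph: speed_kph, steering_angle_deg: steering_angle_deg)
    :full_nox_treatment    # clean, but only while being measured
  else
    :reduced_nox_treatment # dirty, on the road
  end
end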

Slide 13

Volkswagen
We must always ask what our code will be used for, and it is not only our right but our obligation to refuse to write programs that will harm those who interact with them.

Slide 14

The DAO
Image credit: ccn.com

Slide 15

The DAO
- Ethereum smart contracts are Turing-complete
- State machines with invalid states are possible
- Reentrancy bug allowed repeated withdrawal of funds (see the sketch below)
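
A minimal Ruby simulation of the reentrancy pattern (the real contract was written in Solidity; the classes below are invented for illustration): the wallet pays out before it updates its bookkeeping, so a malicious recipient can call back into withdraw and be paid repeatedly.

# Toy simulation of the DAO bug: the external call happens before the
# balance is zeroed, so the attacker re-enters withdraw and drains funds.
class VulnerableWallet
  def initialize(balances)
    @balances = balances
  end

  def withdraw(account)
    amount = @balances.fetch(account, 0)
    return if amount.zero?
    account.receive(amount, self) # external call first...
    @balances[account] = 0        # ...state updated too late
  end
end

class Attacker
  attr_reader :drained

  def initialize
    @drained = 0
  end

  def receive(amount, wallet)
    @drained += amount
    wallet.withdraw(self) if @drained < 300 # re-enter before the balance is zeroed
  end
end

mallory = Attacker.new
wallet  = VulnerableWallet.new({ mallory => 100 })
wallet.withdraw(mallory)
mallory.drained # => 300, withdrawn from a balance of 100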

Slide 16

The DAO
With great power comes great responsibility (RIP, Stan Lee!). We are obligated to make programs powerful enough to serve their purpose, and no more powerful.

Slide 17

Machine Ethics
- Facial recognition
- Police data
- Autonomous vehicles

Slide 18

Face ID
Image credit: theverge.com

Slide 19

Face ID
- Ownership of biometric data & privacy invasion
- Potential for identity theft
- What does it mean for a machine to “recognize”?

Slide 20

Face ID
When we entrust machines with the power to perform human acts—decide, recognize, permit, deny—we implicitly give their actions moral weight.

Slide 21

Precrime
https://www.youtube.com/watch?v=2Av5n7ffe0M

Slide 22

Precrime
- The machine cannot explain its decisions
- Biased data result in biased machines
- “It has to be true, the emotionless machine said so!”

Slide 23

Precrime
Just as Conway’s Law tells us that organizations are constrained to produce software that mirrors their communication structures, machine learning models are constrained to mirror the biases of their data (in this case, human decisions).
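
A toy Ruby example (with invented numbers, not any real system) of how a model that only sees biased historical data must reproduce that bias:

# Hypothetical: a "predictive policing" model that just learns arrest
# frequencies from past records. If one neighborhood was patrolled far more
# heavily, the model recommends patrolling it more, which generates more
# arrests there, which further skews the next round of training data.
historical_arrests = {
  'Northside' => 40, # heavily patrolled in the past
  'Southside' => 10  # lightly patrolled in the past
}

def patrol_recommendation(arrest_counts)
  total = arrest_counts.values.sum.to_f
  arrest_counts.transform_values { |count| (count / total).round(2) }
end

patrol_recommendation(historical_arrests)
# => { "Northside" => 0.8, "Southside" => 0.2 }
# The model cannot see the patrol decisions that produced its data,
# and it cannot explain why it "prefers" Northside.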

Slide 24

Self-Driving Cars
Image credit: wired.com

Slide 25

Self-Driving Cars
- The Trolley Problem: how does the machine decide? (see the sketch below)
- How do we teach our (robot) children well?
- Who is ultimately liable?
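
To show why “how does the machine decide?” is an engineering question as well as a philosophical one, here is a deliberately naive utilitarian cost function in Ruby (entirely hypothetical; no real autonomous-vehicle stack is this simple). Someone has to choose the weights, and that choice is the moral act.

# Hypothetical toy: pick the collision outcome that harms the fewest people.
Outcome = Struct.new(:description, :occupants_harmed, :pedestrians_harmed)

def utilitarian_choice(outcomes)
  outcomes.min_by { |o| o.occupants_harmed + o.pedestrians_harmed }
end

swerve   = Outcome.new('swerve into barrier', 1, 0)
straight = Outcome.new('continue straight',   0, 2)

utilitarian_choice([swerve, straight]).description
# => "swerve into barrier" -- a human chose the weights that decided this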

Slide 26

Self-Driving Cars
When we enter this territory, machines’ actions are imbued not only with a moral dimension, but with everything that comes with it: the need for explanation and the capacity to accept blame.

Slide 27

TL;DPA
- We need a standard of care (best practices)
- We need the right to refuse (oath)
- We must imbue artificial agents with sound moral bases
- We need an organization to fight for all these things

Slide 28

Thanks!

Slide 29

Questions?
eric_weinstein = {
  employer: 'AUX',
  github:   'ericqweinstein',
  twitter:  'ericqweinstein',
  website:  'http://ericweinste.in'
}