OpenTalks.AI - Maksim Karliuk, The ‘Trolley Problem’, Autonomous Agents And The Law

OpenTalks.AI

March 01, 2018
Transcript

  1. THE ‘TROLLEY PROBLEM’, AUTONOMOUS AGENTS AND THE LAW
     OpenTalks.AI, Moscow, 2018
     HSE-Skolkovo Institute for Law and Development
     Maksim Karliuk
  2. THE TROLLEY PROBLEM: LAW
     Example:
     • Mercedes - cars designed to prioritise occupants over pedestrians
     • German Ministry of Transportation:
       - the car should choose personal safety over property damage
       - the car never distinguishes between humans based on categories such as age or race
       - if a human takes their hands off the steering wheel, the car’s manufacturer is liable when there is a collision
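
A purely illustrative aside: the two German Ministry decision rules quoted above can be read as hard constraints on a collision-avoidance choice. The sketch below is a minimal, hypothetical encoding in Python, not any manufacturer's or regulator's actual implementation; the names Option and choose_option, the attribute names, and the damage figures are assumptions made for this example.

```python
from dataclasses import dataclass


@dataclass
class Option:
    """One possible manoeuvre the vehicle could take."""
    harms_humans: bool      # does this manoeuvre risk injuring a person?
    property_damage: float  # expected property damage, arbitrary units
    human_attributes: dict  # present in the data, deliberately never read


def choose_option(options):
    """Pick a manoeuvre under the two quoted rules:
    1) personal safety always outranks property damage;
    2) attributes such as age or race never enter the comparison."""
    # Rule 1: if any option avoids harming humans, restrict the choice to those.
    safe = [o for o in options if not o.harms_humans]
    candidates = safe if safe else options
    # Rule 2: among the remaining options, minimise property damage only.
    # human_attributes is never inspected, so it cannot influence the outcome.
    return min(candidates, key=lambda o: o.property_damage)


# Hypothetical usage: swerving costs property, braking late risks a pedestrian.
swerve = Option(harms_humans=False, property_damage=5000.0, human_attributes={"age": 30})
brake = Option(harms_humans=True, property_damage=0.0, human_attributes={"age": 70})
print(choose_option([swerve, brake]))  # picks 'swerve': safety outranks property
```

One design point the sketch makes explicit: the non-discrimination rule is easiest to audit when protected attributes are simply never read by the decision function, which also connects to the transparency questions on slide 4.
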
  3. AUTONOMOUS AGENTS
     Devices or programs which perceive their environment and perform actions that maximize their chances of success in attaining certain aims.
     Problems for law:
     1. predictability, which is critical to modern legal approaches
     2. the ability to cause an event (to act independently) without being capable of bearing liability
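
The definition on this slide is essentially the classic rational-agent loop: perceive the environment, then act so as to maximise expected success. The sketch below is a minimal illustration in Python under that reading; the names Environment, utility_of and agent_step are assumptions of this example, not anything from the talk. It also hints at the predictability problem: the chosen action follows from whatever utility the agent is given, not from a fixed script.

```python
class Environment:
    """Toy environment: the agent's aim is to reach a target value."""

    def __init__(self, target=10):
        self.state = 0
        self.target = target

    def percept(self):
        # What the agent perceives about its environment.
        return self.state

    def apply(self, action):
        # The action changes the environment.
        self.state += action


def utility_of(state, action, target):
    # Expected "success" of an action: how close it brings the state to the aim.
    return -abs((state + action) - target)


def agent_step(env, actions=(-1, 0, 1)):
    """One cycle of the loop: perceive, then choose the action that
    maximises the chance of attaining the aim."""
    state = env.percept()
    return max(actions, key=lambda a: utility_of(state, a, env.target))


env = Environment(target=10)
for _ in range(15):
    env.apply(agent_step(env))
print(env.state)  # the agent has converged on its aim (10)
```
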
  4. ALGORITHMIC TRANSPARENCY AND ACCOUNTABILITY
     • Should users and car owners (if any) have the right to understand how the algorithm takes decisions?
     • If a specific set of principles is agreed upon, should public authorities be able to check compliance by inspecting algorithms?
     • Should insurance companies be able to audit and inspect algorithms in order to set their premiums?
     • Will there be (blockchain-enabled) black boxes that allow us to understand what happened in more detail?
  5. ALGORITHMIC BIASES
     • Our society and legal system are already biased
       - exposure in terms of liabilities
       - tort law bases damage compensation on foregone earnings
     • Efficiency/privacy trade-off
       - algorithms cannot be neutral
       - the more they discriminate, the more efficient they are
       - would people trade off privacy in exchange for accuracy?
     Examples:
     • Google’s algorithm
     • PredPol
     • Algorithms in the judiciary
  6. ALGORITHMS AND LIABILITY
     • Algorithms can be used to distance tortfeasors from liability
     • Strict liability of developers?
       - we don’t know what we will know
       - causation is difficult to establish, even without having to prove negligence
       - key problems: distributed responsibility and the clash of algorithms
       - process-based or outcome-based?
     • Interaction between algorithms: joint and several liability?
  7. REGULATORY OPTIONS FOR LIABILITY
     1. Legal persons with rights and duties
     2. Robots = animals
     3. Robots = slaves
     4. Robots like robots?
  8. DIRECT LIABILITY OF AUTONOMOUS AGENTS
     (Alleged) differences between humans and autonomous agents (in terms of liability):
     • intention
     • freedom of will
     Solution: intention and freedom of will must be understood as what we attribute to each other, rather than as things that exist in reality.
  9. THE ‘HOW’ AND THE ‘WHY’ QUESTIONS
     • Which aims are being pursued?
     • Whose interests prevail?
     • Law as a tool for technological development