IFSC 2200 Ethics in the Profession - Final Presentation

Anderson Banihirwe

April 29, 2018

Transcript

  1. Artificial Intelligence - Definition: “The study of ideas that enable computers to be intelligent.” Intelligence?
  2. Artificial Intelligence - Intelligence, a definition: … ability to reason? … ability to acquire and apply knowledge? … ability to perceive and manipulate things?
  3. Goals of Artificial Intelligence - Make computer systems more useful (“computer scientists and engineers”) - Understand the principles that make intelligence possible (“psychologists, linguists, and philosophers”)
  4. What makes Artificial Intelligence a moral issue? - Human welfare (physical safety) - Justice (equality) - Rights (private life, anonymity) - Duties. Ethical problems in Artificial Intelligence can be divided into three main areas: Information, Control, and Reasoning.
  5. What makes Artificial Intelligence a moral issue? - 1. Information: Computer systems store information in databases. The management of this information and the communication between these systems could threaten users’ private life, liberty, or dignity.
  6. What makes Artificial Intelligence a moral issue? - 2. Control applications - Robotics: A common problem of classical engineering is guaranteeing personal safety and taking responsibility toward the environment. Are there universal laws stating rules for behaviour between robots and humans … Asimov’s Three Laws of Robotics?
  7. What makes Artificial Intelligence a moral issue? - 2. Control applications - Robotics: Asimov’s Three Laws of Robotics (xkcd: Why Asimov put the Three Laws of Robotics in the order he did)
  8. What makes Artificial Intelligence a moral issue? - 3. Autonomous Reasoning: Idea: computer systems making decisions by themselves. Problem: trusting these intelligent systems. Examples: - Medical diagnosis from symptoms - Self-driving cars - Natural language processing
  9. Autonomous Reasoning - Ethical problems: ❖ Computers have no consciousness. ❖ They cannot take responsibility for their actions. ❖ Are the creators responsible? The company in charge?
  10. Artificial General Intelligence (AGI) - Definition: “AGI is a field aiming at the building of ‘thinking machines’; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence).” - AGI Society (http://www.agi-society.org/) - AGI systems would not only be granted rights; they would also want to have rights.
  11. Artificial General Intelligence (AGI) - Equality problems: - Could AGI systems work for humans? - Would they not become slaves? - Do we have the right to turn off a conscious computer system? Trust: - Autonomous doctors, autonomous judges
  12. Conclusion - The field of Artificial Intelligence aims to create something that is not really understood: intelligence. - Current AI ethics problems remain poorly defined. - Every day, new controversial and polarizing discussions are held about the future use of AI. - We cannot think about Artificial Intelligence without ethics.