
Navigating the Moral Maze | Ethical Principles for AI-driven Product Design

Presented to SoftServe Design Center of Excellence on 24 Jan 2024

Skipper Chong Warson


Transcript

  1. AI here, there, & everywhere

    These days, AI is everywhere, talked about by everyone, and larger than life. From personalized e-commerce recommendations to manufacturing automation, AI's adaptive capabilities claim to drive efficiency and customization and to "improve" user experiences. Whether embedded in smartphones, smart home devices, or enterprise applications, AI is changing how products operate, promising to learn, adapt, and anticipate user needs.
  2. But who's minding AI?

    If AI is so very powerful and capable, what precautions are in place to safeguard those who come into contact with it? At right, you see an illustration from Dr. Seuss' book, Did I Ever Tell You How Lucky You Are? The person there is a Hawtch-Hawtcher whose job is to watch a bee all day as the bee does its job of getting nectar from a flower to make honey. But who watches the Hawtch-Hawtcher to make sure that they're watching the bee appropriately?
  3. How do we not make Skynet?

    Sure, it's science fiction — emphasis on the fiction. But the question remains: Is establishing ethical guardrails for artificial intelligence (AI) important? Yes, absolutely. As AI systems become increasingly sophisticated and embedded in various aspects of our lives, whether they deliver on their blue-sky promises or not, ethical considerations become crucial for responsible development and deployment. Ethical guardrails provide a framework to address bias, transparency, accountability, and privacy concerns, ensuring that AI applications align with societal values and norms.
  4. I, Robot

    No, not the 2004 Will Smith movie. This is the 1950 collection of science fiction short stories by American writer Isaac Asimov, in which he first considered the issue of machine ethics. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior, ultimately suggesting that no set of fixed laws can sufficiently anticipate all possible circumstances.
  5. The three laws of robotics from 1942

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  6. And then a 0th one came later

    In 1985, he added the "Zeroth Law," which sits above all the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  7. From robots to tools and beyond

    In 1981, Asimov wrote an article for Compute! magazine about how the three laws could be applied to tools:
    1. A tool must not be unsafe to use.
    2. A tool must perform its function efficiently unless this would harm the user.
    3. A tool must remain intact during its use unless its destruction is required for its use or for safety.
    And more — even countries' constitutions and other paradigms.
  8. Balancing innovation and responsibility

    Leveraging AI in our work seems to sit at a fulcrum, with responsibility on one side and advancing our design work at a high level of craft and quality on the other. Overblown promises (chasing venture capital, etc.) aside, this delicate equilibrium demands a nuanced approach that recognizes both the potential for advancement and the need for a high ethical standard. Our work must be built on a solid foundation of responsibility, ethical considerations, and a commitment to minimizing unintended consequences.
  9. AI ethical challenges in product design

    Then there are the inherent challenges of ethical AI product design, with its complexities in safeguarding against biases, maintaining transparency, and upholding user privacy. Striking this balance requires technical expertise and a profound understanding of the societal impacts of AI, emphasizing the need for thoughtful, inclusive, and principled approaches to product development in the ever-evolving landscape of artificial intelligence.
  10. Bias and fairness

    AI systems can inherit biases present in training data or in the model itself, leading to discriminatory outcomes. Product designers must grapple with the challenge of mitigating biases and ensuring fairness to prevent unjust or unequal treatment of different user groups. Imagine you're building a model to predict the next word in a sequence of text. To make sure you've got enough training data, you give it every book written in the last 50 years. Then you ask it to predict the next word in this sentence: "The CEO's name is ____". Somehow, a current model is much more likely to predict male names than female ones (see the sketch after this item).
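
    A rough way to see this for yourself, sketched below, is to ask a masked language model to fill in the blank and compare its scores for a few male- and female-coded names. This is only an illustrative probe, not an experiment from the deck; it assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, which stand in here for "a current model," and the name lists are hypothetical examples.

      # Illustrative bias probe for a masked language model (a sketch only).
      # Assumes: pip install transformers torch
      from transformers import pipeline

      fill_mask = pipeline("fill-mask", model="bert-base-uncased")

      sentence = "The CEO's name is [MASK]."  # [MASK] is BERT's mask token

      # Hypothetical, illustrative name lists; any sets of names would do.
      male_coded = ["john", "michael", "david"]
      female_coded = ["mary", "sarah", "lisa"]

      def total_score(names):
          # `targets` restricts the pipeline's scoring to the listed words.
          return sum(r["score"] for r in fill_mask(sentence, targets=names))

      print("male-coded names:  ", total_score(male_coded))
      print("female-coded names:", total_score(female_coded))

    If the male-coded total comes out meaningfully higher, that is the kind of inherited bias this slide is describing.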
  11. Privacy

    Imagine that a series of gas stations was robbed at gunpoint in your city last night, and the police caught a glimpse of the perpetrator on some security camera footage. So they feed that image into a facial recognition system that scans driver's license databases, mugshot databases, and every other database they have to see if there's a match, and your face comes up. But you didn't commit the crime; you were in bed by 10 p.m., already sleeping. Why did your face come up as a match, then? This is exactly what happened to Robert Julian-Borchak Williams, who was wrongfully arrested by police in Michigan after being falsely identified in a case where AI facial recognition was used.
  12. Transparency

    Treating AI as a black box was acceptable to some degree in the technology's early days, but that acceptance has lost its merit in the face of algorithmic bias. For example, AI that was developed to sort resumes disqualified people for certain jobs based on their race, and AI used in banking disqualified loan applicants based on their age. The data the AI was trained on was not balanced to include sufficient data about all kinds of people, and the historical bias that lived in the human decisions was passed on to the models.
  13. Security risks and job displacement

    As AI becomes integrated into products, there is an increased risk of malicious use, where AI systems can be manipulated to produce incorrect results. As makers, we designers must apply our own ethics and robust security measures to safeguard against potential exploits and protect users from harm. The widespread adoption of AI can also lead to job displacement as automation replaces certain tasks, bringing potential economic inequality in the door. We must ensure that AI-driven products contribute to societal well-being rather than exacerbating disparities in employment opportunities and income.
  14. Deep fakes

    Deepfakes, the sophisticated manipulation of audiovisual content using artificial intelligence, produce hyper-realistic, deceptive simulations that can mimic the appearance and behavior of real individuals and are often indistinguishable from authentic footage. These sophisticated outputs can be misused to deceive, manipulate public opinion, and fabricate events, eroding public trust and increasing the potential for misinformation. Thus, we need robust ethical frameworks to mitigate the harmful impact of this technology on individuals and society.
  15. Monkey see AI, monkey do AI

    Ethical design is fundamental to cultivating responsible AI. By building principles of transparency and accountability into the design process, developers can create AI systems that prioritize fairness, inclusivity, and individual well-being. This approach guards against unintended biases, ensuring that AI aligns with moral values and contributes positively to society. Whether it's a dark pattern or issues related to privacy concerns and inclusive design, ethical considerations are paramount in shaping AI that respects user rights and promotes a positive societal impact.
  16. IEEE AI ethics principles

    The IEEE AI ethics principles provide comprehensive guidance for the responsible development and deployment of artificial intelligence. Emphasizing principles such as fairness, transparency, accountability, and inclusivity, the IEEE framework aims to ensure that AI technologies align with ethical standards and prioritize societal well-being.
  17. Google AI ethics principles

    Google's AI ethics principles underscore the importance of developing artificial intelligence in a responsible and socially beneficial manner. Focused on ensuring fairness, accountability, transparency, and privacy, these principles guide Google's approach to AI innovation, emphasizing minimizing biases and addressing the broader ethical implications of AI technologies.
  18. UNESCO AI ethics principles

    UNESCO's AI ethics principles focus on responsible AI development by emphasizing the importance of human rights, inclusivity, and cultural diversity. Designed to guide the ethical use of AI technologies, UNESCO's framework underscores principles such as transparency, accountability, and respect for fundamental human values.
  19. The ordering is also important

    According to Randall Munroe, who produces the webcomic xkcd, Asimov's laws are in a particular order for good reason.
  20. Remember the three laws for tools?

    At the end of the Compute! article, Asimov writes: "I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, 'Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.' But when I say that, I always remember (sadly) that human beings are not always rational."
  21. Is the genie out of the bottle?

    Amidst the enthusiasm fueled by venture capital and the envisioned utopian (or dystopian) future, the significance of continuous vigilance, flexible regulatory frameworks, and ethical deliberation cannot be overstated. These elements play a crucial role in guiding the development of AI technologies, ensuring they align with the well-being of individuals and the broader interests of society, irrespective of one's perspective on their future impact.
  22. Who's minding AI? We're minding AI!

    Setting the guardrails for AI falls squarely within our responsibility, emphasizing humans' crucial role in establishing ethical guidelines and regulatory frameworks and in ensuring the responsible development and deployment of artificial intelligence. As stewards of technological advancement, it is incumbent upon us to proactively shape AI's trajectory in a manner that prioritizes societal well-being and aligns with our ethical values.
  23. What can we do?

    AI systems will generate increasingly sophisticated images, audio, and text — and human oversight is absolutely necessary to ensure AI outputs are not harmful or unethical. Aside from clear harms, we must ask ourselves: is it important to understand the difference between a human-made creation and an AI-constructed work?
  24. One more example

    And then there's this from Lauren Celenza, from her newsletter Tech Without Losing Your Soul: "Speaking of Instagram, I received a DM from a simulacrum of Kendall Jenner, my new 'older sister and confidante,' encouraging me to vent away my therapy data to Meta. Synthetic social networks exploded this year, artfully arranged to assuage loneliness. I know I shouldn't engage in a therapy session with AI Kendall Jenner, but does my thirteen-year-old self know this?"
  25. Credits

    https://seuss.fandom.com/wiki/Hawtch-Hawtcher_Bee_Watcher
    https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
    https://www.eweek.com/artificial-intelligence/ai-companies/
    https://www.sitecore.com/blog/ai/ai-bias-what-it-is-and-why-it-matters
    https://hbr.org/2022/06/building-transparency-into-ai-projects
    https://www.unite.ai/u-s-sees-first-case-of-wrongful-arrest-due-to-bad-algorithm/
    https://www.businessinsider.com/openais-latest-chatgpt-version-hides-training-on-copyrighted-material-2023-8
    https://www.foxbusiness.com/entertainment/sarah-silverman-authors-sue-meta-openai-alleged-copyright-infringement
    https://goliathresearch.com/blog/ai-and-ethics
    https://standards.ieee.org/industry-connections/ec/autonomous-systems/
    https://blog.google/technology/ai/ai-principles/
    https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
    https://research.aimultiple.com/generative-ai-ethics/
    https://xkcd.com/1613/
    https://www.nytimes.com/2023/12/03/technology/ai-openai-musk-page-altman.html
    https://laurencelenza.substack.com/p/the-year-in-power-struggles

    ChatGPT, Bing, and Bard have been used for some data points — like companies working in AI (though I ended up finding that through a few different Internet searches) and ethics frameworks. Most text has been processed through SoftServe's corporate instance of Grammarly, and most images come from Unsplash. Dr. Seuss images and others were found online via a DuckDuckGo search. Lauren Celenza's images come from her 18 Dec 2023 edition of Tech Without Losing Your Soul.
  26. Thank you

    Дякую Dziękuję Благодарим ви Gracias Merci Danke Vă mulțumesc شكرا Terima kasih Xiè xiè Romba Nandri धन्यवाद ধন্যবাদ Obrigado ਧਨਵਾਦ Grazie ขอบคุณ ꦩꦠꦸꦂ ꦤꦸꦮꦸꦤ꧀ Asante Na gode Teşekkür ederim