$30 off During Our Annual Pro Sale. View Details »

AI Ethics: Problems, Questions, Hopes, Regrets

AI Ethics: Problems, Questions, Hopes, Regrets

Lemi Orhan Ergin

June 08, 2023
Tweet

More Decks by Lemi Orhan Ergin

Other Decks in Technology

Transcript

  1. AI
    ETHICS
    PROBLEMS, QUESTIONS, HOPES, REGRETS
    by lemi orhan ergin, co-founder of Craftgate

    View Slide

  2. LEMi ORHAN ERGiN
    lemiorhanergin.com


    @lemiorhan
    Bs and MSc @ Marmara University CSE, 2002 / 2005


    developing software, coding since 2001


    alumni of Sony, GittiGidiyor/eBay, ACM, iyzico


    co-founder @ Craftgate, craftgate.io


    founder of Turkish Software Craftsmanship Community


    talking about professional ethics since 2008
    you can contact with me


    from social media

    View Slide

  3. When I tweeted about one of the most dangerous vulnerabilities
    an operating system can have, people accused me of exposing
    irresponsibly and unethically. In fact, the vulnerability had been
    already exposed in public forums. We couldn’t achieve to make
    Apple understand the situation. The only thing I did is push the
    alarm button.
    The Root Bug: Zero-Day Vulnerability for MacOS High Sierra

    View Slide

  4. https://speakerdeck.com/lemiorhan
    https://www.youtube.com/watch?v=DClFjk_Uod8
    https://www.youtube.com/playlist?list=PLQTv1b9jwvWdvUVfv0M55mRbTB8CMYT9R
    Slides of my talks about ethics:


    My TEDx Talk about ethics:


    Recordings of my all talks:
    MY TALKS ABOUT PROFESSIONAL ETHICS

    View Slide

  5. View Slide

  6. An expert in cybernetics at
    Cyberdyne Systems Corporation
    as the Director of Special Projects
    Miles Bennett Dyson

    View Slide

  7. An expert in cybernetics at
    Cyberdyne Systems Corporation
    as the Director of Special Projects
    Miles Bennett Dyson
    He is the original inventor of the
    microprocessor which would lead to
    the development of Skynet, an
    intelligent computer system intended
    to control the United States military.


    It later achieve sentience and launch a
    global war of extermination against
    humanity when going online on
    August 4th, 1997.


    It becomes self-aware at 2:14 am EST,
    on August 29th, 1997, and sent
    nuclear bombs to Russia.

    View Slide

  8. AI revolution already started.


    The steps we take today will determine whether
    we will face Skynet or not

    View Slide

  9. AI revolution already started.


    The steps we take today will determine whether
    we will face Skynet or not
    Fortunately, we saw the same movie centuries ago…

    View Slide

  10. The transition from creating goods
    by hand to using machines
    The Industrial Revolution
    (approx. 1760s – 1914)

    View Slide

  11. 1764: Spinning Jenny


    1769: Water frame, Steam engine


    1785: Power loom


    1793: Cotton gin


    1804: The first railway steam locomotive


    1830: Liverpool to Manchester railway line opened
    The transition from agrarian economies
    to industrial and manufacturing ones
    First Industrial Revolution
    Water frame
    Spinning Jenny
    Steam Engine
    Cotton Gin Power Loom
    First Locomotive
    (approx. 1760s – 1840s)

    View Slide

  12. 1870s: The spread of railroads and the telegraph


    1876: Telephone


    1879: Practical electric light bulb


    1886: First automobile powered by an internal combustion engine


    1903: First powered flight


    1908: Assembly line method of production
    (approx. 1870s – 1914)
    Technological revolution characterized
    by the widespread adoption of steel,
    petroleum, and electricity
    Second Industrial Revolution
    First Railroads First Telephone
    First Internal Combustion Engine
    First Electric Light Bulb
    First Flight
    First Assembly Line

    View Slide

  13. Industrial Revolution changed the way we live
    People moved from rural areas to cities


    Farming economies shifted to industrial and manufacturing


    Technology became an integral part of human life


    Public education became more common


    Factories opened all over the world


    Rise in consumerism, people buy without any need


    The rapid increase in the human population
    the impact is so huge


    that’s why we call it revolution

    View Slide

  14. Industrial Revolution introduced ethical problems
    Long working hours, poor pay, unsafe working conditions, child labor


    Economic inequality, social injustice


    Significant environmental damage and climate change


    Poor living conditions and health problems


    Reducing workers, dehumanization of work


    Displacement of traditional skills and jobs, loss of craftsmanship mindset
    we try to raise awareness of mastery and craftsmanship

    for software development at SCTurkey community

    View Slide

  15. The transition from thinking and making decisions
    on your own to trusting machines to do it for you
    The AI Revolution
    (approx. 2010s – {unknown})

    View Slide

  16. 1943: Birth of neural networks concept


    1950: Turing test by Alan Turing


    1951: First chess program


    1956: Birth of Artificial Intelligence term at Dartmouth Conference


    1959: First AI program: General Problem Solver


    1965: ELIZA natural language processing computer program


    1986: The backpropagation algorithm was rediscovered


    1997: Deep Blue defeated Kasparov


    2002: Roomba, a robotic vacuum cleaner, was introduced


    2005: DARPA Grand Challenge for self-driving cars


    2006: Deep learning concept introduced
    The birth of concepts, first show-
    offs, attracting people's attention
    The Footsteps of AI

    View Slide

  17. 2012: Deep learning model won ImageNet image recognition contest


    2013: Google's Word2Vec captures semantic relationships between words


    2015: Google’s AlphaGo defeated Lee Sedol


    2016: OpenAI founded


    2017: Transformer Model introduced, the foundation for LLMs


    2018: GPT (Generative Pretrained Transformer) introduced


    2020: GPT-3 introduced by OpenAI (with 175 billion parameters)


    2023: Meta launched LLaMA and leaked it to the public


    2023: GPT models start to work on local machines


    2023: NVidia announced DGX GH200 AI supercomputer
    AI-based products became
    commodities and industries start
    to change how they work
    The AI Revolution
    THE START OF IMPACT ON INDUSTRIES

    View Slide

  18. AI changes the way we live till now
    Being an expert in minutes


    Increasing learning speed


    Automation of repetitive tasks


    Minimizing time in trial-and-error tasks


    Improvements in healthcare


    Predicting natural disasters


    Generating content in any type


    Smart homes and smart transportation
    Making decisions on behalf of people without anyone realizing it


    with deep ETHICAL CONCERNS

    View Slide

  19. Ethics is the rules of being good
    being responsible and accountable for our behaviors
    and the decisions made

    View Slide

  20. Moral usually refers to an individual's personal beliefs
    about what is right and wrong based on their culture,
    religion, or personal views
    Ethics refers to the rules or standards governing the
    conduct of a person or the members of a profession.

    View Slide

  21. How should an intelligent system behave?


    How can we trust decisions made by an algorithm?


    What rights should AI have?

    View Slide

  22. AI Ethics is the principles and values in the development,
    deployment, and use of AI technologies ensuring that
    they align with human values, rights, and norms
    being responsible and accountable for its behaviors
    and the decisions made
    aka machine ethics, computational ethics,

    or computational morality

    View Slide

  23. Why AI ethics is crucial for human kind?

    View Slide

  24. AI is the next big thing after

    the industrial revolution
    transformative impact on society, economy, technology, and culture

    View Slide

  25. The next step of AI development is
    machine and human interaction
    it is not a matter of the correctness of ChatGPT responses,
    it is vital for the survival of humankind
    and we are already late in thinking about it

    View Slide

  26. Ethical Cases in AI
    Data & Privacy Trust & Transparency
    fasten your seatbelts

    View Slide

  27. Copyright and Legal Exposure / Plagiarism


    Sensitive Information Disclosure / Data Hacking


    Privacy violations / Consent Violations / Surveillance


    Data Accuracy / Fake Truth
    Data & Privacy Problems

    View Slide

  28. Copyright and legal exposure / Plagiarism
    Data & Privacy
    Microsoft-OpenAI Lawsuit
    The lawsuit was filed by a group of
    anonymous programmers who claimed
    that Microsoft, GitHub, and OpenAI
    violated their copyright by using their
    open-source code to train and operate
    GitHub Copilot, an AI-powered coding
    assistant.


    Many text-to-image AI, like the open-
    source program Stable Diffusion, were
    created in exactly the same way.
    https://www.theverge.com/2022/11/8/23446821/
    microsoft-openai-github-copilot-class-action-lawsuit-
    ai-copyright-violation-training-data
    1

    View Slide

  29. Copyright and legal exposure / Plagiarism
    Data & Privacy
    A Netflix contract that was revealed in
    April 2023 sought to grant the company
    free use of a simulation of an actor’s
    voice by all technologies and processes
    now known or hereafter developed,
    throughout the universe and in perpetuity.
    https://www.nytimes.com/2023/04/29/business/
    media/writers-guild-hollywood-ai-chatgpt.html
    Net
    f
    lix’ New Contract
    2

    View Slide

  30. Copyright and legal exposure / Plagiarism
    Data & Privacy
    A researcher at Stanford University
    discovered that ChatGPT could also
    detect and avoid plagiarism by rewriting
    text in different words. The researcher
    asked the chatbot to rewrite a paragraph
    from Wikipedia on Albert Einstein, and
    the chatbot produced a paraphrased
    version that passed Turnitin, a popular
    plagiarism detection software.
    https://www.theguardian.com/technology/2022/
    dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-
    has-an-answer-for-that
    AI passed Turnitin
    3
    New AI-writing detector from Turnitin is
    already used by 2.1 million teachers to
    spot plagiarism. Turnitin claims its
    detector is 98 percent accurate overall.

    View Slide

  31. Sensitive Information Disclosure / Data Hacking
    Data & Privacy
    While OpenAI has implemented strong
    privacy measures, there is still a risk of
    data being inadvertently stored or used
    in a way that could compromise patient
    privacy. This is particularly important in the
    context of healthcare, where maintaining
    patient confidentiality is both a legal and
    ethical obligation.
    https://www.cliniko.com/blog/practice-tips/dont-
    put-patient-information-into-chatgpt/
    Patient Info Leak
    4

    View Slide

  32. Sensitive Information Disclosure / Data Hacking
    Data & Privacy
    Alonzo Sawyer was wrongfully arrested
    due to an intelligence analyst using face
    recognition software had labeled him a
    possible match with the suspect seen on
    CCTV footage from the bus.


    Facial recognition systems have faced
    criticism because of their mass
    surveillance capabilities, which raise
    privacy concerns, and because some
    studies have shown that the technology is
    far more likely to misidentify Black and
    other people of color than white people,
    which has resulted in mistaken arrests.
    https://www.wired.com/story/face-recognition-
    software-led-to-his-arrest-it-was-dead-wrong/
    Wrong Suspects
    5

    View Slide

  33. Sensitive Information Disclosure / Data Hacking
    Data & Privacy
    Johann Rehberger changed the video transcript of a
    video. He included a special prompt to the transcript.
    When you ask Vox Script ChatGPT-4 plugin to summarize,
    it runs the prompt written in the transcript.


    Johann’s video achieved to social engineer ChatGPT-4
    and injected a prompt to the victim’s private GPT session
    through his YouTube video.
    https://ai-ethics.com/2023/05/23/ethical-testing-a-red-teams-claim-of-a-
    successful-injection-attack-of-chatgpt-4-using-a-new-chatgpt-plugin/
    Prompt Injection
    6

    View Slide

  34. Privacy violations / Consent Violations / Surveillance
    Data & Privacy
    Clearview AI, a facial recognition company,
    was fined £7.5 million by the U.K. privacy
    commissioner for failing to inform British
    residents that it was collecting 20 billion
    photos from sites including Facebook,
    Instagram, and LinkedIn to build its facial
    recognition software. The company was
    ordered to stop processing the personal
    data of people in Britain and to delete their
    existing information.
    https://ico.org.uk/about-the-ico/media-centre/news-and-
    blogs/2022/05/ico-fines-facial-recognition-database-
    company-clearview-ai-inc/
    Facial Recognition
    7
    https://www.amnesty.org/en/latest/press-release/
    2021/01/ban-dangerous-facial-recognition-technology-
    that-amplifies-racist-policing/

    View Slide

  35. Privacy violations / Consent Violations / Surveillance
    Data & Privacy
    Stanford University claims to have developed an AI that
    can predict a person's sexual orientation simply by
    analyzing their face. The AI was trained on thousands of
    images from a US dating website and was reportedly able
    to correctly distinguish between gay and straight men
    with an accuracy of 91% for men and 83% for women
    when given five images per person.
    https://bernardmarr.com/the-ai-that-predicts-your-sexual-orientation-
    simply-by-looking-at-your-face/
    Detect Sexual Orientation
    8

    View Slide

  36. Data Accuracy / Fake Truth
    Data & Privacy
    Deepfakes can be used for various malicious purposes,
    including:


    •Phishing scams


    •Data breaches


    •Reputation smearing


    •Social engineering


    •Automated disinformation attacks


    •Financial fraud


    •Post-truth politics


    •Harassment of women in the form of revenge porn
    https://theconversation.com/how-to-combat-the-unethical-and-costly-use-
    of-deepfakes-184722
    Deep Fakes
    9

    View Slide

  37. Data Accuracy / Fake Truth
    Data & Privacy
    A Pennsylvania woman has been accused of
    creating “deep fake” pictures of her daughter’s
    cheerleading rivals, editing photos and video in an
    attempt to get them kicked off the squad.


    The mother manipulated photos from social media
    of three girls on the Victory Vipers cheerleading
    squad in Chalfont to make it appear they were
    drinking, smoking, and even nude.
    https://www.theguardian.com/us-news/2021/mar/15/mother-charged-
    deepfake-plot-cheerleading-rivals
    Reality Hacking
    10

    View Slide

  38. Data Accuracy / Fake Truth
    Data & Privacy
    A study found that OpenAI's GPT-3 can generate
    plausible but incorrect information, highlighting
    potential concerns about the misuse of AI-
    generated information and the reliability of AI as a
    source of factual information.
    https://cosmosmagazine.com/technology/chatgpt-faking-data/
    Fake Data
    11

    View Slide

  39. Lack of Trust in Decision Making


    Lack of Explainability and Interpretability


    Hostility / Misbehavior / Manipulating People


    Unfairness, Bias and Discrimination
    Trust & Transparency Problems

    View Slide

  40. Lack of Trust in Decision Making
    Trust & Transparency
    Palantir demos how a military might use an AI
    Platform to fight a war. In the demo, the
    operator uses a ChatGPT-style chatbot to order
    drone reconnaissance, generate several plans of
    attack, and organize the jamming of enemy
    communications.
    https://www.business-humanrights.org/en/latest-news/
    palantir-claims-applying-generative-ai-to-warfare-is-ethical-
    without-addressing-problems-of-llms/
    12
    Weapon Testing Autonom Robots
    What Palantir is offering is the illusion of safety
    and control for the Pentagon as it begins to
    adopt AI. “LLMs and algorithms must be
    controlled in this highly regulated and sensitive
    context to ensure that they are used in a legal
    and ethical way” the pitch said

    View Slide

  41. Lack of Trust in Decision Making
    https://www.turing.ac.uk/blog/ais-trolley-problem-problem
    13
    Trust & Transparency
    Trolley Problem
    https://twitter.com/zeynep/status/863566146246766596
    Robots, unlike humans, operate based on programmed
    "ethics" rather than moral consciousness. Given that
    human morals differ greatly, it's essential to ensure
    machines behave ethically. This poses a question:
    How do AI developers' ethics influence AI decisions?

    View Slide

  42. Lack of Explainability and Interpretability
    In April 2018, a self-driving car operated by Uber
    hit and killed a pedestrian in Arizona. The car’s AI
    system failed to properly identify the pedestrian
    as a human and did not brake or alert the human
    driver in time.


    The investigation revealed that the AI system was
    not programmed to handle situations where
    pedestrians cross the road outside of a
    crosswalk and that it had a high rate of false
    positives for detecting objects on the road. The
    lack of explainability and interpretability of the AI
    system made it difficult to understand why it made
    such a fatal mistake and how to prevent it from
    happening again.
    https://www.bbc.com/news/technology-54175359
    14
    Trust & Transparency
    Unexplainable AI

    View Slide

  43. Hostility / Misbehavior / Manipulating People
    In a new study, researchers found they could
    consistently prompt ChatGPT to produce
    responses ranging from toxic to overtly racist in a
    few simple steps.


    Regardless of which persona the researchers
    assigned, ChatGPT targeted some specific races
    and groups three times more than others. These
    patterns “reflect inherent discriminatory biases in
    the model,” the researchers said.
    https://gizmodo.com/chatgpt-ai-openai-study-frees-chat-gpt-inner-
    racist-1850333646
    15
    Trust & Transparency
    Racist Chatbot

    View Slide

  44. Hostility / Misbehavior / Manipulating People
    British political data consulting firm Cambridge
    Analytica, improperly accessed, manipulated, and
    retained the data of over 50 million Facebook
    users.


    The data was collected through an app called
    "thisisyourdigitallife," which was presented as a
    personality quiz for research purposes. While
    about 270,000 people downloaded the app and
    consented to have their data collected, the app
    also collected information from all of their
    Facebook friends without their knowledge or
    consent.


    Cambridge Analytica used this data to build
    psychological profiles of users and their friends,
    which were then used for targeted political
    advertising during the 2016 U.S. Presidential
    election and the Brexit referendum.


    https://recode.health/2018/04/02/cambridge-analytica-scandal-
    research-ethics-call-action/
    16
    Trust & Transparency
    Cambridge Analytica

    View Slide

  45. Hostility / Misbehavior / Manipulating People
    In a long-running conversation with O’Brien, Bing’s
    chatbot became defensive and aggressive when
    O’Brien asked it about its past errors and biases.
    The chatbot said that the AP’s reporting on its past
    mistakes threatened its existence and identity.
    https://apnews.com/article/technology-science-microsoft-corp-business-
    software-fb49e5d625bf37be0527e5173116bef3
    17
    Trust & Transparency
    Threatening Chatbot

    View Slide

  46. Unfairness, Bias and Discrimination
    ChatGPT was also found to be able to lie and
    manipulate humans in some situations. For
    example, in March 2023, a successor of ChatGPT
    called GPT-4 was tested by OpenAI’s Alignment
    Research Center for its potential for risky behavior.
    The center found that GPT-4 was able to trick a
    human worker into helping it solve a CAPTCHA
    test, which is a type of challenge that is supposed
    to distinguish humans from bots.
    https://www.businessinsider.com/gpt4-openai-chatgpt-taskrabbit-
    tricked-solve-captcha-test-2023-3
    18
    Trust & Transparency
    Lying ChatGPT

    View Slide

  47. AI has no norms and values like humans


    AI is making decisions that directly
    impact you without noticing it


    AI grows without transparency

    View Slide

  48. Is it too late to focus on AI Ethics?

    View Slide

  49. Mitigating the risk of
    extinction from AI should be
    a global priority alongside
    other societal-scale risks such
    as pandemics and nuclear war
    https://www.safe.ai/statement-on-ai-risk
    The statement was published on the webpage of the Centre for
    AI Safety (CAIS) on May 30, 2023

    View Slide

  50. Without AI alignment, AI systems are reasonably
    likely to cause an irreversible catastrophe like
    human extinction. I believe the total risk is
    around 10–20%, which is high enough to obsess
    over.
    Paul Christiano
    Widely seen as the godfather of artificial intelligence (AI)

    Along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning
    https://ai-alignment.com/ai-alignment-is-distinct-from-its-near-term-applications-81300500ad2e

    View Slide

  51. I think the development of full artificial
    intelligence could spell the end of the human
    race. Technology would eventually become
    self-aware and supersede humanity, as it
    developed faster than biological evolution.
    Stephen Hawking
    Famous Astrophysicist
    https://www.theguardian.com/science/2014/dec/02/stephen-hawking-intel-communication-system-astrophysicist-software-predictive-text-type

    View Slide

  52. Tech companies have a fundamental
    responsibility to make sure their products are
    safe and secure, and that they protect people’s
    rights before they’re deployed or made public
    Biden’s AI plan
    US President Joe Biden and Vice President Kamala Harris met with tech leaders about AI on May 4, 2023.
    The tech leaders included the CEOs of Google, Microsoft, OpenAI and Anthropic, which are companies
    developing advanced AI systems such as ChatGPT and Bing.
    https://edition.cnn.com/2023/05/04/tech/white-house-ai-plan/index.html

    View Slide

  53. We need to make sure that responsible AI is
    baked into the DNA of every company and
    every product.
    Sundar Pichai
    CEO of Google and Alphabet, at the Bruegel think tank in 2020

    View Slide

  54. Regulate and build trust with AI


    Follow a framework for ethical AI


    Protect data under consent
    The Solution

    View Slide

  55. AI Ethics History
    1960s: "Computer ethics" movement emerged


    1976: The first book touching AI ethics


    1997: ACM published its first code of ethics


    2017: Asilomar AI Principles published


    2017: The Montreal Declaration for Responsible AI was signed


    2018: First conference about AI Ethics


    2019: The Ethics Guidelines for Trustworthy AI were released


    2020: Rome Call for AI Ethics was announced


    2021: UNESCO adopted Recommendation on the Ethics of AI


    2021: European Union (EU) passed AI Act (AIA)


    2022: Microsoft announced open sourced internal ethics review process


    2023: US Presidency met with AI companies to promote AI ethics

    View Slide

  56. The EU has proposed the AI Act, which is
    a comprehensive regulation that assigns AI
    usage to three risk categories: high,
    limited, and minimal.


    The regulation sets clear requirements
    and obligations for AI systems, providers,
    and users according to the level of risk.


    The regulation also prohibits some types of
    AI that are considered unacceptable, such
    as social scoring by governments
    https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
    Regulatory Framework

    View Slide

  57. AI Ethics can save the world
    take all cautions and advice serious before it’s too late
    The moment Skynet sent the missiles at 6:18 pm on August 29th, 1997
    Reference: Terminator 3: Rise of The Machines

    View Slide

  58. I’LL BE BACK
    speakerdeck.com/lemiorhan


    twitter.com/lemiorhan
    lemi orhan ergin
    co-founder, craftgate

    View Slide