
Fighting Fire with Fire

Hannah Ono

December 26, 2024

Transcript

  1. About Me

    Georgia Severson, '25, Management & Statistics at MIT; NROTC
    - Spring 2024: UROP on genAI and management consulting
    - Summer 2024: UROPs on (1) genAI and education and (2) case studies of genAI use cases in the workplace
  2. Introduction

    Purpose: Explore how AI plays a role in both creating and combating disinformation, with a focus on implications for businesses.
    Focus Areas:
    1. How AI creates disinformation
    2. How AI combats disinformation
    3. Implications for businesses
  3. Context

    Definition of 'disinformation': false information that is deliberately intended to mislead (intentionally misstating the facts).
    Definition of 'misinformation': false or inaccurate information (getting the facts wrong).
    Why It Matters: Targeted disinformation attacks can damage reputations, erode consumer trust, and cause financial losses (MIT News, 2021).
  4. Why does this matter?

    Public Concern Over AI-Driven Misinformation
    - A global survey across 29 countries reveals that a significant share of the population is worried about AI's potential to facilitate the spread of fake news.
    - This concern underscores the need for businesses to implement robust AI governance and ethical standards to maintain public trust. (Ipsos, 2023)

    Increased Credibility of AI-Generated Disinformation
    - Research indicates that individuals are 3% less likely to identify false information when it is generated by AI rather than by humans.
    - AI-generated disinformation may therefore be more convincing, posing heightened risks to business reputation and consumer trust. (MIT Technology Review, 2023)
  5. Research Questions

    1. What role do AI tools play in both spreading and combating disinformation?
    2. What are the implications of disinformation and AI for businesses?
  6. Methodology

    1. Reviewed news articles on the role of AI in creating and preventing disinformation.
    2. Conducted a review of academic, industry, and government reports on AI and disinformation.
    3. Analyzed the findings, grouping them into key themes.
    4. After finalizing my own analysis, used genAI tools such as Stanford's STORM and Google NotebookLM to push the analysis further.
  7. Stanford STORM

    - STORM is an LLM-powered system from Stanford that writes Wikipedia-style articles from 'trusted' Internet sources.
    - Great for finding additional sources and summarizing the main research on a topic.
    - Less useful for analysis and querying.
  8. Google NotebookLM

    - Sources can include PDFs, website links, Google Drive files, and YouTube videos that you input.
    - Options to query NotebookLM about your sources (with citations) or listen to a 'podcast' about your materials.
    - Great for deepening understanding of individual studies and articles and for developing an overall understanding of the topic.
  9. Types of Misinformation

    - Fabricated Content: entirely false information created with no factual basis
    - Manipulated Content: genuine information or imagery that has been altered to mislead
    - Imposter Content: content that impersonates genuine sources, for example by using the branding of established news agencies without authorization
    - Misleading Content: information presented in a deceptive manner, such as opinion pieces portrayed as factual reports
    - False Context: accurate information shared with incorrect contextual details, like headlines that do not accurately reflect the article's content
    - Satire and Parody: humorous but false stories presented as truth, which can unintentionally mislead readers
    (House of Commons Select Committee on Culture, Media, and Sport, 2018)
  10. Where does AI play a role?

    (Note: these examples are theoretical, to illustrate the concept.)

    Disinformation
    - AI Involvement: generative AI tools are used to create false press releases or videos claiming a company engages in unethical practices, such as exploiting labor.
    - Impact: damages the company's reputation and leads to a loss of trust among consumers and stakeholders.
    - Type: Fabricated Content + Imposter Content

    Misinformation
    - AI Involvement: an AI-powered chatbot provides incorrect information about a company's product warranty due to outdated training data.
    - Impact: customers receive wrong advice, resulting in confusion and potential dissatisfaction.
    - Type: False Context
  11. Case Study 1: AI chatbot fabricates information, affecting Air Canada

    Summary: In February 2024, an AI chatbot provided fabricated information about Air Canada's policies, falsely claiming that the airline offered services and compensation that did not exist.
    AI Involvement: The chatbot generated responses from predictive text algorithms without access to accurate, up-to-date company data.
    Impact on Business: (1) Customer trust: eroded customer confidence in Air Canada's services. (2) Operational strain: increased customer service inquiries, diverting resources to address the misinformation.
    Lessons Learned: (1) Data integration: ensure AI systems are connected to current and accurate databases (see the sketch below). (2) Transparency: communicate the capabilities and limitations of AI tools to customers to manage expectations.
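To make the "data integration" lesson concrete, here is a minimal sketch of a chatbot answer path that is grounded in a live policy store and refuses to answer when no authoritative record exists. All names here (`PolicyStore`, `answer_policy_question`) are hypothetical illustrations, not Air Canada's actual system.

```python
# Sketch: ground chatbot answers in a current policy database instead of
# letting the model free-generate (the Air Canada failure mode).
from dataclasses import dataclass

@dataclass
class Policy:
    topic: str
    text: str
    last_updated: str  # ISO date of the authoritative record

class PolicyStore:
    """Stand-in for a live, versioned policy database."""
    def __init__(self, policies: list[Policy]):
        self._by_topic = {p.topic: p for p in policies}

    def lookup(self, topic: str) -> Policy | None:
        return self._by_topic.get(topic)

def answer_policy_question(store: PolicyStore, topic: str) -> str:
    policy = store.lookup(topic)
    if policy is None:
        # Refuse rather than invent a policy that does not exist.
        return "I can't find an official policy on that; let me connect you with an agent."
    # In a real system the retrieved text would be passed to the LLM as
    # grounding context; returned directly here to keep the sketch self-contained.
    return f"Per our policy (updated {policy.last_updated}): {policy.text}"

store = PolicyStore([Policy("bereavement fares", "Refund requests must be submitted before travel.", "2024-02-01")])
print(answer_policy_question(store, "bereavement fares"))
print(answer_policy_question(store, "pet travel"))  # triggers the refusal path
```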
  12. Case Study 2: Deepfake scam targets Hong Kong company

    Summary: In February 2024, cybercriminals used AI-generated audio to impersonate the company's CFO, convincing a subordinate to transfer $35 million to a fraudulent account.
    AI Involvement: Attackers employed deepfake technology to create a convincing audio replica of the CFO's voice.
    Impact on Business: (1) Financial loss: substantial monetary loss from the unauthorized transfer. (2) Reputational damage: concerns about the company's internal security protocols.
    Lessons Learned: (1) Verification protocols: implement multi-factor authorization for financial transactions (see the sketch below). (2) Employee training: educate staff about AI-driven threats.
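A minimal sketch of the "verification protocols" lesson: require a second, independent channel before releasing a large transfer, so that a single convincing deepfake call is never sufficient. The threshold value and function names are illustrative assumptions, not a real treasury workflow.

```python
# Sketch: out-of-band verification plus dual control for wire transfers.
CALLBACK_THRESHOLD_USD = 10_000  # assumed policy limit requiring extra checks

def confirm_via_known_number(requester: str) -> bool:
    """Placeholder: call the requester back on a number from the company
    directory, never on contact details supplied in the request itself."""
    print(f"Calling {requester} back on their directory number...")
    return True  # in practice, a human confirms identity here

def approve_transfer(amount_usd: float, requester: str, second_approver: str | None) -> bool:
    if amount_usd < CALLBACK_THRESHOLD_USD:
        return True
    # Rule 1: out-of-band callback on a known-good channel.
    if not confirm_via_known_number(requester):
        return False
    # Rule 2: dual control, meaning a second authorized person must sign off.
    if second_approver is None:
        print("Rejected: large transfers need a second approver.")
        return False
    return True

print(approve_transfer(35_000_000, "CFO", second_approver=None))          # blocked
print(approve_transfer(35_000_000, "CFO", second_approver="Controller"))  # allowed
```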
  13. Understanding the key types of misinformation/disinformation businesses face

    [2x2 matrix: internal/external origin vs. misinformation/disinformation]

    Internal Misinformation: misinformation originating from within the company, often due to failures or limitations in internal AI systems.
    Key Characteristics:
    - Results from poor data integration or inadequate system oversight.
    - Damages customer trust and increases operational strain.
  14. Understanding the key types of misinformation/disinformation businesses face

    [2x2 matrix: internal/external origin vs. misinformation/disinformation]

    External Disinformation: disinformation targeting a company from external sources, often through malicious use of AI tools such as deepfakes.
    Key Characteristics:
    - Created by third parties with malicious intent.
    - Impacts a company's finances, security, and reputation.
  15. Understanding the key types of misinformation/disinformation businesses face

    [2x2 matrix: internal/external origin vs. misinformation/disinformation, with the Air Canada case plotted as internal misinformation and the Hong Kong case as external disinformation]

    - Internal disinformation examples could include malicious actions taken by employees.
    - External misinformation examples could include hallucinations by generative AI tools being used to produce news articles.

    (A compact expression of this framework appears in the sketch below.)
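One way to make the 2x2 framework explicit is as a small classification helper. The enum names and lookup function are my own illustration of the deck's matrix, not a tool named in the presentation.

```python
# Sketch: the deck's 2x2 framework (origin x intent) as a classification helper.
from enum import Enum

class Origin(Enum):
    INTERNAL = "internal"   # arises inside the company (e.g., its own AI systems)
    EXTERNAL = "external"   # targets the company from outside

class Intent(Enum):
    MISINFORMATION = "misinformation"   # false, but not deliberate
    DISINFORMATION = "disinformation"   # false and deliberately misleading

EXAMPLES = {
    (Origin.INTERNAL, Intent.MISINFORMATION): "Air Canada chatbot case",
    (Origin.EXTERNAL, Intent.DISINFORMATION): "Hong Kong deepfake case",
    (Origin.INTERNAL, Intent.DISINFORMATION): "malicious actions by employees",
    (Origin.EXTERNAL, Intent.MISINFORMATION): "genAI hallucinations in news articles",
}

def classify(origin: Origin, intent: Intent) -> str:
    return f"{origin.value} {intent.value} (e.g., {EXAMPLES[(origin, intent)]})"

print(classify(Origin.INTERNAL, Intent.MISINFORMATION))
```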
  16. How AI enables disinformation

    Capabilities:
    1. Text generation: producing high-quality fake news articles, social media posts, and email campaigns.
    2. Deepfake technology: creating realistic videos and audio impersonating individuals.
    3. Social media AI bots: amplifying disinformation by targeting specific audiences with fabricated narratives. 70% of fake news on Twitter during the 2016 U.S. elections was shared by bots (MIT Technology Review, 2023).

    Implications for Businesses:
    - Reputational damage: misinformation can erode consumer trust and result in boycotts or lawsuits.
    - Operational challenges: companies must allocate resources for crisis management and customer service to address misinformation.
    - Financial impact: the global economy incurs an annual loss of approximately $78 billion due to the proliferation of fake news (CHEQ).
  17. 3 ways AI can act as a tool for combating disinformation

    1. Image and Video Verification
    - Deepfake detection: AI models use facial recognition and audio matching to flag manipulated videos or photos.
    - Metadata analysis: AI reviews file metadata to verify the authenticity of images and videos (see the sketch below).
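A minimal sketch of the metadata-analysis idea: inspect an image's EXIF tags for signals that it was edited or stripped of provenance data. This uses the Pillow library; the heuristics and the filename are illustrative assumptions, and real forensic systems combine many stronger signals.

```python
# Sketch: flag suspicious or missing EXIF metadata in an image file.
from PIL import Image, ExifTags

def inspect_image_metadata(path: str) -> list[str]:
    flags = []
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    if not tags:
        flags.append("no EXIF metadata at all (often stripped or synthetic)")
    if "Software" in tags:
        flags.append(f"processed by software: {tags['Software']}")
    if "DateTime" not in tags:
        flags.append("no capture timestamp")
    if "Model" not in tags:
        flags.append("no camera model recorded")
    return flags

for flag in inspect_image_metadata("suspect_photo.jpg"):  # hypothetical file
    print("warning:", flag)
```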
  18. 3 ways AI can act as a tool for combating disinformation

    2. Real-Time Fact-Checking
    - AI-powered chatbots: engage users in real time to debunk false claims and redirect them to verified sources.
    - Content verification: cross-referencing claims against trusted databases or fact-checking organizations (see the sketch below).
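A minimal sketch of claim cross-referencing, assuming access to Google's Fact Check Tools API (the claims:search endpoint, which indexes ClaimReview markup from fact-checking organizations). The API key and the example claim are placeholders; treat the response handling as a sketch against the published schema.

```python
# Sketch: look up a claim in a fact-checking database and print any ratings.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def check_claim(claim_text: str) -> None:
    resp = requests.get(ENDPOINT, params={"query": claim_text, "key": API_KEY})
    resp.raise_for_status()
    for claim in resp.json().get("claims", []):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            rating = review.get("textualRating", "unrated")
            print(f"{publisher}: '{claim.get('text', '')}' rated {rating}")

check_claim("Company X uses exploited labor")  # hypothetical claim
```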
  19. 3 ways AI can act as a tool for combating disinformation

    3. Detection and Analysis
    - Natural language processing (NLP): AI systems analyze text for patterns, tone, and inconsistencies to identify disinformation campaigns.
    - Sentiment analysis: AI evaluates the emotional tone of content to detect potential influence campaigns (see the sketch below).
    - Behavioral analysis: tools track abnormal user activity, such as bots amplifying false narratives.
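A minimal sketch combining two of the signals above: sentiment analysis via an off-the-shelf Hugging Face model, plus a crude behavioral heuristic for bot-like posting rates. The score cutoff and the 50-posts-per-hour threshold are illustrative assumptions, not values from the presentation.

```python
# Sketch: flag posts that combine extreme emotional tone with bot-like behavior.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

def flag_post(text: str, posts_last_hour: int) -> list[str]:
    flags = []
    result = sentiment(text)[0]
    # Influence campaigns often lean on strongly emotional framing.
    if result["label"] == "NEGATIVE" and result["score"] > 0.95:
        flags.append(f"highly negative tone (score {result['score']:.2f})")
    # Behavioral signal: humans rarely sustain bot-like posting rates.
    if posts_last_hour > 50:
        flags.append(f"abnormal posting rate ({posts_last_hour}/hour)")
    return flags

print(flag_post("This company is poisoning its customers!", posts_last_hour=120))
```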
  20. One company's weakness is another's opportunity

    Opens market opportunities: AI-driven disinformation and internal misinformation create demand for services that detect and combat these issues.
    Service offerings: companies combine AI detection tools with human expertise to provide robust solutions.
    Key capabilities:
    1. Visual analysis: detecting deepfakes and verifying media authenticity.
    2. Text management: ensuring chatbot accuracy and preventing misinformation.
    3. Fact-checking: verifying claims and correcting false information in real time.
    Learning from the case studies:
    - Air Canada: could have benefited from proactive partnerships to ensure its chatbot provided accurate responses, preventing customer confusion.
    - Hong Kong deepfake scam: highlights the need for robust voice and video verification tools to prevent financial fraud caused by AI-generated audio manipulation.
  21. Challenges and Ethical Considerations

    AI Limitations
    - Adversarial attacks: disinformation creators adapt their tactics to bypass AI detection tools. 85% of cybersecurity professionals believe deepfake scams will increase significantly by 2025 (Reuters).
    - False positives: legitimate content flagged as disinformation damages credibility and public trust.

    Ethical Concerns
    - Privacy vs. surveillance: monitoring for disinformation raises concerns about user privacy and free speech.
    - Global regulation: the lack of consistent international laws complicates efforts to combat disinformation.
  22. Future Research Directions & Business Implications

    Advancing Detection Capabilities
    - Research direction: improve AI's ability to detect nuanced disinformation, such as subtle language manipulation and bot campaigns, using advanced NLP and sentiment analysis.
    - Business implication: enhances detection of complex disinformation, protecting reputation and operations.

    Understanding Economic Impact
    - Research direction: quantify the financial and operational costs of AI-driven disinformation to guide better risk management.
    - Business implication: provides actionable insights for investment in mitigation strategies.

    Policy and Regulation Development
    - Research direction: develop global frameworks for the ethical use of AI against disinformation across industries such as media and advertising.
    - Business implication: ensures compliance with regulations, reducing legal risk and boosting trust.

    AI Explainability and Transparency
    - Research direction: create tools that make AI systems more interpretable and accountable for their decisions, to prevent misinformation.
    - Business implication: builds trust among consumers and stakeholders through greater accountability.
  23. Conclusions

    Key Takeaway: AI serves as both a creator and a combatant of disinformation, making it a double-edged sword for businesses.
    Implications for Businesses:
    1. Internal misinformation: failures in AI systems within companies can mislead customers and harm trust.
    2. External disinformation: malicious actors leverage AI to target businesses, causing reputational and financial damage.
    Proactive Measures Are Essential: businesses must adopt AI-driven detection tools, train employees to recognize disinformation threats, and implement robust data governance practices.
  24. Sources

    Freedom House. (2023). The repressive power of artificial intelligence. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
    MIT News. (2021). Artificial intelligence system could help counter the spread of disinformation. https://news.mit.edu/2021/artificial-intelligence-system-could-help-counter-spread-disinformation-0527
    MIT Technology Review. (2023). How generative AI is boosting the spread of disinformation and propaganda. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/
    The Conversation. (2024). AI tools are generating convincing misinformation: Engaging with them means being on high alert. https://theconversation.com/ai-tools-are-generating-convincing-misinformation-engaging-with-them-means-being-on-high-alert-202062
    Wired. (2023). How AI may be used to create custom disinformation ahead of 2024. https://www.wired.com/story/generative-ai-custom-disinformation/
    IJNet. (2021). Tracking disinformation? These AI tools can help. https://ijnet.org/en/story/tracking-disinformation-these-ai-tools-can-help
    Virginia Tech News. (2024). AI and the spread of fake news sites: Experts explain how to counteract. https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html
    Unite.AI. (2024). Tackling misinformation: How AI chatbots are helping debunk conspiracy theories. https://www.unite.ai/tackling-misinformation-how-ai-chatbots-are-helping-debunk-conspiracy-theories/
    Data & Policy (Cambridge). (2023). The role of artificial intelligence in disinformation. https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6
    CSET, Georgetown. (2023). AI and the future of disinformation campaigns. https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/
    Harvard Business Review. (2024). AI's trust problem. https://hbr.org/2024/05/ais-trust-problem
    Privacy International. (2023). Privacy and freedom of expression in the age of artificial intelligence. https://privacyinternational.org/report/1752/privacy-and-freedom-expression-age-artificial-intelligence
    American Psychological Association. (n.d.). Misinformation and disinformation. https://www.apa.org/topics/journalism-facts/misinformation-disinformation
    House of Commons Select Committee on Culture, Media, and Sport. (2018, July 29). Disinformation and 'fake news': Interim report. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36304.htm#_idTextAnchor002
    Ipsos. (2023). Data dive: Fake news in the age of AI. https://www.ipsos.com/en/data-dive-fake-news-age-ai
    Hao, K. (2023, June 28). Humans may be more likely to believe disinformation generated by AI. MIT Technology Review. https://www.technologyreview.com/2023/06/28/1075683/humans-may-be-more-likely-to-believe-disinformation-generated-by-ai/