
Design solutions for fake news: Detection doesn’t matter

Are Algorithms Enough? Analyzing Fake News Solutions Designed by Students (2024). Milica Milenkovic, Essi Häyhänen, Joni Salminen, Bernard J. Jansen. ACHI’24, Barcelona, Spain. Full article in Thinkmind Digital Library.

Joni

May 29, 2024

Transcript

  1. Design solutions for fake news: Detection doesn’t matter
     Dr. Joni Salminen, Associate Professor, University of Vaasa, Finland
     [email protected]
  2. Contents
     • The fake news problem
     • Not a new problem
     • Not a technical problem
     • It’s easy to achieve ML accuracy of 99% in fake news detection, but so what?
     • Lack of implementation in real systems
     • Lack of intervention studies (user studies, experiments, user research… basically the HCI perspective)
     • Lack of consideration for human factors
     • Is detection enough? Should we even bother with it?
     • Why is detection the dominant paradigm?
     • Tentative root causes of the fake news problem
     • Elusive definition of fake news (Trump’s perspective)
     • Possible solutions
     • Framework for categorizing and assessing fake news solutions
  3. The fake news problem
     • Fake news is an umbrella term for online misinformation, often of a political nature. It can be generated and circulated by organizations or individuals.
     • Technical problem = fake news can be squashed by developing better algorithms, especially detection ones
     • Wicked problem = “In planning and policy, a wicked problem is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.” (Wikipedia, accessed May 27, 2024)
     • Socio-technical problem = “A socio-technical problem is an issue that arises from the complex interplay between social systems, human behavior, and technical systems or technology.” (ChatGPT, accessed May 27, 2024)
  4. Not a technical problem
     • People create and circulate fake news, not AI
     • Why? If we don’t understand why, how will detection help?
     • Current paradigm: “Let’s develop better ML models to detect fake news and remove it or limit its circulation.”
     • Redefined paradigm: “Let’s think of something else.”
  5. It’s easy to achieve ML accuracy of 99% in fake news detection, but so what?
     • Performance figures from our lit review: 93.6%, 99.4%, 98.36%, 98.6%, 96.05%
     …two questions: (1) how much better can it get? (2) if we reach 100%, then what?
     …if it’s “solved”, then why is it not solved?
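[Editor's illustration] For readers unfamiliar with how figures like these are produced, below is a minimal sketch of the standard train-test detection pipeline in its most basic form: TF-IDF features plus logistic regression on a labeled corpus. This illustrates the paradigm the deck critiques; it is not a model from the paper, and the dataset file and column names are hypothetical placeholders.

```python
# Minimal sketch of the standard fake-news detection pipeline (train-test
# paradigm). The CSV file and its "text"/"label" columns are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("fake_news_corpus.csv")  # hypothetical labeled corpus

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# Bag-of-words features and a linear classifier: the familiar
# text-classification setup that routinely scores in the high 90s
# on benchmark corpora.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(f"Test accuracy: {accuracy_score(y_test, preds):.4f}")
```

Note that the reported accuracy says nothing about whether the model would reduce misinformation exposure in production, which is the gap the next slide describes.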
  6. Lack of implementation in real systems
     • Papers typically test the models using the ML train-test paradigm
     • The models are not implemented in real systems.
     • Yet, it is well known that ML research models perform very differently when put into production (the POC gap)
     …for the sake of clarity, social media platforms use automation (ML models) and manual fact-checking, but these results are almost never made public.
  7. Lack of intervention studies (user studies, experiments, user research… basically the HCI perspective)
     • 0% of the articles we reviewed for our paper included an intervention study
     • Intervention = apply a technology and observe what happens (typically randomized controlled trials, A/B tests)
     • Where are the user studies? Where is the human-computer interaction perspective?
     …interestingly, even *business students* who designed solution proposals for fake news often rely on the “magic” of algorithms to solve this problem! (“38.9% of teams devised algorithm-focused solutions, 27.8% proposed human-focused solutions, while 33.3% designed solutions that incorporated both algorithmic and human-centered approaches to addressing the misinformation problem.”)
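[Editor's illustration] To make the missing ingredient concrete, here is a minimal sketch of how a randomized intervention could be evaluated, assuming a two-arm A/B test in which the treatment group sees a warning label on flagged articles. The counts are invented, and the analysis (a two-proportion z-test via statsmodels) is one common choice, not a method from the paper.

```python
# Hypothetical A/B test: does a warning label on flagged articles reduce
# the share of users who reshare them? All counts below are invented.
from statsmodels.stats.proportion import proportions_ztest

reshares = [412, 530]   # users who reshared: [treatment, control]
exposed = [5000, 5000]  # users randomly assigned to each arm

z_stat, p_value = proportions_ztest(count=reshares, nobs=exposed)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the label changed resharing behavior, which is
# evidence about real-world impact that offline detection metrics cannot give.
```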
  8. Lack of consideration for human factors
     • Design of systems that involve features that mitigate fake news circulation (but does that depend on detection?)
     • Root causes are human factors, not algorithmic:
       • People reject the “official truth” – why?
       • People hold misguided beliefs – why?
       • People ignore contrary evidence and prefer information that validates their current beliefs – why?
       • People seek the company of like-minded thinkers instead of being open-minded – why?
  9. Is detection enough? Should we even bother with it?
     • Detection does not address understanding
     • Detection does not address root causes
     • Detection is not really interesting, neither theoretically (it’s purely empirical) nor empirically (it’s just a type of text-classification problem)
  10. Why is detection the dominant paradigm?
     • Because it’s convenient.
     • Detection is a well-defined problem: when you define it as a text-classification problem, you can apply the “scientific” method: collect data with ground truth, train classifiers, evaluate their performance, and report
     • This is the ‘cozy and familiar’ computer science approach (but it has now been done hundreds of times… how many more times is it needed?)
     • Detection appears like a magic bullet: “if we can detect, somebody else can do something and fix this.” (usually implying moderation / censoring)
     • “Algorithmic scapegoating” also explains this: people are uncomfortable taking accountability; they would rather blame platforms and algorithms for their own behavior.
  11. Tentative root causes of the fake news problem
     • People want enemies
     • People want to be in groups
     • People distrust authorities
     • People are bored
     • Reality is too complex
     • People are intrigued and entertained by authoritative figures and curious theories
     …so, how can technology help address these? Let’s work on that, instead of smart human brains spending tens of thousands of hours on the dead end of detection.
  12. Elusive definition of fake news (Trump’s perspective)
     • Trump defined fake news as news that attacks him by taking quotes out of context (jokes, purposeful misinterpretation)
     • When the media loses objectivity, the downstream effect is that more space is left for “taking sides” on information correctness
     • Anyway, concepts like fake news can be weaponized, and the concept can be generalized to “any argument that I disagree with and that contains an opinion or moral statement is fake news.”
  13. Possible solutions
     • Adopting and enforcing classic journalistic principles (objectivity) – not technical
     • Teaching media criticism – not technical
     • Teaching critical thinking – not technical
     • Correcting for clickbait logic (the trap of engagement) – technical
     • Detection for understanding – combined
     • Holistic fake news research = cross-disciplinary, acknowledges both technical and social factors, evaluates real-world impact. (Or we could call this ‘human-centered fake news research’.)
  14. Framework for categorizing and assessing fake news solutions
     Step 1: Determine whether a solution is algorithm-focused, human-focused, or mixed-focused.
     Step 2: Extract the assumptions underlying the solution (e.g., about human behavior, system behavior).
     Step 3: Evaluate the level of realism of the solution (1-7).
     Step 4: Evaluate the level of clarity (unknowns) of the solution (1-7).
     Step 5: List the risks to successful implementation of the solution and assess their likelihood.
     Step 6: List quantitative and qualitative metrics and measures for evaluating solution effectiveness (NOT only accuracy and F1 score!).
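[Editor's illustration] One way to make the six steps operational is a record type that forces each proposed solution through every step before it can be scored. The encoding below is my own reading of the slide; the field names, the risk-likelihood mapping, and the example values are assumptions, not artifacts from the paper.

```python
# A sketch of the six-step assessment framework as a Python record.
# Field names and types are my own interpretation of the slide.
from dataclasses import dataclass, field
from enum import Enum


class Focus(Enum):  # Step 1: solution focus
    ALGORITHM = "algorithm-focused"
    HUMAN = "human-focused"
    MIXED = "mixed-focused"


@dataclass
class SolutionAssessment:
    name: str
    focus: Focus                          # Step 1
    assumptions: list[str]                # Step 2: assumed human/system behavior
    realism: int                          # Step 3: rated 1-7
    clarity: int                          # Step 4: unknowns, rated 1-7
    risks: dict[str, float]               # Step 5: risk -> likelihood in [0, 1]
    metrics: list[str] = field(default_factory=list)  # Step 6: beyond accuracy/F1

    def __post_init__(self) -> None:
        if not (1 <= self.realism <= 7 and 1 <= self.clarity <= 7):
            raise ValueError("realism and clarity are rated on a 1-7 scale")


# Example: a hypothetical human-focused solution run through the framework.
course = SolutionAssessment(
    name="Media literacy course",
    focus=Focus.HUMAN,
    assumptions=["participants transfer critical-reading skills to daily use"],
    realism=5,
    clarity=4,
    risks={"low enrollment": 0.4, "effects decay over time": 0.6},
    metrics=["pre/post belief accuracy", "change in sharing behavior"],
)
```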
  15. Concluding remarks
     Detection is akin to magic-bullet thinking: it gives a sense of control and predictability, and it makes quantitative assessment possible. These are logical reasons why detection is the most popular approach.
     In turn, other solution research is “too hard”:
     - no access to platforms
     - “it’s not ethical”
     - it’s not immediately clear how to evaluate it
     Given the nature of the problem, we could have multiple partial solutions that together address the problem “enough” for it to not be “too big”. (Detection can be one of these, sure.) But we don’t have “the” solution. The best we can hope for is partial solutions.
  16. Thanks!
     Dr. Joni Salminen
     Email: [email protected]
     Are Algorithms Enough? Analyzing Fake News Solutions Designed by Students (2024). Milica Milenkovic, Essi Häyhänen, Joni Salminen, Bernard J. Jansen. ACHI’24, Barcelona, Spain. Full article in the Thinkmind Digital Library.