
Intentionally Biasing User Representation?: Investigating the Pros and Cons of Removing Toxic Quotes from Social Media Personas

Joni
October 10, 2022

Presentation at NordiCHI'22, by Dr. Joni Salminen
Systems mentioned:
Automatic Persona Generation (https://persona.qcri.org)
Survey2Persona (https://s2p.qcri.org)



Transcript

  1. Intentionally Biasing User Representation?: Investigating the Pros and Cons of Removing Toxic Quotes from Social Media Personas
     Joni Salminen, University of Vaasa
     Soon-gyo Jung, Qatar Computing Research Institute, Hamad Bin Khalifa University
     Bernard J. Jansen, Qatar Computing Research Institute, Hamad Bin Khalifa University
     Presented at NordiCHI’22, Aarhus, Denmark
  2. Algorithmically generated personas can help organizations understand their social media audiences
     • Scalable (N = 1,000,000? No problem!)
     • Objective (based on quantitative data, not opinions or stereotypical thinking)
     • Updatable (when user behavior or attitudes change, so do the personas)
     …therefore, a range of studies in data-driven persona development (a.k.a. quantitative personas, algorithmic personas) have taken place. Often, these studies utilize social media data.
  3. Toxic Quote Problem in Social Media Personas
     When using algorithms to create personas from social media user data, the resulting personas may contain toxic quotes that negatively affect content creators’ perceptions of the personas. For example, a persona can say, “I wish all Zionists would die.” Even though this is an isolated comment that can be understood as an outlier (radical view), when such a comment appears in the persona profile, it can contaminate the persona user’s view of the persona and decrease acceptance of using the persona.
     …the point of personas is to have the user empathize with the persona, which can be hindered by negative attitudes towards the persona.
  4. There is empirical evidence of this effect: “Toxic Text in Personas”
     We conducted a 2 × 2 user experiment in which 496 participants were shown toxic and non-toxic versions of data-driven personas. Participants gave higher credibility, likability, empathy, similarity, and willingness-to-use scores to non-toxic personas. Gender also affected toxicity perceptions: female toxic data-driven personas scored lower in likability, empathy, and similarity than their male counterparts. Female participants gave higher perception scores to non-toxic personas and lower scores to toxic personas than male participants did.
     Salminen, J., Jung, S.-G., Santos, J., & Jansen, B. J. (2021). Toxic Text in Personas: An Experiment on User Perceptions. AIS Transactions on Human-Computer Interaction, 13(4), 453–478. https://doi.org/10.17705/1thci.00157
  5. Current study
     • To address the toxic quote problem in personas, we have implemented toxicity detection in an algorithmic persona generation system capable of using tens of millions of social media interactions and user comments for persona creation.
     • In the system’s user interface, we provide a Hate Filter, a feature that lets content creators using the personas turn toxic quotes on or off, depending on their preferences (see the sketch after this slide).
     • To investigate the feasibility of this feature, we conducted a survey study with 50 professionals in the online publishing domain.
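     The Hate Filter described above is, at its core, a display-time toggle over toxicity-scored quotes. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the `score_toxicity` callable stands in for whatever toxicity classifier the persona system actually uses, and the `Quote` dataclass, function names, and 0.5 threshold are all illustrative assumptions.

```python
# Minimal sketch (illustration only, not the authors' implementation):
# hiding or showing toxicity-scored quotes in a persona profile.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Quote:
    text: str
    toxicity: float  # assumed score in [0, 1], higher = more toxic


def score_quotes(texts: List[str],
                 score_toxicity: Callable[[str], float]) -> List[Quote]:
    """Attach a toxicity score to each candidate quote."""
    return [Quote(text=t, toxicity=score_toxicity(t)) for t in texts]


def select_quotes(quotes: List[Quote],
                  hate_filter_on: bool,
                  threshold: float = 0.5) -> List[Quote]:
    """Return the quotes to display in the persona profile.

    With the hate filter on, quotes at or above the (assumed) toxicity
    threshold are hidden; with it off, all quotes are shown and the
    judgment is left to the content creator.
    """
    if not hate_filter_on:
        return list(quotes)
    return [q for q in quotes if q.toxicity < threshold]


if __name__ == "__main__":
    def dummy_scorer(text: str) -> float:
        # Toy stand-in for a real toxicity classifier.
        return 0.9 if "die" in text.lower() else 0.1

    candidates = [
        "I love this channel's science videos.",
        "I wish all Zionists would die.",
    ]
    scored = score_quotes(candidates, dummy_scorer)
    print([q.text for q in select_quotes(scored, hate_filter_on=True)])   # toxic quote hidden
    print([q.text for q in select_quotes(scored, hate_filter_on=False)])  # all quotes shown
```

     The point of the sketch is only that filtering happens at display time rather than by deleting data: with the toggle on, a quote like the example from slide 3 would be hidden from the profile; with it off, it remains visible, which is the trade-off the survey in this study examines.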
  6. Current study
     • The results show varied reactions, including hate-filter critics (~1/3 of the participants), hate-filter advocates (~1/3 of the participants), and those in between (~1/3 of the participants).
     • Although personal preferences play a role, the usefulness of toxicity filtering appears primarily driven by the work task – specifically, the type and topic of stories the content creator seeks to create.
     • We identify six use cases where a toxicity filter is beneficial.
  7. Implications for data-driven persona systems
     For system development, the results imply that it is beneficial to give content creators the option to view or not view toxic comments in personas, rather than making this decision in their stead. In some cases, toxic quotes can be useful information for journalists or other content creators.