of Hate ◦ Mitigation of Hate ◦ Diffusion of Hate ◦ Tooling and Visualization of Diffusion • Future Work • Student Profile Disclaimer: Subsequent content contains extreme language (quoted verbatim from social media), which does not reflect my opinions or those of my collaborators. Reader's discretion is advised.
[3], [4]: Anti-Semitic Schooling; [5]: Radio and Rwanda. Fig 1: List of Extremist/Controversial Subreddits [1]. Fig 2: YouTube Video Inciting Violence and Hate Crime [2]. Figs 3, 4: Twitter Hate Speech [3]. Fig 5: Rwanda Genocide, 1994 [5]. "I will surely kill thee" (Story of Cain and Abel)
• Instagram • YouTube; Semi-Moderated: • Reddit; Unmoderated: • Gab • 4chan • BitChute • Parler • StormFront • Anonymity has led to an increase in anti-social behaviour [1], hate speech being one form of it. • It can be studied at a macroscopic as well as a microscopic level. [2] • It exists across various mediums. [1]: Suler, John, CyberPsychology & Behavior, 2004 [2]: Luke Munn, Humanities and Social Sciences Communications, Article 53
cultural in nature. • The UN defines hate speech as "any kind of communication that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are." [1] • There is a need to sensitise social media users. [1]: UN Strategy and Plan of Action on Hate Speech [2]: ADL, Pyramid of Hate Fig 1: Pyramid of Hate [2]
Sarah, Nipping in the bud: detection, diffusion and mitigation of hate speech on social media, ACM SIGWEB Newsletter (Winter), Invited Publication. Our Contributions So Far
on the topic under consideration? ◦ Takeaway: Yes, topical information drives hate. ◦ Takeaway: Additionally, exogenous signals are as important as endogenous (in-platform) signals in influencing the spread of hate. • Question: Is there a middle ground to help users transition from extreme hate to non-hate? ◦ Takeaway: The way to curb hate speech is more speech. ◦ Takeaway: Free speech and equal opportunity of speech are not the same. • Question: How do different endogenous signals help in the detection of hate? ◦ Takeaway: Context matters in determining hatefulness. ◦ Takeaway: A user's recent history around a tweet captures similar psycho-linguistic patterns.
users and their neighbours [1] Fig 1: Affinity of different user types [1] [1]: Ribeiro et al., WebSci'18 • Employs the Twitter retweet network. • Hateful neighbours tend to follow other hateful users. • Hateful users and their neighbours tend to tweet more often, at shorter intervals, and follow more accounts. Fig 3: Centrality of hateful users and their neighbours [1]
Mathew et al., WebSci'19 Fig 1: Belief propagation to determine the hatefulness of users [1] Fig 2: Repost DAG [2] • Source: Gab, as it promotes "free speech". • User- and network-level features. • They curated their own list of hateful lexicons. • The initial set of hateful users was seeded via hate-lexicon matching of users. Fig 3: Difference between hateful and non-hateful cascades [2]
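The seeding-plus-diffusion idea in Fig 1 can be sketched compactly: scores from lexicon-matched seed users spread over the repost graph by iterative averaging. Below is a minimal Python sketch assuming a DeGroot-style update; the exact update rule and features in [1] differ.

```python
import networkx as nx

def propagate_hate_scores(g: nx.DiGraph, seed_scores: dict, iters: int = 10) -> dict:
    """Diffuse hate scores from lexicon-matched seed users over a repost graph.

    Each non-seed user's score moves toward the mean score of the accounts
    they repost (their successors); seed users keep their initial score.
    """
    scores = {u: seed_scores.get(u, 0.5) for u in g.nodes}  # 0.5 = unknown
    for _ in range(iters):
        nxt = {}
        for u in g.nodes:
            if u in seed_scores:                  # seeds stay fixed
                nxt[u] = seed_scores[u]
            else:
                nbrs = list(g.successors(u))      # accounts u reposts
                nxt[u] = sum(scores[v] for v in nbrs) / len(nbrs) if nbrs else scores[u]
        scores = nxt
    return scores
```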
Consider hate and non-hate to be separate groups. [1] • Generic information cascade models do not take content into account, only who follows whom. [2, 3] • How can different topics lead to the generation and spread of hate speech in a user network? • How does a hateful tweet diffuse via retweets? Motivation [1]: Mathew et al., WebSci'19 [2]: Wang et al., ICDM'17 [3]: Yang et al., IJCAI'19
dataset. ◦ Timeline ◦ Follow network (2 hops) ◦ Metadata • Manually annotated a total of 17k tweets. • Trained a hate detection model on our dataset. • Additionally crawled online news articles (600k). [1]: Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
different hashtags in RETINA [1] Fig 2: Retweet cascades for hateful and non-hateful tweets in RETINA [1] • Different users show varying tendencies to engage with hateful content depending on the topic. • Hate speech spreads faster and within a shorter period. [1]: Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
a given time window, predict whether the given user (a follower account) will retweet the given hateful tweet. [1] [1]: Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
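Framed this way, the task is binary classification over (hateful tweet, follower, time window) triples. The sketch below is a deliberately simplified stand-in, not RETINA itself (RETINA attends over exogenous news signals [1]); every feature name here is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per (tweet, follower, window) triple:
# [tweet hate score, topical overlap with the follower's history,
#  follower's past retweet rate, exogenous news-attention score]
X_train = np.array([[0.9, 0.7, 0.3, 0.6],
                    [0.1, 0.2, 0.1, 0.2],
                    [0.8, 0.6, 0.5, 0.7],
                    [0.2, 0.1, 0.2, 0.1]])
y_train = np.array([1, 0, 1, 0])  # 1 = retweeted within the window

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict_proba([[0.8, 0.5, 0.4, 0.7]])[0, 1])  # P(retweet)
```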
2: Static Retweet Prediction [1] Fig 3: Dynamic Retweet Prediction [1] [1]: Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
Behaviour of cascades under different baselines; darker bars denote hate [1]. [1]: Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
of hate speech. • High-intensity hate is more likely to contain offensive lexicon, offensive spans, direct attacks, and mentions of the target entity. • Low-intensity hate is more subtle, usually employing sarcasm and humour. "Consuming coffee is bad, I hate it!" (the world can live with this opinion) vs "Let's bomb every coffee shop and kill all coffee makers" (this is a threat). Fig 1: Pyramid of Hate [1] [1]: ADL, Pyramid of Hate
in the study; 50% were randomly assigned to the control group. • H1: Are prompted users less likely to post the current offensive content? • H2: Are prompted users less likely to post offensive content in the future? [1]: Katsaros et al., ICWSM'22 Fig 1: User behaviour statistics as part of the intervention study [1] Fig 2: Twitter reply test for offensive replies [1]
datasets. • Manually annotated for hate intensity and hateful spans. • Hate intensity is marked on a scale of 1-10. • Manual generation of the normalised counterpart and its intensity (κ = 0.88). Fig 1: Original and Normalised Intensity Distributions [1] Fig 2: Dataset Stats [1] [1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
towards non-hate. • Does not force a change of sentiment or opinion. • Evidently leads to less virality. Fig 1: Difference in the predicted number of comments per set per iteration [1]. [1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
is to obtain its normalized (sensitised) form t′ such that the intensity of hatred φ(t) is reduced while the meaning is still conveyed [1]: φ(t′) < φ(t). Fig 1: Example of an original high-intensity sentence vs its normalised form [1] [1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
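The objective doubles as an acceptance test for any candidate rewrite. A minimal sketch, where `phi` stands for any hate intensity scorer on the 1-10 scale (e.g., a trained HIP module) and the function name is hypothetical:

```python
from typing import Callable

def is_valid_normalization(phi: Callable[[str], float],
                           original: str, normalized: str) -> bool:
    """Accept t' only if predicted hate intensity strictly drops: phi(t') < phi(t)."""
    return phi(normalized) < phi(original)
```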
(HIP) Hate Span Identification (HSI) Hate Intensity Reduction (HIR) Fig 1: Flowchart of NACL [1] [1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022. Flow: Extremely Hateful Input (ORIGINAL) → HATE NORMALIZATION → Less Hateful Output (SUGGESTIVE) → User's Choice between ORIGINAL and SUGGESTIVE
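How the three modules chain together, as a hedged sketch; the `hip`, `hsi`, and `hir` callables and their interfaces are assumptions standing in for the trained NACL modules [1]:

```python
def normalize_hate(text: str, hip, hsi, hir, threshold: float = 5.0) -> str:
    """Flag a high-intensity post, rewrite its hateful spans, and return a
    less hateful suggestion; the user chooses between original and suggestion."""
    if hip(text) < threshold:   # Hate Intensity Prediction
        return text             # low intensity: leave the post unchanged
    spans = hsi(text)           # Hate Span Identification (offending substrings)
    return hir(text, spans)     # Hate Intensity Reduction (suggested rewrite)
```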
[1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022 BERT + BiLSTM + Self-Attention + Linear Activation
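A minimal PyTorch sketch of this stack (BERT encoder → BiLSTM → self-attention → linear head for the 1-10 intensity). The hidden size, head count, and mean pooling are assumptions; [1] specifies the exact configuration.

```python
import torch.nn as nn
from transformers import BertModel

class HateIntensityPredictor(nn.Module):
    """BERT embeddings -> BiLSTM -> self-attention -> linear regressor."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)
        h, _ = self.attn(h, h, h)             # token-level self-attention
        pooled = h.mean(dim=1)                # average over tokens
        return self.out(pooled).squeeze(-1)   # scalar intensity per input
```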
[1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022 ELMo + BiLSTM + Self-Attention + CRF
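A matching sketch for span identification as BIO tagging. A generic `nn.Embedding` stands in for ELMo here, and the CRF layer comes from the third-party `pytorch-crf` package; sizes and interfaces are assumptions.

```python
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class HateSpanTagger(nn.Module):
    """Embeddings -> BiLSTM -> self-attention -> CRF over B/I/O span tags."""
    def __init__(self, vocab: int = 30000, dim: int = 256, tags: int = 3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)       # stand-in for ELMo
        self.lstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, tags)          # per-token tag emissions
        self.crf = CRF(tags, batch_first=True)

    def forward(self, tokens, tags=None):
        h, _ = self.lstm(self.emb(tokens))
        h, _ = self.attn(h, h, h)
        emissions = self.proj(h)
        if tags is not None:                      # training: negative log-likelihood
            return -self.crf(emissions, tags)
        return self.crf.decode(emissions)         # inference: best tag sequences
```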
the quality of the generated texts. • Metrics: ◦ Intensity ◦ Fluency ◦ Adequacy Fig 1: Results of Human Evaluation for NACL-HSR [1] [1]: Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
speech datasets using hate lexicons. [1, 2] • Hate speech in the real world goes beyond hateful slurs. [3] • Limited study in the Hinglish context. [1]: Waseem & Hovy, NAACL'16 [2]: Davidson et al., WebSci'17 [3]: ElSherief et al., EMNLP'21 REDACTED INFORMATION FOR A WIP
Information Diffusion on Large Networks Dhruv Sahnan, Vasu Goel, Sarah Masud, Chhavi Jain, Vikram Goyal, Tanmoy Chakraborty Published at ACM TKDD 2022 Tool’s Demo Video: https://www.youtube.com/watch?v=yu1DsfnJk10
tools. • Inactive open-source tools. • Handling large graphs. • Level of modularity. Research Challenges • Testing multiple hypotheses. • Experimenting with newer versions of a diffusion model. • Reproducible and extendable results.
you may find while using some of the said tools? • (C1): Lack of support for large networks. • (C2): Lack of support for different graph-input formats. • (C3): Resource- and memory-intensive tools are hard to set up. • (C4): Lack of scriptability and customizability, and a less interactive UI. According to your use cases, can you list the three most important features that must be present in any network diffusion visualization tool you may use? • (F1): Easy-to-use, customized visualization of diffusion. • (F2): Spatio-temporal analysis of the information flow. • (F3): Availability of key network and diffusion statistics at a glance. • (F4): Ability to save and load checkpoints. Complete Survey Available at [1] [1]: Sahnan et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, ACM TKDD 2022
supported formats. • Runs standard as well as user-defined epidemic models. • Dual diffusion analysis mode for comparative study. • Provision for saving results as the network, raw diffusion outputs, or PDF; extensible to a dashboard. • Web-based. [1]: Sahnan et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, ACM TKDD 2022
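For the "standard epidemic models" part, a minimal sketch with NDlib, a common Python diffusion library (whether DiVA wraps NDlib internally is an assumption here, not a claim from [1]):

```python
import networkx as nx
import ndlib.models.ModelConfig as mc
import ndlib.models.epidemics as ep

g = nx.erdos_renyi_graph(n=1000, p=0.01)           # toy network

model = ep.SIRModel(g)                             # Susceptible-Infected-Recovered
cfg = mc.Configuration()
cfg.add_model_parameter('beta', 0.02)              # infection rate
cfg.add_model_parameter('gamma', 0.01)             # recovery rate
cfg.add_model_parameter('fraction_infected', 0.05) # initial seeds
model.set_initial_status(cfg)

iterations = model.iteration_bunch(50)             # 50 diffusion steps
print(iterations[-1]['node_count'])                # final S/I/R counts
```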
in time to load. [1]: Sahnan et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, ACM TKDD 2022
Study [1] [1]: Sahnan et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, ACM TKDD 2022 • Despite being more feature-rich, DiVA is easy to use and rates high on overall capabilities. • Expert and novice SUS scores are 83.1 and 74.6, respectively.
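For reference, SUS scores come from ten 1-5 Likert items with alternating polarity, rescaled to 0-100; a score above roughly 68 is conventionally considered above average. A minimal sketch of the standard computation:

```python
def sus_score(responses: list) -> float:
    """System Usability Scale over 10 Likert items (1-5).

    Odd items are positively worded (contribution = r - 1);
    even items are negatively worded (contribution = 5 - r).
    The total is scaled to 0-100.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # e.g. 90.0
```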
established to be harder for systems to detect. • This is different from obfuscation. • By improving our understanding of implicit hate, we can: ◦ Improve hate speech classification via auxiliary modeling of explanations. ◦ Use the explanations as a standalone service for practitioners and content moderators.
Implicit hate corpora (explicit / implicit samples): Gab [1]: 6306 / 3900; Latent Hate [2]: 1089 / 7100; AbuseEval [3]: 2129 / 798. Samples from implicit hate corpora: • Latent Hate: they don't look human. they look like a bunch of ugly monkeys, sorry monkeys r better looking !!! [2] • Gab: If you're white and like Niggers you've got serious fucking mental issues. [1] [1]: Kennedy et al., PsyArXiv'18 [2]: ElSherief et al., EMNLP'21 [3]: Caselli et al., LREC'20 Fig 1: Intra-class JS Distance for the I, E, N classes in Latent Hate [2]
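The intra-class JS distance in Fig 1 compares token (or feature) distributions between classes. A minimal sketch with SciPy, using made-up three-bin distributions purely for illustration:

```python
from scipy.spatial.distance import jensenshannon

# Toy unigram distributions over a shared 3-word vocabulary
p_implicit = [0.2, 0.5, 0.3]
p_explicit = [0.4, 0.4, 0.2]

# jensenshannon returns the JS *distance* (sqrt of the divergence), in [0, 1]
print(jensenshannon(p_implicit, p_explicit, base=2))
```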
Modeling of Hate Speech Diffusion on Twitter (ICDE 2021) • Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization (KDD 2022) • DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks (ACM TKDD 2022) Miscellaneous • Survey: Handling Bias in Toxic Speech Detection: A Survey (Under Review) • Essay: Nipping in the bud: detection, diffusion and mitigation of hate speech on social media (ACM SIGWEB Newsletter, Invited Publication) • Tutorials Conducted: Combating Online Hate Speech (WSDM 2021, ECML PKDD 2021) • Dashboard: RobinWatch (robinwatch.github.io)