of Hate ◦ Mitigation of Hate ◦ Diffusion of Hate ◦ Tooling and Visualization of Diffusion • Future Work • Student Profile Disclaimer: Subsequent content contains extreme language (verbatim from social media) that does not reflect the opinions of the author or collaborators. Reader discretion is advised.
, : Anti-Semitic Schooling : Radio and Rwanda, Image Fig 1: List of Extremist/Controversial Subreddits  Fig 3, 4: Twitter Hate Speech  Fig 2: YouTube Video Inciting Violence and Hate Crime  Fig 5: Rwanda Genocide, 1994  “I will surely kill thee” — Story of Cain and Abel
• Instagram • YouTube Semi-Moderated • Reddit Unmoderated • Gab • 4chan • BitChute • Parler • StormFront • Anonymity has led to an increase in anti-social behaviour, hate speech being one of them. • Hate speech can be studied at a macroscopic as well as a microscopic level.  • It exists in various mediums. : Suler, John, CyberPsychology & Behavior, 2004 : Luke Munn, Humanities and Social Sciences Communications, Article 53
cultural in nature. • The UN defines hate speech as “any kind of communication that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are.”  • Social media users need to be sensitised. : UN hate : Pyramid of Hate Fig 1: Pyramid of Hate 
on the topic under consideration? ◦ Takeaway: Yes, topical information drives hate. ◦ Takeaway: Additionally, exogenous signals are as important as endogenous (in-platform) signals in influencing the spread of hate. • Question: Is there a middle ground to help users transition from extreme hate to non-hate? ◦ Takeaway: The way to curb hate speech is more speech. ◦ Takeaway: Free speech and equal opportunity of speech are not the same. • Question: How do different endogenous signals help in the detection of hate? ◦ Takeaway: Context matters in determining hatefulness. ◦ Takeaway: A user’s recent history around a tweet captures similar psycho-linguistic patterns.
users and their neighbours  Fig 1: Affinity of different user types  : Ribeiro et al., WebSci’18 • Employing the Twitter retweet network. • Hateful neighbours tend to follow other hateful users. • Hateful users and their neighbours tend to tweet more often, in shorter intervals, and follow more accounts. Fig 3: Centrality of hateful users and their neighbours 
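The network statistics above can be sketched with a toy computation of in-degree centrality on a retweet network; the edge list, user names, and normalisation here are illustrative assumptions, not the paper's data:

```python
from collections import defaultdict

# Hypothetical directed retweet edges: (retweeter, original_author)
edges = [("u1", "h1"), ("u2", "h1"), ("u3", "h1"),
         ("u1", "h2"), ("h1", "h2"), ("u3", "u2")]

in_degree = defaultdict(int)   # how often a user is retweeted
out_degree = defaultdict(int)  # how often a user retweets others
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

n = len({u for e in edges for u in e})  # number of distinct users
# Normalised in-degree centrality: fraction of other users retweeting a user
centrality = {u: in_degree[u] / (n - 1) for u in in_degree}
```

On this toy graph, the heavily retweeted user `h1` scores highest, mirroring the observation that hateful users and their neighbours sit at more central network positions.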
Mathew et al., WebSci '19 Fig 1: Belief propagation to determine the hatefulness of users  Fig 2: Repost DAG  • Source: Gab, as it promotes “free speech”. • User- and network-level features. • The authors curated their own list of hateful lexicons. • Initial hateful users were identified by mapping users against the hate lexicon. Fig 3: Difference in hateful and non-hateful cascades 
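The idea of spreading hatefulness scores from lexicon-matched seed users over the repost network can be sketched as a simplified iterative score diffusion; this is only an illustration of the mechanism, not the paper's exact belief-propagation formulation, and the graph and seed are hypothetical:

```python
# Hypothetical undirected repost network as an adjacency list
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a"],
    "d": ["b"],
}
scores = {u: 0.0 for u in graph}
scores["a"] = 1.0    # seed user flagged by the hate lexicon

for _ in range(10):  # iterate until scores roughly stabilise
    new = {}
    for u, nbrs in graph.items():
        nbr_mean = sum(scores[v] for v in nbrs) / len(nbrs)
        # Seeds keep their label; other users take the neighbourhood average
        new[u] = 1.0 if u == "a" else nbr_mean
    scores = new
```

Users closer to the seed (here `c`, a direct neighbour) accumulate higher scores sooner than more distant ones, which is the intuition behind enlisting likely-hateful users from a small lexicon-matched seed set.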
Consider hate and non-hate to be separate groups.  • Generic information-cascade models do not take content into account, only who follows whom. [2, 3] • How can different topics lead to the generation and spread of hate speech in a user network? • How does a hateful tweet diffuse via retweets? Motivation : Mathew et al., WebSci '19 : Wang et al., ICDM’17 : Yang et al., IJCAI’19
dataset. ◦ Timeline ◦ Follow network (2-hops) ◦ Metadata • Manually annotated a total of 17k tweets. • Trained a hate detection model on our dataset. • Additionally crawled online news articles (600k). : Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
different hashtags in RETINA  Fig 2: Retweet cascades for hateful and non-hate tweets in RETINA  • Different users show varying tendencies to engage with hateful content depending on the topic. • Hate speech spreads faster and within a shorter period. : Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
a given time window, predict whether the given user (a follower account) will retweet the given hateful tweet.  : Masud et al., Hate is the New Infodemic: A Topic-aware Modeling of Hate Speech Diffusion on Twitter, ICDE 2021
of hate speech. • High-intensity hate is more likely to contain offensive lexicons, offensive spans, direct attacks, and mentions of the target entity. • Low-intensity hate is more subtle, usually employing sarcasm and humour. Consuming coffee is bad, I hate it! (the world can live with this opinion) Lets bomb every coffee shop and kill all coffee makers (this is a threat) Fig 1: Pyramid of Hate  : Pyramid of Hate
in the study. 50% were randomly assigned to the control group. • H1: Are prompted users less likely to post the current offensive content? • H2: Are prompted users less likely to post such content in the future? : Katsaros et al., ICWSM ‘22 Fig 1: User behaviour statistics as part of the intervention study  Fig 2: Twitter reply test for offensive replies. 
datasets. • Manually annotated for hate intensity and hateful spans. • Hate intensity is marked on a scale of 1–10. • Manual generation of the normalised counterpart and its intensity (κ = 0.88). Fig 1: Original and Normalised Intensity Distribution  Fig 2: Dataset Stats  : Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
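Inter-annotator agreement figures like κ = 0.88 are typically computed as Cohen's kappa; a minimal sketch on hypothetical binary annotations (the label sequences below are made up for illustration):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n  # raw agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    # Expected agreement if both annotators labelled independently at random
    expected = sum(ca[l] * cb[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations (1 = hateful span, 0 = not)
ann1 = [1, 1, 0, 0, 1, 0, 1, 0]
ann2 = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(ann1, ann2)
```

Values above roughly 0.8 are conventionally read as near-perfect agreement, which is why κ = 0.88 signals a reliable annotation.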
towards non-hate. • Does not force a change in sentiment or opinion. • Evidently leads to less virality. Fig 1: Difference in predicted number of comments per set per iteration.  : Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
is to obtain its normalised (sensitised) form t′ such that the intensity of hatred φ(t) is reduced while the meaning is still conveyed: φ(t′) < φ(t). Fig: Example of an original high-intensity vs normalised sentence  : Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
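The constraint φ(t′) < φ(t) can be sketched with a toy lexicon-weighted intensity scorer; the lexicon, weights, and rewritten sentence below are hypothetical illustrations (the actual model learns the intensity function rather than using a fixed lexicon):

```python
# Hypothetical intensity lexicon: stronger terms carry higher weights.
LEXICON = {"kill": 4.0, "bomb": 4.0, "hate": 2.0, "bad": 1.0, "dislike": 0.5}

def phi(text):
    """Toy hate-intensity score: sum of lexicon weights of matched words."""
    return sum(LEXICON.get(w.strip("!,."), 0.0) for w in text.lower().split())

original = "Lets bomb every coffee shop and kill all coffee makers"
normalised = "I really dislike coffee shops and coffee makers"

assert phi(normalised) < phi(original)  # the normalisation constraint holds
```

The normalised sentence keeps the negative stance toward coffee but drops the threat, which is exactly the intended trade-off: reduce intensity without forcing a change of opinion.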
(HIP) Hate Span Identification (HSI) Hate Intensity Reduction (HIR) Fig 1: Flowchart of NACL  : Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022 Extremely Hateful Input (ORIGINAL) Less Hateful Input (SUGGESTIVE) HATE NORMALIZATION Extremely Hateful Input (ORIGINAL) User’s Choice
quantitative quality of the generated texts. • Metrics: ◦ Intensity ◦ Fluency ◦ Adequacy Fig 1: Results of Human Evaluation for NACL-HIR  : Masud et al., Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization, KDD 2022
speech datasets using hate lexicons. [1, 2] • Hate speech in the real world goes beyond hateful slurs.  • Limited studies in the Hinglish context. : Waseem & Hovy, NAACL’16 : Davidson et al., WebSci’17 : ElSherief et al., EMNLP’21 REDACTED INFORMATION FOR A WIP
Information Diffusion on Large Networks Dhruv Sahnan, Vasu Goel, Sarah Masud, Chhavi Jain, Vikram Goyal, Tanmoy Chakraborty Published at ACM TKDD 2022 Tool’s Demo Video: https://www.youtube.com/watch?v=yu1DsfnJk10
tools. • Inactive open-source tools. • Handling large graphs. • Level of modularity. Research Challenges • Testing multiple hypotheses. • Experimenting with newer versions of a diffusion model. • Reproducible and extendable results.
you may find while using some of the said tools? • (C1): Lack of support for large networks. • (C2): Lack of support for different graph-input formats. • (C3): Resource- and memory-intensive tools are hard to set up. • (C4): Lack of scriptability and customizability, and a less interactive UI. According to your use cases, can you list the three most important features that you feel must be present in any network diffusion visualization tool that you may use? • (F1): Easy-to-use, customized visualization of diffusion. • (F2): Spatio-temporal analysis of the information flow. • (F3): Availability of key network and diffusion statistics at a glance. • (F4): Ability to save and load checkpoints. Complete Survey Available At  : Dhruv et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, Accepted in TKDD
supported formats. • Run standard as well as user-defined epidemic models. • Dual diffusion-analysis mode for comparative study. • Provision for saving results as network/diffusion raw outputs or PDF; extensible to a dashboard. • Web-based. : Dhruv et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, Accepted in TKDD
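A minimal sketch of the kind of standard epidemic model such a platform runs — a discrete-time SIR simulation over an adjacency list. The graph, parameters, and seed below are toy assumptions and this is not DiVA's API:

```python
import random

random.seed(0)  # fixed seed for a reproducible toy run

# Toy undirected contact graph as an adjacency list
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
beta, gamma = 0.8, 0.3           # infection and recovery probabilities
state = {u: "S" for u in graph}  # S = susceptible, I = infected, R = recovered
state[0] = "I"                   # seed node starts the diffusion

for _ in range(20):              # simulate 20 synchronous time steps
    nxt = dict(state)
    for u in graph:
        if state[u] == "I":
            for v in graph[u]:   # try to infect susceptible neighbours
                if state[v] == "S" and random.random() < beta:
                    nxt[v] = "I"
            if random.random() < gamma:
                nxt[u] = "R"     # infected node may recover
    state = nxt
```

Swapping the per-node update rule is how user-defined diffusion models generalise this loop, which is the sort of customisability the survey respondents asked for.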
Study  : Dhruv et al., DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks, Accepted in TKDD • Despite being more feature-rich, DiVA is easy to use and rates high on overall capabilities. • Expert and novice SUS scores are 83.1 and 74.6, respectively.
established to be harder for systems to detect. • This is different from obfuscation. • By improving our understanding of implicit hate, we can: ◦ Improve hate speech classification via auxiliary modeling of explanations. ◦ Use the explanations as a standalone service for practitioners and content moderators.
 6306 3900 Latent Hate  1089 7100 AbuseEval  2129 798 Samples from implicit hate corpora: • Latent Hate: they don't look human. they look like a bunch of ugly monkeys, sorry monkeys r better looking !!!  • Gab: If you're white and like Niggers you've got serious fucking mental issues.  : Kennedy et al., PsyArXiv’18 : ElSherief et al., EMNLP’21 : Caselli et al., LREC’20 Fig 1: Intra-class JS Distance for I, E, N classes in Latent Hate 
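The intra-class JS distance in Fig 1 is the Jensen-Shannon distance between class-level distributions; a minimal sketch on two hypothetical word distributions (the probability vectors below are made up for illustration):

```python
from math import log2, sqrt

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]  # mixture distribution

    def kl(x, y):  # KL divergence in bits, skipping zero-probability terms
        return sum(a * log2(a / b) for a, b in zip(x, y) if a > 0)

    # Square root of the JS divergence gives a proper metric in [0, 1]
    return sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Hypothetical token distributions for two classes (e.g. implicit vs explicit)
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
d = js_distance(p, q)
```

A small distance between the implicit (I) and non-hate (N) distributions would quantify why implicit hate is harder to separate from benign text than explicit hate is.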
Modeling of Hate Speech Diffusion on Twitter (ICDE 2021) • Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization (KDD 2022) • DiVA: A Scalable, Interactive and Customizable Visual Analytics Platform for Information Diffusion on Large Networks (ACM TKDD 2022) Miscellaneous • Survey: Handling Bias in Toxic Speech Detection: A Survey (Under Review) • Essay: Nipping in the bud: detection, diffusion and mitigation of hate speech on social media (ACM SIGWEB Newsletter, Invited Publication) • Tutorials Conducted: Combating Online Hate Speech (WSDM 2021, ECML PKDD 2021) • Dashboard: RobinWatch (robinwatch.github.io)