Arjun Arunasalam (Purdue University), Habiba Farrukh (University of California, Irvine), Eliz Tekcan (Purdue University), Z. Berkay Celik (Purdue University)

Refugees form a vulnerable population due to their forced displacement, facing many challenges in the process, such as language barriers and financial hardship. Recent world events such as the Ukrainian and Afghan refugee crises have centered this population in online discourse, especially on social media platforms such as TikTok and Twitter. Although discourse can be benign, hateful and malicious discourse also emerges. Thus, refugees often become targets of toxic content, where malicious attackers post online hate targeting this population. Such online toxicity can vary in nature; e.g., toxicity can differ in scale (individual vs. group) and intent (embarrassment vs. harm), and the varying types of toxicity targeting refugees remain largely unexplored. We seek to understand the types of toxic content targeting refugees in online spaces. To do so, we carefully curate seed queries to collect a corpus of ∼3M Twitter posts targeting refugees. We semantically sample this corpus to produce an annotated dataset of 1,400 posts against refugees in seven different languages. We additionally use a deductive approach to qualitatively analyze the motivating sentiments (reasons) behind toxic posts. We discover that trolling and hate speech are the predominant forms of toxic content targeting refugees. Furthermore, we uncover four main motivating sentiments (e.g., perceived ungratefulness, perceived fear for safety). Our findings synthesize important lessons for moderating toxic content, especially content targeting vulnerable communities.
