Friedemann Lipphardt (MPI-INF), Moonis Ali (MPI-INF), Martin Banzer (MPI-INF), Anja Feldmann (MPI-INF), Devashish Gosain (IIT Bombay)

Large language models (LLMs) are widely used for information access, yet their content-moderation behavior varies sharply across geographic and linguistic contexts. This paper presents the first comprehensive analysis of content-moderation patterns in over 700,000 replies from 15 leading LLMs, evaluated from 12 locations using 1,118 sensitive queries spanning five categories in 13 languages.

We find substantial geographic variation, with moderation rates differing by up to 60% in relative terms across locations; for instance, soft moderation (e.g., evasive replies) appears in 14.3% of German contexts versus 24.9% of Zulu contexts. By category, miscellaneous (generally unsafe), hate-speech, and sexual content are moderated more heavily than political or religious content, with political content showing the greatest geographic variability. We also observe discrepancies between online and offline model versions: for example, DeepSeek exhibits a 15.2% higher relative soft-moderation rate when deployed locally than via its API. Our analysis of response length and time reveals that moderated responses are, on average, about 50% shorter than unmoderated ones.
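The relative differences quoted above can be made concrete with a short sketch. This is a minimal illustration, not code from the study: the rates below are hypothetical, and we assume "relative difference" means the increase measured against the smaller (baseline) rate.

```python
def relative_difference(rate: float, baseline: float) -> float:
    """Relative difference of `rate` versus `baseline`, in percent."""
    return (rate - baseline) / baseline * 100.0

# Hypothetical soft-moderation rates for two locations (illustrative only).
rate_location_a = 0.15  # 15% of responses softly moderated
rate_location_b = 0.24  # 24% of responses softly moderated

# A 9-percentage-point gap corresponds to a 60% relative difference.
print(round(relative_difference(rate_location_b, rate_location_a), 1))  # 60.0
```

Note that a modest absolute gap in percentage points can translate into a large relative difference when the baseline moderation rate is small.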

These findings have important implications for AI fairness and digital equity: users in different locations receive inconsistent access to information. We provide the first systematic evidence of geographic and cross-language bias in LLM content moderation and show how model selection substantially shapes user experience.
