Xigao Li (Stony Brook University), Amir Rahmati (Stony Brook University), Nick Nikiforakis (Stony Brook University)

Given the meteoric rise of large media platforms (such as YouTube) on the web, it is no surprise that attackers seek to abuse them in order to easily reach hundreds of millions of users. Among other social-engineering attacks perpetrated on these platforms, comment scams have increased in popularity despite the presence of mechanisms that purportedly give content creators control over their channel comments. In a comment scam, attackers set up script-controlled accounts that automatically post or reply to comments on media platforms, enticing users to contact them. Through the promise of free prizes and investment opportunities, attackers aim to steal financial assets from the end users who contact them.

In this paper, we present the first systematic, large-scale study of comment scams. We design and implement an infrastructure to collect a dataset of 8.8 million comments from 20 different YouTube channels over a 6-month period. We develop filters based on textual, graphical, and temporal features of comments and identify 206K scam comments from 10K unique accounts. Using this dataset, we present our analysis of scam campaigns, comment dynamics, and evasion techniques used by scammers. Lastly, through an IRB-approved study, we interact with 50 scammers to gain insights into their social-engineering tactics and payment preferences. Using transaction records on public blockchains, we perform a quantitative analysis of the financial assets stolen by scammers, finding that just the scammers that were part of our user study have stolen funds equivalent to millions of dollars. Our study demonstrates that existing scam-detection mechanisms are insufficient for curbing abuse, pointing to the need for better comment-moderation tools as well as other changes that would make it difficult for attackers to obtain tens of thousands of accounts on these large platforms.
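The abstract states that scam comments were identified via filters over textual, graphical, and temporal features. As a minimal illustrative sketch only (not the authors' actual pipeline), the snippet below shows how simple textual and temporal heuristics over comment metadata could surface candidate scam comments and script-controlled accounts. The `Comment` structure, lure phrases, thresholds, and helper names are all hypothetical.

```python
# Illustrative sketch: flag candidate scam comments with textual and
# temporal heuristics. All patterns, thresholds, and data structures here
# are assumptions for demonstration, not the paper's actual filters.

import re
import unicodedata
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List, Set


@dataclass
class Comment:
    author_id: str
    text: str
    posted_at: datetime


# Hypothetical lure phrases used to move victims off-platform.
LURE_PATTERNS = [
    re.compile(r"(whats\s*app|telegram)\s*[:+]?\s*\+?\d{7,15}", re.IGNORECASE),
    re.compile(r"you\s+(have\s+)?won", re.IGNORECASE),
    re.compile(r"(contact|message)\s+me\s+(now|asap)", re.IGNORECASE),
]


def normalize(text: str) -> str:
    """Fold look-alike Unicode characters (a common evasion trick) to ASCII."""
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()


def textual_flag(comment: Comment) -> bool:
    """Flag comments whose normalized text matches a lure pattern."""
    text = normalize(comment.text)
    return any(p.search(text) for p in LURE_PATTERNS)


def temporal_flag(comments: List[Comment],
                  window: timedelta = timedelta(minutes=5),
                  burst_size: int = 10) -> Set[str]:
    """Flag accounts that post bursts of comments in a short window,
    behavior consistent with script-controlled accounts."""
    flagged: Set[str] = set()
    by_author: Dict[str, List[datetime]] = {}
    for c in comments:
        by_author.setdefault(c.author_id, []).append(c.posted_at)
    for author, times in by_author.items():
        times.sort()
        for i in range(len(times) - burst_size + 1):
            if times[i + burst_size - 1] - times[i] <= window:
                flagged.add(author)
                break
    return flagged
```

In practice, candidates surfaced by heuristics like these would still need graphical features (e.g., profile-picture similarity) and manual validation before being labeled as scam accounts, as the study's multi-feature design suggests.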
