-
On the Eligibility of LLMs for Counterfactual Reasoning: A Decompositional Study
Authors:
Shuai Yang,
Qi Yang,
Luoxi Tang,
Jeremy Blackburn,
Zhaohan Xi
Abstract:
Counterfactual reasoning has emerged as a crucial technique for generalizing the reasoning capabilities of large language models (LLMs). By generating and analyzing counterfactual scenarios, researchers can assess the adaptability and reliability of model decision-making. Although prior work has shown that LLMs often struggle with counterfactual reasoning, it remains unclear which factors most significantly impede their performance across different tasks and modalities. In this paper, we propose a decompositional strategy that breaks counterfactual generation down into stages, from causality construction to reasoning over counterfactual interventions. To support this decompositional analysis, we investigate 11 datasets spanning diverse tasks, including natural language understanding, mathematics, programming, and vision-language tasks. Through extensive evaluations, we characterize LLM behavior across each decompositional stage and identify how modality type and intermediate reasoning influence performance. By establishing a structured framework for analyzing counterfactual reasoning, this work contributes to the development of more reliable LLM-based reasoning systems and informs future elicitation strategies.
Submitted 17 May, 2025;
originally announced May 2025.
-
Evolving Hate Speech Online: An Adaptive Framework for Detection and Mitigation
Authors:
Shiza Ali,
Jeremy Blackburn,
Gianluca Stringhini
Abstract:
The proliferation of social media platforms has led to an increase in the spread of hate speech, particularly targeting vulnerable communities. Unfortunately, existing methods for automatically identifying and blocking toxic language rely on pre-constructed lexicons, making them reactive rather than adaptive. As such, these approaches become less effective over time, especially when new communities are targeted with slurs not included in the original datasets. To address this issue, we present an adaptive approach that uses word embeddings to update lexicons, and we develop a hybrid model that adjusts to emerging slurs and new linguistic patterns. This approach can effectively detect toxic language, including intentional spelling mistakes employed by aggressors to avoid detection. Our hybrid model, which combines BERT with lexicon-based techniques, achieves an accuracy of 95% on most state-of-the-art datasets. Our work has significant implications for creating safer online environments by improving the detection of toxic content and proactively updating the lexicon. Content Warning: This paper contains examples of hate speech that may be triggering.
Submitted 21 February, 2025; v1 submitted 15 February, 2025;
originally announced February 2025.
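The abstract above describes expanding a lexicon with word embeddings before handing posts to a BERT-based classifier. The sketch below illustrates only the embedding-driven expansion step in Python; the vector dictionary, seed terms, threshold, and helper names are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of embedding-based lexicon expansion (not the authors' code).
# `vectors` maps vocabulary words to pre-trained embeddings; the seed lexicon,
# threshold, and toy vectors below are hypothetical illustrations.
import numpy as np

def expand_lexicon(seed_lexicon, vectors, threshold=0.7):
    """Add vocabulary words whose cosine similarity to any seed term
    exceeds `threshold`, so the lexicon adapts to emerging terms."""
    vocab = list(vectors.keys())
    mat = np.stack([vectors[w] for w in vocab])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    expanded = set(seed_lexicon)
    for seed in seed_lexicon:
        if seed not in vectors:
            continue
        v = vectors[seed] / np.linalg.norm(vectors[seed])
        sims = mat @ v
        expanded.update(w for w, s in zip(vocab, sims) if s >= threshold)
    return expanded

def lexicon_flag(post, lexicon):
    """Cheap first-pass filter; a BERT classifier would handle the rest."""
    return any(token in lexicon for token in post.lower().split())

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["slur_a", "slur_b", "variant_x"]}
lexicon = expand_lexicon({"slur_a"}, vectors)
print(lexicon_flag("an example post containing slur_a", lexicon))
```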
-
Exploring Climate Change Discourse: Measurements and Analysis of Reddit Data
Authors:
Smriti Janaswamy,
Jeremy Blackburn
Abstract:
Social media is very popular for facilitating conversations about important topics and bringing forth insights and issues related to these topics. Reddit serves as a platform that fosters social interactions and hosts engaging discussions on a wide array of topics, thus forming narratives around these topics. One such topic is climate change. There are extensive discussions on Reddit about climate change, indicating high interest in its various aspects. In this paper, we explore 11 subreddits that discuss climate change from 2014 to 2022 and conduct a data-driven analysis of the posts on these subreddits. We present a basic characterization of the data and show the distribution of the posts and authors across our dataset for all years. Additionally, we analyze user engagement metrics like scores for the posts and how they change over time. We also offer insights into the topics of discussion across the subreddits, followed by the entities referenced throughout the dataset.
Submitted 1 December, 2024;
originally announced December 2024.
-
A Data-Driven Analysis of the Sovereign Citizens Movement on Telegram
Authors:
Satrio Yudhoatmojo,
Utkucan Balci,
Jeremy Blackburn
Abstract:
Online communities of known extremist groups like the alt-right and QAnon have been well explored in past work. However, we find that an extremist group called Sovereign Citizens is relatively unexplored despite its existence since the 1970s. Their central belief is that the established government is illegitimate, which they act on through a tactic called paper terrorism: clogging courts with pseudolegal claims. In recent years, their activities have escalated to threats like forcefully claiming property ownership and participating in the Capitol Riot. This paper aims to shed light on Sovereign Citizens' online activities by examining two Telegram channels, each belonging to an identified Sovereign Citizen individual. We collect over 888K text messages and apply NLP techniques. We find that the two channels differ in the topics they discussed, demonstrating different focuses. Further, the two channels exhibit less toxic content compared to other extremist groups like QAnon. Finally, we find indications of overlapping beliefs between the two channels and QAnon, suggesting a merging or complementing of beliefs.
Submitted 29 October, 2024;
originally announced October 2024.
-
Vision Language Models Can Parse Floor Plan Maps
Authors:
David DeFazio,
Hrudayangam Mehta,
Jeremy Blackburn,
Shiqi Zhang
Abstract:
Vision language models (VLMs) can simultaneously reason about images and texts to tackle many tasks, from visual question answering to image captioning. This paper focuses on map parsing, a novel task that is unexplored within the VLM context and particularly useful to mobile robots. Map parsing requires understanding not only the labels but also the geometric configurations of a map, i.e., what areas are like and how they are connected. To evaluate the performance of VLMs on map parsing, we prompt VLMs with floor plan maps to generate task plans for complex indoor navigation. Our results demonstrate the remarkable capability of VLMs in map parsing, with a success rate of 0.96 in tasks requiring a sequence of nine navigation actions, e.g., approaching and going through doors. Beyond intuitive observations (e.g., VLMs perform better on smaller maps and simpler navigation tasks), we observe a notable failure mode: performance drops in large open areas. We provide practical suggestions to address such challenges, validated by our experimental results. Webpage: https://shorturl.at/OUkEY
Submitted 19 September, 2024;
originally announced September 2024.
-
PIXELMOD: Improving Soft Moderation of Visual Misleading Information on Twitter
Authors:
Pujan Paudel,
Chen Ling,
Jeremy Blackburn,
Gianluca Stringhini
Abstract:
Images are a powerful and immediate vehicle to carry misleading or outright false messages, yet identifying image-based misinformation at scale poses unique challenges. In this paper, we present PIXELMOD, a system that leverages perceptual hashes, vector databases, and optical character recognition (OCR) to efficiently identify images that are candidates to receive soft moderation labels on Twitter. We show that PIXELMOD outperforms existing image similarity approaches when applied to soft moderation, with negligible performance overhead. We then test PIXELMOD on a dataset of tweets surrounding the 2020 US Presidential Election, and find that it is able to identify visually misleading images that are candidates for soft moderation with 0.99% false detection and 2.06% false negatives.
Submitted 30 July, 2024;
originally announced July 2024.
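As a rough illustration of the perceptual-hashing idea behind PIXELMOD, the sketch below matches a query image against known misleading images by pHash Hamming distance using the imagehash library. It omits the vector-database and OCR components, and the file paths and distance threshold are hypothetical.

```python
# Sketch of perceptual-hash matching for candidate image lookup (illustrative
# only; PIXELMOD also uses a vector database and OCR, which are omitted here).
# File paths below are hypothetical placeholders.
from PIL import Image
import imagehash

# Hashes of images already known to carry misleading claims.
known_hashes = {
    "debunked_ballot_claim": imagehash.phash(Image.open("known/ballot.png")),
}

def candidates_for_label(image_path, max_distance=10):
    """Return known images whose Hamming distance to the query hash is small,
    i.e., near-duplicates that may warrant the same soft-moderation label."""
    query = imagehash.phash(Image.open(image_path))
    return [name for name, h in known_hashes.items()
            if query - h <= max_distance]

print(candidates_for_label("tweets/new_image.png"))
```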
-
Unraveling the Web of Disinformation: Exploring the Larger Context of State-Sponsored Influence Campaigns on Twitter
Authors:
Mohammad Hammas Saeed,
Shiza Ali,
Pujan Paudel,
Jeremy Blackburn,
Gianluca Stringhini
Abstract:
Social media platforms offer unprecedented opportunities for connectivity and exchange of ideas; however, they also serve as fertile grounds for the dissemination of disinformation. Over the years, there has been a rise in state-sponsored campaigns aiming to spread disinformation and sway public opinion on sensitive topics through designated accounts, known as troll accounts. Past work on detecting accounts belonging to state-backed operations focuses on a single campaign. While campaign-specific detection techniques are easier to build, no prior work has developed campaign-agnostic systems that offer generalized detection of troll accounts, unaffected by the biases of the specific campaign they belong to. In this paper, we identify several strategies adopted across different state actors and present a system that leverages them to detect accounts from previously unseen campaigns. We study 19 state-sponsored disinformation campaigns that took place on Twitter, originating from various countries. The strategies include sending automated messages through popular scheduling services, retweeting and sharing selective content, and using fake versions of verified applications for pushing content. By translating these traits into a feature set, we build a machine learning-based classifier that can correctly identify up to 94% of accounts from unseen campaigns. Additionally, we run our system in the wild and find more accounts that could potentially belong to state-backed operations. We also present case studies to highlight the similarity between the accounts found by our system and those identified by Twitter.
Submitted 25 July, 2024;
originally announced July 2024.
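To make the campaign-agnostic evaluation concrete, here is a minimal leave-one-campaign-out sketch with a generic feature-based classifier. The features, labels, and campaign assignments are synthetic placeholders; the paper's actual feature set and model are only summarized in the abstract.

```python
# Sketch of campaign-agnostic troll detection via leave-one-campaign-out
# evaluation (illustrative; features and data are randomly generated).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Each row: behavioral features such as share of scheduled posts,
# retweet ratio, and use of unofficial client apps (hypothetical here).
X = rng.random((600, 3))
y = rng.integers(0, 2, 600)            # 1 = troll account, 0 = organic
campaign = rng.integers(0, 19, 600)    # which of 19 campaigns the row came from

accuracies = []
for held_out in range(19):
    train, test = campaign != held_out, campaign == held_out
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))

print(f"mean accuracy on unseen campaigns: {np.mean(accuracies):.2f}")
```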
-
Podcast Outcasts: Understanding Rumble's Podcast Dynamics
Authors:
Utkucan Balci,
Jay Patel,
Berkan Balci,
Jeremy Blackburn
Abstract:
Podcasting on Rumble, an alternative video-sharing platform, attracts controversial figures known for spreading divisive and often misleading content, which sharply contrasts with YouTube's more regulated environment. Motivated by the growing impact of podcasts on political discourse, as seen with figures like Joe Rogan and Andrew Tate, this paper explores the political biases and content strategies used by these platforms. In this paper, we conduct a comprehensive analysis of over 13K podcast videos from both YouTube and Rumble, focusing on their political content and the dynamics of their audiences. Using advanced speech-to-text transcription, topic modeling, and contrastive learning techniques, we explore three critical aspects: the presence of political bias in podcast channels, the nature of content that drives podcast views, and the usage of visual elements in these podcasts. Our findings reveal a distinct right-wing orientation in Rumble's podcasts, contrasting with YouTube's more diverse and apolitical content.
Submitted 23 June, 2024; v1 submitted 20 June, 2024;
originally announced June 2024.
-
iDRAMA-Scored-2024: A Dataset of the Scored Social Media Platform from 2020 to 2023
Authors:
Jay Patel,
Pujan Paudel,
Emiliano De Cristofaro,
Gianluca Stringhini,
Jeremy Blackburn
Abstract:
Online web communities often face bans for violating platform policies, encouraging their migration to alternative platforms. This migration, however, can result in increased toxicity and unforeseen consequences on the new platform. In recent years, researchers have collected data from many alternative platforms, indicating coordinated efforts leading to offline events, conspiracy movements, hate speech propagation, and harassment. Thus, it becomes crucial to characterize and understand these alternative platforms. To advance research in this direction, we collect and release a large-scale dataset from Scored -- an alternative Reddit platform that sheltered banned fringe communities, for example, c/TheDonald (a prominent right-wing community) and c/GreatAwakening (a conspiratorial community). Over four years, we collected approximately 57M posts from Scored, with at least 58 communities identified as migrating from Reddit and over 950 communities created since the platform's inception. Furthermore, we provide sentence embeddings of all posts in our dataset, generated through a state-of-the-art model, to further advance the field in characterizing the discussions within these communities. We aim to provide these resources to facilitate their investigations without the need for extensive data collection and processing efforts.
Submitted 16 May, 2024;
originally announced May 2024.
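Since the abstract mentions precomputed sentence embeddings but does not name the model, the sketch below shows how comparable embeddings could be generated with the sentence-transformers library. The checkpoint chosen here is an assumption, not necessarily the one used for the released dataset.

```python
# Sketch of generating sentence embeddings for posts (the dataset's own
# embedding model is not named in the abstract; the checkpoint below is an
# assumption, and the example posts are made up).
import numpy as np
from sentence_transformers import SentenceTransformer

posts = [
    "example Scored post about platform migration",
    "another post discussing community rules",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(posts, normalize_embeddings=True)

# Cosine similarity between the two example posts (vectors are normalized).
print(float(np.dot(embeddings[0], embeddings[1])))
```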
-
Gun Culture in Fringe Social Media
Authors:
Fatemeh Tahmasbi,
Aakarsha Chug,
Barry Bradlyn,
Jeremy Blackburn
Abstract:
The increasing frequency of mass shootings in the United States has, unfortunately, become a norm. While the issue of gun control in the US involves complex legal concerns, there are also societal issues at play. One such social issue is so-called "gun culture," i.e., a general set of beliefs and actions related to gun ownership. However, relatively little is known about gun culture, and even less when it comes to fringe online communities. This is especially worrying considering the aforementioned rise in mass shootings and numerous instances of shooters being radicalized online.
To address this gap, we explore gun culture on /k/, 4chan's weapons board. More specifically, using a variety of quantitative techniques, we examine over 4M posts on /k/ and position their discussion within the larger body of theoretical understanding of gun culture. Among other things, our findings suggest that gun culture on /k/ covers a relatively diverse set of topics (with a particular focus on legal discussion), some of which are signals of fetishism.
Submitted 18 March, 2024; v1 submitted 14 March, 2024;
originally announced March 2024.
-
"Here's Your Evidence": False Consensus in Public Twitter Discussions of COVID-19 Science
Authors:
Alexandros Efstratiou,
Marina Efstratiou,
Satrio Yudhoatmojo,
Jeremy Blackburn,
Emiliano De Cristofaro
Abstract:
The COVID-19 pandemic brought about an extraordinary rate of scientific papers on the topic that were discussed among the general public, although often in biased or misinformed ways. In this paper, we present a mixed-methods analysis aimed at examining whether public discussions were commensurate with the scientific consensus on several COVID-19 issues. We estimate scientific consensus based on samples of abstracts from preprint servers and compare against the volume of public discussions on Twitter mentioning these papers. We find that anti-consensus posts and users, though overall less numerous than pro-consensus ones, are vastly over-represented on Twitter, thus producing a false consensus effect. This transpires with favorable papers being disproportionately amplified, along with an influx of new anti-consensus user sign-ups. Finally, our content analysis highlights that anti-consensus users misrepresent scientific findings or question scientists' integrity in their efforts to substantiate their claims.
Submitted 7 June, 2024; v1 submitted 24 January, 2024;
originally announced January 2024.
-
From HODL to MOON: Understanding Community Evolution, Emotional Dynamics, and Price Interplay in the Cryptocurrency Ecosystem
Authors:
Kostantinos Papadamou,
Jay Patel,
Jeremy Blackburn,
Philipp Jovanovic,
Emiliano De Cristofaro
Abstract:
This paper presents a large-scale analysis of the cryptocurrency community on Reddit, shedding light on the intricate relationship between the evolution of their activity, emotional dynamics, and price movements. We analyze over 130M posts on 122 cryptocurrency-related subreddits using temporal analysis, statistical modeling, and emotion detection. While /r/CryptoCurrency and /r/dogecoin are the most active subreddits, we find an overall surge in cryptocurrency-related activity in 2021, followed by a sharp decline. We also uncover a strong relationship in terms of cross-correlation between online activity and the price of various coins, with the changes in the number of posts mostly leading the price changes. Backtesting analysis shows that a straightforward strategy based on this cross-correlation, where one buys/sells a coin if the daily number of posts about it is greater/less than on the previous day, would have led to a 3x return on investment. Finally, we shed light on the emotional dynamics of the cryptocurrency communities, finding that joy becomes a prominent indicator during upward market performance, while a decline in the market is accompanied by an increase in anger.
Submitted 12 December, 2023;
originally announced December 2023.
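The backtesting rule described above (buy/sell depending on whether the daily post count rose or fell) can be sketched in a few lines on synthetic data. This is purely illustrative and is not the paper's backtesting code; the printed return is meaningless.

```python
# Sketch of the post-count trading rule: go long on days after the post count
# rose, short after it fell, flat when unchanged (one simple reading of the
# buy/sell rule). Post counts and prices below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
posts = rng.poisson(100, 365).astype(float)               # daily post counts
price = 100 * np.cumprod(1 + rng.normal(0, 0.03, 365))    # daily coin price

daily_return = np.diff(price) / price[:-1]   # return from day t to t+1
# The signal observed at the end of day t+1 is applied to the next day's return.
signal = np.sign(np.diff(posts))[:-1]
strategy_return = np.prod(1 + signal * daily_return[1:]) - 1

print(f"strategy return: {strategy_return:.2%}")
```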
-
TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities
Authors:
Mohammad Hammas Saeed,
Kostantinos Papadamou,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini
Abstract:
Alas, coordinated hate attacks, or raids, are becoming increasingly common online. In a nutshell, these are perpetrated by a group of aggressors who organize and coordinate operations on a platform (e.g., 4chan) to target victims on another community (e.g., YouTube). In this paper, we focus on attributing raids to their source community, paving the way for moderation approaches that take the context (and potentially the motivation) of an attack into consideration. We present TUBERAIDER, an attribution system achieving over 75% accuracy in detecting and attributing coordinated hate attacks on YouTube videos. We instantiate it using links to YouTube videos shared on 4chan's /pol/ board, r/The_Donald, and 16 Incels-related subreddits. We use a peak detector to identify a rise in the comment activity of a YouTube video, which signals that an attack may be occurring. We then train a machine learning classifier based on the community language (i.e., TF-IDF scores of relevant keywords) to perform the attribution. We test TUBERAIDER in the wild and present a few case studies of actual aggression attacks identified by it to showcase its effectiveness.
Submitted 22 June, 2024; v1 submitted 9 August, 2023;
originally announced August 2023.
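A toy sketch of TUBERAIDER's two stages, peak detection over comment activity followed by TF-IDF-based attribution, is shown below. The comment counts, community snippets, and classifier choice are hypothetical simplifications of the system described in the abstract.

```python
# Toy sketch of the two stages (illustrative, with made-up data):
# 1) flag comment-activity spikes, 2) attribute an attack to a source community
# using TF-IDF features over community-specific language.
import numpy as np
from scipy.signal import find_peaks
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: spike detection on a video's hourly comment counts.
comments_per_hour = np.array([3, 2, 4, 3, 5, 48, 52, 6, 4, 3])
peaks, _ = find_peaks(comments_per_hour, height=20)
print("possible raid hours:", peaks.tolist())

# Stage 2: attribution classifier over community language (hypothetical snippets).
train_texts = ["kek lurk moar frens", "maga rally wall", "chads normies blackpill"]
train_labels = ["/pol/", "r/The_Donald", "Incels"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print(clf.predict(["comments full of kek and lurk moar slang"]))
```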
-
Roll in the Tanks! Measuring Left-wing Extremism on Reddit at Scale
Authors:
Utkucan Balcı,
Michael Sirivianos,
Jeremy Blackburn
Abstract:
Social media's role in the spread and evolution of extremism is a focus of intense study. Online extremists have been involved in the spread of online hate, mis- and disinformation, and real-world violence. However, most existing work has focused on right-wing extremism. In this paper, we perform a first-of-its-kind large-scale measurement study exploring left-wing extremism. We focus on "tankies," a left-wing community that first arose in the 1950s in support of hardline actions of the USSR and has evolved to support what they call "Actually Existing Socialist" countries, e.g., CCP-run China, the USSR, and North Korea. We collect and analyze 1.3M posts from 53K authors from tankie subreddits, and explore the position of tankies within the broader far-left community on Reddit. Among other things, we find that tankies are clearly on the periphery of the larger far-left community. When examining the contents of posts, we find misalignments and conceptual homomorphisms that confirm the description of tankies in the theoretical work. We also discover that tankies focus more on state-level political events than on social issues. Our findings provide empirical evidence of the distinct positioning and discourse of left-wing extremist groups on social media.
Submitted 13 June, 2025; v1 submitted 13 July, 2023;
originally announced July 2023.
-
Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data
Authors:
Dominik Klepl,
Fei He,
Min Wu,
Daniel J. Blackburn,
Ptolemaios G. Sarrigiannis
Abstract:
Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a correlation-based measure of power spectral density similarity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.
Submitted 27 September, 2023; v1 submitted 12 April, 2023;
originally announced April 2023.
-
Beyond Fish and Bicycles: Exploring the Varieties of Online Women's Ideological Spaces
Authors:
Utkucan Balci,
Chen Ling,
Emiliano De Cristofaro,
Megan Squire,
Gianluca Stringhini,
Jeremy Blackburn
Abstract:
The Internet has been instrumental in connecting under-represented and vulnerable groups of people. Platforms built to foster social interaction and engagement have enabled historically disenfranchised groups to have a voice. One such vulnerable group is women. In this paper, we explore the diversity in online women's ideological spaces using a multi-dimensional approach. We perform a large-scale, data-driven analysis of over 6M Reddit comments and submissions from 14 subreddits. We elicit a diverse taxonomy of online women's ideological spaces, ranging from counterparts to the so-called Manosphere to Gender-Critical Feminism. We then perform content analysis, finding meaningful differences across topics and communities. Finally, we shed light on two platforms, ovarit.com and thepinkpill.co, where two toxic communities of online women's ideological spaces (Gender-Critical Feminism and Femcels) migrated after their ban on Reddit.
Submitted 13 March, 2023;
originally announced March 2023.
-
CoRL: Environment Creation and Management Focused on System Integration
Authors:
Justin D. Merrick,
Benjamin K. Heiner,
Cameron Long,
Brian Stieber,
Steve Fierro,
Vardaan Gangal,
Madison Blake,
Joshua Blackburn
Abstract:
Existing reinforcement learning environment libraries use monolithic environment classes, provide shallow methods for altering agent observation and action spaces, and/or are tied to a specific simulation environment. The Core Reinforcement Learning library (CoRL) is a modular, composable, and hyper-configurable environment creation tool. It allows minute control over agent observations, rewards, and done conditions through the use of easy-to-read configuration files, pydantic validators, and a functor design pattern. Using integration pathways allows agents to be quickly implemented in new simulation environments, encourages rapid exploration, and enables transition of knowledge from low-fidelity to high-fidelity simulations. Natively multi-agent design and integration with Ray/RLLib (Liang et al., 2018) at release allow for easy scalability of agent complexity and computing power. The code is publicly released and available at https://github.com/act3-ace/CoRL.
Submitted 3 March, 2023;
originally announced March 2023.
-
Lung airway geometry as an early predictor of autism: A preliminary machine learning-based study
Authors:
Asef Islam,
Anthony Ronco,
Stephen M. Becker,
Jeremiah Blackburn,
Johannes C. Schittny,
Kyoungmi Kim,
Rebecca Stein-Wexler,
Anthony S. Wexler
Abstract:
The goal of this study is to assess the feasibility of airway geometry as a biomarker for ASD. Chest CT images of children with a documented diagnosis of ASD as well as healthy controls were identified retrospectively. 54 scans were obtained for analysis, including 31 ASD cases and 23 age- and sex-matched controls. A feature selection and classification procedure using principal component analysis (PCA) and support vector machine (SVM) achieved a peak cross-validation accuracy of nearly 89% using a feature set of 8 airway branching angles. Sensitivity was 94%, but specificity was only 78%. The results suggest a measurable difference in airway branchpoint angles between children with ASD and the control population. Under review at Scientific Reports.
Submitted 9 February, 2023; v1 submitted 13 January, 2023;
originally announced January 2023.
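The PCA-plus-SVM procedure lends itself to a compact cross-validation sketch. The code below uses random stand-ins for the 8 branching-angle features, so its scores are meaningless; it only mirrors the shape of the reported pipeline, with the scaler and component count being assumptions.

```python
# Sketch of a PCA + SVM cross-validation procedure on synthetic data
# (the study uses 8 airway branching angles from 54 CT scans; the features
# below are random stand-ins, so the printed score is meaningless).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(54, 8))          # 8 branching-angle features per scan
y = np.r_[np.ones(31), np.zeros(23)]  # 31 ASD cases, 23 controls

model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f}")
```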
-
LAMBRETTA: Learning to Rank for Twitter Soft Moderation
Authors:
Pujan Paudel,
Jeremy Blackburn,
Emiliano De Cristofaro,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
To curb the problem of false information, social media platforms like Twitter started adding warning labels to content discussing debunked narratives, with the goal of providing more context to their audiences. Unfortunately, these labels are not applied uniformly and leave large amounts of false content unmoderated. This paper presents LAMBRETTA, a system that automatically identifies tweets that are candidates for soft moderation using Learning To Rank (LTR). We run LAMBRETTA on Twitter data to moderate false claims related to the 2020 US Election and find that it flags over 20 times more tweets than Twitter, with only 3.93% false positives and 18.81% false negatives, outperforming alternative state-of-the-art methods based on keyword extraction and semantic search. Overall, LAMBRETTA assists human moderators in identifying and flagging false information on social media.
Submitted 12 December, 2022;
originally announced December 2022.
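As a hedged illustration of the Learning To Rank component, the sketch below trains a gradient-boosted ranker (LightGBM's LGBMRanker) on synthetic query-tweet features grouped by claim. LAMBRETTA's real features, ranker, and retrieval pipeline may differ substantially.

```python
# Minimal learning-to-rank sketch in the spirit of ranking tweets per debunked
# claim (illustrative; features, labels, and model choice are assumptions).
import numpy as np
from lightgbm import LGBMRanker

rng = np.random.default_rng(4)
# Query-tweet feature vectors (e.g., lexical overlap, semantic similarity);
# rows are grouped by claim ("query"), labels mark tweets relevant to the claim.
X = rng.random((60, 5))
y = rng.integers(0, 2, 60)
groups = [20, 20, 20]  # three claims, 20 candidate tweets each

ranker = LGBMRanker(n_estimators=50)
ranker.fit(X, y, group=groups)

# Rank candidate tweets for a new claim by predicted relevance.
candidates = rng.random((10, 5))
order = np.argsort(-ranker.predict(candidates))
print("tweets ranked as moderation candidates:", order.tolist())
```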
-
Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit
Authors:
Alexandros Efstratiou,
Jeremy Blackburn,
Tristan Caulfield,
Gianluca Stringhini,
Savvas Zannettou,
Emiliano De Cristofaro
Abstract:
Previous research has documented the existence of both online echo chambers and hostile intergroup interactions. In this paper, we explore the relationship between these two phenomena by studying the activity of 5.97M Reddit users and 421M comments posted over 13 years. We examine whether users who are more engaged in echo chambers are more hostile when they comment on other communities. We then create a typology of relationships between political communities based on whether their users are toxic to each other, whether echo chamber-like engagement with these communities is associated with polarization, and on the communities' political leanings. We observe both the echo chamber and hostile intergroup interaction phenomena, but neither holds universally across communities. Contrary to popular belief, we find that polarizing and toxic speech is more dominant between communities on the same, rather than opposing, sides of the political spectrum, especially on the left; however, this mainly points to the collective targeting of political outgroups.
Submitted 25 November, 2022;
originally announced November 2022.
-
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots
Authors:
Wai Man Si,
Michael Backes,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Savvas Zannettou,
Yang Zhang
Abstract:
Chatbots are used in many applications, e.g., automated agents, smart home assistants, interactive characters in online games, etc. Therefore, it is crucial to ensure they do not behave in undesired manners, providing offensive or toxic responses to users. This is not a trivial task as state-of-the-art chatbot models are trained on large, public datasets openly collected from the Internet. This paper presents a first-of-its-kind, large-scale measurement of toxicity in chatbots. We show that publicly available chatbots are prone to providing toxic responses when fed toxic queries. Even more worryingly, some non-toxic queries can trigger toxic responses too. We then set out to design and experiment with an attack, ToxicBuddy, which relies on fine-tuning GPT-2 to generate non-toxic queries that make chatbots respond in a toxic manner. Our extensive experimental evaluation demonstrates that our attack is effective against public chatbot models and outperforms manually-crafted malicious queries proposed by previous work. We also evaluate three defense mechanisms against ToxicBuddy, showing that they either reduce the attack performance at the cost of affecting the chatbot's utility or are only effective at mitigating a portion of the attack. This highlights the need for more research from the computer security and online safety communities to ensure that chatbot models do not hurt their users. Overall, we are confident that ToxicBuddy can be used as an auditing tool and that our work will pave the way toward designing more effective defenses for chatbot safety.
Submitted 9 September, 2022; v1 submitted 7 September, 2022;
originally announced September 2022.
-
On Xing Tian and the Perseverance of Anti-China Sentiment Online
Authors:
Xinyue Shen,
Xinlei He,
Michael Backes,
Jeremy Blackburn,
Savvas Zannettou,
Yang Zhang
Abstract:
Sinophobia, anti-Chinese sentiment, has existed on the Web for a long time. The outbreak of COVID-19 and the extended quarantine has further amplified it. However, we lack a quantitative understanding of the cause of Sinophobia as well as how it evolves over time. In this paper, we conduct a large-scale longitudinal measurement of Sinophobia, between 2016 and 2021, on two mainstream and fringe Web communities. By analyzing 8B posts from Reddit and 206M posts from 4chan's /pol/, we investigate the origins, evolution, and content of Sinophobia. We find that anti-Chinese content may be evoked by political events not directly related to China, e.g., the U.S. withdrawal from the Paris Agreement. During the COVID-19 pandemic, daily usage of Sinophobic slurs increased significantly even under hate-speech ban policies. We also show that the semantic meaning of the words "China" and "Chinese" shifted towards Sinophobic slurs with the rise of COVID-19 and remained so throughout the pandemic period. We further use topic modeling to show that the topics of Sinophobic discussion are diverse and broad. We find that both Web communities share some common Sinophobic topics, such as ethnicity, economics and commerce, weapons and military, and foreign relations. However, compared to 4chan's /pol/, more daily-life topics, including food, games, and stocks, appear on Reddit. Our finding also reveals that the topics related to COVID-19 and blaming the Chinese government are more prevalent in the pandemic period. To the best of our knowledge, this paper is the longest quantitative measurement of Sinophobia.
Submitted 19 April, 2022;
originally announced April 2022.
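The semantic-shift finding can be illustrated by training word embeddings on separate time slices and comparing a target word's nearest neighbors, as sketched below with tiny toy corpora (the study itself analyzes billions of Reddit and /pol/ posts, and its exact methodology may differ).

```python
# Sketch of measuring semantic shift by comparing a word's nearest neighbors
# in embeddings trained on different time slices (toy corpora; illustrative
# only; the paper analyzes 8B Reddit and 206M /pol/ posts).
from gensim.models import Word2Vec

pre_2020 = [["china", "trade", "economy", "export"],
            ["china", "travel", "culture", "food"]] * 50
post_2020 = [["china", "virus", "blame", "lockdown"],
             ["china", "pandemic", "origin", "lab"]] * 50

def neighbors(corpus, word, topn=3):
    """Train a small Word2Vec model on one time slice and return the
    target word's nearest neighbors in that slice's embedding space."""
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=30)
    return [w for w, _ in model.wv.most_similar(word, topn=topn)]

print("pre-2020 neighbors: ", neighbors(pre_2020, "china"))
print("post-2020 neighbors:", neighbors(post_2020, "china"))
```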
-
Feels Bad Man: Dissecting Automated Hateful Meme Detection Through the Lens of Facebook's Challenge
Authors:
Catherine Jennifer,
Fatemeh Tahmasbi,
Jeremy Blackburn,
Gianluca Stringhini,
Savvas Zannettou,
Emiliano De Cristofaro
Abstract:
Internet memes have become a dominant method of communication; at the same time, however, they are also increasingly being used to advocate extremism and foster derogatory beliefs. Nonetheless, we do not have a firm understanding as to which perceptual aspects of memes cause this phenomenon. In this work, we assess the efficacy of current state-of-the-art multimodal machine learning models toward hateful meme detection, and in particular with respect to their generalizability across platforms. We use two benchmark datasets comprising 12,140 and 10,567 images from 4chan's "Politically Incorrect" board (/pol/) and Facebook's Hateful Memes Challenge dataset to train the competition's top-ranking machine learning models for the discovery of the most prominent features that distinguish viral hateful memes from benign ones. We conduct three experiments to determine the importance of multimodality on classification performance, the influential capacity of fringe Web communities on mainstream social platforms and vice versa, and the models' learning transferability on 4chan memes. Our experiments show that memes' image characteristics provide a greater wealth of information than their textual content. We also find that current systems developed for online detection of hate speech in memes necessitate further concentration on memes' visual elements to improve their interpretation of underlying cultural connotations, implying that multimodal models fail to adequately grasp the intricacies of hate speech in memes and generalize across social media platforms.
Submitted 17 February, 2022;
originally announced February 2022.
-
TROLLMAGNIFIER: Detecting State-Sponsored Troll Accounts on Reddit
Authors:
Mohammad Hammas Saeed,
Shiza Ali,
Jeremy Blackburn,
Emiliano De Cristofaro,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
Growing evidence points to recurring influence campaigns on social media, often sponsored by state actors aiming to manipulate public opinion on sensitive political topics. Typically, campaigns are performed through instrumented accounts, known as troll accounts; despite their prominence, however, little work has been done to detect these accounts in the wild. In this paper, we present TROLLMAGNIFIER, a detection system for troll accounts. Our key observation, based on analysis of known Russian-sponsored troll accounts identified by Reddit, is that they show loose coordination, often interacting with each other to further specific narratives. Therefore, troll accounts controlled by the same actor often show similarities that can be leveraged for detection. TROLLMAGNIFIER learns the typical behavior of known troll accounts and identifies more that behave similarly. We train TROLLMAGNIFIER on a set of 335 known troll accounts and run it on a large dataset of Reddit accounts. Our system identifies 1,248 potential troll accounts; we then provide a multi-faceted analysis to corroborate the correctness of our classification. In particular, 66% of the detected accounts show signs of being instrumented by malicious actors (e.g., they were created on the same exact day as a known troll, they have since been suspended by Reddit, etc.). They also discuss similar topics as the known troll accounts and exhibit temporal synchronization in their activity. Overall, we show that using TROLLMAGNIFIER, one can grow the initial knowledge of potential trolls provided by Reddit by over 300%.
Submitted 1 December, 2021;
originally announced December 2021.
-
Understanding the Use of e-Prints on Reddit and 4chan's Politically Incorrect Board
Authors:
Satrio Baskoro Yudhoatmojo,
Emiliano De Cristofaro,
Jeremy Blackburn
Abstract:
The dissemination and reach of scientific knowledge have increased at a blistering pace. In this context, e-Print servers have played a central role by providing scientists with a rapid and open mechanism for disseminating research without waiting for the (lengthy) peer review process. While helping the scientific community in several ways, e-Print servers also provide scientific communicators and the general public with access to a wealth of knowledge without paying hefty subscription fees. This motivates us to study how e-Prints are positioned within Web community discussions.
In this paper, we analyze data from two Web communities: 14 years of Reddit data and over 4 years from 4chan's Politically Incorrect board. Our findings highlight the presence of e-Prints in both science-enthusiast and general-audience communities. Real-world events and distinct factors influence which e-Prints people discuss; e.g., there was a surge of COVID-19-related research publications during the early months of the outbreak, along with increased references to e-Prints in online discussions. Text in e-Prints and in online discussions referencing them has a low similarity, suggesting that the latter are not exclusively talking about the findings in the former. Further, our analysis of a sample of threads highlights: 1) misinterpretation and generalization of research findings, 2) early research findings being amplified as a source for future predictions, and 3) questioning findings from a pseudoscientific e-Print. Overall, our work emphasizes the need to quickly and effectively validate non-peer-reviewed e-Prints that get substantial press/social media coverage to help mitigate wrongful interpretations of scientific outputs.
Submitted 8 March, 2023; v1 submitted 3 November, 2021;
originally announced November 2021.
-
Slapping Cats, Bopping Heads, and Oreo Shakes: Understanding Indicators of Virality in TikTok Short Videos
Authors:
Chen Ling,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini
Abstract:
Short videos have become one of the leading media used by younger generations to express themselves online and thus a driving force in shaping online culture. In this context, TikTok has emerged as a platform where viral videos are often posted first. In this paper, we study what elements of short videos posted on TikTok contribute to their virality. We apply a mixed-method approach to develop a codebook and identify important virality features. We do so vis-à-vis three research hypotheses; namely, that: 1) the video content, 2) TikTok's recommendation algorithm, and 3) the popularity of the video creator contribute to virality.
We collect and label a dataset of 400 TikTok videos and train classifiers to help us identify the features that influence virality the most. While the number of followers is the most powerful predictor, close-up and medium-shot scales also play an essential role. So do the lifespan of the video, the presence of text, and the point of view. Our research highlights the characteristics that distinguish viral from non-viral TikTok videos, laying the groundwork for developing additional approaches to create more engaging online content and proactively identify possibly risky content that is likely to reach a large audience.
Submitted 3 November, 2021;
originally announced November 2021.
-
Soros, Child Sacrifices, and 5G: Understanding the Spread of Conspiracy Theories on Web Communities
Authors:
Pujan Paudel,
Jeremy Blackburn,
Emiliano De Cristofaro,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
This paper presents a multi-platform computational pipeline geared to identify social media posts discussing (known) conspiracy theories. We use 189 conspiracy claims collected by Snopes, and find 66k posts and 277k comments on Reddit, and 379k tweets discussing them. Then, we study how conspiracies are discussed on different Web communities and which ones are particularly influential in driving the discussion about them. Our analysis sheds light on how conspiracy theories are discussed and spread online, while highlighting multiple challenges in mitigating them.
Submitted 3 November, 2021;
originally announced November 2021.
-
An Early Look at the Gettr Social Network
Authors:
Pujan Paudel,
Jeremy Blackburn,
Emiliano De Cristofaro,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
This paper presents the first data-driven analysis of Gettr, a new social network platform launched by former US President Donald Trump's team. Among other things, we find that users on the platform heavily discuss politics, with a focus on the Trump campaign in the US and Bolsonaro's in Brazil. Activity on the platform has steadily been decreasing since its launch, although a core of verified users and early adopters kept posting and became central to it. Finally, although toxicity has been increasing over time, the average level of toxicity is still lower than that recently observed on other fringe social networks like Gab and 4chan. Overall, we provide a first quantitative look at this new community, observing a lack of organic engagement and activity.
Submitted 12 August, 2021;
originally announced August 2021.
-
"I'm a Professor, which isn't usually a dangerous job": Internet-Facilitated Harassment and its Impact on Researchers
Authors:
Periwinkle Doerfler,
Andrea Forte,
Emiliano De Cristofaro,
Gianluca Stringhini,
Jeremy Blackburn,
Damon McCoy
Abstract:
While the Internet has dramatically increased the exposure that research can receive, it has also facilitated harassment against scholars. To understand the impact that these attacks can have on the work of researchers, we perform a series of systematic interviews with researchers including academics, journalists, and activists, who have experienced targeted, Internet-facilitated harassment. We provide a framework for understanding the types of harassers that target researchers, the harassment that ensues, and the personal and professional impact on individuals and academic freedom. We then study preventative and remedial strategies available, and the institutions that prevent some of these strategies from being more effective. Finally, we discuss the ethical structures that could facilitate more equitable access to participating in research without serious personal suffering.
Submitted 22 April, 2021; v1 submitted 22 April, 2021;
originally announced April 2021.
-
A Multi-Platform Analysis of Political News Discussion and Sharing on Web Communities
Authors:
Yuping Wang,
Savvas Zannettou,
Jeremy Blackburn,
Barry Bradlyn,
Emiliano De Cristofaro,
Gianluca Stringhini
Abstract:
The news ecosystem has become increasingly complex, encompassing a wide range of sources with varying levels of trustworthiness, and with public commentary giving different spins to the same stories. In this paper, we present a multi-platform measurement of this ecosystem. We compile a list of 1,073 news websites and extract posts from four Web communities (Twitter, Reddit, 4chan, and Gab) that contain URLs from these sources. This yields a dataset of 38M posts containing 15M news URLs, spanning almost three years.
We study the data along several axes, assessing the trustworthiness of shared news, designing a method to group news articles into stories, analyzing how these stories are discussed, and measuring the influence that various Web communities have on that discussion. Our analysis shows that different communities discuss different types of news, with polarized communities like Gab and the /r/The_Donald subreddit disproportionately referencing untrustworthy sources. We also find that fringe communities often have a disproportionate influence on other platforms w.r.t. pushing narratives around certain news, for example about political elections, immigration, or foreign policy.
Submitted 5 March, 2021;
originally announced March 2021.
-
Characterising Alzheimer's Disease with EEG-based Energy Landscape Analysis
Authors:
Dominik Klepl,
Fei He,
Min Wu,
Matteo De Marco,
Daniel J. Blackburn,
Ptolemaios Sarrigiannis
Abstract:
Alzheimer's disease (AD) is one of the most common neurodegenerative diseases, with around 50 million patients worldwide. Accessible and non-invasive methods of diagnosing and characterising AD are therefore urgently required. Electroencephalography (EEG) fulfils these criteria and is often used when studying AD. Several features derived from EEG were shown to predict AD with high accuracy, e.g. signal complexity and synchronisation. However, the dynamics of how the brain transitions between stable states have not been properly studied in the case of AD and EEG data. Energy landscape analysis is a method that can be used to quantify these dynamics. This work presents the first application of this method to both AD and EEG. Energy landscape assigns energy value to each possible state, i.e. pattern of activations across brain regions. The energy is inversely proportional to the probability of occurrence. By studying the features of energy landscapes of 20 AD patients and 20 healthy age-matched counterparts, significant differences were found. The dynamics of AD patients' brain networks were shown to be more constrained - with more local minima, less variation in basin size, and smaller basins. We show that energy landscapes can predict AD with high accuracy, performing significantly better than baseline models.
Submitted 13 July, 2021; v1 submitted 19 February, 2021;
originally announced February 2021.
-
The Gospel According to Q: Understanding the QAnon Conspiracy from the Perspective of Canonical Information
Authors:
Antonis Papasavva,
Max Aliapoulios,
Cameron Ballard,
Emiliano De Cristofaro,
Gianluca Stringhini,
Savvas Zannettou,
Jeremy Blackburn
Abstract:
The QAnon conspiracy theory claims that a cabal of (literally) blood-thirsty politicians and media personalities is engaged in a war to destroy society. By interpreting cryptic "drops" of information from an anonymous insider calling themself Q, adherents of the conspiracy theory believe that Donald Trump is leading them in an active fight against this cabal. QAnon has been covered extensively by the media, as its adherents have been involved in multiple violent acts, including the January 6th, 2021 seditious storming of the US Capitol building. Nevertheless, we still have relatively little understanding of how the theory evolved and spread on the Web, and of the role that multiple platforms played in its spread.
To address this gap, we study QAnon from the perspective of "Q" themself. We build a dataset of 4,949 canonical Q drops collected from six "aggregation sites," which curate and archive them from their original posting to anonymous and ephemeral image boards. We show that these sites have relatively low (overall) agreement with one another, and thus that at least some Q drops should probably be considered apocryphal. We then analyze the Q drops' contents to identify topics of discussion and find statistically significant indications that the drops were not authored by a single individual. Finally, we look at how posts on Reddit are used to disseminate Q drops to wider audiences. We find that dissemination was (initially) limited to a few sub-communities and that, while heavy-handed moderation decisions have reduced the overall issue, the "gospel" of Q persists on the Web.
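One way to quantify agreement between aggregation sites is the pairwise overlap of the drops each site archives; the sketch below uses Jaccard similarity over sets of drop identifiers. The site names and IDs are made up, and Jaccard is one plausible choice of metric, not necessarily the one used in the paper.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of drop identifiers."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Hypothetical drop IDs archived by three aggregation sites.
sites = {
    "site_a": {1, 2, 3, 4, 5},
    "site_b": {1, 2, 3, 6},
    "site_c": {2, 3, 4, 5, 7},
}

# Pairwise agreement across all sites.
for (name_a, drops_a), (name_b, drops_b) in combinations(sites.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(drops_a, drops_b):.2f}")
```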
Submitted 29 April, 2022; v1 submitted 21 January, 2021;
originally announced January 2021.
-
Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes
Authors:
Chen Ling,
Ihab AbuHilal,
Jeremy Blackburn,
Emiliano De Cristofaro,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject on which the viewer can focus, or that include long text, are unlikely to be re-shared by users.
We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on the indicators that characterize viral and non-viral visual content online, and sets the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
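As a toy illustration of this kind of classification setup (synthetic data, hypothetical codebook-style features, and a random forest chosen arbitrarily; the paper's actual models and features are not reproduced here), the sketch below trains a classifier on meme-level features and reports the ROC AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical codebook features per meme: close-up scale, contains a character,
# shows emotion, has a clear subject, amount of text (all normalised to [0, 1]).
rng = np.random.default_rng(42)
X = rng.random((500, 5))
# Synthetic labels loosely tied to the features, for illustration only.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.7 * X[:, 4]
     + 0.3 * rng.standard_normal(500) > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```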
Submitted 16 January, 2021;
originally announced January 2021.
-
An Early Look at the Parler Online Social Network
Authors:
Max Aliapoulios,
Emmi Bevensee,
Jeremy Blackburn,
Barry Bradlyn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Savvas Zannettou
Abstract:
Parler is an "alternative" social network that promotes itself as a service allowing users to "speak freely and express yourself openly, without fear of being deplatformed for your views." Because of this promise, the platform became popular among users who had been suspended on mainstream social networks for violating their terms of service, as well as among those fearing censorship. In particular, the service was endorsed by several conservative public figures, who encouraged people to migrate from traditional social networks. After the storming of the US Capitol on January 6, 2021, Parler was progressively deplatformed: its app was removed from the Apple and Google Play stores, and the website was taken down by its hosting provider.
This paper presents a dataset of 183M Parler posts made by 4M users between August 2018 and January 2021, as well as metadata from 13.25M user profiles. We also present a basic characterization of the dataset, which shows that the platform witnessed large influxes of new users after being endorsed by popular figures, as well as in reaction to the 2020 US Presidential Election. We further show that discussion on the platform is dominated by conservative topics, President Trump, and conspiracy theories like QAnon.
Submitted 18 February, 2021; v1 submitted 11 January, 2021;
originally announced January 2021.
-
"It is just a flu": Assessing the Effect of Watch History on YouTube's Pseudoscientific Video Recommendations
Authors:
Kostantinos Papadamou,
Savvas Zannettou,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Michael Sirivianos
Abstract:
The role played by YouTube's recommendation algorithm in unwittingly promoting misinformation and conspiracy theories is not entirely understood. Yet, this can have dire real-world consequences, especially when pseudoscientific content is promoted to users at critical times, such as the COVID-19 pandemic. In this paper, we set out to characterize and detect pseudoscientific misinformation on YouTube. We collect 6.6K videos related to COVID-19, the Flat Earth theory, as well as the anti-vaccination and anti-mask movements. Using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant and train a deep learning classifier to detect pseudoscientific videos with an accuracy of 0.79.
We quantify user exposure to this content on various parts of the platform and how this exposure changes based on the user's watch history. We find that YouTube suggests more pseudoscientific content for traditional pseudoscientific topics (e.g., flat earth, anti-vaccination) than for emerging ones (like COVID-19). At the same time, these recommendations are more common on the search results page than on a user's homepage or in the recommendations shown while actively watching videos. Finally, we shed light on how a user's watch history substantially affects the type of recommended videos.
Submitted 12 October, 2021; v1 submitted 22 October, 2020;
originally announced October 2020.
-
Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
Authors:
Manoel Horta Ribeiro,
Shagun Jhaver,
Savvas Zannettou,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Robert West
Abstract:
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
Submitted 20 August, 2021; v1 submitted 20 October, 2020;
originally announced October 2020.
-
Understanding the Use of Fauxtography on Social Media
Authors:
Yuping Wang,
Fatemeh Tahmasbi,
Jeremy Blackburn,
Barry Bradlyn,
Emiliano De Cristofaro,
David Magerman,
Savvas Zannettou,
Gianluca Stringhini
Abstract:
Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussions on online communities. First, we develop a computational pipeline geared to detect fauxtography, and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects the engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation of disinformation needs to take images into account, and they highlight a number of challenges in dealing with image-based disinformation.
Submitted 25 September, 2020; v1 submitted 24 September, 2020;
originally announced September 2020.
-
"Is it a Qoincidence?": An Exploratory Study of QAnon on Voat
Authors:
Antonis Papasavva,
Jeremy Blackburn,
Gianluca Stringhini,
Savvas Zannettou,
Emiliano De Cristofaro
Abstract:
Online fringe communities offer fertile ground for users seeking and sharing ideas fueling suspicion of mainstream news and conspiracy theories. Among these, the QAnon conspiracy theory emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. At the same time, governments are thought to be controlled by "puppet masters," with democratically elected officials serving as a fake showroom of democracy.
This paper provides an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator that has captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit), to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word embeddings to identify narratives around QAnon-specific keywords; our graph visualization shows that some of these keywords are closely connected to those of the Pizzagate conspiracy theory and to the so-called drops by "Q." Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than those in the broader Voat community.
Submitted 14 February, 2021; v1 submitted 10 September, 2020;
originally announced September 2020.
-
A First Look at Zoombombing
Authors:
Chen Ling,
Utkucan Balcı,
Jeremy Blackburn,
Gianluca Stringhini
Abstract:
Online meeting tools like Zoom and Google Meet have become central to our professional, educational, and personal lives. This has opened up new opportunities for large scale harassment. In particular, a phenomenon known as zoombombing has emerged, in which aggressors join online meetings with the goal of disrupting them and harassing their participants. In this paper, we conduct the first data-driven analysis of calls for zoombombing attacks on social media. We identify ten popular online meeting tools and extract posts containing meeting invitations to these platforms on a mainstream social network, Twitter, and on a fringe community known for organizing coordinated attacks against online users, 4chan. We then perform manual annotation to identify posts that are calling for zoombombing attacks, and apply thematic analysis to develop a codebook to better characterize the discussion surrounding calls for zoombombing. During the first seven months of 2020, we identify over 200 calls for zoombombing between Twitter and 4chan, and analyze these calls both quantitatively and qualitatively. Our findings indicate that the vast majority of calls for zoombombing are not made by attackers stumbling upon meeting invitations or bruteforcing their meeting ID, but rather by insiders who have legitimate access to these meetings, particularly students in high school and college classes. This has important security implications, because it makes common protections against zoombombing, such as password protection, ineffective. We also find instances of insiders instructing attackers to adopt the names of legitimate participants in the class to avoid detection, making countermeasures like setting up a waiting room and vetting participants less effective. Based on these observations, we argue that the only effective defense against zoombombing is creating unique join links for each participant.
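The collection step (finding posts that contain meeting invitations) can be approximated with URL pattern matching. The patterns below cover only two tools and are illustrative assumptions, not the paper's exact list of ten platforms or its matching rules.

```python
import re

# Illustrative invitation-link patterns for two meeting tools.
MEETING_URL = re.compile(
    r"https?://(?:[\w-]+\.)?zoom\.us/j/\d+\S*"                  # Zoom join links
    r"|https?://meet\.google\.com/[a-z]{3}-[a-z]{4}-[a-z]{3}",   # Google Meet codes
    re.IGNORECASE,
)

def extract_invites(posts):
    """Yield (post, link) pairs for posts that contain a meeting invitation."""
    for post in posts:
        for match in MEETING_URL.finditer(post):
            yield post, match.group(0)

sample = [
    "join my class lol https://zoom.us/j/1234567890?pwd=abc",
    "nothing to see here",
    "raid this: https://meet.google.com/abc-defg-hij",
]
for post, link in extract_invites(sample):
    print(link)
```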
Submitted 8 September, 2020;
originally announced September 2020.
-
Reading In-Between the Lines: An Analysis of Dissenter
Authors:
Erik Rye,
Jeremy Blackburn,
Robert Beverly
Abstract:
Efforts by content creators and social networks to enforce legal and policy-based norms, e.g. blocking hate speech and users, have driven the rise of unrestricted communication platforms. One such recent platform is Dissenter, a browser and web application that provides a conversational overlay for any web page. These conversations hide in plain sight: users of Dissenter can see and participate in them, whereas visitors using other browsers are oblivious to their existence. Further, website and content owners have no power over these conversations, as they reside in an overlay outside their control.
In this work, we obtain a history of Dissenter comments, users, and the websites being discussed, from the initial release of Dissenter in Feb. 2019 through Apr. 2020 (14 months). Our corpus consists of approximately 1.68M comments made by 101k users commenting on 588k distinct URLs. We first analyze macro characteristics of the network, including the user-base, comment distribution, and growth. We then use toxicity dictionaries, Perspective API, and a Natural Language Processing model to understand the nature of the comments and measure the propensity of particular websites and content to elicit hateful and offensive Dissenter comments. Using curated rankings of media bias, we examine the conditional probability of hateful comments given left and right-leaning content. Finally, we study Dissenter as a social network, and identify a core group of users with high comment toxicity.
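The conditional-probability analysis (how likely hateful comments are, given the leaning of the underlying content) can be sketched as a simple group-by. The toxicity threshold, column names, and toy values below are assumptions for illustration, not the paper's data or exact procedure.

```python
import pandas as pd

# Hypothetical per-comment table: a toxicity score (e.g., from a toxicity model)
# and the media-bias label of the page being discussed.
df = pd.DataFrame({
    "toxicity": [0.9, 0.2, 0.8, 0.1, 0.7, 0.3, 0.95, 0.4],
    "page_bias": ["left", "left", "right", "right", "right", "left", "right", "left"],
})

df["hateful"] = df["toxicity"] >= 0.8              # illustrative threshold
cond = df.groupby("page_bias")["hateful"].mean()   # estimates P(hateful | bias)
print(cond)
```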
Submitted 26 September, 2020; v1 submitted 3 September, 2020;
originally announced September 2020.
-
"Go eat a bat, Chang!": On the Emergence of Sinophobic Behavior on Web Communities in the Face of COVID-19
Authors:
Fatemeh Tahmasbi,
Leonard Schild,
Chen Ling,
Jeremy Blackburn,
Gianluca Stringhini,
Yang Zhang,
Savvas Zannettou
Abstract:
The outbreak of the COVID-19 pandemic has changed our lives in unprecedented ways. In the face of the projected catastrophic consequences, many countries have enacted social distancing measures in an attempt to limit the spread of the virus. Under these conditions, the Web has become an indispensable medium for information acquisition, communication, and entertainment. At the same time, unfortunately, the Web is being exploited for the dissemination of potentially harmful and disturbing content, such as the spread of conspiracy theories and hateful speech towards specific ethnic groups, in particular towards Chinese people since COVID-19 is believed to have originated from China. In this paper, we make a first attempt to study the emergence of Sinophobic behavior on the Web during the outbreak of the COVID-19 pandemic. We collect two large-scale datasets from Twitter and 4chan's Politically Incorrect board (/pol/) over a time period of approximately five months and analyze them to investigate whether there is a rise in, or important differences in, the dissemination of Sinophobic content. We find that COVID-19 indeed drives the rise of Sinophobia on the Web and that the dissemination of Sinophobic content is a cross-platform phenomenon: it exists on fringe Web communities like /pol/, and to a lesser extent on mainstream ones like Twitter. Also, using word embeddings over time, we characterize the evolution and emergence of new Sinophobic slurs on both Twitter and /pol/. Finally, we find interesting differences in the context in which words related to Chinese people are used on the Web before and after the COVID-19 outbreak: on Twitter we observe a shift towards blaming China for the situation, while on /pol/ we find a shift towards using more (and new) Sinophobic slurs.
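A "word embeddings over time" analysis can be sketched by training a separate embedding model per time slice and comparing the nearest neighbours of an anchor term. The snippet below uses gensim's Word2Vec (4.x API) on toy tokenised corpora; the corpora, anchor word, and hyperparameters are illustrative assumptions rather than the paper's setup.

```python
from gensim.models import Word2Vec

def neighbours(corpus, anchor, topn=3):
    """Train word2vec on one time slice and return the anchor's nearest neighbours."""
    model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, seed=1, workers=1)
    return [word for word, _ in model.wv.most_similar(anchor, topn=topn)]

# Toy pre/post-outbreak corpora of tokenised posts.
pre = [["china", "trade", "economy"], ["china", "travel", "food"]] * 50
post = [["china", "virus", "blame"], ["china", "virus", "lockdown"]] * 50

print("before:", neighbours(pre, "china"))
print("after: ", neighbours(post, "china"))
```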
Submitted 3 March, 2021; v1 submitted 8 April, 2020;
originally announced April 2020.
-
The Pushshift Telegram Dataset
Authors:
Jason Baumgartner,
Savvas Zannettou,
Megan Squire,
Jeremy Blackburn
Abstract:
Messaging platforms, especially those with a mobile focus, have become increasingly ubiquitous in society. These mobile messaging platforms can have deceptively large user bases, and in addition to being a way for people to stay in touch, they are often used to organize social movements and serve as a place for extremists and other ne'er-do-wells to congregate. In this paper, we present a dataset from one such mobile messaging platform: Telegram. Our dataset is made up of over 27.8K channels and 317M messages from 2.2M unique users. To the best of our knowledge, it is the largest and most complete dataset of its kind. In addition to the raw data, we also provide the source code used to collect it, allowing researchers to run their own data collection instances. We believe the Pushshift Telegram dataset can help researchers from a variety of disciplines interested in studying online social movements, protests, political extremism, and disinformation.
Submitted 23 January, 2020;
originally announced January 2020.
-
The Pushshift Reddit Dataset
Authors:
Jason Baumgartner,
Savvas Zannettou,
Brian Keegan,
Megan Squire,
Jeremy Blackburn
Abstract:
Social media data has become crucial to the advancement of scientific understanding. However, even though it has become ubiquitous, simply collecting large-scale social media data requires a high degree of engineering skill and substantial computational resources. In fact, research is oftentimes gated by data engineering problems that must be overcome before analysis can proceed. This has resulted in the recognition of datasets as meaningful research contributions in and of themselves. Reddit, the so-called "front page of the Internet," in particular has been the subject of numerous scientific studies. Although Reddit is relatively open to data acquisition compared to social media platforms like Facebook and Twitter, the technical barriers to acquisition still remain. Thus, Reddit's millions of subreddits, hundreds of millions of users, and hundreds of billions of comments are at the same time relatively accessible, but time-consuming to collect and analyze systematically. In this paper, we present the Pushshift Reddit dataset. Pushshift is a social media data collection, analysis, and archiving platform that has collected Reddit data since 2015 and made it available to researchers. Pushshift's Reddit dataset is updated in real time, and includes historical data back to Reddit's inception. In addition to monthly dumps, Pushshift provides computational tools to aid in searching, aggregating, and performing exploratory analysis on the entirety of the dataset. The Pushshift Reddit dataset makes it possible for social media researchers to reduce the time spent in the data collection, cleaning, and storage phases of their projects.
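To the best of my knowledge, the monthly dumps are distributed as zstandard-compressed, newline-delimited JSON. The sketch below streams one such file with the zstandard package and counts comments per subreddit; the filename and field names are assumptions for illustration.

```python
import io
import json
from collections import Counter

import zstandard as zstd

def stream_pushshift(path):
    """Stream JSON objects from a zstd-compressed, newline-delimited dump
    (the format the monthly Pushshift files use, to the best of my knowledge)."""
    with open(path, "rb") as fh:
        reader = zstd.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8", errors="ignore"):
            yield json.loads(line)

# Example: count comments per subreddit in one monthly file (filename illustrative).
counts = Counter()
for comment in stream_pushshift("RC_2019-12.zst"):
    counts[comment.get("subreddit", "?")] += 1
print(counts.most_common(10))
```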
Submitted 23 January, 2020;
originally announced January 2020.
-
"How over is it?" Understanding the Incel Community on YouTube
Authors:
Kostantinos Papadamou,
Savvas Zannettou,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Michael Sirivianos
Abstract:
YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform has also come under fire for hosting inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on its evolution over the last decade and on understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube.
Among other things, we find that the Incel community on YouTube is gaining traction and that, during the last decade, the number of Incel-related videos and comments has risen substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
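The "chance of reaching an Incel-related video within five hops" style of measurement can be approximated with random walks over a sampled recommendation graph. The graph, labels, and walk parameters below are toy assumptions; the paper's actual crawling and estimation procedure is not reproduced here.

```python
import random

def prob_reach_labeled(graph, labeled, starts, hops=5, walks=2000, seed=0):
    """Estimate the probability that a random walk over recommendation links,
    starting from a non-labelled video, hits a labelled video within `hops` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = rng.choice(starts)
        for _ in range(hops):
            recs = graph.get(node, [])
            if not recs:
                break
            node = rng.choice(recs)
            if node in labeled:
                hits += 1
                break
    return hits / walks

# Toy recommendation graph: video -> list of recommended videos.
graph = {"a": ["b", "c"], "b": ["c", "x"], "c": ["a", "b"], "x": ["x"]}
labeled = {"x"}        # e.g., videos annotated as Incel-related
starts = ["a", "c"]    # non-labelled starting points
print(prob_reach_labeled(graph, labeled, starts))
```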
Submitted 23 August, 2021; v1 submitted 22 January, 2020;
originally announced January 2020.
-
The Evolution of the Manosphere Across the Web
Authors:
Manoel Horta Ribeiro,
Jeremy Blackburn,
Barry Bradlyn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Summer Long,
Stephanie Greenberg,
Savvas Zannettou
Abstract:
In this paper, we present a large-scale characterization of the Manosphere, a conglomerate of Web-based misogynist movements roughly focused on "men's issues," which has seen significant growth over the past years. We do so by gathering and analyzing 28.8M posts from 6 forums and 51 subreddits. Overall, we paint a comprehensive picture of the evolution of the Manosphere on the Web, showing the links between its different communities over the years. We find that milder and older communities, such as Pick Up Artists and Men's Rights Activists, are giving way to more extremist ones like Incels and Men Going Their Own Way, with a substantial migration of active users. Moreover, our analysis suggests that these newer communities are more toxic and misogynistic than the former.
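The "substantial migration of active users" finding suggests a simple measurement: the fraction of users active in a source community during one period who later appear in a destination community. The sketch below, with made-up user sets and community names, shows one such overlap-based migration rate; it is an illustration, not necessarily the paper's exact definition.

```python
def migration_rate(active_before: set, active_after: set) -> float:
    """Fraction of users active in the source community during period t
    who are also active in the destination community during period t+1."""
    if not active_before:
        return 0.0
    return len(active_before & active_after) / len(active_before)

# Hypothetical active-user sets per community and period.
mens_rights_2016 = {"u1", "u2", "u3", "u4", "u5"}
incels_2018 = {"u2", "u3", "u9"}
print(f"MRA (2016) -> Incels (2018): {migration_rate(mens_rights_2016, incels_2018):.0%}")
```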
Submitted 8 April, 2021; v1 submitted 21 January, 2020;
originally announced January 2020.
-
Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board
Authors:
Antonis Papasavva,
Savvas Zannettou,
Emiliano De Cristofaro,
Gianluca Stringhini,
Jeremy Blackburn
Abstract:
This paper presents a dataset with over 3.3M threads and 134.5M posts from the Politically Incorrect board (/pol/) of the imageboard forum 4chan, posted over a period of almost 3.5 years (June 2016-November 2019). To the best of our knowledge, this represents the largest publicly available 4chan dataset, providing the community with an archive of posts that have been permanently deleted from 4chan and are otherwise inaccessible. We augment the data with a set of additional labels, including toxicity scores and the named entities mentioned in each post. We also present a statistical analysis of the dataset, providing an overview of what researchers interested in using it can expect, as well as a simple content analysis, shedding light on the most prominent discussion topics, the most popular entities mentioned, and the toxicity level of each post. Overall, we are confident that our work will motivate and assist researchers in studying and understanding 4chan, as well as its role on the greater Web. For instance, we hope this dataset may be used for cross-platform studies of social media, as well as being useful for other types of research like natural language processing. Finally, our dataset can assist qualitative work focusing on in-depth case studies of specific narratives, events, or social theories.
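The named-entity augmentation described above could be reproduced with an off-the-shelf NER model. The sketch below uses spaCy's small English model as an example; the abstract does not state which tooling the authors used, and the post field names here are made up.

```python
import spacy

# Requires the model to be installed first: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def augment(post: dict) -> dict:
    """Attach named entities to a post record (field names are illustrative)."""
    doc = nlp(post.get("body", ""))
    post["entities"] = [(ent.text, ent.label_) for ent in doc.ents]
    return post

print(augment({"body": "Trump discussed the EU and NATO on /pol/ today."}))
```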
Submitted 1 April, 2020; v1 submitted 21 January, 2020;
originally announced January 2020.
-
Detecting Cyberbullying and Cyberaggression in Social Media
Authors:
Despoina Chatzakou,
Ilias Leontiadis,
Jeremy Blackburn,
Emiliano De Cristofaro,
Gianluca Stringhini,
Athena Vakali,
Nicolas Kourtellis
Abstract:
Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions and negative consequences, such as embarrassment, depression, and isolation from other community members, which carry the risk of leading to even more critical outcomes, such as suicide attempts.
In this work, we take the first concrete steps to understand the characteristics of abusive behavior on Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA to those more likely to be hate-related, such as the Gamergate controversy or the gender pay inequality at the BBC. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text-, user-, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of the Twitter accounts marked as abusive by our methodology, and study the performance of potential mechanisms that Twitter could use to suspend users in the future.
Submitted 20 July, 2019;
originally announced July 2019.
-
Diffusing Your Mobile Apps: Extending In-Network Function Virtualization to Mobile Function Offloading
Authors:
Mario Almeida,
Liang Wang,
Jeremy Blackburn,
Konstantina Papagiannaki,
Jon Crowcroft
Abstract:
Motivated by the huge disparity between the limited battery capacity of user devices and the ever-growing energy demands of modern mobile apps, we propose INFv, the first offloading system able to cache, migrate, and dynamically execute mobile-app functionality on demand within ISP networks. It aims to bridge this gap by extending the promising NFV paradigm to mobile applications in order to exploit in-network resources. In this paper, we present the overall design, the state-of-the-art technologies adopted, and various engineering details of the INFv system. We also carefully study deployment configurations by investigating over 20K Google Play apps, and perform thorough evaluations with realistic settings. In addition to a significant improvement in battery life (up to 6.9x energy reduction) and execution time (up to 4x faster), INFv has two distinct advantages over previous systems: 1) a non-intrusive offloading mechanism transparent to existing apps, and 2) inherent framework support to effectively balance computation load and exploit the proximity of in-network resources. Together, these advantages enable a scalable and incremental deployment of the computation offloading framework in practical ISP networks.
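At its core, any offloading system needs a decision rule about when shipping work into the network beats executing it locally. The toy heuristic below compares, with invented constants, the radio energy of transferring a call's inputs and outputs against the CPU energy of running it on the device; it illustrates the trade-off only and is not INFv's actual decision logic.

```python
def should_offload(cycles: float, input_bytes: float, output_bytes: float,
                   energy_per_cycle_j: float = 1e-9,
                   energy_per_byte_j: float = 1e-7) -> bool:
    """Toy offloading rule: move a function into the network only if the radio
    energy to ship its inputs/outputs is lower than executing it locally.
    (Illustrative constants; not INFv's decision logic.)"""
    local_energy = cycles * energy_per_cycle_j
    transfer_energy = (input_bytes + output_bytes) * energy_per_byte_j
    return transfer_energy < local_energy

# A compute-heavy call with small arguments is a good offloading candidate.
print(should_offload(cycles=5e9, input_bytes=20_000, output_bytes=2_000))      # True
print(should_offload(cycles=1e6, input_bytes=500_000, output_bytes=500_000))   # False
```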
Submitted 14 June, 2019;
originally announced June 2019.
-
EYEORG: A Platform For Crowdsourcing Web Quality Of Experience Measurements
Authors:
Matteo Varvello,
Jeremy Blackburn,
David Naylor,
Kostantina Papagiannaki
Abstract:
Tremendous effort has gone into the ongoing battle to make webpages load faster. This effort has culminated in new protocols (QUIC, SPDY, and HTTP/2) as well as novel content delivery mechanisms. In addition, companies like Google and SpeedCurve have investigated how to measure "page load time" (PLT) in a way that captures human perception. In this paper we present Eyeorg, a platform for crowdsourcing web quality-of-experience measurements. Eyeorg overcomes the scaling and automation challenges of recruiting users and collecting consistent user-perceived quality measurements. We validate Eyeorg's capabilities via a set of 100 trusted participants. Next, we showcase its functionalities via three measurement campaigns, each involving 1,000 paid participants, to 1) study the quality of several PLT metrics, 2) compare HTTP/1.1 and HTTP/2 performance, and 3) assess the impact of online advertisements and ad blockers on user experience. We find that commonly used, and even novel and sophisticated, PLT metrics fail to represent actual human perception of PLT, that the performance gains from HTTP/2 are imperceptible in some circumstances, and that not all ad blockers are created equal.
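One way to assess how well candidate PLT metrics track crowdsourced perception is a rank correlation between each metric and user-perceived load times. The sketch below uses Spearman's rho on made-up per-page numbers; the metric names and values are assumptions, not Eyeorg data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-page measurements (seconds).
perceived = np.array([1.2, 2.5, 3.1, 1.8, 4.0, 2.2])   # crowdsourced judgments
onload    = np.array([2.0, 4.5, 3.0, 2.5, 6.0, 5.5])   # window.onload timing
speed_idx = np.array([1.1, 2.6, 3.3, 1.7, 4.2, 2.0])   # SpeedIndex-style metric

# Higher rank correlation with perceived times suggests a more faithful metric.
for name, metric in [("onLoad", onload), ("SpeedIndex", speed_idx)]:
    rho, p = spearmanr(perceived, metric)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```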
Submitted 7 February, 2019;
originally announced February 2019.
-
"And We Will Fight For Our Race!" A Measurement Study of Genetic Testing Conversations on Reddit and 4chan
Authors:
Alexandros Mittos,
Savvas Zannettou,
Jeremy Blackburn,
Emiliano De Cristofaro
Abstract:
Progress in genomics has enabled the emergence of a booming market for "direct-to-consumer" genetic testing. Nowadays, companies like 23andMe and AncestryDNA provide affordable health, genealogy, and ancestry reports, and have already tested tens of millions of customers. At the same time, alt- and far-right groups have also taken an interest in genetic testing, using it to attack minorities and to prove their genetic "purity." In this paper, we present a measurement study shedding light on how genetic testing is discussed in Web communities on Reddit and 4chan. We collect 1.3M comments posted over 27 months on the two platforms, using a set of 280 keywords related to genetic testing. We then use NLP and computer vision tools to identify trends, themes, and topics of discussion.
Our analysis shows that genetic testing attracts a lot of attention on Reddit and 4chan, with discussions often including highly toxic language expressed through hateful, racist, and misogynistic comments. In particular, on 4chan's politically incorrect board (/pol/), content from genetic testing conversations involves several alt-right personalities and openly antisemitic rhetoric, often conveyed through memes. Finally, we find that discussions build around user groups, from technology enthusiasts to communities promoting fringe political views.
Submitted 4 October, 2019; v1 submitted 28 January, 2019;
originally announced January 2019.