Technical infrastructure as a hidden terrain of disinformation
Journal Article · Sam Bradshaw

While social media disinformation has received significant academic and policy attention, more consequential forms of intentional manipulation target the underlying digital infrastructures upon which society depends. Infrastructure-based deception, less visible than deception targeting content and platforms, has consequences for internet security, stability and trust. This article examines terrains of disinformation in digital infrastructure, including in the Domain Name System, access and interconnection, public key infrastructures, cyber-physical systems and emerging technologies. Infrastructure disinformation is largely a cybersecurity problem. By bringing technical infrastructure into the epistemic realm of disinformation, this paper broadens policy conversations beyond content moderation to encompass stronger cybersecurity architectures.

Misinformed about Misinformation: On the polarizing discourse on misinformation and its consequences for the field
Journal Article · Sam Bradshaw

For almost a decade, the study of misinformation has taken priority among policy circles, political elites, academic institutions, non-profit organizations, and the media. Substantial resources have been dedicated to identifying its effects, how and why it spreads, and how to mitigate its harm. Yet, despite these efforts, it can sometimes feel as if the field is no closer to answering basic questions about misinformation’s real-world impacts, such as its effects on elections or links to extremism and radicalization. Many of the conversations that we are having about the role of misinformation in society are incredibly polarizing (Bernstein, 2021), for example: Facebook significantly shaped the results of the 2016 elections vs. Facebook did not affect the outcome of the 2016 elections; algorithm recommendations polarize social media users vs. algorithm recommendations do not polarize social media users; deep fakes and other AI-generated content are a significant threat to elections, or they are not. On more than one occasion, this zero-sum framing of “the misinformation threat” has led politicians and commentators to point at misinformation as either the origin of all evil in the world or as a rhetorical concept invented by (other) politicians and their allies. For researchers and members of communities affected by misinformation, it is hard not to see the field in crisis. However, we see this as an inflection point and an opportunity to chart a more informed, community-oriented, and contextual research practice. By diversifying perspectives and grounding research in the experiences of those most affected, the field can move beyond the current polarization. In doing so, policy decisions regarding misinformation will not only be better informed and evidence-based, but realistic about what regulations can or cannot do.

Strategic Storytelling: Russian State-Backed Media Coverage of the Ukraine War
Journal Article · Sam Bradshaw

During the 2022 Russian invasion of Ukraine, Russia was accused of weaponizing its state-backed media outlets to promote a pro-Russian version of the war. Consequently, Russian state-backed media faced a series of new sanctions from Western governments and technology companies. While some studies have sought to identify disinformation about the war, less research has focused on understanding how these stories come together as narratives, particularly in non-English language contexts. Grounded in strategic narrative theory, we analyze Russian state-backed media coverage of the Ukraine war across 12 languages. Using topic modeling and narrative analysis, we find that Russian state-backed media focused primarily on promoting identity narratives, projecting an image of Russia as powerful, Ukraine as evil, and the West as hypocritical. Russian strategic narratives both converged and diverged across languages and outlets in ways that advanced Russia’s desired image and objectives in each region. These findings allow us to better theorize the evolving and transformative role of strategic narrative in Russian state-backed news media during times of conflict.

An investigation of social media labeling decisions preceding the 2020 U.S. election.
Journal Article · Sam Bradshaw

Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently. We investigated these inconsistencies and found that, based on publicly available information, most of the platforms’ decisions were arbitrary. However, in about a third of the cases we found plausible reasons that could explain the inconsistent labeling, although these reasons may not be aligned with the platforms’ stated policies. Our strongest finding is that Twitter was more likely to label posts from verified users, and less likely to label identical content from non-verified users. This study demonstrates how academic–industry collaborations can provide insights into typically opaque content moderation practices.

Look Who’s Watching: Platform Labels and User Engagement on State-backed Media.
Journal Article · Sam Bradshaw

Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are “state-media labels,” which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the efficacy of information labels in controlled lab settings, few studies have examined how state-media labels affect users’ perceptions of content from state-backed outlets. This article proposes new methodological and theoretical approaches to investigate the effect of state-media labels on users’ engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labeling of five state-funded channels (Al Jazeera English [AJE], China Global Television Network, Russia Today [RT], TRT World, and Voice of America [VOA] News), this article analyses the effect that YouTube’s labels had on users’ engagement with state-backed media content.

Playing Both Sides: Russian State-Backed Media Coverage of the BlackLivesMatter Movement
Journal Article · Sam Bradshaw

Russian influence operations on social media have received significant attention following the 2016 US presidential elections. Here, scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster RT (Russia Today). But since 2017, a number of new news media providers linked to the Russian state have emerged, and less research has focused on these channels and how they may support contemporary influence operations. We conduct a qualitative content analysis of 2,014 Facebook posts about the #BlackLivesMatter (BLM) protests in the United States over the summer of 2020 to comparatively examine the overt propaganda strategies of six Russian-linked news organizations—RT, Ruptly, Soapbox, In The NOW, Sputnik, and Redfish.

The Gender Dimensions of Foreign Influence Operations
Journal Article · Sam Bradshaw

Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. By examining the political communication of feminism and women’s rights, we find, first, that foreign state actors co-opted intersectional critiques and countermovement narratives about feminism and female empowerment to demobilize civil society activists, spread progovernment propaganda, and generate virality around divisive political topics. Second, 10 amplifier accounts—particularly from the Russian IRA and GRU—drove more than one-third of the Twitter conversations about feminism and women’s rights. Third, high-profile feminist politicians, activists, celebrities, and journalists were targeted with character attacks by the Russian GRU. These attacks happened indirectly, reinforcing a culture of hate rather than attempting to stifle or suppress the expression of rights through threats or harassment. This comparative look at the online political communication of women’s rights by foreign state actors highlights distinct blueprints for foreign influence operations while enriching the literature about the unique challenges women face online.

Disinformation Optimized: Gaming Algorithms to Amplify Junk News.
Journal Article · Sam Bradshaw

Previous research has described how highly personalised paid advertising on social media platforms can be used to influence voter preferences and undermine the integrity of elections. However, less work has examined how search engine optimisation (SEO) strategies are used to target audiences with disinformation or political propaganda. This paper looks at 29 junk news domains and their SEO keyword strategies between January 2016 and March 2019. I find that SEO — rather than paid advertising — is the most important strategy for generating discoverability via Google Search. Following public concern over the spread of disinformation online, Google’s algorithmic changes had a significant impact on junk news discoverability. The findings of this research have implications for policymaking, as regulators think through legal remedies to combat the spread of disinformation online.

The Global Organization of Social Media Disinformation Campaigns
Journal Article · Sam Bradshaw

Social media has emerged as a powerful tool for political engagement and expression. However, state actors are increasingly leveraging these platforms to spread computational propaganda and disinformation during critical moments of public life. These actions serve to nudge public opinion, set political or media agendas, censor freedom of speech, or control the flow of information online. Drawing on data collected from the Computational Propaganda Project’s 2017 investigation into the global organization of social-media manipulation, we examine how governments and political parties around the world are using social media to shape public attitudes, opinions, and discourses at home and abroad. We demonstrate the global nature of this phenomenon, comparatively assessing the organizational capacity and form these actors assume, and discuss the consequences for the future of power and democracy.

The Politicization of the Domain Name System: Implications for Internet Security, Stability and Freedom
Journal Article · Sam Bradshaw

One of the most contentious and longstanding debates in Internet governance involves the question of oversight of the Domain Name System (DNS). DNS administration is sometimes described as a “clerical” or “merely technical” task, but it also implicates a number of public policy concerns such as trademark disputes, infrastructure stability and security, resource allocation, and freedom of speech. A parallel phenomenon involves governmental and private forces increasingly altering or co-opting the DNS for political and economic purposes distinct from its core function of resolving Internet names into numbers. This article examines both the politics intrinsic to the operation of the DNS and specific examples and techniques of co-opting or altering the DNS’s technical infrastructure as a new tool of global power. The article concludes with an analysis of the implications of this infrastructure-mediated governance for network security, architectural stability, and the efficacy of the Internet governance ecosystem.
