Technical infrastructure as a hidden terrain of disinformation
While social media disinformation has received significant academic and policy attention, more consequential forms of intentional manipulation target the underlying digital infrastructures upon which society depends. Infrastructure-based deception, less visible than deception targeting content and platforms, has consequences for internet security, stability, and trust. This article examines terrains of disinformation in digital infrastructure, including the Domain Name System, access and interconnection, public key infrastructures, cyber-physical systems, and emerging technologies. Infrastructure disinformation is largely a cybersecurity problem. By bringing technical infrastructure into the epistemic realm of disinformation, the article broadens policy conversations beyond content moderation to encompass stronger cybersecurity architectures.
Disinformation and Identity-Based Violence
Disinformation spread via digital technologies is accelerating and exacerbating violence globally. There is an urgent need to understand how coordinated disinformation campaigns rely on identity-based disinformation that weaponizes racism, sexism, and xenophobia to incite violence against individuals and marginalized communities, stifle social movements, and silence the press.
While high-profile examples of coordinated disinformation campaigns often focus on how false narratives and fake accounts might disrupt elections, the case of Myanmar illustrates the unique ways that disinformation can be weaponized to foment fear, hatred, and violence against marginalized populations. Social media is not an inherently liberating technology but can be weaponized by governments to control the information space, suppress human rights, and incite violence.
The purposeful spread and amplification of identity-based disinformation is not merely an expression of individual bias but instead represents the systematic weaponization of discrimination to make hateful narratives go viral. At the core of identity-based disinformation is the exploitation of individuals’ or groups’ senses of identity, belonging, and social standing.
Responding to identity-based disinformation will require technical and human responses that are collaborative, locally relevant, and community-driven. Given the extent of the harms posed by identity-based disinformation, there is a continued need to develop, implement, and improve responses.
Misinformed about Misinformation: On the polarizing discourse on misinformation and its consequences for the field
For almost a decade, the study of misinformation has taken priority among policy circles, political elites, academic institutions, non-profit organizations, and the media. Substantial resources have been dedicated to identifying its effects, how and why it spreads, and how to mitigate its harm. Yet, despite these efforts, it can sometimes feel as if the field is no closer to answering basic questions about misinformation’s real-world impacts, such as its effects on elections or links to extremism and radicalization. Many of the conversations we are having about the role of misinformation in society are deeply polarizing (Bernstein, 2021): Facebook significantly shaped the results of the 2016 election vs. Facebook did not affect the outcome of the 2016 election; algorithmic recommendations polarize social media users vs. they do not; deepfakes and other AI-generated content are a significant threat to elections vs. they are not. On more than one occasion, this zero-sum framing of “the misinformation threat” has led politicians and commentators to point to misinformation as either the origin of all evil in the world or as a rhetorical concept invented by (other) politicians and their allies. For researchers and members of communities affected by misinformation, it is hard not to see the field in crisis. However, we see this as an inflection point and an opportunity to chart a more informed, community-oriented, and contextual research practice. By diversifying perspectives and grounding research in the experiences of those most affected, the field can move beyond the current polarization. In doing so, policy decisions regarding misinformation will not only be better informed and evidence-based but also realistic about what regulations can and cannot do.
Strategic Storytelling: Russian State-Backed Media Coverage of the Ukraine War
During the 2022 Russian invasion of Ukraine, Russia was accused of weaponizing its state-backed media outlets to promote a pro-Russian version of the war. Consequently, Russian state-backed media faced a series of new sanctions from Western governments and technology companies. While some studies have sought to identify disinformation about the war, less research has focused on understanding how these stories come together as narratives, particularly in non-English language contexts. Grounded in strategic narrative theory, we analyze Russian state-backed media coverage of the Ukraine war across 12 languages. Using topic modeling and narrative analysis, we find that Russian state-backed media focused primarily on promoting identity narratives, projecting an image of Russia as powerful, Ukraine as evil, and the West as hypocritical. Russian strategic narratives both converged and diverged across languages and outlets in ways that served Russia’s desired image and objectives in each region. This paper allows us to better theorize the evolving and transformative role of strategic narrative in Russian state-backed news media during times of conflict.
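As an illustration of the topic modeling step this abstract mentions, the sketch below fits a small LDA model with scikit-learn. It is a minimal sketch under assumed inputs: the toy article snippets, preprocessing, and model settings are illustrative and not the authors’ actual data or pipeline.

```python
# Illustrative topic modeling sketch (not the authors' pipeline).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for article texts; real inputs would be the multilingual
# state-backed media coverage described in the abstract.
articles = [
    "sanctions against russia reveal western hypocrisy",
    "ukraine forces shell civilian areas says ministry",
    "russia military operation proceeds according to plan",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(articles)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top words per topic are the raw material for the narrative analysis step.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```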
An investigation of social media labeling decisions preceding the 2020 U.S. election
Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the 2020 U.S. presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook treated the posts containing a given claim consistently, either always or never adding a label; it treated the remaining 31% of claims inconsistently. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently and 30% inconsistently. We investigated these inconsistencies and found that, based on publicly available information, most of the platforms’ decisions were arbitrary. However, in about a third of the cases we found plausible reasons that could explain the inconsistent labeling, although these reasons may not align with the platforms’ stated policies. Our strongest finding is that Twitter was more likely to label posts from verified users and less likely to label identical content from non-verified users. This study demonstrates how academic–industry collaborations can provide insights into typically opaque content moderation practices.
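To make the consistency measure in this study concrete, the sketch below shows one way a per-claim check could be computed: a claim counts as consistently handled if every post carrying it was labeled, or none were. The records and field names are hypothetical, not the Election Integrity Partnership’s data.

```python
# Hypothetical per-claim labeling-consistency check.
from collections import defaultdict

posts = [
    # (claim_id, platform, was_labeled) -- toy records for illustration
    ("claim-01", "facebook", True),
    ("claim-01", "facebook", True),
    ("claim-02", "facebook", True),
    ("claim-02", "facebook", False),
]

def consistency_rate(records, platform):
    """Share of claims whose posts were either always or never labeled."""
    by_claim = defaultdict(list)
    for claim_id, plat, labeled in records:
        if plat == platform:
            by_claim[claim_id].append(labeled)
    consistent = sum(1 for labels in by_claim.values()
                     if all(labels) or not any(labels))
    return consistent / len(by_claim) if by_claim else 0.0

print(consistency_rate(posts, "facebook"))  # 0.5 on this toy data
```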
Look Who’s Watching: Platform Labels and User Engagement on State-backed Media
Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are “state-media labels,” which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the effectiveness of information labels in controlled lab settings, few studies have examined how state-media labels affect users’ perceptions of content from state-backed outlets. This article proposes new methodological and theoretical approaches to investigate the effect of state-media labels on users’ engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labeling of five state-funded channels (Al Jazeera English [AJE], China Global Television Network, Russia Today [RT], TRT World, and Voice of America [VOA] News), this article analyses the effect that YouTube’s labels had on users’ engagement with state-backed media content.
Playing Both Sides: Russian State-Backed Media Coverage of the BlackLivesMatter Movement
Russian influence operations on social media have received significant attention following the 2016 US presidential elections. Here, scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster RT (Russia Today). But since 2017, a number of new news media providers linked to the Russian state have emerged, and less research has focused on these channels and how they may support contemporary influence operations. We conduct a qualitative content analysis of 2,014 Facebook posts about the #BlackLivesMatter (BLM) protests in the United States over the summer of 2020 to comparatively examine the overt propaganda strategies of six Russian-linked news organizations—RT, Ruptly, Soapbox, In The NOW, Sputnik, and Redfish.
The Gender Dimensions of Foreign Influence Operations
Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. By examining the political communication of feminism and women’s rights, we find, first, that foreign state actors co-opted intersectional critiques and countermovement narratives about feminism and female empowerment to demobilize civil society activists, spread progovernment propaganda, and generate virality around divisive political topics. Second, 10 amplifier accounts—particularly from the Russian IRA and GRU—drove more than one-third of the Twitter conversations about feminism and women’s rights. Third, high-profile feminist politicians, activists, celebrities, and journalists were targeted with character attacks by the Russian GRU. These attacks happened indirectly, reinforcing a culture of hate rather than attempting to stifle or suppress the expression of rights through threats or harassment. This comparative look at the online political communication of women’s rights by foreign state actors highlights distinct blueprints for foreign influence operations while enriching the literature about the unique challenges women face online.
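The amplifier-account finding above rests on a simple concentration measure: the share of a conversation produced by the most active accounts. A minimal sketch, using hypothetical tweet records rather than the study’s data, might look like this:

```python
# Hypothetical concentration measure: share of tweets from the top-k accounts.
from collections import Counter

tweet_authors = ["acct_a", "acct_a", "acct_b", "acct_c", "acct_a",
                 "acct_d", "acct_b", "acct_e", "acct_a", "acct_f"]

def top_k_share(authors, k):
    """Fraction of all tweets posted by the k most active accounts."""
    counts = Counter(authors)
    top = sum(count for _, count in counts.most_common(k))
    return top / len(authors)

print(top_k_share(tweet_authors, 2))  # acct_a (4) + acct_b (2) -> 0.6
```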
Disinformation Optimized: Gaming Algorithms to Amplify Junk News
Previous research has described how highly personalised paid advertising on social media platforms can be used to influence voter preferences and undermine the integrity of elections. However, less work has examined how search engine optimisation (SEO) strategies are used to target audiences with disinformation or political propaganda. This paper looks at 29 junk news domains and their SEO keyword strategies between January 2016 and March 2019. I find that SEO — rather than paid advertising — is the most important strategy for generating discoverability via Google Search. Following public concern over the spread of disinformation online, Google’s algorithmic changes had a significant impact on junk news discoverability. The findings of this research have implications for policymaking, as regulators think through legal remedies to combat the spread of disinformation online.
Sourcing and Automation of Political News and Information over Social Media in the United States, 2016-2018
Social media is an important source of news and information in the United States. But during the 2016 US presidential election, social media platforms emerged as a breeding ground for influence campaigns, conspiracy, and alternative media. Anecdotally, the nature of political news and information evolved over time, but political communication researchers have yet to develop a comprehensive, grounded, internally consistent typology of the types of sources shared. Rather than chasing a definition of what is popularly known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the phenomenon. To understand what social media users are sharing, we analyzed large volumes of political conversations that took place on Twitter during the 2016 presidential campaign and the 2018 State of the Union address in the United States. We developed the concept of “junk news,” which refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news. First, we found a 1:1 ratio of junk news to professionally produced news and information shared by users during the US election in 2016, a ratio that had improved by the State of the Union address in 2018. Second, we discovered that amplifier accounts drove a consistently higher proportion of political communication during the presidential election but accounted for only marginal quantities of traffic during the State of the Union address. Finally, we found that some of the most important units of analysis for general political theory—parties, the state, and policy experts—generated only a fraction of the political communication.
The Global Organization of Social Media Disinformation Campaigns
Social media has emerged as a powerful tool for political engagement and expression. However, state actors are increasingly leveraging these platforms to spread computational propaganda and disinformation during critical moments of public life. These actions serve to nudge public opinion, set political or media agendas, censor freedom of speech, or control the flow of information online. Drawing on data collected from the Computational Propaganda Project’s 2017 investigation into the global organization of social-media manipulation, we examine how governments and political parties around the world are using social media to shape public attitudes, opinions, and discourses at home and abroad. We demonstrate the global nature of this phenomenon, comparatively assessing the organizational capacity and form these actors assume, and discuss the consequences for the future of power and democracy.
The Politicization of the Domain Name System: Implications for Internet Security, Stability and Freedom
One of the most contentious and longstanding debates in Internet governance involves the question of oversight of the Domain Name System (DNS). DNS administration is sometimes described as a “clerical” or “merely technical” task, but it also implicates a number of public policy concerns, such as trademark disputes, infrastructure stability and security, resource allocation, and freedom of speech. A parallel phenomenon involves governmental and private forces increasingly altering or co-opting the DNS for political and economic purposes distinct from its core function of resolving Internet names into numbers. This article examines both the intrinsic politics of the DNS in its operation and specific examples and techniques of co-opting or altering the DNS’s technical infrastructure as a new tool of global power. The article concludes with an analysis of the implications of this infrastructure-mediated governance for network security, architectural stability, and the efficacy of the Internet governance ecosystem.
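One way the co-option or alteration of the DNS described in this article becomes empirically visible is when the same name resolves differently depending on which recursive resolver answers the query. The sketch below is illustrative rather than the article’s method; it compares answers from two public resolvers using the dnspython library, with the domain and resolver addresses chosen only as examples.

```python
# Illustrative check for resolver-dependent DNS answers (dnspython).
import dns.exception
import dns.resolver

def resolve_with(nameserver, name):
    """Resolve an A record through one specific recursive resolver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return sorted(rdata.address for rdata in resolver.resolve(name, "A"))
    except dns.exception.DNSException as exc:
        return [f"error: {exc.__class__.__name__}"]

name = "example.com"  # example domain
for ns in ["8.8.8.8", "1.1.1.1"]:  # two public resolvers, for comparison
    print(ns, resolve_with(ns, name))

# Diverging answers for the same name can indicate resolver-level blocking,
# redirection, or other manipulation of DNS resolution.
```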