
Samantha Bradshaw is a scholar of technology, security, and democracy. She is the director of the Center for Security, Innovation & New Technology (CSINT) at American University (AU) and an Assistant Professor at AU’s School of International Service. Samantha also holds a Perry World House Lightning Scholar Fellowship (2024-25) and is a Fellow at the Center for International Governance Innovation (CIGI) in Waterloo, Canada.
Researching Disinformation, Social Media and Democracy.
Samantha is a leading expert on new technologies, security, and democracy. Her research examines the producers and drivers of disinformation, and how technologies such as artificial intelligence, automation, and big data analytics enhance and constrain the spread of disinformation online. At the forefront of theoretical and methodological approaches for studying, analyzing, and explicating the complex relationship between technology and democracy, Samantha’s research has helped advance academic debate, public understanding, and policy discussions around the impact of emerging technologies on political expression, elections, trust & safety, and privacy.
Recent Publications
While social media disinformation has received significant academic and policy attention, more consequential forms of intentional manipulation target the underlying digital infrastructures upon which society depends. Infrastructure-based deception, less visible than deception targeting content and platforms, has consequences for internet security, stability and trust. This article examines terrains of disinformation in digital infrastructure, including in the Domain Name System, access and interconnection, public key infrastructures, cyber-physical systems and emerging technologies. Infrastructure disinformation is largely a cybersecurity problem.
Disinformation spread via digital technologies is accelerating and exacerbating violence globally. There is an urgency to understand how coordinated disinformation campaigns rely on identity-based disinformation that weaponizes racism, sexism, and xenophobia to incite violence against individuals and marginalized communities, stifle social movements, and silence the press.
For almost a decade, the study of misinformation has taken priority among policy circles, political elites, academic institutions, non-profit organizations, and the media. Substantial resources have been dedicated to identifying its effects, how and why it spreads, and how to mitigate its harm. Yet, despite these efforts, it can sometimes feel as if the field is no closer to answering basic questions about misinformation’s real-world impacts, such as its effects on elections or links to extremism and radicalization.
During the 2022 Russian invasion of Ukraine, Russia was accused of weaponizing its state-backed media outlets to promote a pro-Russian version of the war. Consequently, Russian state-backed media faced a series of new sanctions from Western governments and technology companies. While some studies have sought to identify disinformation about the war, less research has focused on understanding how these stories come together as narratives, particularly in non-English language contexts. Grounded in strategic narrative theory, we analyze Russian state-backed media coverage of the Ukraine war across 12 languages.
Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently.
Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are “state-media labels,” which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the effectiveness of information labels in controlled lab settings, few studies have examined how state-media labels affect users’ perceptions of content from state-backed outlets. This article proposes new methodological and theoretical approaches to investigate the effect of state-media labels on users’ engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labeling of five state-funded channels (Al Jazeera English [AJE], China Global Television Network, Russia Today [RT], TRT World, and Voice of America [VOA] News), this article analyzes the effect that YouTube’s labels had on users’ engagement with state-backed media content.
Russian influence operations on social media have received significant attention following the 2016 US presidential elections. Here, scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster RT (Russia Today). But since 2017, a number of new news media providers linked to the Russian state have emerged, and less research has focused on these channels and how they may support contemporary influence operations.
Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. By examining the political communication of feminism and women’s rights, we find, first, that foreign state actors co-opted intersectional critiques and countermovement narratives about feminism and female empowerment to demobilize civil society activists, spread progovernment propaganda, and generate virality around divisive political topics.
Previous research has described how highly personalised paid advertising on social media platforms can be used to influence voter preferences and undermine the integrity of elections. However, less work has examined how search engine optimisation (SEO) strategies are used to target audiences with disinformation or political propaganda. This paper looks at 29 junk news domains and their SEO keyword strategies between January 2016 and March 2019. I find that SEO — rather than paid advertising — is the most important strategy for generating discoverability via Google Search.
Social media is an important source of news and information in the United States. But during the 2016 US presidential election, social media platforms emerged as a breeding ground for influence campaigns, conspiracy, and alternative media. Anecdotally, the nature of political news and information evolved over time, but political communication researchers have yet to develop a comprehensive, grounded, internally consistent typology of the types of sources shared. Rather than chasing a definition of what is popularly known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the phenomenon.
Digital privacy concerns are primarily viewed through the lens of personal data and content. But beneath the layer of content, less visible issues of infrastructure design and administration raise significant privacy concerns. The Internet's Domain Name System (DNS) is one such terrain. There is already a great deal of attention around how the DNS intersects with freedom of speech, trademark disputes, cybersecurity challenges, and geopolitical power struggles in the aftermath of transitioning the historic U.S. oversight role to the global multistakeholder Internet governance community. However, the privacy implications embedded in the technical architecture of the DNS have received less attention, perhaps because these issues are concealed within complex technical arrangements outside of public view.
Social media has emerged as a powerful tool for political engagement and expression. However, state actors are increasingly leveraging these platforms to spread computational propaganda and disinformation during critical moments of public life. These actions serve to nudge public opinion, set political or media agendas, censor freedom of speech, or control the flow of information online. Drawing on data collected from the Computational Propaganda Project’s 2017 investigation into the global organization of social-media manipulation, we examine how governments and political parties around the world are using social media to shape public attitudes, opinions, and discourses at home and abroad.
Press & Media Engagement

I speak regularly with journalists working on issues related to social media, elections, privacy & surveillance, freedom of speech, and democracy. My research and writing have been featured in numerous local and global outlets, including The New York Times, The Washington Post, CNN, The Globe and Mail, and Reuters.
Public Speaking & Events
Technology and the next frontier in human rights. Hertie School.
I have given lectures and keynotes at organizations around the world, including international organizations such as UNESCO and NATO; universities including Harvard, MIT, and Cambridge; and other NGOs, think tanks, and research institutions. You can view a list of my past speaking engagements and access my PowerPoint presentations for any previous events.