Exec Insights: Detecting Foreign Online Message Manipulation

Dr. David A. Bray is both a Distinguished Fellow and co-chair of the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center. He is also a non-resident Distinguished Fellow with the Business Executives for National Security, and a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations. He is Principal at LeadDoAdapt Ventures and has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000-2005. 

Detecting online message manipulation by foreign actors is a topic relevant to all organizations (not just government ones) operating at the intersection of tech, data, and global issues nowadays. It is also a topic that requires an interdisciplinary team of subject-matter experts to navigate, and an area of focus for Arlington-based Two Six Technologies, which does interesting R&D and technical innovation with various government customers including DARPA, the Department of State, U.S. Cyber Command, and more.

With this post, I’m going to dive into the topic of detecting online message manipulation and explore the bottom line up front (BLUF) messages that I found interesting, writing both as someone who has worked in strategic and operational contexts related to this topic and as an attentive reader of this whitepaper searching for the elements most relevant to the challenges faced by senior executives.

If there’s a key takeaway from this specific whitepaper on detecting online message manipulation, it’s that context matters when gathering meaningful insights as to when and where online messages may be manipulated by actors, be they foreign, as in this specific case, or otherwise. Over the last decade, foreign actors have become increasingly sophisticated, using subtler and more nuanced methods to promote messages inauthentically as well as to censor messages.

Moreover, while there’s been a lot of attention to possible foreign online message manipulation in the last few years, I would add this is not a “net new” thing by any stretch of the imagination. If anything, it has been around since the early days of the World Wide Web, circa the mid-to-late 1990s. In the years since, a growing number of foreign actors have become increasingly skilled at using it to promote specific messages into the information spheres of other countries, and to extend the domestic online message manipulation already happening in their own countries onto the larger global stage.

A second key takeaway provided by this paper is that foreign online message manipulation can involve automated (bot) or human actors pushing more extreme messaging, including campaigns that combine the two methods in sophisticated ways. The paper notes: “… manipulators can be bots or humans… [and] will typically push more extreme narratives that may be aligned with the foreign adversary’s stance but are too controversial for official accounts to post.”

Moreover, there’s an even more challenging aspect of foreign online message manipulation: overcoming online efforts intended to discredit, or sow dismay among, those who support dissidents or who share information counter to whatever an official state line might be. The paper mentions that foreign online message manipulators “… may push narratives that do not align with a government’s official stance but are intended to discredit the opinions of dissidents. For example, sources have shown that Iran-backed actors will flood Instagram with radicalized memes that don’t reflect the official position of the Iranian government but are intended to influence the beliefs of U.S. social media users.”

Essentially, foreign online message manipulation, whether by bots or human actors, can be dual-sided, aiming both to promote certain messages and to discredit, or drive a wedge to isolate, the opinions of a dissident group.

A third and final key takeaway is that awareness of the tactics associated with message manipulation is the first step toward getting a handle on these evolving detection challenges. For example, “PRC authorities use a method called ‘astroturfing,’ in which they organize streams of inauthentic posts designed to create a veneer of authentic, grassroots support for certain policies and viewpoints.”

As someone who lived through a disinformation attack that involved bots and some (apparently hired, in select cases) human actors, I find abusive astroturfing to be especially problematic in creating a veneer of authentic support for certain policies, if only because it then casts doubt on genuine grassroots activities and erodes trust in free societies like ours.
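To make the astroturfing tactic concrete: one crude but common detection signal is many distinct accounts posting near-identical text within a short time window. The sketch below is purely illustrative; the post data, normalization, thresholds, and function names are my own assumptions, not the whitepaper’s or Two Six Technologies’ actual methods.

```python
from collections import defaultdict

def normalize(text):
    """Lowercase and collapse whitespace so near-identical posts group together."""
    return " ".join(text.lower().split())

def flag_astroturf_clusters(posts, min_accounts=3, window_secs=3600):
    """Flag near-identical messages posted by many distinct accounts
    within a short time window -- one crude astroturfing signal.

    posts: list of (account_id, timestamp_secs, text) tuples.
    Returns a list of (normalized_text, sorted account_ids) per cluster.
    """
    clusters = defaultdict(list)  # normalized text -> [(timestamp, account)]
    for account, ts, text in posts:
        clusters[normalize(text)].append((ts, account))

    suspicious = []
    for text, hits in clusters.items():
        hits.sort()
        # For each post, count distinct accounts posting the same text
        # within the following time window; flag if above threshold.
        for start_ts, _ in hits:
            accounts_in_window = {a for t, a in hits
                                  if start_ts <= t <= start_ts + window_secs}
            if len(accounts_in_window) >= min_accounts:
                suspicious.append((text, sorted(accounts_in_window)))
                break
    return suspicious

# Hypothetical example: four accounts push the same slogan within minutes.
posts = [
    ("acct1", 0,   "Great policy, everyone supports it!"),
    ("acct2", 60,  "great policy,  everyone supports it!"),
    ("acct3", 120, "Great policy, everyone supports it!"),
    ("acct4", 180, "GREAT POLICY, EVERYONE SUPPORTS IT!"),
    ("acct5", 0,   "I had soup for lunch."),
]
print(flag_astroturf_clusters(posts))
```

Real campaigns are of course far subtler, varying wording, timing, and account behavior, which is exactly why the whitepaper stresses context and interdisciplinary expertise over any single signal.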

The whitepaper includes a cautionary note that “Authoritarian governments intimidate people into self-censorship as a form of censorship. For example, nearly 22% of content in 2023 authored by pro-PRC inauthentic accounts on global social media aimed to intimidate overseas dissidents into silence by slandering them… many of these critics publicly testified how much these campaigns damaged their lives.”

For those seeking to speak counter to the messages of authoritarian regimes, such online slander and intimidation techniques impose a very real cost on freedom of expression and on dissent itself.

For those of us fortunate to live in free societies like our own, we need to be mindful that our own message spaces can be (and frankly, are being) manipulated. The cost of the freedoms we have comes with the need for vigilant awareness that, in an increasingly interconnected world, our forums for online communication include bot and human actors engaged in foreign online message manipulation. Detecting when this happens represents an important part of developing effective deterrence against such actors.