
What Are Social Media Bots? And How Do They Impact National Security?

March 3, 2022

During the 2014 annexation of Crimea, it is believed that Russia used troll armies to inflate pro-Russian narratives on social media. Flash forward to 2022, and social media remains an active front in the ongoing conflict between Russia and Ukraine. 

The technology behind these malign influence campaigns uses social media bots, which automate social interactions—for good or ill. Monitoring open-source social media data is valuable for understanding bot activity and emerging tactics that can threaten national security interests.

What exactly are social media bots, how do they impact national security, and how can intelligence analysts separate bots from real people?

What are social media bots?

The Office of Cyber and Infrastructure Analysis (OCIA) defines social media bots as:

“…programs that vary in size depending on their function, capability, and design; and can be used on social media platforms to do various useful and malicious tasks while simulating human behavior. These programs use artificial intelligence, big data analytics, and other programs or databases to imitate legitimate users posting content.”

Social media bots essentially automate social media interactions, mimicking human activity but operating at a scale not possible for human users.

Bots can be automated or semi-automated. Automated bots run independently based on human-set parameters, whereas semi-automated bots combine these parameters with some human management. Either approach allows operators to create fake social media accounts and personas.

The term “social media bot” gets a bad rap, but not all social media bots are malicious. For example, they are used for customer service via chatbots that answer inquiries and make sales. They can also deliver breaking news and events to the public, and even support counter-terrorism.

Malicious applications include:

  • Terrorist recruitment, promoting content to radicalize vulnerable audiences
  • Online harassment and hate speech
  • Disinformation and malign influence, circulating conspiracy theories and fake news
  • Market manipulation by spreading false information about a company or industry

According to the OCIA, social media bots can be deployed in five ways:

  • Click farming and like farming hire real people to generate internet traffic by liking and reposting content.
  • Hashtag hijacking leverages hashtags to target a specific audience.
  • Repost storms use a network of bots that repost each other’s content.
  • Sleeper bots remain mostly dormant, then cycle through periods of intense activity.
  • Trend jacking, also known as a watering hole attack, leverages trending topics to appeal to an audience.

Why social media bots matter

Malicious social media bots have significant impacts on:

  • Political processes. Social media bots can influence public opinion, build mistrust between populations and their governments, disrupt democratic processes, and exacerbate geopolitical tensions.
  • Financial security. Misleading content takes a financial toll, costing the global economy an estimated $78 billion each year. This includes the cost of reputation management, stock market hits, and countering disinformation.
  • Public health and safety. Social media bots can co-opt social movements, shift public opinion around global issues like climate change, influence vaccination rates, and recruit terrorists. The European Parliament considers disinformation a human rights issue that violates privacy, democratic rights, and freedom of thought.

The OCIA states that social media bot usage, and the malicious bot behavior that comes with it, is increasing on networks in the United States. Bots likely account for somewhere between 5 and 15 percent of users on social networks like Twitter, though exact numbers are hard to estimate. And according to The New York Times, there is some debate about how prevalent social media bots actually are.

Exact usage rates aside, the online information ecosystem is becoming a priority for adversaries and nation-states. Social media bots are central to this landscape, and their impacts on national security are likely to scale and evolve in the coming years.

Social media bots and intelligence analysis

The public and private sectors have navigated social media bots throughout election cycles, the COVID-19 pandemic, and political conflict. For intelligence analysts, monitoring bot activity requires distinguishing bots from real accounts.

You might be dealing with a social media bot if the account (a rough scoring sketch follows this checklist):

  1. Has higher activity levels than the average person and posts around the clock.
  2. Uses multiple languages in an attempt to target global audiences.
  3. Focuses on specific political narratives, propaganda, or misleading content.
  4. Uses a generic profile picture.
  5. Has a username made up of random letters and numbers.
  6. Was created within the past year.
  7. Uses prose that sounds unnatural.
  8. Repeats the same emoticons and punctuation, such as exclamation points.
  9. Likes and shares content but has very few original comments or posts.
  10. Likely has fake followers that engage in similar activities or originate in countries associated with click farms, like China.
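
For analysts who want to make this checklist repeatable, the signals can be folded into a rough rule-based score. The sketch below is a minimal illustration only: the Account fields, the thresholds, and the example values are assumptions chosen for readability, not values validated against real bot data.

```python
# A minimal, rule-based scoring sketch of the checklist above.
# All fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    languages_used: int
    account_age_days: int
    has_generic_avatar: bool
    username: str
    original_post_ratio: float  # original posts divided by total activity

def bot_indicator_score(acct: Account) -> int:
    """Count how many simple bot indicators an account trips."""
    score = 0
    if acct.posts_per_day > 50:          # far above typical human activity
        score += 1
    if acct.languages_used > 2:          # posting in many languages
        score += 1
    if acct.account_age_days < 365:      # created within the past year
        score += 1
    if acct.has_generic_avatar:          # stock or default profile picture
        score += 1
    if sum(c.isdigit() for c in acct.username) >= 5:  # username heavy on digits
        score += 1
    if acct.original_post_ratio < 0.1:   # mostly likes/reposts, few original posts
        score += 1
    return score

# Example: a suspicious-looking account trips most of the checks.
suspect = Account(posts_per_day=120, languages_used=3, account_age_days=90,
                  has_generic_avatar=True, username="user83920174",
                  original_post_ratio=0.02)
print(bot_indicator_score(suspect))  # prints 6; higher means more bot-like
```

In practice, teams tune these thresholds against accounts they have already labeled and treat the score as a triage signal rather than a verdict.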

However, according to a 2019 University of California study, bots are becoming more human-like and harder to detect. Critical analysis alone may not be enough to spot social media bots, especially at scale, as adversaries develop more advanced techniques. Widespread social media bot usage means that analysts need advanced tools to facilitate monitoring.

Technology companies and research groups are using AI techniques to enable machine-based social media bot detection (such as Botometer, developed by Indiana University). This detection technology isn’t foolproof, but AI can support human analysis, especially when intelligence teams are bombarded with open-source data.
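
To make "machine-based detection" concrete, the sketch below trains a small supervised classifier on per-account features. Everything in it is an assumption for illustration, including the chosen features, the handful of synthetic training rows, and the model itself; tools like Botometer rely on far richer feature sets and large labeled corpora.

```python
# A minimal sketch of supervised bot classification, assuming scikit-learn
# is installed. The features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed per-account features:
# [posts_per_day, follower_following_ratio, account_age_days, fraction_of_reposts]
X_train = np.array([
    [150.0, 0.01,  40, 0.98],   # bot-like
    [200.0, 0.05,  20, 0.95],   # bot-like
    [  3.0, 1.20, 900, 0.30],   # human-like
    [  5.0, 0.80, 400, 0.40],   # human-like
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Score a new, unseen account; predict_proba returns [P(human), P(bot)].
new_account = np.array([[120.0, 0.02, 60, 0.90]])
print(clf.predict_proba(new_account)[0][1])  # estimated probability it is a bot
```

The value of this kind of model is scale: it can score millions of accounts that no human team could review by hand, with analysts validating the highest-risk hits.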

Over the last decade, social media platforms have become an important resource for national security teams as open-source data takes on a larger role in intelligence work. These networks are valuable for illuminating public safety risks like disinformation and terrorist recruitment, which rely on social media to reach target audiences.

Bots are now widely used to scale and target malicious social media activities. As this tactic evolves, intelligence teams must leverage advanced solutions to monitor malicious social media bots and protect vulnerable populations from their influence.

Begin your free trial today.