By Flashpoint Global Intelligence Team
Flashpoint analysts have been tracking disinformation campaigns and monitoring tools within online illicit communities that can be used to attack electoral infrastructure or spread disinformation.
A successful attack on election infrastructure would have very different consequences from those typically associated with conventional attacks against corporate networks. Unlike financial transactions, votes cannot be inspected and annulled individually without consequences for the whole system. At the same time, even a slight suspicion of compromise or fraud risks causing a critical number of voters to reject the system or individual results as fraudulent – as seen most recently with the 2020 Democratic caucus in Iowa. Any voting system that electronically encodes votes or transmits voting data over a network risks being targeted even if it is not connected to the internet (e.g. standalone voting machines).
Challenges with Voting Technology:
Due to the risk of security vulnerabilities at various stages of electronic voting processes, to date only a small number of countries use online voting extensively in national elections (e.g. Estonia). In the United States, voting infrastructure is the responsibility of individual states, and only a handful have experimented with online voting solutions. States are unlikely to deploy new online options before the November 2020 election, but pressure from the current coronavirus pandemic is sure to propel this technology forward for future elections. Depending on the design of the actual system, online voting can come with significant challenges.
Cryptographic encoding of votes can also be undermined. For instance, a 2020 MIT report uncovered a vulnerability in DemocracyLive's system, in which sensitive information was transmitted together with votes in a potentially unsafe way. Storing encrypted votes on a server also leaves them potentially susceptible to exfiltration or ransomware attacks.
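The DemocracyLive finding illustrates a general design principle: identifying voter data should not travel or be stored alongside the ballot payload. A minimal sketch of that idea – hypothetical field names, Python standard library only, not a description of any real vendor's system – pseudonymizes the voter identity with a keyed hash before the record is transmitted or stored:

```python
import hashlib
import hmac
import json
import secrets

# Server-side secret used to derive voter pseudonyms. In a real system this
# would live in a key-management service or HSM (assumption for this sketch).
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(voter_id: str) -> str:
    """Derive a stable pseudonym so eligibility can be checked without
    storing the raw identity next to the ballot."""
    return hmac.new(PSEUDONYM_KEY, voter_id.encode(), hashlib.sha256).hexdigest()

def build_ballot_record(voter_id: str, ballot_choice: str) -> dict:
    # The record that leaves the client carries only the pseudonym,
    # never the raw identity or other PII.
    return {
        "voter_pseudonym": pseudonymize(voter_id),
        "ballot": ballot_choice,  # would itself be encrypted in a real system
    }

record = build_ballot_record("voter-12345", "Candidate A")
assert "voter-12345" not in json.dumps(record)  # raw identity never leaves
```

An HMAC with a server-held secret is used here rather than a plain hash so that an attacker who exfiltrates the records cannot recover identities by hashing a dictionary of known voter IDs.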
The US isn't the only country facing challenges with electronic voting, of course, particularly online voting. Recently, an online voting system in Switzerland was found to have a cryptographic vulnerability that would allow someone with inside access to change votes. Experimental online voting in two regions during Russia's recent constitutional referendum – which experts dismissed as highly fraudulent – led to leaks of sensitive personal data.
These are only a few of the challenges with voting technology that Flashpoint analysts are tracking. Another key consideration when looking at an election is the role of disinformation and misinformation.
The Role of Disinformation:
Disinformation is false information intended to mislead its recipient; it differs from misinformation (false information, no malicious intent) and malinformation (accurate information, malicious intent). Disinformation is a constant problem that is often, but not always, related to electoral campaigns. Flashpoint analysts have observed a slew of misinformation and disinformation (as well as a heightened interest in tools used to spread it) within online illicit communities and mainstream social media in the context of the novel coronavirus, for instance.
Campaigns carrying disinformation or malinformation range from short-term, opportunistic efforts focused on a single event to long-term, strategic campaigns that build devoted audiences or amplify discord, with everything in between. Purposes vary: drowning out “undesirable” content, protecting or enlarging an information space, or creating mistrust and animosity among adversaries. Content shared in these campaigns often sits in the grey zone between the clearly violative and the clearly permissible as defined by the evolving standards of social media companies, but it gradually evolves into divisive or harmful content over time, making use of non-organic content promotion tools or simply the structure of a given platform.

Having studied the impact of disinformation campaigns driven by extremist communities, for example, Flashpoint analysts often see memes as an entry point before more aggressive content is served. Disinformation narratives sometimes take a fundamentally truthful, or at least plausible, core message but add a spin favoring the campaign's aim, stance, or ideology. Disinformation ecosystems even produce their own “experts,” modeled on actual experts, who command an air of authority.
When we think about misinformation and disinformation, attribution is a growing challenge, in part because much of the underlying infrastructure is shared among actors. State-sponsored campaigns are usually spotted based on coordination across verticals and the principle of “who benefits.” Very often, however, the key question is not where a message originates but who amplifies its reach: both domestic and foreign spreaders of disinformation have relied on “trusted” domestic influencers – public or media personalities with broad audiences – to break into the political mainstream. Without this kind of amplification, disinformation campaigns are often confined to fringe communities.
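Coordination-based attribution often starts from simple behavioral signals. A toy illustration of one such signal – hypothetical sample data, Python standard library only, far cruder than any production detection pipeline – flags clusters where several distinct accounts post identical text within a narrow time window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sample posts: (account, text, timestamp)
posts = [
    ("acct_a", "Election servers were hacked!", datetime(2020, 7, 1, 12, 0, 5)),
    ("acct_b", "Election servers were hacked!", datetime(2020, 7, 1, 12, 0, 9)),
    ("acct_c", "Election servers were hacked!", datetime(2020, 7, 1, 12, 0, 12)),
    ("acct_d", "Great weather today", datetime(2020, 7, 1, 12, 1, 0)),
]

def coordinated_clusters(posts, window=timedelta(seconds=30), min_accounts=3):
    """Group posts by identical text; flag a cluster when at least
    `min_accounts` distinct accounts posted it within `window`."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))
    flagged = {}
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        span = items[-1][1] - items[0][1]  # time between first and last post
        if len(accounts) >= min_accounts and span <= window:
            flagged[text] = sorted(accounts)
    return flagged

print(coordinated_clusters(posts))
# Flags the three accounts that pushed the same message within seconds.
```

Real attribution work layers many such signals (shared infrastructure, posting cadence, cross-platform reuse) and still requires analyst judgment; a burst of identical posts alone proves coordination, not sponsorship.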
While there is no surefire way to secure an election that relies on electronic infrastructure and can be influenced by disinformation campaigns, understanding the risks – as well as the tools that can be used to unduly influence a democratic process – is key to developing better rules and safeguards. Security practitioners, as well as voters in general, can educate themselves to ensure they have the whole picture.
If you’re interested in discussing more in-depth examples of the above topic, please request a demo here.