How Digital Propaganda May Affect EU Elections 2019: The Good, the Bad and the Ugly
The elections for the European Parliament this year could be viewed as the first truly digital EU elections. This is not so much because people will be able to register and, in the case of Estonians, even vote online, but because digital platforms are expected to weigh heavily on the way in which European citizens mobilise, campaign and vote in support of their parties in these elections. Normally, this should constitute good news given the declining voter turnout in European elections despite the successive rounds of enlargement in the past two decades. However, this is hardly a “normal” time for conducting elections, as the potential disruptive effect of digital disinformation looms large in the minds of many in the context of the recurring controversies surrounding the results of the U.S. presidential elections and of the Brexit referendum in 2016. What to expect then in terms of how digital disinformation may interfere in these elections? Three possible scenarios could be contemplated in response to this question.
The Good: Resilience Works
Compared with the previous EU elections in 2014, EU institutions are arguably much better prepared to deal with disinformation, not least because they have acquired a better understanding of how disinformation operates in the European information space via a dedicated network of affiliates, which by now includes the East StratCom Task Force, the Hybrid Fusion Cell and the Centre of Excellence for Countering Hybrid Threats.
The Rapid Alert System set up among the EU institutions and Member States to facilitate the sharing of insights related to disinformation campaigns and to coordinate responses is already operational and is expected to contribute to a tangible reduction of online disinformation during the elections. Furthermore, there are encouraging signs that the public has become more resilient to the consumption of “fake news,” both in terms of the number of people accessing false information and the amount of time they spend doing so. According to a recent study, the most prominent identified false news websites in France and Italy are far less popular than major established news sites, although the difference between false news sites and established news sites in terms of interactions on Facebook is less clear-cut.
If the trend holds (and this may well be a big “if”), then one should expect disinformation to play a relatively marginal role in the elections—that is, to still be part of the online conversation but without dominating it. The graph in Fig. 1, which is based on hashtag co-occurrences with #EUelections2019 posted during March 2019, tentatively supports this view. While some suspicious hashtags such as #soros, #thegreatawakening, #maga and #qanon are still in the mix, they fall just outside the core of the conversation, which mainly revolves around hashtags that are neutral or supportive of the EU (#voteeu, #europa, #eupol, #europeanspring, etc.).
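The co-occurrence analysis behind a graph like Fig. 1 can be reduced to a simple counting step: for every tweet containing the anchor hashtag, tally which other hashtags appear alongside it. The following is a minimal sketch of that step, assuming tweets have already been collected as lists of hashtags; the function name and the sample data are illustrative, not taken from the study behind the figure.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(tweets, anchor="#euelections2019"):
    """Count hashtags co-occurring with an anchor hashtag.

    Each tweet is given as a list of hashtags; matching is
    case-insensitive and duplicates within a tweet are ignored.
    """
    counts = Counter()
    for tags in tweets:
        tagset = {t.lower() for t in tags}
        if anchor in tagset:
            for tag in tagset - {anchor}:
                counts[tag] += 1
    return counts

# Toy sample (hypothetical tweets, not real data)
sample = [
    ["#EUelections2019", "#voteeu", "#europa"],
    ["#EUelections2019", "#qanon", "#maga"],
    ["#EUelections2019", "#voteeu"],
    ["#brexit", "#maga"],  # no anchor hashtag, so it is ignored
]
print(cooccurrence_counts(sample).most_common(3))
```

Edge weights in the actual graph would be these pair counts; hashtags with high counts sit in the core of the conversation, while low-count hashtags (like the suspicious ones in Fig. 1) end up on the periphery.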
The Bad: “Unruly” Counterpublics Strike Back
One interesting development of disinformation warfare is the rise of the "counterpublics," a term originally coined by Nancy Fraser and which refers to “the parallel discursive arenas where members of subordinated social groups invent and circulate counter discourses to formulate oppositional interpretations of their identities, interests and needs.” While counterpublic theory has been normatively associated with emancipatory inquiries into the subaltern status of traditionally marginalised social groups (women; workers; ethnic, racial or sexual minorities), its framework offers analytical currency for studying “unruly” counterpublics as well, such as right-wing populist movements or more radical groups that may even reject basic democratic principles.
In other words, while in the previous elections in 2014 entities affiliated with the Russian government were the main perpetrators of disinformation in the European infospace, local actors—the “unruly” counterpublics—might take centre stage in the current elections and employ disinformation as a tactical instrument for making their voice heard, recruiting sympathizers and resetting the agenda of public discussion. External actors might still seek to promote disinformation for strategic purposes, but mainly from a supporting position, by amplifying content produced by counterpublics—either via traditional channels, or by using embassies as mini-Sputniks or bullhorns for disseminating third-party produced disinformation—in addition to their own content. A recent study examining the political debate in the Spanish digital space has noted the declining influence of mainstream parties and the rising role of “unruly” counterpublics such as Vox, the new far-right party, whose anti-immigration agenda has been actively promoted by disinformation networks on Facebook and WhatsApp.
Fig. 2: Most active communities in the Spanish digital landscape ahead of the EU Parliamentary elections (Source: Alto Data Analytics)
The Ugly: The Machines Take Over
Statistics show that global traffic generated from bots has already surpassed human-generated internet traffic and that harmful bots (29 percent) have the edge over helper bots (23 percent). From a disinformation perspective, the impact of bots is more than troubling. According to a recent study, bots play a critical role in driving the viral spread of content from low-credibility sources: six percent of Twitter bots, for instance, proved enough to spread 31 percent of all tweets linking to low-credibility content and 34 percent of all articles from low-credibility sources.
The predicted arrival of “deep fakes” (hyper-realistic, difficult-to-debunk fake video and audio content) promises to take the disinformation game to a whole new level by enhancing the “credibility” of the malicious content, reducing the time for reaction and widening the circle of the actors involved. Advances in Artificial Intelligence (AI) technology will also make it easier to manipulate human emotions and reach the intended audience with a high degree of precision through individually tailored messages targeted at different audiences across multiple media channels simultaneously and automatically. Evidence to support the view that disinformation in European elections may have entered a more sophisticated, AI-driven stage of operation is already available.
According to a recent report, around half of all Europeans (up to 241 million) could have been exposed to disinformation promoted by 6,700 so-called “bad actors” posting content on social media accounts linked to Russia even before the electoral campaign had formally started. Disinformation was apparently pushed via automated bots programmed to pick up specific text cues, as well as by humans using software to communicate through multiple accounts at the same time. For example, after French President Emmanuel Macron published an article on March 4 about the future of Europe, “bad actor” activity increased by 79 percent on March 5 compared with the previous day, mostly seeking to discredit his ideas (see Fig. 3).
Fig. 3: “Bad Actor” Social Media Activity vs. Google Search Trends (1-10 March 2019, Source: SafeGuard Cyber)
To conclude, the stakes of the European elections are high for the future direction of the EU, with some polls predicting that anti-European parties could become the second-largest group in the parliament, with up to 35 percent of seats. While EU institutions are much better prepared than five years ago to deal with disinformation, the rise of “unruly” counterpublics and technological advances are set to complicate the electoral process and its outcome.
This piece was originally published by the USC Center on Public Diplomacy