As Europeans prepare to elect a new EU parliament next week, the world’s biggest tech companies say they have yet to see any mass campaign to subvert or suppress the vote.
There were fears that the poll, in which 427m citizens are eligible to vote, would be a particular target because of the difficulty of policing content posted in dozens of languages across 28 EU countries.
With EU politicians pressing them to take action, Facebook, Twitter and Google all set up specialist teams to search for evidence of malicious propaganda. But so far each company has instead reported a pause in the disinformation war.
“We are always seeing a baseline level of it but nothing that has coalesced around a specific topic, theme, or group or even country,” said Yoel Roth, head of site integrity at Twitter.
Richard Allan, Facebook’s head of global public policy, said there had been no “published accounts of attacks specifically related to the EU election today.”
Similarly, Clara Sommiere, from Google’s EU public policy team, confirmed: “So far, we haven’t seen any interference on the platforms.”
Changes brought in after the 2016 US presidential election and the UK’s Brexit referendum appear to have helped. In 2016, Russian nationals bought $100,000 of political ads on Facebook that were viewed by 4m-5m people before the election, including voter-suppression messages urging African Americans to vote from home by text.
Now, all three platforms have pages showing who has bought political ads and how much they have spent.
But while there is no evidence of any co-ordinated, state-sponsored, disinformation campaigns, the platforms continue to struggle with ideological and often false information posted by national political parties and activists.
“The actors are transnational, non-state actors, like alt-right and far-right across Europe, a lot of them engaging with local groups,” said Sasha Havlicek, chief executive of the Institute for Strategic Dialogue, a think-tank that researches online propaganda.
“You see domestic political parties, populist parties that bleed across borders, so it’s really hard for governments and tech companies to address domestic political activity.”
Earlier this month, the activist non-profit group Avaaz reported three far-right Spanish networks on Facebook that had reached 1.4m people, as well as 23 Italian pages with 2.46m followers, all sharing anti-immigration, anti-LGBT, anti-Islam, antifeminist and other divisive content, including false information about immigrants.
For example, the most active Italian page supporting the far-right League party, “Lega Salvini Premier Santa Teresa di riva”, had been sharing a video that appeared to show migrants smashing a police car. The video, which has almost 10m views, was in fact a scene from a film and had previously been debunked. Facebook’s own systems did not catch the activity, but the company took the pages down after vetting the accounts.
“We expect an increase [of misinformation] around elections because financially motivated and politically motivated actors capitalise on these issues,” said Tessa Lyons, who is in charge of misinformation on the Facebook news feed.
A Twitter spokesperson added: “People of all political persuasions engage in aggressive, partisan speech, which can sometimes fall into targeted abuse and other rule violations. It’s a passionate time for people, and there will always be enforcement actions we have to take to protect the health of the conversation.”
In response, each company has launched its own targeted effort to monitor “civic” conversations across Europe about hot-button policy issues including immigration, religion, family values and climate change.
Facebook said it has been preparing for the EU elections by building up large international teams, drawing from a pool of 500 employees devoted to global elections. The company has widely publicised that it has a staff of 30,000 people working on safety and security, which is three times what it had at the start of 2017.
The social media giant proactively watches for manipulation using a combination of automated systems, which can identify and remove fake accounts or groups, and human investigators, who scan the horizon for new threats.
This includes its 40-person Dublin operations team with representatives from each EU country, who will scrutinise the elections via Facebook and Instagram 24/7 during the polling period. “In the US we have academics on voter suppression who work with our market teams, [asking] how will voter suppression look in this market?” said Lexi Sturdy, who oversees the Dublin election team.
Content that violates Facebook’s policies, such as hate speech or voter suppression (for example, false polling dates or venues), is taken down as soon as it is flagged. But the majority of propaganda Facebook sees does not violate its rules, so it is left on the platform, where it is fact-checked or demoted by reducing a post’s “relevance score”.
“These networks are built to build communities and scale them, and you can do it without hate speech or bullying or [threats of] violence. So if we focused only on content we would be limited,” said Nathaniel Gleicher, cyber security chief at Facebook.
“So the other piece is behaviour . . . when we take action on [these] networks, it’s not about the political alignment, it’s about the act they are using deceptive techniques to conceal identity.”
In contrast to Facebook’s manpower-heavy approach, Twitter said it has invested most of its resources in automating the detection processes that feed into human review.
“We developed new technology to identify users posting very high volumes of hashtags in an attempt to get them trending. We are focused on finding a technical answer and a team of experts that can apply to every election,” said Mr Roth.
Google has focused its efforts on displaying the most reliable information regarding electoral processes, for instance creating a custom box for each EU country and language for those searching for voting information.
But campaigners are concerned that while the blunt-force approach has addressed instances of obviously false news, it is hard to review the platforms’ work without access to their data.
“It’s increasingly difficult as researchers to find the ‘silver bullet’ of attribution, partly due to limited data access to platforms like Facebook,” said Ms Colliver, Ms Havlicek’s colleague at the Institute for Strategic Dialogue.
“This is a matter of enabling the research community to have checks and balances in place, to see what’s working and what isn’t,” Ms Havlicek added.
The opacity of how Facebook and Twitter customise feeds also means independent parties cannot trace the sources of misinformation.
“Some of this can be solved algorithmically. If things can go back to chronological order on feeds, it would make a dramatic difference in how people can trace the origins of information,” said Jenni Sargent, managing director of First Draft, a non-profit that tracks online conversations during elections.
The European Commission is also demanding more: it wants Facebook and Twitter to give detailed breakdowns of how they are stopping malign actors from exploiting ads on the platforms. It has also asked Facebook how many EU users were affected by eight networks of trolls in North Macedonia, Kosovo and Russia that Facebook took down in March.
“We don’t want the platforms to be marking their own homework,” said Julian King, the EU’s security commissioner.