Facebook failed to detect obvious election-related disinformation in ads ahead of Brazil’s 2022 elections, a new report from Global Witness has found, continuing what the group describes as an “alarming” pattern of failing to catch material that violates its policies.
The ads contained false information about the country’s upcoming elections, such as promoting the wrong election date and incorrect voting methods, and questioned the integrity of the elections, including Brazil’s electronic voting system.
This is the fourth time the London-based nonprofit has tested Meta’s ability to catch egregious violations of the rules of its most popular social media platform — and the fourth such test Facebook has failed. In the three previous cases, Global Witness submitted ads containing violent hate speech to see if Facebook’s controls — either human reviewers or artificial intelligence — would catch them. They did not.
“Facebook has identified Brazil as one of its priority countries where it is investing dedicated resources specifically to tackle election-related disinformation,” said John Lloyd, senior adviser at Global Witness. “So we wanted to really test their systems with enough time to get them up and running. And with the US midterms around the corner, Meta simply has to get it right — and right now.”
Brazil’s national elections will be held on October 2 amid high tensions and misinformation threatening to discredit the electoral process. Facebook is the most popular social media platform in the country. In a statement, Meta said it had “prepared extensively for the 2022 elections in Brazil.”
“We launched tools that promote reliable information and label election-related posts, created a direct channel for the Supreme Electoral Court (Brazil’s electoral body) to send us potentially harmful content for review, and continue to work closely with Brazilian authorities and researchers,” the company said.
In 2020, Facebook began requiring advertisers wishing to run election or political ads to complete an authorization process and include a “paid for by” disclaimer on them, similar to what it does in the U.S. The increased safeguards followed the 2016 US presidential election, when Russia used rubles to pay for political ads designed to incite division and unrest among Americans.
Global Witness said it broke those rules when it sent the test ads (which were approved for publication but were never published). The group ran the ads outside of Brazil, from Nairobi and London, which should have raised red flags.
It also wasn’t required to put a “paid by” disclaimer on ads and didn’t use a Brazilian payment method — all safeguards Facebook says it has put in place to prevent abuse of its platform by malicious actors trying to interfere in elections around the world.
“What is abundantly clear from the results of this investigation and others is that their content moderation capabilities and the integrity systems they put in place to mitigate some of the risk during election periods are simply not working,” said Lloyd.
The group used ads as a test rather than regular posts because Meta claims to hold ads to an “even stricter” standard than regular, unpaid posts, according to its help center page for paid ads.
But judging by the four investigations, Lloyd said, that doesn’t appear to be the case.
“We constantly have to take Facebook’s word for it. And without a verified independent third-party audit, we simply cannot hold Meta or any other technology company accountable for what they say they are doing,” he said.
Global Witness submitted ten ads to Meta that clearly violated its rules on election-related advertising. The ads included, for example, false information about when and where to vote, and questioned the integrity of Brazil’s voting machines — mirroring disinformation used by malicious actors to destabilize democracies around the world.
In another study conducted by the Federal University of Rio de Janeiro, researchers identified more than two dozen ads on Facebook and Instagram for the month of July that promoted misleading information or attacked the country’s electronic voting machines.
The university’s internet and social media department NetLab, which also took part in the Global Witness investigation, found that many were funded by candidates running for a seat in the federal or state legislature.
It will be Brazil’s first election since far-right President Jair Bolsonaro, who is seeking re-election, came to power. Bolsonaro has repeatedly attacked the integrity of the country’s electronic voting system.
“Disinformation loomed large in the 2018 election, and this year’s election has already been marred by reports of widespread disinformation spread from the very top: Bolsonaro is already casting doubt on the legitimacy of the election result, leading to fears of a US-inspired January 6 ‘stop the steal’-style coup attempt,” Global Witness said.
In its previous investigations, the group found that Facebook failed to catch hate speech in Myanmar, where ads used slurs about people of East Indian or Muslim descent and called for their death; in Ethiopia, where ads used dehumanizing hate speech to call for the killing of people belonging to each of Ethiopia’s three major ethnic groups; and in Kenya, where ads featured beheadings, rapes and bloodshed.
— Associated Press writer Diane Jeantet contributed to this story.