Despite a concerted effort by Facebook to stem political disinformation ahead of the US elections, false and misleading ads are still circulating as a result of glitches and loopholes – and what critics claim is weak enforcement of the social media giant’s own policies.
The problems were highlighted in recent days by revelations that misleading ads, debunked by independent fact-checkers, were being reposted, shared and targeted at specific voters even after being banned by Facebook.
The company acknowledged on Monday that some banned ads were resurfacing, and began taking down the messages after a report by the Wall Street Journal, citing research from the New York University Ad Observatory.
Some of the ads, which were from groups supporting President Donald Trump and containing false claims about Democrat Joe Biden, were shared more than three million times on Facebook, according to Laura Edelson, a researcher with the NYU group.
In some cases they were reposted by third parties “but in some cases we saw the same group reposting the same ad,” Edelson said.
That is because Facebook took down specific ads but failed to block subsequent ones with the same content, which could be recopied to a new “creative,” according to Edelson.
“Facebook won’t allow that identical object to run but someone could recreate it,” she said.
Edelson said the reposting suggests Facebook is not aggressive enough in enforcing its policy: “It’s not technically hard to prevent that same content from being uploaded.”
Facebook said it began removing the reposted ads after the Journal report over the weekend.
The ads were being run by the American Principles Project, a conservative group, and contained false claims that Biden supports the far-left Antifa movement and backs sex change operations for children.
“When a fact-checker applies a rating, we apply a label and demote it while also rejecting it as an ad – this is true for all ratings,” a Facebook spokesperson told AFP.
“We reviewed these ads and are taking action on those that violate our policies, while also working to improve how we find similar ads to those that were already rated.”
Creative bad actors
Bret Schafer, a researcher with the nonprofit Alliance for Securing Democracy, said Facebook had made considerable efforts to close loopholes but that it has not been sufficient to stem the flow of misinformation and disinformation.
People tend to use the former to describe the spread of false information – whether accidental or deliberate – while the latter usually refers to falsehoods spread intentionally.
“Bad actors can get very creative in how they circumvent any restrictions,” Schafer said.
“Facebook’s automated and human moderation can’t keep up with the volume and the ways actors are trying to get around the restrictions.”
Last week, Facebook saw a rocky start to its effort to ban new political ads in the week ahead of the November 3 election, a move intended to avert last-minute hoaxes going viral. Rival parties complained that new ads had been appearing despite the policy.
Facebook product manager Rob Leathern acknowledged that some ads were “being paused incorrectly,” and some advertisers were having trouble making changes to their campaigns.
“We’re working quickly on these fixes,” Leathern said last week.
Facebook’s enforcement issues come with the platform struggling to limit the kind of manipulation efforts seen in 2016, when foreign entities spread disinformation to influence the election.
Facebook has launched dozens of fact-checking partnerships, including with AFP, aimed at limiting the spread of hoaxes and rumors.
“The work we do combating the spread of misinformation online feels like a less fun version of whack-a-mole some days,” said Aaron Sharockman, executive director of PolitiFact, one of Facebook’s partners.
“We debunk a conspiracy only to have 10 others pop up in their place. I think that’s always going to be a bit of the nature of this work, however, and it’s not surprising that bad actors are trying to find weak points or holes in Facebook’s fact-checking program.”
Media Matters’ president Angelo Carusone said the watchdog group had already warned about loopholes in Facebook’s policies which let misinformation flow.
“Now we are seeing those warnings come to life,” Carusone said last week.
Edelson said the current problems are different from 2016.
“Four years ago there were a small number of bad actors responsible for the majority of misinformation,” she said. “Now there are a lot of advertisers who looked at 2016 and see this as a good strategy.”
Schafer said that in 2016, “a lot of people were caught off guard” by the extent of political disinformation and misinformation.
Today, he said, “We are better prepared as a society but we are nowhere near the inoculation level. So it still could have an impact.” – Rappler.com