
Meta says election interference happening at ‘manageable amount’ but serious concerns persist

Kyle Chua



Nick Clegg, chief of global affairs at Meta, speaks at EmTech Digital 2024, May 22, 2024. Screenshot from MIT

Nick Clegg, Meta’s chief of global affairs, touts the firm’s efforts, but recent research and an ongoing EU probe point to continuing disinformation problems

In the major elections that have already taken place this year, Meta said it has seen little evidence of generative AI being used to create content that could interfere with the political process.

“The interesting thing so far – I stress, so far – is not how much but how little AI-generated content [was used],” said Nick Clegg, chief of global affairs at Meta, during a discussion at EmTech Digital, MIT Technology Review’s annual AI conference.

Clegg, a former UK deputy prime minister, previously shared the same findings when he spoke at a Meta AI event in April. 

He said that AI-generated election content exists and is “discernible,” but it is not happening at a “systemic level.”

He also said that election interference attempts are happening at a “manageable amount,” noting that Meta caught concerted attempts to interfere in the Taiwanese election. 

Countries that have already held major elections this year include Indonesia, Pakistan, and Bangladesh. More are due to join them, with over 50 countries, including the US, the UK, and India, holding national elections later this year.

Meta has been at the center of a multitude of electoral controversies in the past, and it has since promised to do better ahead of the biggest election year in history, announcing new policies to combat misinformation and disinformation across its platforms.

Clegg claims that what’s different from 2016 – when Meta was blamed for helping Donald Trump win the US presidency – is that it now has the world’s largest network of fact checkers to identify and root out misinformation.

Since that year, it has also removed over 200 “networks of coordinated inauthentic behavior.”

In spite of Meta’s efforts, disinformation and harmful content still manage to find their way to an audience.

In the Philippines, for example, a recent Rappler investigation showed how individuals take advantage of Facebook’s ad platform to scam people with health products. In March, a deepfake featuring Rappler CEO Maria Ressa circulated on Facebook, directing people to a third-party site likewise designed to scam people.

Just this May, The Guardian also reported researchers’ findings of potential Russian propaganda ads targeting Europe’s elections. The research found “doom-laden” stories laced with “vitriolic sneers about [French President] Emmanuel Macron, [Ukrainian President] Volodymyr Zelenskiy and [President of the European Commission] Ursula von der Leyen” running as ads in the first 13 days of May.

The Guardian quoted Paul Bouchaud, a PhD researcher at the School for Advanced Studies in the Social Sciences in Paris, who criticized the company’s anti-disinformation efforts. “It is technically feasible to detect in real time a coordinated propaganda network… The fact that Meta does not systematically address this issue [shows] a lack of willingness, more than a lack of technical feasibility.”

The EU will hold parliamentary elections in June. The bloc also launched a disinformation probe against Meta in April, saying that the company’s moderation efforts are “insufficient,” as reported by Al Jazeera.

‘Whack-a-mole’ game continues

“Crucially, we do that through a level of industry-wide cooperation, which simply didn’t exist then, and we apply all of that – including, by the way, AI technology – to try and identify extremist groups and militia groups that we don’t want on our platforms,” he explained.

Clegg added that Meta removes groups from the platform in a way that is transparent. The social media giant publishes in full all of its policies and standards, including how it deals with such groups, in the hope it drives a “virtuous cycle of scrutiny, accountability, and pressure” for it to do better. 


For Meta to keep up with these militia groups, however, Clegg said it has to work with other platforms; it cannot act on its own. “This is a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said. 

Meta previously announced it would label AI-generated images uploaded on Facebook, Instagram, and Threads that were created using generative AI tools.

It has now rolled out the system, which can reportedly identify at scale invisible markers in an image that follow technical standards such as C2PA and IPTC. The system then puts a visible label on the image and invisible watermarks in the image file’s metadata to indicate that the image was generated by an AI tool. Watermarks would also be added to images created using Meta’s own generative AI systems.
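How such markers could be surfaced is, in broad strokes, a metadata-inspection problem. The sketch below is a minimal illustration under stated assumptions, not Meta’s actual system: it scans an image file’s raw bytes for the IPTC “trainedAlgorithmicMedia” digital source type and for byte strings that hint at an embedded C2PA (JUMBF) manifest. The helper function and file name are hypothetical.

```python
# Minimal sketch, not Meta's actual pipeline: check an image file for two of the
# provenance signals mentioned above. The IPTC photo-metadata standard marks
# AI-generated media with the "trainedAlgorithmicMedia" digital source type, and
# C2PA provenance manifests are embedded as JUMBF boxes inside the file.
# The file name and helper function below are illustrative assumptions.
from pathlib import Path

# IPTC controlled-vocabulary URI for media created by a generative model
IPTC_AI_SOURCE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

# Crude presence heuristics for an embedded C2PA/JUMBF manifest (a real
# detector would parse the boxes and validate the manifest's signatures)
C2PA_HINTS = (b"jumb", b"c2pa")


def provenance_hints(path: str) -> dict:
    """Scan the raw bytes of an image for AI-provenance markers."""
    data = Path(path).read_bytes()
    return {
        "iptc_ai_generated": IPTC_AI_SOURCE in data,
        "c2pa_manifest_present": any(hint in data for hint in C2PA_HINTS),
    }


if __name__ == "__main__":
    # "example.jpg" is a placeholder path
    print(provenance_hints("example.jpg"))
```

A production detector would parse and cryptographically verify the C2PA manifest rather than string-match, which is also why stripped or re-encoded files can evade this kind of check.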

But, as Clegg admitted, Meta’s system for detecting AI-generated content is still imperfect. The watermarks it relies on can be stripped or tampered with, for example.

Clegg said Meta’s systems are designed to identify misinformation and disinformation in content, regardless of how they are made or where they come from.

“Our systems and our policies and our tools, when it comes to misinformation, disinformation and so on, are built to be entirely agnostic about the origin. It doesn’t matter whether a piece of misinformation about the elections is generated by a robot or a human being; our fact checkers should still be identifying it and it will still be enqueued by AI systems, interestingly enough,” he said. 

Clegg underscored the help of AI in reducing the amount of harmful content on Meta’s platforms, saying “AI is a sword and a shield in this.” – Rappler.com
