
After challenges in regulating social media, governments turn up heat early on AI

Gelo Gonzales

Scrutiny of AI intensifies in April as governments attempt to get ahead of the technology currently spearheaded by ChatGPT

MANILA, Philippines – Governments worldwide are putting artificial intelligence – and the chatbot ChatGPT in particular – under scrutiny early on.

Launched in November 2022, ChatGPT set the record for the fastest-growing user base in January 2023, reaching 100 million monthly active users. In late March, Italy’s data protection agency Garante temporarily banned the app over its alleged failure to verify that users are at least 13 years old, saying there was an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.

Italy thus joined China, Hong Kong, Iran, Russia, and parts of Africa on the list of places where the app is unavailable.

While Italy’s ban – one that the country’s own deputy prime minister called excessive – looks likely to be lifted by the end of April, it has sparked greater conversation among lawmakers in the EU, the US, Australia, and several other countries.

The EU, which led the world in data privacy regulation through the General Data Protection Regulation (GDPR), has proposed an AI Act that classifies AI technology into four tiers according to risk: unacceptable, high, limited, and minimal.

Minimal-risk AI includes spam filters and AI in video games, while unacceptable-risk systems – which would be banned outright – include government social scoring and real-time biometric identification systems in public spaces, according to the World Economic Forum.

Which tier does ChatGPT fall under? That’s still to be determined. A paper in the journal Internet Policy Review proposes that because the technology is so dynamic, it might not fit neatly into a single category. Instead, the paper proposes an equally dynamic approach to “monitor for and mitigate systemic risks on a regular basis.”

“The pure scale of adoption, in combination with the versatility and general purpose characteristics of the [ChatGPT] technology, challenge the AI Act’s risk-based approach in a second important way: it is simply impossible to predict,” the paper says. 

The AI Act began taking shape in April 2021, when the EU Commission published a proposal to regulate AI – roughly a year and a half before the sudden, unexpected rise of ChatGPT.

What countries are doing

France’s privacy watchdog CNIL (Commission nationale de l’informatique et des libertés) said on April 11 that it had “received several complaints about ChatGPT and is investigating them.” On the same day, Spain’s AEPD (Agencia Española de Protección de Datos) requested that the “issue of ChatGPT be included” in the next plenary of the EU’s data protection committee, scheduled for April 13.

The AEPD justified its request by saying that “global processing operations that may have a significant impact on the rights of individuals require coordinated decisions at European level.”

In the US, President Joe Biden on April 11 sought public input on the regulation of AI, with the National Telecommunications and Information Administration planning to draft a report that will look at “efforts to ensure AI systems work as claimed – and without causing harm.” 

Japan plans to lead discussions on AI at the G7 Digital and Tech Ministers’ Meeting to be held in the country on April 29 and 30.

China – which has homegrown ChatGPT-like technologies from giants Baidu, Alibaba, and SenseTime – also unveiled proposed measures on April 11, including one requiring that content produced by generative AI be in line with the country’s core socialist values, as reported by Reuters.

Why the scrutiny? 

This generation of AI products, led by ChatGPT, is considered by many to be the most revolutionary technology since the advent of social media and the smartphone.

In recent years, many governments worldwide have tussled with the tech industry over issues such as antitrust, state-initiated information operations powered by social media, disinformation, data privacy, fraud, and data breaches.

Among these, disinformation may be the most damaging to society, ultimately giving rise to polarized societies with no shared reality. In a way, governments were caught off guard by the outsized impact of social media, punctuated by Russia’s interference in the 2016 US elections, which used trolls and fake accounts on Facebook and other social networks to sow political division in the country.

Even before ChatGPT and generative image tools like DALL-E and Midjourney, there were already deepfake videos that looked realistic. But these technologies have become more advanced, easier to use, and capable of producing more convincing results.

A prime example: the viral image of Pope Francis wearing a fashionable puffer coat. It was innocuous enough, but it showed how convincing the technology has become, with many initially believing the image to be real.

In important moments – a national election, for example – what happens when these convincing fake images proliferate? If people come to rely on ChatGPT or similar tools, how accurate can they truly be? And who will fact-check what people read, especially when these tools can produce different text from the same prompt, and can hallucinate – confidently responding with inaccurate information?

Considering the Philippines’ history as a petri dish for digital manipulation – Cambridge Analytica, the British firm that harvested the data of 87 million Facebook users and politically targeted them, tested its tactics in the Philippines before deploying them in the West – it’s something to be vigilant about.

As ChatGPT shows little sign of slowing – with OpenAI CEO Sam Altman taking an almost Zuckerberg-like “move fast and break things” stance as the company competes with the tech giants, while counting one of them, Microsoft, as a backer – governments and their respective data protection agencies would do well, this time, to stay in step. – Rappler.com

Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.