
EmTech Digital 2024: Policy direction concerning AI must be human-centered and human-aligned

Kyle Chua



Screenshot from MIT Technology Review video

Policies must promote transparency, accountability, and reliability not just from AI systems but also from those behind them, argues one AI expert

Generative artificial intelligence (AI) is rapidly integrating into our daily lives, and the more it does, the more vigilant we must be about the potential risks and ethical dilemmas the technology brings.

However, the unprecedented pace at which the technology evolves makes it a challenge to put guardrails in place. So the question confronting regulators is, “How do you govern a moving target?”

Christabel Randolph, a law fellow from the Center for AI and Digital Policy, tried to answer that question during her presentation at EmTech Digital 2024, MIT Technology Review’s signature AI conference, held from May 22 to 23.

Coming from an AI ethics nonprofit – the same one that filed a complaint against ChatGPT with the US Federal Trade Commission last year – Randolph argued that policy direction concerning AI must be human-centered and human-aligned. By that, she meant that policies must promote transparency, accountability, and reliability not just from AI systems but also from those behind them.


Randolph added that policies governing the use of generative AI are more important today than ever, warning of the potential threat of anthropomorphic models that mimic and manipulate human behavior. Meanwhile, more tech companies are racing to commercialize generative AI, rushing new models to market.

Policy frameworks for AI existed even before the generative AI boom of the early 2020s. For example, the Universal Guidelines for AI, which outline principles for AI governance, were published in 2018. Those guidelines then helped inform the G20-backed OECD AI Principles and the UNESCO Recommendation on the Ethics of AI, which was adopted by all 193 member states.

Multilateral declarations and policy frameworks, as Randolph emphasized, are not as effective at governance as hard policies because they don’t have binding safeguards.

That’s why the Center for AI and Digital Policy decided to work directly with national governments and move the conversation from policy frameworks to regulation and legislation.

The organization’s efforts bore fruit: a number of regulations specifically governing the use of generative AI were passed last year.

Randolph explained that China was the first to regulate generative AI. The interim measures issued by the country’s government require generative AI service providers to put labeling and prominence mechanisms in place and to ensure that IP rights are respected. Providers are also required to ensure that the algorithms they deploy do not discriminate.

But while China was fastest to regulate, the EU’s AI Act is the most comprehensive, as she noted in her presentation. The legislation requires generative AI providers to disclose training data and respect copyright law. It also requires continuous auditing and conformity assessments of AI models.

Meanwhile, in the US, Randolph said there have been discussions and some bills put forward in Congress, but so far, hard legislation has yet to materialize. She partly attributed the lack of urgency to fears that regulation might hinder innovation – fears she said are unwarranted. For her, regulation doesn’t hinder innovation; it holds it to a higher standard.

In line with that, she posed a question: If an innovation doesn’t add value to our lives, cannot be trusted, and does more harm than good, can it still be defined as an innovation?

During the same session, Amir Ghavi, an AI, tech transactions, and IP partner at Fried Frank LLP and a lawyer who has defended AI companies in court, gave a big-picture view of today’s courtroom battles over AI. He said there have been 24 AI-related lawsuits so far in 2024, the majority of which concern intellectual property (IP) rights.

From a legal standpoint, he likened generative AI to the photocopier and the VCR, both of which faced lawsuits from IP holders who argued the technologies could infringe on their rights. The courts, however, disagreed and ruled in favor of the two technologies, finding that their uses could qualify as fair use and that they were capable of substantial non-infringing uses. Ghavi sees the same happening with generative AI and said he expects IP lawsuits to slow down moving forward. – Rappler.com
