Judgment Call

The gap between regulation and Big Tech looks to widen

Gelo Gonzales

OpenAI CEO Sam Altman expresses the need for licensing and testing, but can legislation catch up with GPT’s blazing development?

At an internal meeting with Cambridge Analytica whistleblower Chris Wylie a few weeks ago, attended by Rappler editors, I asked him where the world was headed, given the arrival of – and the hype surrounding – the current generation of artificial intelligence technology, even as the problems in social media have remained largely the same. 

“Disaster,” he said. That’s where we’re headed. 

We’ve harped on this before – disinformation will be turbocharged by generative AI, which can churn out stories, images, and videos almost in an instant.

Maria Ressa, at the 2023 Nobel Prize Summit on May 25, relaunched the 10-Point Plan, an initiative to rein in Big Tech that she co-authored with fellow Nobel Peace Prize laureate Dmitry Muratov. She warned: “The tech has gone exponential. And we’re still moving at glacial speed…. The window to act is closing.” 

Hello, I’m Gelo Gonzales, Rappler’s Technology section editor and minder of our Media and Disinformation editorial cluster. 

OpenAI’s GPT-5, the next version of the large language model behind ChatGPT, is reportedly looking at an early 2024 release. It is expected to be exponentially more powerful than GPT-4, which was released in March 2023. 

Meanwhile, in the United States, Section 230, the contentious law that protects online platform owners from accountability for content posted by their users, remains untouched. 

The US Supreme Court recently sidestepped ruling on that specific law in cases alleging that Twitter had failed to police terrorist content on its platform, and that YouTube and its algorithm had recommended terrorism content. The justices decided that Twitter did not “encourage or solicit” acts of terrorism, even in its failure to remove such content from the platform. YouTube was let off the hook as well, partly on the grounds that the similar Twitter case had failed for the plaintiffs. 

I think these cases illustrate how far regulation in the US, where these tech giants are headquartered, lags behind the exponential development of tech. Efforts to at least tweak Section 230 to make social media companies more accountable are going nowhere. 

Meanwhile, the tech industry is already all in on AI, which AI expert and Professor Emeritus of Psychology and Neuroscience at New York University Gary Marcus describes as “among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history.”

Marcus said that at a US Congress hearing with OpenAI CEO Sam Altman, the latter’s first appearance before Congress. That, at the very least, feels like a positive sign: most other Big Tech bigwigs like Mark Zuckerberg or Jack Dorsey made their appearances only post-controversy or post-scandal. 

Altman repeatedly expressed his desire to work with the government, and to have regulations in place. He specifically called for three things: the establishment of safety standards, a licensing agency that ensures compliance from AI companies, and independent audits. 

It’s almost like Altman had heard Wylie’s Rappler+ talk a few weeks ago, where the whistleblower said it was absurd that the world has no safety evaluations and ethical standards for digital products before they are released – especially when the most basic appliances, toasters and the like, face stringent quality testing before they are made available to the public. 

ChatGPT, an incredibly powerful piece of technology, was released in November 2022, without any such testing. US lawmakers should have asked Altman whether he would be open to pulling back the app so testing and evaluation could be done. 

Altman charmed Congress, saying he actually wished people would use the app less because OpenAI didn’t have enough GPUs to support demand. But he raised a few red flags as well. He essentially said that he trusted people would be smart enough to detect fake photos, just as we learned to spot Photoshopped images, and that people would fact-check on their own if they felt something was off or suspected a hallucination in text generated by ChatGPT. 

That’s not going to happen, if we’ve learned anything from the social media era. The machine is too powerful, too large, and too coordinated – especially when that machine knows us better than we know ourselves, and has a way of segregating people into conveniently manageable “communities.” 

Marcus also said, “The big tech companies’ preferred plan boils down to ‘trust us.’ But why should we? The sums of money at stake are mind-boggling.”

Altman’s statement is a red flag because, once again, it seems designed to deflect responsibility away from the company and onto the individual – it’s your fault for not fact-checking.

The CEO also threatened to leave the European Union if the bloc’s new AI Act wasn’t revised, describing it as “over-regulating.” A day later, he recanted, saying that the conversations had been productive and that OpenAI had no plans to leave.

He has appeared contradictory on big matters before. He once had a board member affirm that he could be fired if he failed as CEO, to show employees that he wasn’t an autocrat. Later, the public release of ChatGPT would be, in his own words, a “unilateral decision,” breaking from the company’s tradition of democratic debate. Now, he has expressed the need for greater regulation – but maybe just not too much. 

You know who else professes love for government cooperation and regulation, in just the right amounts? Mark Zuckerberg. 

Altman’s so-far contradictory – and at times heavy-handed – nature has manifested itself in the company’s direction too, as Marcus said: “OpenAI’s original mission statement proclaimed [their] goal is to advance AI, in the way that [is] most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they’re largely beholden to Microsoft, embroiled in an epic battle of search engines that routinely make things up.” 

Elon Musk has said that, given Microsoft’s huge backing, the giant could theoretically “cut off OpenAI” any time it wants. 

Altman, the current face of generative AI, is showing signs he might not be that different after all from previous Big Tech CEOs. He and his technology are moving very fast, so the potential for breaking things is there. He likes just the right amount of regulation, and he and his company are repeating the familiar “benefit for humanity” chorus that tech companies have always loved, while downplaying the big investors they answer to. 

That kind of leadership has led us to disaster before. Add to that the weight of continuing problems with social media and the glacial pace at which guardrails are being put up, and perhaps the better question to have asked Wylie would have been, “How big exactly is this disaster going to be?” – Rappler.com



Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.