MANILA, Philippines – OpenAI CEO Sam Altman is the man of the hour in the tech industry, and has been since the AI chatbot ChatGPT publicly hit the scene on November 30, 2022. It became history’s fastest-growing app, reaching 100 million users by January 2023, just two months after launching.
Its impressive capabilities, paired with its explosive growth, have stoked worry on several fronts: how it may be used to amplify disinformation, and whether it will destroy jobs faster than it creates new opportunities, among other disruptions.
For that reason, Altman found himself facing the US Congress on May 16, before his company had come close to anything like Facebook’s Cambridge Analytica scandal. In the social media era, the likes of Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey, and Google’s Sundar Pichai made their Congressional debuts after the fact, after the world had come to realize the power and influence these tech companies held over matters of society.
Altman’s appearance preempts scandal. One reason can be gleaned from US Senator Richard Blumenthal’s opening statement in the Altman hearing: “Congress has a choice. Now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content exploiting children, creating dangers for them…But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”
Second, public opinion matches the sentiment in the halls of power. A majority of US citizens, 61%, believe that AI could threaten civilization, according to a Reuters survey.
Altman is the current face of generative AI – AI able to create new content out of mountains of data – and what he thinks and says is crucial to how the technology will continue to evolve. Whether it evolves more to society’s benefit or more to its detriment will also depend on how world governments put legislative safeguards in place, and on how the public is taught to be critical.
A call for licensing and testing requirements
“For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination,” Altman said.
Cambridge Analytica whistleblower Chris Wylie has already stressed that safety evaluations and ethical standards for digital products are non-existent. Altman is directly expressing a willingness to put such requirements together, although it comes a little late: ChatGPT, like the Facebooks and Twitters before it, also launched without them.
Later in the session, Altman expands on specific steps:
“Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away, and ensure compliance with safety standards.
Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate…
And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is in compliance with these stated safety thresholds, and these percentages of performance on question X or Y.”
Altman’s biggest nightmare: job loss
The CEO was asked what his “biggest nightmare” was, and he proceeded to talk about job loss.
“Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict…I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better…I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused, and it’s a tool that people have a great deal of control over and how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs.”
Gary Marcus, an AI expert and Professor Emeritus of Psychology and Neuroscience at New York University, warned, though, that what remains unclear is the timescale over which new opportunities can make up for the jobs immediately lost.
He said, “Past performance history is not a guarantee of the future. It has always been the case in the past that we have had more jobs, that new jobs, new professions come in as new technologies come in. I think this one’s gonna be different. And the real question is over what time scale? Is it going to be 10 years? Is it going to be a hundred years? And I don’t think anybody knows the answer to that question.”
Marcus added, “I think in the long run, so-called artificial general intelligence really will replace a large fraction of human jobs. We’re not that close to artificial general intelligence, despite all of the media hype and so forth. I would say that what we have right now is just a small sampling of the AI that we will build in 20 years.”
Election disinformation: one of Altman’s areas of greatest concern
“It’s one of my areas of greatest concern – the more general ability of these models to manipulate, to persuade, to provide, sort of, one-on-one interactive disinformation… When Photoshop came onto the scene a long time ago, you know, for a while people were really quite fooled by photoshopped images and then pretty quickly, developed an understanding that images might be photoshopped. This will be like that, but on steroids, and the interactivity, the ability to really model, predict humans, as you talked about, I think it’s going to require a combination of companies doing the right thing, regulation and public education.”
“A single player experience”: Generative AI needs a different response from social media
“[Generative AI] is different. And so the response that we need is different. This is a tool that a user is using to help generate content more efficiently than before. They can change it. They can test the accuracy of it. If they don’t like it, they can get another version. But it still then spreads through social media or other ways. ChatGPT is a single-player experience where you’re just using this. And so I think as we think about what to do, that’s, that’s important to understand that there’s a lot that we can and do there.”
Use ChatGPT less
“To be clear, OpenAI does not [have an] ad-based business model. So we’re not trying to build up these profiles of our users. We’re not trying to get them to use it more. Actually, we’d love it if they use it less because we don’t have enough GPUs. But I think other companies are already, and certainly will, in the future, use AI models to create, you know, very good ad predictions of what a user will like. I think that’s already happening in many ways.”
Altman didn’t close the door, however, on an ad-based business model.
Asked by Senator Cory Booker if the company would ever run ads, Altman says, “I wouldn’t say never. I don’t think, like, I think there may be people that we want to offer services to, and there’s no other model that works. But I really like having a subscription based model. We have API developers pay us and we have ChatGPT.”
Altman on content creators, and content owners
“Again, to reiterate my earlier point, we think that content creators, content owners, need to benefit from this technology. Exactly what the economic model is, we’re still talking to artists, and content owners about what they want. I think there’s a lot of ways this can happen, but very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology. And we believe that it’s really going to deliver that – but [as for] content owners, likenesses, people totally deserve control over how that’s used, and [how] to benefit from it.”
Altman on news organizations
“It is my hope that tools like what we’re creating can help news organizations do better. I think having a vibrant national media is critically important. And let’s call it: round one of the internet has not been great for that…” Altman says.
Senator Amy Klobuchar points out to Altman, “But do you understand that this could be exponentially worse in terms of local news content if they’re not compensated? Because what they need is to be compensated for their content and not have it stolen.”
Altman: “Again, our model, the current version of GPT-4 ended training in 2021. It’s not a good way to find recent news. And it’s, I don’t think it’s a service that can do a great job of linking out, although maybe with our plugins, it’s possible. If there are things that we can do to help local news, we would certainly like to. Again, I think it’s critically important.”
Altman trusts that people will be able to spot inaccurate information generated by ChatGPT
“We find that people, that users are, are pretty sophisticated, and understand where the mistakes are, that they need to be responsible for verifying what the models say, that they go off and check it,” Altman says.
Questions Altman still needs to answer: in the era of disinformation, how many people have actually learned to tell fiction from fact? And when information is taken out of ChatGPT – which makes it easier for people to produce convincing, professional-sounding text – then re-posted and spread on social media, what drives him to trust that people will still be able to discern it?
Altman does concede that as these systems get better, it will become even harder to tell the fake from the true. He explained, “I worry that as the models get better and better, the users can have, sort of, less and less of their own discriminating thought process around it. But I think users are more capable than we often give them credit for, in conversations like this. I think a lot of disclosures, which if you’ve used ChatGPT, you’ll see about the inaccuracies of the model are also important.”
ChatGPT and similar tools allow for faster content generation, and that can include disinformation, not to mention hallucinations by the system itself. Little has changed on social media, except that disinformation peddlers now have a new toy to play with.
On working with languages that have fewer speakers
“We think this is really important. One example is that we worked with the government of Iceland, which is a language with fewer speakers than many of the languages that are well represented on the internet to ensure that their language was included in our model…
And I look forward to many similar partnerships with lower resource languages to get them into our models. GPT-4 is unlike previous models of ours, which were good at English and not very good at other languages. Now, [it’s] pretty good at a large number of languages. You can go pretty far down the list ranked by number of speakers and, and still get good performance. But for these very small languages, we’re excited about custom partnerships to include that language into our model run.” – Rappler.com