
Safety evaluations, ethical standards for digital products are non-existent – Chris Wylie

Gelo Gonzales



Nobel Peace laureate Maria Ressa and Cambridge Analytica whistleblower Chris Wylie at Navigating an AI Future, a Rappler sit-down conversation on World Press Freedom Day, May 3, 2023, in Pasig City.

Angie de Silva/Rappler

Generative AI is here, but like social media before it, it was released without proper harm and safety testing

MANILA, Philippines – Cambridge Analytica whistleblower Chris Wylie, at a Rappler+ briefing on Wednesday, May 3, identified what he believes is one of the key problems that has amplified the potential of digital products to cause societal harm: non-existent safety evaluations and ethical standards. 

“One of the things that I find really frustrating is that we don’t apply the same standards of responsibility and ethics to tech companies and engineers and the designers of these systems. Imagine if you were an architect, and say, hey, I’ve got this great vision for a community: I’m going to put the black people here, and then wall them up so they don’t talk to the white people here. Would that be allowed? No, because we have anti-segregation rules,” Wylie said.

On Facebook, digital segregation happens in the way the platform delivers ads and content, which can be targeted by criteria such as race, religion, and sexuality. “This person is black, this person is Jewish, this person is gay, this person is white. I’m going to show them different information now,” he cited as an example.

“What does it do when you separate people digitally, and you start to ‘ghettoize’ people? And when you look at history, terrible things tend to happen when you take groups and you other them, and you put them into ghettos. And that is what is happening on social media right now. We are ghettoizing the internet, we are ghettoizing people, and that creates an effect of othering, and that othering allows people to want to kill each other,” Wylie warned. 

Shattered public discourse

The ghettoizing also shatters a key pillar of democracy: public discourse.

“Democracy is premised on public discourse and collective decision making. As soon as we move the public from public discourse and allow people to see and interpret the world in radically different ways on a radically different set of facts, we’re no longer acting as a common community making a decision in an election. We’re making decisions about different realities. You can’t have a functioning democracy if you don’t have some common element of understanding,” he said.

And this digital segregation is enabled by the code and the engineering of social media, Wylie said. Moral standards and laws prevent segregation in the physical world. So, Wylie asked, “Why aren’t we applying the same basic moral standards that we would in any other profession, any other industry to tech?”

Wylie called the debate over content moderation a red herring: it distracts from the root of the issue, which is the platforms’ code and algorithms.

The questions that should be asked instead, said Wylie: “Wait a second, Facebook, let’s park this content moderation conversation for a second. Why did you build an algorithm that is actively deceiving millions of people, and why didn’t you test for that? And why should you be allowed to have this in the first place?”


Safety evaluation, fixable code

Wylie illustrated how glaring the lack of safety evaluation is for tech products such as platform algorithms and AI:

“If I want to make any other kind of consumer product, like a toaster, there are more safety standards, there are more testing requirements for a toaster to put in your kitchen than AI – because there are no requirements to test your AI for safety or harmful effects. You can just release it. We do not allow any other consumer product to just be released without any kind of safety standards,” he pointed out.

Is there a way to “deghettoize”?

“I don’t think there’s a solution aside from requiring and applying desegregation principles to digital platforms. If we required engineers to consider potential segregating or discriminatory effects of the AI, and to test for those effects, at least for future constructions, I think it would create better design and start to alter the behavior of people on systems,” said Wylie. 

“At the end of the day, it’s code. Code is fixable. [The question is] what framework are we using to test that code against? And, right now, engineers are not required to test for racializing effects, segregating effects.” 
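
No platform or regulator mandates such a test today, but the idea is not exotic. As a purely illustrative sketch, with hypothetical data and an arbitrary threshold, an auditor could measure how differently a recommendation algorithm treats two groups of users:

```python
from collections import Counter

def content_mix(feed):
    """Turn a list of recommended topic labels into a probability distribution."""
    counts = Counter(feed)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def segregation_score(mix_a, mix_b):
    """Total variation distance between two feed distributions:
    0.0 means identical feeds, 1.0 means completely disjoint feeds."""
    topics = set(mix_a) | set(mix_b)
    return 0.5 * sum(abs(mix_a.get(t, 0.0) - mix_b.get(t, 0.0)) for t in topics)

# Hypothetical audit data: what the algorithm served two demographic groups.
group_a_feed = ["politics", "politics", "sports", "local_news"]
group_b_feed = ["politics", "conspiracy", "conspiracy", "conspiracy"]

score = segregation_score(content_mix(group_a_feed), content_mix(group_b_feed))
THRESHOLD = 0.3  # illustrative cutoff only; a real framework would have to define this
if score > THRESHOLD:
    print(f"Feeds diverge sharply (score={score:.2f}): review for segregating effects")
```

The point of the sketch is Wylie’s: the test itself is ordinary code, and requiring it would be no stranger than requiring crash tests for cars.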

Advent of generative AI

The world hasn’t yet solved the social media problem, but now comes another technology that is already changing society as we speak: generative AI tools such as ChatGPT and Midjourney.

“And now when you look at generative AI, we are on the precipice of something monumental. And we have no safety framework, no regulatory framework. We are so unprepared for this. And we are just allowing an industry to go and just do it, and experiment with society. We are now a massive petri dish for these companies,” Wylie said.

Regulation often moves at a snail’s pace, while products like ChatGPT grow exponentially in capacity. In just a few years, the models behind it went from just over a hundred million parameters in GPT-1 to 175 billion in GPT-3, with the latest version, GPT-4, rumored to be approaching a trillion.
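
A rough back-of-the-envelope calculation shows why regulators struggle to keep up. GPT-1’s and GPT-3’s parameter counts are published; GPT-4’s roughly one-trillion figure is only a rumor, so treat the result as an order-of-magnitude estimate:

```python
import math

# Published counts: GPT-1 (June 2018) had 117 million parameters,
# GPT-3 (2020) had 175 billion. GPT-4's (March 2023) count is
# undisclosed; ~1 trillion is the widely repeated rumor.
gpt1_params = 117e6
gpt4_params_rumored = 1e12
years_elapsed = 4.75  # mid-2018 to early 2023

growth = gpt4_params_rumored / gpt1_params
doublings = math.log2(growth)
months_per_doubling = years_elapsed * 12 / doublings

print(f"Growth: about {growth:,.0f}x")                     # ~8,547x
print(f"Doubling time: ~{months_per_doubling:.1f} months")  # ~4.4 months
```

No legislative process moves on a four-to-five-month cycle, which is exactly the gap Wylie describes.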

“When you look at the history of regulation, every disruptive tech innovation, de facto, came about with no regulation. When airplanes were invented, there was no FAA (Federal Aviation Administration). When modern medicines came about, there was no Food and Drug Administration. Why did regulation come about? Usually, there’s a series of public disasters that happen, and then people realize you can’t trust industry to police itself,” he said.

“Tech is no different. To give you a concrete example, if we had a digital safety regulator that was charged with consumer safety legislation, which required safety testing, and harm mitigation, that would do a lot of good. And it conforms to how we regulate every other industry. It’s only fair that AI and digital products fall in line with what every other business has to already do.”

Tech has had its big public disaster, its big plane crash with Cambridge Analytica. Generative AI has already taken flight but, once again, it’s been unleashed without the safety testing that Wylie has advocated for. – Rappler.com



Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.