Tackle racism in AI, BLM co-founder tells tech bosses

Reuters

A US study finds that facial recognition technology is less accurate at identifying African-American and Asian faces than Caucasian faces

 As concerns grow over racial bias in artificial intelligence, Black Lives Matter co-founder Opal Tometi urged the tech sector to act fast against perpetuating racism in systems such as facial recognition.

“A lot of the algorithms, a lot of the data is racist,” the US activist who co-founded BLM in 2013 told Reuters on the sidelines of Lisbon’s Web Summit.

“We need tech to truly understand every way it (racism) shows up in the technologies they are developing,” she said.

Artificial intelligence is transforming the world and can be applied in diverse sectors, from improving the early detection of diseases to sorting out data and solving complex problems.

But there are also concerns around it.

The tech industry has faced a reckoning over the past few years over the ethics of AI technologies, with critics saying such systems could compromise privacy, target marginalized groups and normalize intrusive surveillance.

Some tech companies have acknowledged that AI-driven facial recognition systems, which are popular among retailers and hospitals for security purposes, could be flawed.

On Wednesday, Facebook announced it was shutting down its facial recognition system, citing concerns about its use, and Microsoft said last year it would await federal regulation before selling facial recognition technology to police.

Police in the United States and Britain use facial recognition to identify suspects. But a study by the US National Institute of Standards and Technology found the technology is less accurate at identifying African-American and Asian faces than Caucasian faces.

Last year, the first known wrongful arrest based on an incorrect facial recognition match occurred in the United States. The United Nations has cited the case, attributed to the fact that the tool had mostly been trained on white faces, as an example of the dangers posed by a lack of diversity in the tech sector.

“They (tech companies) have to be very careful because technology has the ability to expedite values that otherwise would come about more slowly,” Tometi said. “But technology speeds everything up so the impact will be worse, faster.”

Urging software developers to “pay attention to all details”, she said they should listen to Black people more.

“Unfortunately I feel like tech companies have a long way to go to build a bridge with the community,” she said.

According to the digital advocacy group Algorithmic Justice League, one of the reasons why AI systems are not inclusive is the predominantly white male composition of developer teams.

“We need solutions for the future, for future challenges, but those solutions need to be very inclusive,” Tometi said. “They need to protect marginalized and vulnerable communities – that’s their duty.” – Rappler.com