
Whistleblower tells US Congress Meta failed in protecting teens’ safety on its apps

Victor Barreiro Jr.



META. The Meta logo is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, Whatsapp and Oculus in this illustration picture taken October 28, 2021.

Dado Ruvic/Reuters

Ex-Meta employee Arturo Bejar testifies that Meta Platforms knew of harms teens could encounter on its apps, but did not do enough to address them

An ex-Meta employee, Arturo Bejar, testified to the United States’ Senate Judiciary Subcommittee on Privacy, Technology and the Law on Tuesday, November 7, that the company – which owns Facebook and Instagram – knew of harms teens could encounter on its apps, but did not do enough to address the concerns.

The Senate hearing was about social media and how it affected teen mental health.


Bejar worked on well-being for Instagram from 2019 to 2021 and was formerly director of engineering for Facebook’s Protect and Care team from 2009 to 2015.

His work at Meta focused in part on encouraging positive behavior and providing tools for young people and other users to manage unpleasant experiences on the services.

Bejar, in his testimony, said, “Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform. And they have yet to establish a goal for actually reducing those harms and protecting children. It’s time that the public and parents understand the true level of harm posed by these ‘products,’ and it’s time that young users have the tools to report and suppress online abuse.”


One of the allegations Bejar made in his testimony, CNN reported, concerned how Meta allocated its resources.

While the company worked on automated detection of rule-breaking content, fewer resources went into human review of situations that didn’t neatly or explicitly violate Meta’s rules but which users found distressing anyway.

In a statement reported by Reuters, Meta said it was committed to protecting young people online, pointing to its backing of the same user surveys Bejar cited in his testimony and its creation of tools that let users anonymously flag potentially hurtful content.

“Every day countless people inside and outside of Meta are working on how to help keep young people safe online,” Meta’s statement said, adding, “All of this work continues.”

Bejar, in his submitted testimony, said, “All this time there has been extensive harm happening to teenagers, and the leadership has been aware of it, but they have chosen not to investigate or address the problems. I know because I respectfully communicated this directly to the executive team in 2021, and have watched them do essentially nothing in response.”

Bejar told Congress he met regularly with senior executives at the company, including CEO Mark Zuckerberg, but he subsequently concluded the executives had decided “time and time again to not tackle this issue.”

Bejar wrote to Zuckerberg and senior officials at Meta about his findings. These included data from the research team indicating “as many as 21.8% of 13-15 year olds said they were the target of bullying in the past seven days, 39.4% of 13-15 year old children said they had experienced negative comparison in the past seven days, and 24.4% of 13-15 year old respondents said they received unwanted advances, all in the prior seven days.”

“Later, the research team revised the survey results to state that the likely number of 13-15 year old children receiving unwanted sexual advances in the past seven days was likely only 13%, still a shocking number. Obviously, an even higher percentage of these children are receiving unwanted sexual advances on a monthly basis,” he explained.

He added that follow-up reactions “were not constructive.”

Reuters added that Bejar also told Meta executives his own 16-year-old daughter had been sent misogynistic comments and obscene photos, without having any adequate tools to report those experiences back to Meta.

‘Hardly safe and supportive’

According to Bejar’s written testimony, “there is still no way, so far as I or teenagers I know can determine, for a minor to flag a conversation in Instagram to indicate it contains unwanted sexual advances. And this is just one of several categories of meaningful harm that teenagers experience.”

“An environment where unwanted sexual advances are normalized is hardly safe and supportive,” he added.

Bejar said social media companies must be compelled to do the work of reducing harms on their platforms, including making data-driven decisions and improving accountability and transparency about the harms that occur there.

“Social media companies are not going to start addressing the harm they enable for teenagers on their own. They need to be compelled by regulators and policy makers to be transparent about these harms and what they are doing to address them,” he said. – Rappler.com



Victor Barreiro Jr.

Victor Barreiro Jr is part of Rappler's Central Desk. An avid patron of role-playing games and science fiction and fantasy shows, he also yearns to do good in the world, and hopes his work with Rappler helps to increase the good that's out there.