Can the decentralized web help to protect human rights?

Caitlin Thompson

In an age of internet shutdowns, takedown requests, and deepfakes, the race is on to create a resilient and verifiable archive for the work of campaigners and citizen journalists
As published by Coda Story

In 1991, Los Angeles police were filmed beating a Black man named Rodney King during a traffic stop. Video of the scene, captured by a witness on a camcorder, dominated international news. The acquittal of the officers involved sparked the city’s 1992 riots. Since then, bystander footage has drawn international attention to numerous incidents of police violence, particularly against people of color, including the 2020 murder of George Floyd.

Witness, a global organization dedicated to training and assisting activists to use video in the defense of human rights, was founded in the wake of the Rodney King case. Now, the group is teaming up with the Filecoin Foundation for the Decentralized Web (FFDW), which supports the creation of open-source software and protocols for decentralized data storage. 

We sat down with Sam Gregory, program director at Witness, to talk about ways to build a digital record of human rights violations that is resilient to internet shutdowns, deepfakes and the erasure of content that all too often results from overzealous platform moderation policies.

This conversation has been edited for length and clarity. 

Coda Story: Technology has evolved significantly since the Rodney King camcorder footage. Often, we think that this rapid progress has made it easier to document the world around us and provided smarter, better and faster tools for journalists and activists. I’m curious what new challenges there have been. 

Sam Gregory: The past 30 years have taught us that technology changes, but human rights issues don’t. The underlying questions about how you enable more people to document abuses in ways that are safe, ethical and effective remain the same. 

About a decade ago, we noticed that we were helping people create trustworthy content that protected the truth, but an increasing amount of falsehoods, lies and manipulations of video also started to emerge. We saw that particularly in the Syrian conflict where footage of atrocities was weaponized to fit specific and often conflicting narratives. Footage is easy to record, but it’s also easy to ignore and easy for it to lead to harm. 

As it becomes easier for more and more people to post video online, including that of rights abuses, has verifying such material become harder? 

About a decade ago, we started to see new challenges to the integrity of video. For instance, people would question the stories that clips were telling and say, “That wasn’t filmed in that place, it was filmed in this place.” So, we partnered with an organization called The Guardian Project to start building tools like ProofMode, which adds rich metadata to videos and photographs and cryptographically signs that piece of media to increase verifiability, show if an image has been changed and provide a chain of custody. 

The human rights community was really early in this because we saw the need. It’s vital that we show evidence of war crimes or police violence and show when and where it’s happening. We have been doing that by giving signals of trustworthiness for specific pieces of media, based on what you can see in the image and the additional data attached to it. The idea behind authenticity infrastructure is to provide tools that allow people to choose better opt-in data to attach to their media, which will help the viewer understand where it came from and how it was made. 
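To make that concrete, here is a minimal sketch of the general pattern such tools follow: hash the media file, attach opt-in capture metadata, and sign the bundle so that any later change to the file or the metadata is detectable. This is not ProofMode’s actual code; the function names, metadata fields, and key handling are assumptions for illustration.

```python
# Illustrative sketch of signed media provenance, not ProofMode's implementation.
# Requires the "cryptography" package (pip install cryptography).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def _sha256(path: str) -> str:
    """Fingerprint of the media file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def sign_media(path: str, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Bundle the file hash with opt-in metadata and sign the bundle."""
    bundle = {"sha256": _sha256(path), "metadata": metadata}
    payload = json.dumps(bundle, sort_keys=True).encode()
    return {"bundle": bundle, "signature": key.sign(payload).hex()}


def verify_media(path: str, record: dict, public_key: Ed25519PublicKey) -> bool:
    """Check that neither the file nor the signed metadata has been altered."""
    if _sha256(path) != record["bundle"]["sha256"]:
        return False  # the media bytes changed after signing
    payload = json.dumps(record["bundle"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the metadata or hash was tampered with


# Hypothetical usage with an example clip and opt-in fields:
# key = Ed25519PrivateKey.generate()
# record = sign_media("clip.mp4", {"captured": "2021-03-05T10:12Z"}, key)
# verify_media("clip.mp4", record, key.public_key())
```

Because the metadata is opt-in, a filmer in a vulnerable situation can sign only the fields they are willing to disclose, which is the privacy tension discussed next.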

Many cameras and phones add location data or dates to the metadata by default. But do these identifiers potentially pose a risk to the people filming and distributing footage? How do you authenticate a video while protecting privacy and anonymity for people in vulnerable situations?

We could see right from the start that if you can better understand the provenance of videos and images, that’s great for the person trying to verify them, but that it also raises potential privacy risks for the person who shot them. We saw the potential for governments to weaponize these technologies. Many citizen journalists who work with us can’t add all that extra data, and you don’t want them to be excluded just because they haven’t used a specific technology. It’s important that these tools be opt-in, that they aim to offer signals of trustworthiness, rather than absolute confirmation of truth, and that you don’t force people to use them when they don’t want to or can’t. 

We need to have a way for images and video to still be trusted, even if you do things like blur out a face to protect a person’s privacy. People also need to be able to make choices about how much information they disclose, such as the location it was shot in. Journalists and human rights defenders live in a real-world context where that data poses real risks. 

How has the rise of deepfakes created new challenges for people documenting human rights abuses?  

Deepfakes make it easier to dismiss footage of critical events. The “Infopocalypse, the world is collapsing, you can’t believe anything you see” rhetoric that we’ve all heard in recent years has been deeply harmful. Deepfakes have gotten easier to make. However, they’re still not prevalent in the really sophisticated way that I think many people believe they are.

There’s what we call the “liar’s dividend,” which is the ability to easily dismiss something true as a deepfake. We’ve seen this in some high-profile cases recently. Witness was involved in a case in Myanmar concerning the former chief minister of Yangon. A video was released by the military government in March 2021, in which he appears to accuse Aung San Suu Kyi, the country’s de facto leader before the February 2021 coup, of corruption. In the video, he looks very static, his mouth looks out of sync and he sounds unlike his normal self. People saw this and were like, “It’s a deepfake.”

They put it through an online deepfake detector, which came back saying 95% likely a deepfake. The news spread really quickly after that. The narrative was that Myanmar’s military was able to manipulate the lips of this political prisoner to make him say stuff he would never say. But, it turns out that the footage wasn’t a deepfake at all. It was most likely a forced confession. Are the underlying claims true? We don’t know, there’s no legitimate rule of law in Myanmar at the moment, and the video was released by the military government. The problem was that the narrative spread so quickly on Twitter that this was a deepfake created by the government, and there was no expertise to counter that. 

We’re seeing this happen with video evidence globally. Increasingly, people are just saying, “You can’t believe anything.” We need to be investing in equity of access to tools that authenticate videos and debunk fakes, so they are available broadly.

Let’s talk about the decentralized web. It’s basically the opposite of the internet infrastructure we have now, which depends heavily on large platforms and service providers. With support from FFDW, Witness is exploring how it can be used to build an online digital record of human rights abuses. Tell us about that.

The decentralized web can potentially support a similarly decentralized approach to controlling and managing content; a more robust way to preserve the integrity of that content over time and to verify it. And it can provide a way for human rights communities around the world to set governance rules specific to them.  

We’re increasingly seeing the challenges of the centralized web in terms of government-enforced internet shutdowns in repressive states, takedown requests and the forced removal of content. Also, when you put content on YouTube, you’re putting it in a very vulnerable spot anyway. A couple of years ago, the Syrian human rights group Mnemonic saw hundreds of thousands of videos of the Syrian conflict disappear overnight, because of a change in YouTube’s moderation algorithm. Now, you have a real dependence on commercial platforms, for which human rights issues are not a primary business concern. Their moderation decisions really affect the ability of human rights defenders to leverage and control the footage they shoot.

The decentralized web better supports that control. It’s decentralized, it’s robust, it’s verifiable, and it’s less subject to decisions made by social media giants. So, we’re trying to make sure that people in that community are paying attention to the needs of people who are not in Silicon Valley and are thinking about these global issues and grappling with some of the tensions we see. For example, having immutable records is a powerful way to resist censorship and prove the origins of media, but sometimes people do need to delete media that drives hate or violence or compromises privacy.
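One building block behind that verifiability is content addressing, used by systems in the IPFS/Filecoin family: a file is referred to by a fingerprint derived from its bytes, so anyone holding a copy can check it against the reference. The sketch below shows only the core idea; real networks wrap this in a richer CID/multihash format rather than the bare SHA-256 digest used here for illustration.

```python
# Minimal sketch of content addressing: identify a file by a hash of its bytes.
# Decentralized storage networks such as IPFS/Filecoin use a CID/multihash
# format on top of this idea; the bare hex digest here is a simplification.
import hashlib


def content_address(data: bytes) -> str:
    """An identifier derived purely from the content itself."""
    return hashlib.sha256(data).hexdigest()


def verify_copy(data: bytes, address: str) -> bool:
    """A retrieved copy is authentic only if it re-hashes to the same address."""
    return content_address(data) == address


original = b"footage bytes ..."
addr = content_address(original)
print(verify_copy(original, addr))          # True: the copy is unchanged
print(verify_copy(original + b"x", addr))   # False: any edit changes the address
```

Because the address is derived from the content rather than from where it is hosted, a takedown on one server does not invalidate copies held elsewhere, which is the resilience property discussed above.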

With the support of FFDW, we’re exploring how the decentralized web can power our work in supporting activists globally, preserving videos of human rights abuses and sharing best practices for archiving evidence. – Rappler.com

This article has been republished from Coda Story with permission.
