Farieha Aziz saw the writing on the wall.
It was 2016, and lawmakers in Pakistan had passed a sweeping and controversial internet law granting the government the power to censor online content, including criminalizing hate speech and defamation. The bill, part of a national plan to combat terrorism, came in the wake of a 2014 terrorist attack on a school in Peshawar that killed more than 130 children.
Authorities said its implementation would help defend the country against threats to national security and cybercrime; critics argued that its broadly defined provisions could be used to muzzle online expression and content critical of the government. One prominent Pakistani legal expert, who has helped a number of other countries draft internet laws, said it was “the worst piece of cybercrime legislation in the world.”
Aziz, co-founder of the Pakistani digital rights group Bolo Bhi, campaigned against the law as it was being debated, speaking out against its potential impact on freedom of speech. Five years after the passing of the Prevention of Electronic Crimes Act (PECA), Aziz said it has been used against journalists and dissidents, as well as women who have come forward with allegations of sexual harassment online, but then been charged with defamation.
The outcome, Aziz said, “just confirms and validates the apprehensions we raised when the law was being introduced.” The US State Department echoed critics’ concerns, concluding in a 2021 human rights report on Pakistan that it “gives the government sweeping powers to censor content on the internet, which authorities used as a tool for the continued clampdown on civil society.” Between 2016 and 2020, Pakistan’s score in the global internet freedom index fell from 31 to 26 out of 100, according to the US-based nonprofit Freedom House.
Pakistan’s assault on digital freedom is once again on Aziz’s mind, as US politicians threaten to change a landmark internet law in ways that could affect the bounds of online expression well outside their country’s borders. The 1996 law, Section 230 of the Communications Decency Act, shields websites from legal liability for the content users post online. The 26-word mandate, signed by President Bill Clinton and widely referred to as the “digital Magna Carta,” has allowed tech giants like Facebook, YouTube, and Twitter to flourish without worrying about legal liability.
Yet Section 230 has come under fire from American lawmakers on both sides of the aisle as a bipartisan appetite to regulate large technology companies has grown, especially following the January 6 insurrection at the US Capitol and the deplatforming of former President Donald Trump from Facebook and Twitter. Both events have ratcheted up calls on the right and the left to hold social media platforms accountable for their content moderation decisions, with Republicans principally concerned about biases against conservative voices on social media, and Democrats focused on the role of platforms in spreading disinformation and extremism.
Section 230 has become a flashpoint in American politics and the subject of numerous attempted legislative reforms in the past two years. Since 2020, roughly two dozen bills have been introduced, by Congressional Democrats and Republicans alike, to slash or chip away at it. Some would repeal the law altogether; others would strip away platforms’ liability shield for posted content involving civil rights, harassment, child sexual abuse, and more.
While the proposals vary and reflect a broad range of concerns, from censorship to cyberstalking, it’s unclear which, if any, will pass. But what is obvious is that there is significant political will to eliminate or reconfigure Section 230. In a January 2020 interview, Joe Biden, now President of the United States, said the law should be “immediately” revoked, while former President Donald Trump repeatedly called for its repeal during his time in office.
Yet, while much of the debate in the US has focused on the domestic impact of a post-Section 230 internet, lawyers, human rights campaigners, and digital rights activists around the world predict that eliminating the law could have global consequences. Among their fears: scrapping or significantly amending the law could set a precedent for countries from Asia to Latin America to introduce new policies that place severe restrictions on digital speech, or that make platforms liable for the material posted on their sites, pushing them to aggressively remove content that could expose them to lawsuits.
Eric Goldman is a professor focusing on internet law at California’s Santa Clara University School of Law and has written extensively about Section 230. As he put it, “To the extent that the US stops trying to fight for free speech online, every other country in the world is going to fall even further on the censorship scale.”
Farieha Aziz expects a US decision on Section 230 to influence the conversation on tech regulation in Pakistan, where the government has repeatedly tightened control of the internet in recent years. In addition to the cybercrime law, authorities have banned dating applications over concerns about “immoral” content and unveiled regulations imposing penalties, including hefty fines and potential bans, on social media platforms that violate government requests to take down content.
“The first thing that happens over here is the government tends to borrow from the narrative of, ‘Oh, but this happens elsewhere as well, this happens in democracies,’” she said. If the US changes Section 230, Aziz predicts that Pakistani authorities are “immediately going to draw from it and use this as a justification that, ‘Look, even in the US they believe the companies need to be held liable.’”
So, how could scrapping a US internet law affect how people around the world engage with digital platforms?
Section 230 protects websites and platforms like Facebook, Amazon, Twitter, and YouTube from lawsuits over the content users create, like comments on online news stories, restaurant reviews on Yelp, videos on YouTube, social media posts and more.
One of Section 230’s foremost experts, Jeff Kosseff, an assistant professor of cybersecurity law in the United States Naval Academy’s Cyber Science Department, describes it as the “26 words that created the internet.” The Electronic Frontier Foundation, a US-based nonprofit that advocates for digital privacy and free speech, refers to it as “the most important law protecting internet speech.”
For critics, however — including US lawmakers — Section 230 has given social media giants too much legal protection and shielded them from accountability for their approach to content moderation. Democrats who have spoken out against it say that the legal shield removes the incentive for platforms to moderate harmful content, including disinformation, hate speech, conspiracy theories, and extremism. Republican opponents, meanwhile, argue that it provides cover for social media companies to censor voices on the right.
Though the two sides have both embraced reform, they have different objectives, underscoring how legislation scaling back platform immunity can be used to advance vastly different political agendas. Democrats want companies to more aggressively moderate harmful content, such as hate speech and misinformation, while Republicans want to limit the abilities of the same social media companies to moderate postings on their platforms.
While most countries have yet to pass local liability legislation, Section 230 has acted as the “law across the world,” according to Anupam Chander, a law professor at Georgetown University who specializes in international tech regulation. The European Union has taken a similar, but less sweeping approach, passing legislation in 2000 shielding platforms from liability if they are unaware of illegal content or quickly comply with requests to take it down.
Chander says a change to Section 230 will have “ripple effects across the world.” It could license other countries to introduce policies making platforms liable for content hosted on their sites, forcing them to more aggressively moderate and remove material that could expose them to litigation or violate local laws.
“There’s a lot of speech that might lead to liability across the world. And I think that is a risk of 230 changes,” Chander said. “The precedent it will set for the most free-speech-promoting democracy in the world to hold platforms responsible for the millions of speech messages that they offer each day is really quite stunning.”
In Latin America, the US debate on Section 230 is already influencing conversations about platform liability. A recently proposed draft bill in Mexico would give the country’s telecommunications regulator, the IFT, the power to overrule social media networks in decisions about content moderation. It would also require platforms with more than one million users to get authorization from the IFT to operate in the country, and would impose fines of up to $4.4 million on digital platforms for non-compliance. Human Rights Watch said the legislation would “place the harshest restrictions on free speech that Mexico has seen in decades.”
Mexico’s draft bill came a month after Donald Trump’s suspension from Facebook and Twitter, a decision that infuriated President Andrés Manuel López Obrador. The Mexican leader, who had a warm relationship with the former US president, vowed to lead an international campaign against social media censorship.
Javier Pallero, an Argentina-based policy director for the digital rights group Access Now, draws a line connecting Mexico’s proposal, which was drafted by a member of López Obrador’s party, Trump’s social media suspension, and bipartisan support for Section 230 reform.
“If the country that used to champion this limitation of liability is now questioning it, then that generates the space for other countries and lawmakers to spark proposals of reform,” he said.
In Brazil, too, it appears that US wrangling over Section 230 has fast-tracked efforts at internet reform. At least four bills related to content moderation and platform liability have been proposed since the US Capitol riot, including two that explicitly reference Trump’s deplatforming from social media outlets.
Like Republicans in the US, conservative politicians in Brazil have set their sights on what they see as excessive censorship of right-leaning voices. One proposal would ban platforms like Facebook and Twitter from moderating content unless they received a court order; another would make platforms liable for financial damages when they censor or ban content that expresses a user’s opinion.
The bills — all introduced by Congressional members of the right-wing Partido Social Liberal, which was President Jair Bolsonaro’s party before he split from it in 2019 — mirror Republican lawmakers’ concerns in the US about suppression of right-wing voices on social networks.
Artur Pericles, head of research at InternetLab, a technology-focused research center in São Paulo, told me that he had noted similarities between the Brazilian platform liability bill for opinion-related content moderation and legislation introduced by US Senator Josh Hawley, a Republican from Missouri, in June 2019. Hawley’s bill — the Ending Support for Internet Censorship Act — would only extend Section 230 protections to companies that prove political neutrality by obtaining an immunity certification from the Federal Trade Commission.
“Both the left and right agree that platforms have too much power,” Pericles explained. “But then the left thinks that platforms should be more accountable to combat misinformation and hate speech. The right thinks that the platforms should have less power so they are not able to censor people on the right.”
Bolsonaro has also thrown his weight behind the push to regulate content moderation. In mid-May, the government published a draft executive order that would bar platforms from taking down content unless they have a court order, an option that could bypass Congressional efforts and become law with the stroke of Bolsonaro’s pen.
Though Pericles suggested the proposals would be unlikely to survive legal challenges, he said the executive order, in particular, has already had an impact on public discourse, highlighting platform liability and drawing attention to politicians’ efforts to limit content moderation. “Even if it doesn’t make it into law, it definitely has changed the conversation and has given it a new sense of urgency,” he said. “People on the right who support the president have been drawing on the conversation in the United States for a while now.” – Rappler.com
Illustration by Teona Tsintsadze