Facebook’s highlights and lowlights of 2017

Gelo Gonzales, Anne Mari Ronquillo


Facebook continues to grow in 2017. But with the growth comes mounting pressure to stymie fake news and to curb its unhealthy dominance of the digital ad industry

MANILA, Philippines – Of all the tech giants in the world, few have been under the spotlight more than Facebook has been – and continues to be – this year. They’ve reached milestones, hitting 2 billion users in 2017 and recording growing revenues in almost every quarter.

But with the continued growth come pitfalls and growing accountability. With its reach, the platform’s ability to influence society is massive. With every little change to its fabled algorithm, Facebook changes what we are able to receive, see, read, and hear on the platform. That’s 2 billion people – more than a quarter of the world’s population – at the mercy of these tweaks.

Highlighting the platform’s power to influence society were the 2016 US presidential elections, where Facebook admitted that 126 million users were exposed to Russian propaganda. Another 20 million were exposed to similar material on Instagram, which is also Facebook-owned. These materials are said to have influenced, to an extent, the US elections, which Trump won.

Along with these state-sponsored information operations, other problems such as the broadcast of murders and suicides on Facebook Live, revenge porn, human trafficking, extremist content, fake news, trolling, and social media weaponization in the Philippines and in other parts of the world remain largely unsolved on the platform.

The growth of the platform has also led to a duopoly between Facebook and Google, in which the staggering majority of digital ad spending goes to the two – a trend that has yet to be curbed, essentially making them the biggest media companies in the world and threatening the existence of the world’s journalistic institutions.

Just earlier this month, European news agencies including the Agence France-Presse called attention to this, demanding that net giants like Facebook share a part of ad revenues with the news outfits that dig up and fact-check the information.

Below, in our compilation of highlights and lowlights for Facebook, we review the controversies they faced in 2017, the role they inadvertently played in the weakening of democracies, and some of the solutions the company has pushed to solve the platform’s problems.

Trump’s immigration policy

Following US President Trump’s signing of an executive order to restrict refugee entries early this year, Facebook CEO Mark Zuckerberg posted his concerns on his page. He went on to remind fellow Americans that the United States is a nation of immigrants, and that to close its doors on refugees would show a lack of courage and compassion.

TRUMP. Facebook this year acknowledged that their platform had been an influential instrument in affecting election outcomes such as the US elections in 2016 won by Donald Trump. File photo by Drew Angerer/AFP

Donald Trump would later tweet that Facebook has always been anti-Trump, implying that it’s also a source of fake news. Not shying away from controversy, Zuckerberg penned another widely-shared Facebook post in September about how both Trump himself and liberals have condemned Facebook over content they don’t agree with.

Zuckerberg then also recognized the critical role that Facebook played in the 2016 elections, but only after the social network was scrutinized for running Russian-sponsored ads during the campaign period. Since then, Facebook has appeared to be more vigilant whenever a national election comes around, such as when it deleted tens of thousands of fake accounts in the UK right around the country’s elections.

The fight against fake news

In late 2016, reports that fake news had spread faster on Facebook than truthful material began to come out. At the start, Facebook and its founder Mark Zuckerberg were dismissive of the claims, calling the notion “crazy.” They denied that their platform and the proliferation of fake news on it had an influence on that year’s US elections. It was almost a year later when Zuckerberg reversed his statement.

In a Facebook post, he said, “Calling that crazy was dismissive and I regret it. This is too important an issue to be dismissive.”

Between the original statement and the reversal, Facebook attempted a few measures to curb fake news. They launched the Journalism Project in 2017, a collaborative effort with journalists and fact-checking organizations to prevent the spread of fake news on the platform. As part of the project, users could flag suspected fake news and see warnings next to fake posts, while ad revenue was cut off from fake news sites masquerading as real news organizations.

Germany was one of the first countries where these tools were tested, along with the US, where the disputed tag for fake stories was rolled out. The disputed tag was then removed in December 2017, with Facebook saying that it had the adverse effect of reinforcing entrenched beliefs instead of reversing them. Since then, Facebook has instead begun rolling out a list of suggested fact-checked stories related to the fake post.

This highlights the notion that Facebook’s battle against fake news is partly trial-and-error. In several online posts, they’ve mentioned a reduction in the number of fake news posts that people get exposed to since implementing these measures. In 2018, as these measures mature and they gather more data, the hope is that we’ll find out more about what works and what doesn’t.

Earlier in April, Facebook, along with other tech companies, set up a $14-million fund for the News Integrity Initiative, which aimed to improve news literacy. In the Philippines, the Center for Media Freedom and Responsibility and the National Union of Journalists of the Philippines also contributed to the effort against fake news, releasing in June the FakeBlok plug-in, a fake news blocker – one of the many third-party efforts around the world to help solve the fake news plague.

In August, pages that habitually linked to fake news sites were banned from advertising on Facebook. In October, they tested an information button (“i”) that, when clicked, shows more information about the website linked in a Facebook post.

Like the European media demanding that tech giants share ad revenues, US media also spoke out against the Facebook-Google duopoly. News organizations under the US’ News Media Alliance warned that the two tech giants’ dominance in digital advertising may “distort the flow of economic value derived from good reporting.”

In November, Facebook, along with Google, other tech firms, and global news organizations, also put up the “Trust Project,” an initiative aimed at identifying “trustworthy” news sources, in the latest effort to combat online misinformation. Microsoft and Twitter are also participating in the “Trust Project,” together with 75 news organizations, to tag news stories that meet standards for ethics and transparency.

But despite these numerous efforts to combat fake news, the social media platform continued to struggle with the rapid spread of misinformation. In October, false news reports about the Las Vegas shooting made it onto Facebook, where they were shared and spread within minutes. Facebook removed the links shortly after. These fake reports, originating from 4chan, even topped Google searches and were also heavily linked on Twitter.

This has left the news and journalism industries worried, as more people all over the world look to Facebook and other platforms for their news fix. Guardian Media Group (publisher of The Guardian) CEO David Pemsel recently argued that publishing outfits like theirs are unable to thrive on platforms like Facebook, which, he observes, put a premium on viral content as opposed to content that upholds journalism.

Indeed, it’s this inherent design feature of Facebook – rewarding virality and “share-ability” above all else – that allows highly emotional and controversial-sounding posts to spread faster than truly credible content.

 

Facebook and the Russian disinformation campaign

Facebook’s potential to be weaponized and used in spreading propaganda rears its ugly head again this year. 

Trump’s presidential victory continued to divide America after reports of disinformation campaigns led by Russia started to surface. Troll farms originating from Russia are said to have sown discord among American voters by sharing articles and Christian memes, and by making damning comments about the Democratic Party. Paid ads were also served, and Facebook revealed in November that around 126 million Americans had been exposed to this Russian-sponsored content.

Meanwhile, similar Russian ads reached around 20 million users on Facebook-owned Instagram.

Facebook has always maintained a neutral political stance in the United States despite being criticized for favoring American Democrats over conservatives. Both parties were able to run their advertisements on Facebook during the campaign period, but the glaring interference from Russia, enabled by a borderless internet, is proof that influence can come from anybody, anywhere.

REPRESENTATIVES. Facebook General Counsel Colin Stretch, Twitter Acting General Counsel Sean Edgett, and Google Law Enforcement and Information Security Director Richard Salgado are sworn in before the Senate Judiciary Committee's Crime and Terrorism Subcommittee in the Hart Senate Office Building on Capitol Hill October 31, 2017 in Washington, DC. File photo by Chip Somodevilla/AFP

And ironically, a platform that was once touted as something that brought people together is now causing divisions in society – some of which are caused by skillful information operations, the likes of which we’ve also already witnessed here in the Philippines after Duterte’s election in 2016. (READ: Propaganda war: Weaponizing the internet)

The ease with which the platform can be gamed to manipulate what information spreads and what doesn’t continues to make Facebook a desirable tool for parties attempting to control people’s opinions and mindsets – a sign of a weak democracy.

Algorithmic tweaks

Facebook’s role in news sourcing and delivery continues to grow globally, but it is foremost a tech company that needs to look inward for solutions. One of the ways Facebook worked to deliver better and more accurate news was by improving its Trending page. Early this year, Facebook users in the same region started seeing the same topics, effectively replacing the previous “personalized” content delivery that was based on individual interests.

The social media platform employed algorithms that started looking more into publisher activity, such as how many of them are covering the same events and topics, as opposed to relying on the clicks and shares for individual articles.

Facebook’s Trending page saw continuous improvements throughout the year. The social media giant was again criticized for biased selection of news sources, but it stated that Trending topics are not drawn from a predetermined list of sources, and are in fact selected based on a combination of user engagement and citation by other articles.

Video clickbait is another ongoing problem on social media. In August, Facebook adopted a policy of demoting content that uses fake play buttons.

Content policing: Revenge porn, suicide, and violence

In May, Facebook’s moderation policies were exposed by The Guardian, which shocked more than a few people with how some issues were treated. For instance, Facebook once advised moderators to ignore images mocking people with disabilities, images of physical bullying, or those that depict non-sexual physical abuse of children under 7 even if the images come with negative comments.

The reasoning is that others may “engage with or challenge them” in the posts. But if there is a sadistic or celebratory element to such posts, that is when Facebook deletes or takes action on them.

While moderators are now asked to take down all videos showing suicide, Facebook’s former policy was to keep such videos because it did not want to “censor or punish people in distress who are attempting suicide,” removing the footage only once any and all opportunities to help the person had passed. Now, all suicide videos are deleted, even those shared “by someone other than the victim to raise awareness.”

POLICY PAINS. Facebook's moderation policies on sensitive issues such as online abuse and suicide stirred controversy in May 2017

Animal cruelty was also covered by the policies, which said that animal abuse content may stay on the platform for awareness reasons, but that Facebook will remove “content that celebrates cruelty against animals.”

On the flip side, Facebook made some attempts at fostering a safe online community by taking stands and action against revenge porn, hate speech, violence, and terrorism.

In April, Facebook started restricting the sharing of reported content that includes revenge porn. Content reported under this category can no longer be shared on Facebook, Messenger, or Instagram. In May, Facebook announced that it would add 3,000 staff members to weed out violent content across the platform.

Videos of users performing self-harm have also circulated on the platform. Zuckerberg stated the importance of being able to act quickly when it comes to taking down sensitive content, as well as responding to anything that may indicate that somebody needs help.

The same measures are being taken to eradicate hate speech. As online extremism and radicalization grew in the EU, Germany proposed to fine social networks that fail to remove hate speech. Facebook has exerted continuous effort, ensuring that reports of illegal hate speech are acted on within 24 hours.

Facebook’s head of global policy management, Monika Bickert, later highlighted the platform’s use of artificial intelligence to automatically remove content similar to material previously taken down on the grounds of extremism. The company’s official stance is to be a hostile place for extremists, using itself and its technology to become part of the solution.

In India, Facebook introduced a feature that helps keep women safer online. Profile pictures with a blue border and a small shield inscription cannot be downloaded, tagged, shared, or captured in a screenshot. This has sometimes been reported as an anti-photo-theft feature, but its origins point to Facebook’s finding that women in India don’t feel safe sharing their photos online.

Facebook’s dollar figures for 2017

Facebook enjoyed continued growth in revenue and profit in the first three quarters of 2017. Q1 2017 saw revenue grow about 49% to $8.03B from the previous year’s $5.38B. Q2 saw a 45% revenue increase to $9.32B from the previous year’s $6.44B. Q3 showed the most promise so far despite the advent of the Russian disinformation controversy.

From last year’s $7.01B in Q3, revenue was up to $10.3B in 2017, with a whopping $4.7B in profit, despite the company being in hot water at the Senate hearings.
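For reference, the year-over-year growth rates above follow directly from the quarterly figures cited here; a quick check using the Q1 numbers:

\[
\frac{8.03 - 5.38}{5.38} \approx 0.49
\]

or roughly 49% growth. The same calculation on the Q2 and Q3 figures gives about 45% and 47%, respectively.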

Much of the money comes from serving digital ads – an industry dominated by Facebook and Google with around 63% of digital ad spend in the US going to these two tech giants in 2017. As mentioned above, they’ve formed a duopoly that’s making it hard for other media and journalistic entities to be profitable.

With Google and Facebook becoming thriving sources of information, it becomes more important in 2018 for stakeholders to make sure that these platforms offer truthful information, identify fake news, and protect themselves from being manipulated by information agents. Otherwise, they become instruments of propaganda that can further weaken democracies around the world.

Facebook has a total of 2.07 billion monthly active users as of 2017 Q3.

See Facebook’s own 2017 Year In Review here. – Rappler.com



Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.