Facebook rollout of political ad transparency report in PH uncertain

MANILA, Philippines (UPDATED) – It is uncertain whether Facebook's political ad transparency report will be implemented in the Philippines. 

Facebook said in an email to Rappler that "there is no information yet on the roll out of the political ad transparency report in the Philippines at this time."

Earlier, at a Manila event held Wednesday, November 21, a Facebook spokesperson said they “believe it is being rolled out,” but stopped short of confirming it, saying they would check and confirm.

The political ad transparency report – called the "Ad Library Report" – is currently active in the United States, the United Kingdom, and Brazil. In those countries, Facebook has a public page showing which organization paid for which political ad, and how many ads a political Facebook page has run.

In lieu of the political ads page, however, Facebook says there is already a way to check what ads a Page is running, also launched this year. "You can now see the ads a Page is running across Facebook, Instagram, Messenger and our partner network, even if those ads aren’t shown to you. Just log into Facebook, visit any Page and select 'Info and Ads,'" Facebook explained. 

With elections happening in the Philippines in 2019, a page like the political ad transparency report would help Filipino voters see where some of the campaign funds are coming from, which organization is backing whom, and who is spending the most on Facebook ads.

Facebook updated their policy on political ad transparency in response to the United States' Honest Ads Act introduced in October 2017. 

Facebook has a searchable database for these political ads. Here's a screenshot:

Screenshot from Facebook

On the page, Facebook explains: "Facebook's Ad Library is a searchable database. It includes ads related to politics and issues of national importance that have run on Facebook or Instagram. This report is a weekly summary of the library and includes data for ads that have been viewed by people in the US for the time period selected above. Making this report available to the public is part of Facebook's efforts to increase transparency in advertising."


Rolling out a political ad transparency page in the Philippines would also line up well with the company's proclaimed desire to be more transparent. At the Manila event, they noted that detailed versions of their “Community Standards” guidelines were made public in April 2018 as well. 

Currently, Facebook's transparency can be selective. At the event, when asked why they didn’t share the entire list of violating pages taken down in the Philippines back in October, they said that they usually don’t share the full list of pages during takedowns, instead sharing a few examples to illustrate the kind of behavior they don’t allow, and what bad actors aren’t allowed to do. 

As they’ve stated before, they said that they looked at “inauthentic behavior” rather than the actual content or message of the pages. Notably, the content on the pages they did reveal was pro-Duterte.

Moderating content

Facebook also reiterated their efforts to improve their policies on moderating toxic content such as hate speech, terrorism, and sexual violence and exploitation.  

On their frontlines: 20,000 people making up their content policy team, composed of experts on the various problem areas generated by the content on the platform. Of these, around 7,400 are the actual content reviewers, who cover every time zone and speak over 50 languages, including Filipino. They aim to resolve reports within 24 hours, a process assisted greatly by artificial intelligence.

COMMUNITY STANDARDS. Sheen Handoo, Facebook's public policy manager, shows the many areas for which content moderation is done. Photo by Gelo Gonzales/Rappler


Asked whether 7,400 may be a disproportionately low number for a platform of over 2 billion users, Facebook answered that artificial intelligence (AI) greatly assists them in handling content issues. AI, the company said, also helped them remove over 700 million inauthentic accounts from the platform from July to September 2018, and hundreds of millions more earlier in the year.  

The inauthentic accounts removed were accounts that were spotted and removed quickly after being created, and are not part of its user base of 2.2 billion, Facebook asserted.

Facebook also explained that it uses AI to catch the re-sharing of sexual images relating to revenge porn across the platform: on public feeds, secret groups, and even Facebook Messenger. 

They also said that in Q3 2018, they took action on “10 times the amount” of content compared to Q4 2017, emphasizing their increased efforts.  

Still, it’s an imperfect system. Just recently, a 16-year-old girl in South Sudan was auctioned off on Facebook to a bidder who paid her father 500 cows, 3 cars, and $10,000. The auction – a violation of Facebook’s policies, the company has said – stayed on the platform for 15 days before being removed.

The company said they are "getting better at proactively identifying violating content before anyone reports it, especially for hate speech and violence and graphic content." Facing an endless litany of controversies in 2018, the company would do well to police content incidents better in 2019.  – Rappler.com

Gelo Gonzales

Gelo Gonzales is Rappler’s technology editor. He covers consumer electronics, social media, emerging tech, and video games.
