Social Good Summit 2023

Institutions encouraged to harness AI but consider ethics, speak out on harms

Gaby Baizas, Lorenz Pasion

Alex Pama, former executive director of the National Disaster Risk Reduction and Management Council; Philippine Coast Guard spokesperson Commodore Jay Tarriela; Rene Almendras of Ayala Corporation; and Ingrid Rose Ann Beroña, chief risk officer of GCash, at a panel discussion at the Social Good Summit 2023 in Taguig City on September 16.

Angie de Silva/Rappler

Rene Almendras of Ayala Corporation calls on organizations to stick to their core principles and use technology ‘for the right reasons’

MANILA, Philippines – Brands, communities, and institutions are hopeful about how artificial intelligence can boost their productivity, but have called on organizations to use it ethically.

At Rappler’s Social Good Summit on Saturday, September 16, leaders from different industries and sectors discussed how their respective fields could make the most out of technology and the rise of AI.

Rene Almendras, senior managing director of Ayala Corporation, stressed that organizations should have a good understanding of their “true North” when using AI, so that leaders and employees don’t stray from their core principles and use technology “for the right reasons.”

“We go back to the values of the institution, [the values that] the institution stands up for…. Every institution needs to draw that line. ‘We have a written policy on what to do, that will tell us [to] follow ethics or the code of conduct on social media use by the company,’” Almendras said.

Almendras also shared that Ayala Corporation had been attacked online in the past, and said that the biggest challenge of new forms of technology is the possibility that they could hamper “the ability to discern what is real, what is true, what is proper and what is not.”

The emergence of AI has sparked fears that it would replace jobs in different sectors such as the business process outsourcing and entertainment industries, and that it would fuel new forms of disinformation.

It also brought about discussions on the need for safety evaluations and government regulation for such tools.

MUST READ: Gov’t must mandate access to Big Tech data to make platforms accountable

Alex Pama, former executive director of the National Disaster Risk Reduction and Management Council (NDRRMC), said that while science and social media have improved the agency’s approach to disaster risk assessment, navigating technology requires a “whole-of-society” approach.

“Not one sector will be able to have the answer. Nobody has a silver bullet to say, ‘Okay, this is what we’re going to do to optimize technology and sciences,’” Pama said.

“There should be a concerted effort by everybody now…to speak about the things that need to be improved on, developed further, and highlight those that are destructive in nature,” he added.

Almendras also called on tech companies to defend corporations, especially those based in countries like the Philippines, against such threats.

“My appeal to Big Tech is [to] make it easier for us to correct. You cannot imagine how difficult it is to request Facebook to close a fake account. You cannot imagine the amount of time that we have to go through and the kind of documentation just to counter a fake accusation…. It’s just so frustrating.

“And it’s very hard to explain to somebody in New York, or in San Francisco, the reality of what is happening in Manila,” he said.

How do different sectors use AI and technology for good?

For Beroña, securing consumers’ accounts is the top priority at GCash. In line with the company’s “tech for good” vision, she said GCash uses AI to understand consumer data and to go after perpetrators who abuse the platform for scams and other cybercrimes.

Beroña also said GCash was pushing for the passage of the proposed Anti-Money Mule Bill, filed by Senator Migz Zubiri in March this year. The bill aims to penalize perpetrators who funnel money from victims’ accounts and wallets while hiding behind fake identities used to open financial accounts.

Pama explained how the NDRRMC uses technology to assess risks, identify hazards, rescue people, and provide urgent medical response during a disaster. 

MUST READ: Join this conversation with DRRM experts: How do we achieve #ZeroCasualty?

“Technology plays [a] very significant part in this. From remote sensing, to determin[ing]…not only the casualties but the loss [and] damage as far as livelihood is concerned,” Pama said.

But Pama also underscored the risks that come with the emergence of AI, explaining that it could open new avenues for fabricating information that determines life or death during disasters. He pointed out the need for a concerted effort to provide the “very good and curated information” vital to disaster resilience.

“This time around, the bigger challenge would be disaster resilience…. All information, facts, and data that need to be curated now will be dependent on the accuracy and the truthfulness. Whose truth is it to use insofar as coming up with the right assessment as far as risk is concerned?

“You’re not just talking response, you’re not just talking preparedness now, but the whole context of disaster resilience,” Pama said.

Philippine Coast Guard (PCG) spokesperson Jay Tarriela said its members use high-definition cameras, satellite imagery, radars, drones, and even Starlink internet to document maritime aggression in the West Philippine Sea (WPS).

This allows them to put on record incidents like when China aimed a laser at a PCG ship in February this year, and when a Chinese Coast Guard vessel used a water cannon against small Philippine vessels in August.

“It is crucial to emphasize that our utilization of technology empowers the Filipino people by providing them accurate information and enabling them to combat fake news and false narratives that plague our nation in recent years,” Tarriela said.

The PCG spokesperson also said they used underwater cameras to document not just China’s aggression in the disputed waters, but also the damage its land reclamation has caused to the marine environment.

MUST READ: Experts advocate collaboration, proper understanding of AI to address social impacts

But in terms of using generative AI, Tarriela said the PCG avoids such tools in its day-to-day work due to confidentiality concerns. Aside from the fact that their work requires “authentic, not AI-generated” images and videos for reports, personnel are also not allowed to upload data to applications that use AI.

“As of this time, we are not using AI… We are not allowing our research analysts to upload some of the reports to that kind of application [because] we still don’t know who runs that particular application.

“As much as possible, we just rely on factual narratives, photos, and videos we got from our patrols,” Tarriela said. – Rappler.com


Gaby Baizas

Gaby Baizas is a digital forensics researcher at Rappler. She first joined Rappler straight out of college as a digital communications specialist. She hopes people learn to read past headlines the same way she hopes punk never dies.

Lorenz Pasion

Lorenz Pasion is a researcher at Rappler and a member of its fact-check team that debunks false claims that spread on social media.