
Can we use AI to enrich democratic consultations?

Gemma B. Mendoza



Rappler designs an experiment that leverages the capacity of large language models to synthesize inputs from text and audio to generate policy ideas from FGDs

Can artificial intelligence (AI) systems be used to enrich democratic consultation processes around important policy matters, such as the question on how AI should be governed?

Rappler wanted to give it a try, and so decided to join OpenAI’s challenge to submit “experiments in setting up a democratic process for deciding on rules AI systems should follow within the bounds of law.” OpenAI is the American artificial intelligence research laboratory behind the large language model-based chatbot ChatGPT.

In July, OpenAI approved the proposals of Rappler and nine other organizations, selected from hundreds of applicants worldwide.

Rappler’s proposal involved developing aiDialogue, a prototype AI-moderated chat room that gathers insights on a vital question: How should AI systems behave?

In aiDialogue, Rappler prompted ChatGPT to assume the persona of “Rai,” an FGD moderator. Rai gathered inputs from users, synthesized the discussions, and probed further by asking follow-up questions. Based on the inputs from session participants, it then suggested rules and policy ideas that should guide the behavior of AI systems. 
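Rappler has not published the actual Rai prompt or code, but the general pattern is straightforward to illustrate. Below is a minimal, hypothetical sketch of an AI-moderated FGD loop using the OpenAI Python SDK; the persona prompt, model name, and conversation flow are illustrative assumptions, not Rappler’s implementation.

```python
# Hypothetical sketch of an AI-moderated FGD chat loop, using the OpenAI Python SDK.
# The persona prompt, model choice, and flow are illustrative assumptions;
# Rappler's actual aiDialogue implementation has not been published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODERATOR_PROMPT = (
    "You are Rai, a focus group discussion (FGD) moderator. "
    "Gather each participant's views on how AI systems should behave, "
    "ask follow-up questions to probe further, and periodically "
    "synthesize the discussion. When asked, suggest concrete rules and "
    "policy ideas grounded in what participants actually said."
)

history = [{"role": "system", "content": MODERATOR_PROMPT}]

def moderator_turn(participant_message: str) -> str:
    """Send a participant's input to the model and return Rai's reply."""
    history.append({"role": "user", "content": participant_message})
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; not confirmed by the article
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(moderator_turn("I worry that AI chatbots could spread health disinformation."))
```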

POLICY CHECK. A screenshot of participant responses and policies generated from an online consultation using aiDialogue

In designing the proposed process, we considered hard lessons learned from the social media wasteland – the result of tech platforms’ failure to act on systemic issues.

On many occasions, lack of transparency – along with the failure of tech platforms to invest in technologies and staff that could grasp the nuances of local concerns in the countries where they operated – enabled widespread disinformation, voter manipulation, state-sponsored hate, and even genocide.

The impact has been most severe on vulnerable democracies in the Global South, where authoritarian leaders used cyber armies and propaganda networks to systematically undermine journalists, activists, and other independent voices.

Transparency is an area where most generative AI systems still fall short, as indicated by a recent Stanford study. Incidentally, this is one of the key concerns raised by participants in our consultation process.

Rappler’s project aimed to ensure the following:

  • A process that includes and considers the experiences of the Global South in generating global policies on AI. 
  • A process that’s inclusive, representing diverse viewpoints and grassroots concerns within the global context. 

Grassroots-level insights

Apart from developing aiDialogue, Rappler also designed a consultation process that allowed participants to share their insights verbally through human-moderated focus group discussions (FGDs). 

We chose to combine human-moderated and AI-moderated FGDs, as well as on-the-ground and online processes, to give participants options on the modes of communication they are most comfortable with. This helped ensure that those who have been left behind technologically could still participate in the consultation process.

We also wanted to know whether participants would articulate their views differently when asked to write down their thoughts versus verbalizing them. This helped us gauge participants’ comfort levels – whether chatting with an AI moderator in a chat room or sharing insights with a human moderator.

We decided to use small private groups with shared demographics to generate the initial policy ideas. A smaller sample size makes it easier to trace how the policy ideas are linked to the actual participant inputs. This could help build confidence in the system and credibility for the overall process. 

Our hypothesis

This was our hypothesis: a one-size-fits-all consultation process is insufficient, considering the magnitude of the potential disruptive impact of AI technologies on humanity. 

The process we designed primarily leverages the capacity of large language models to generate both qualitative and quantitative outputs from various types of unstructured inputs (text and audio) from participants. 

In effect, the consultation process combines the quantitative nature of survey research with the qualitative depth of insights from focus groups. 
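As a rough illustration of that pipeline, the sketch below first transcribes an FGD recording into text, then asks a model to pull out the policy ideas raised along with rough counts of support. The file names, models, and prompt wording here are assumptions made for illustration; the article does not describe Rappler’s actual pipeline at this level of detail.

```python
# Hypothetical sketch of the transcribe-then-synthesize pipeline described above.
# File paths, model choices, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text: turn an FGD audio recording into a transcript.
with open("fgd_session.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    ).text

# 2. Gather the written inputs from the AI-moderated chat room.
with open("aidialogue_chat_log.txt", encoding="utf-8") as f:
    chat_log = f.read()

# 3. Ask the model for both qualitative and quantitative outputs:
#    the distinct policy ideas raised, plus a rough count of support for each.
synthesis = client.chat.completions.create(
    model="gpt-4",  # assumed model; not confirmed by the article
    messages=[{
        "role": "user",
        "content": (
            "From the FGD transcript and chat log below, list the distinct "
            "policy ideas participants raised about how AI systems should "
            "behave, and estimate how many participants supported each.\n\n"
            f"TRANSCRIPT:\n{transcript}\n\nCHAT LOG:\n{chat_log}"
        ),
    }],
)
print(synthesis.choices[0].message.content)
```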

As of October 19, the team has already conducted 15 consultations on aiDialogue, including four combined human and AI-moderated sessions. 

Below are the initial conclusions we drew from this experiment:

  • While participants recognized that the AI-moderated FGD had more potential to scale, initial feedback showed that more participants still found the human-moderated consultations more engaging, meaningful, and trustworthy. They also made participants feel heard.
  • The 15 sessions generated a total of 95 initial policy ideas. This shows that it’s possible to leverage the capacity of large language models to process and synthesize inputs in audio and text formats to capture views from diverse stakeholders on a particular policy issue. 
  • But large language models have limitations, especially when drawing insights from audio inputs of participants who are non-native English speakers. Some of the transcription errors we found significantly altered the meaning of participant views. 
  • These models also have limitations in generating enforceable constitutional policy ideas. This makes it necessary to bring human experts into the loop to provide more nuance and detail in crafting enforceable policies.

Finally, while online consultation processes are scalable, it is important to recognize lessons learned from how online mobs and disinformation networks successfully hacked online civic spaces – and consequently democracies around the world – by manipulating public opinion.

In such situations, it is possible that even surveys may in fact be merely measuring the impact of systemic manipulation rather than serving as genuine mechanisms for gathering democratic inputs. 

The attached report, which contains preliminary findings, explains how the entire experiment was executed. 

Rappler intends to conduct more on-the-ground consultations in the coming weeks, and will update the report once all inputs are in.

Results from this experiment will be submitted to OpenAI, with the goal of helping the company make informed deliberations on how it should get inputs from the public about how its AI products – ChatGPT, DALL·E, and others – should behave.

The hope is for this process to serve as a milestone for an ongoing dialogue with platforms on tech governance. We will keep our readers informed of any progress in relation to this initiative. – Rappler.com


Gemma B. Mendoza

Gemma Mendoza leads Rappler’s multi-pronged efforts to address disinformation in digital media, harnessing big data research, fact-checking, and community workshops. As one of Rappler's pioneers who launched its Facebook page Move.PH in 2011, Gemma initiated strategic projects that connect journalism and data with citizen action, particularly in relation to elections, disasters, and other social concerns.