MANILA, Philippines – United States-based tech research firm Gartner found that seven out of 10 users believed the “greater integration” of generative AI in social media would harm user experience, fearing that the technology could aid further in the spread of false information.
The emerging technology, along with the spread of misinformation, toxic user bases, and the prevalence of bots, was also cited among the reasons why half of social media users might significantly limit their usage or abandon platforms altogether by 2025.
Similar research published in November 2023 by France’s Ipsos found that, globally, 74% of respondents agreed that generative AI would make it easier to generate “very realistic fake news stories and images.” The firm also found that 51% believed the technology would make misinformation and disinformation worse.
These perceptions of how AI will affect the information ecosystem, including social media, are fueling concerns and tempering excitement over the technology.
The Pew Research Center reported in August 2023 that 52% of the Americans it surveyed said they felt more concerned rather than excited about the increasing use of artificial intelligence. Only 10% said that they were more excited than concerned, while the remaining 36% had mixed feelings.
In less than a year, the percentage of Americans expressing more concern than excitement had grown by 14 points, from only 38% in December 2022. Sentiment leaned more heavily toward concern than excitement across all age groups surveyed.
Pew noted, “Some point to clear problems that have been identified with generative AI systems, which produce erroneous and unexplainable things and are already being used to foment misinformation and trick people.”
Old concerns, faster technology
Surveillance and data privacy – two things that legacy social media platforms have struggled with even before AI came under the tech limelight – are significant concerns as well. Fifty-three percent of users surveyed by Pew said that AI was doing more to hurt than help people keep their personal information private, given the unintended ways their data could be used.
Experts cited by Pew continued to express concerns over the harmful profit incentives in digital tools and systems, saying that these were “likely to lead to data collection aimed at controlling people rather than empowering them to act freely.” They also worried that ethical design would still be an afterthought, with digital systems continuing to be released before they were thoroughly tested.
Pew said that some were also “anxious about the seemingly unstoppable speed and scope” of the tech, which could “enable blanket surveillance of vast populations and could destroy the information environment, undermining democratic systems with deepfakes, misinformation, and harassment.”
There is fear that AI is developing at a pace that makes it impossible for society to adapt, with a large group of tech figures and experts signing a petition earlier in 2023 to “pause giant AI experiments.”
The Philippines’ Senator Risa Hontiveros also described AI progress as not being “at human scale,” and, as legislators take notice, she hopes that experts will be “able to help put in place those protocols and safeguards to slow things down, decelerate, and bring things down to human proportions.”
Inability to discern amid growing deployment
A June 2023 study called “AI model GPT-3 (dis)informs us better than humans,” published in the peer-reviewed Science Advances, illustrates how humans may not be adjusting fast enough to the technology.
The study found that humans are already having trouble discerning between AI-made and human-made content, with large language models (LLMs) – the technology powering generative AI – able to “already produce text that is indistinguishable from organic text.”
With LLMs improving at an astonishing rate, the researchers warned: “Therefore, the emergence of more powerful large language models and their impact should be monitored.”
“If the technology is found to contribute to disinformation and to worsen public health issues, then regulating the training datasets used to develop these technologies will be crucial to limit misuse and ensure transparent, truthful output information,” they said.
“In addition, until we do not have efficient strategies for identifying disinformation (whether based on human skills or on future AI improvements), it might be necessary to restrict the use of these technologies, e.g., licensing them only to trusted users (e.g., research institutions) or limiting the potential of AIs to certain types of applications.”
Despite these concerns, generative AI steadily finds deployment across industries, with Gartner predicting that about 80% of advanced creative roles will be asked to harness the technology’s potential to differentiate their business, freeing up teams from routine work to focus on more creative tasks.
Google’s generative AI-powered search is also expected to cut brands’ organic search traffic by 50% by 2028, with 79% of users telling Gartner they would be ready to use such a tool by 2024. And about 65% already prefer ChatGPT over traditional search engines for seeking information, according to Forbes.
There’s the schism. While the majority of users, along with tech experts, have expressed worry over the rapid deployment of AI and its potential negative effects on the spread of disinformation on social media, industries and companies spurred by market forces and promised productivity will continue to push the tech forward on people and society.
With legislation forming far more slowly than AI develops, and tech and AI companies continuing to act largely on the profit motive in the absence of effective regulation, what happens to the information ecosystem then? – Rappler.com