
Cybercriminals bypass ChatGPT restrictions to make malware worse, phishing emails better

Victor Barreiro Jr.


Researchers with Check Point say cybercriminals can bypass ChatGPT's barriers and use it to create malicious content, such as phishing emails and malware code

MANILA, Philippines – Cybercriminals are finding ways to get past restrictions on OpenAI’s ChatGPT artificial intelligence (AI) tool, allowing them to make AI-powered improvements to malware code or phishing emails.

Cybersecurity company Check Point said in a February 7 blog post that its researchers found an instance of cybercriminals using ChatGPT to improve the code of a 2019 strain of information-stealing malware (an infostealer). Ars Technica added in a February 9 report that the application programming interface (API) for an OpenAI GPT-3 model known as text-davinci-003 was being used instead of ChatGPT itself, in order to bypass the restrictions.

Check Point’s researchers wrote, “The current version of OpenAI’s API is used by external applications (for example, the integration of OpenAI’s GPT-3 model to Telegram channels) and has very few if any anti-abuse measures in place.”

“As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on their user interface,” the researchers added.

Because of this, a user on an underground forum is selling a service combining the API with the Telegram messaging application, so interested parties can make AI-powered queries without restrictions in place. The first 20 queries are free; after that, every set of 100 additional queries costs $5.50.

Another cybercriminal, meanwhile, created an OpenAI API-based script to bypass the previously noted anti-abuse restrictions.
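For context, a direct integration of the kind the researchers describe looks roughly like the sketch below. The endpoint, model name, and payload fields follow OpenAI's public Completions API documentation for text-davinci-003 (the model named in the reports); the helper function and the benign prompt are purely illustrative, not taken from any of the abuse cases.

```python
import json

# Legacy OpenAI Completions endpoint, as documented publicly at the time.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str,
                             model: str = "text-davinci-003",
                             max_tokens: int = 256) -> dict:
    """Build the JSON payload an external app (e.g. a Telegram bot)
    would POST to the Completions endpoint, bypassing the ChatGPT UI."""
    return {
        "model": model,          # GPT-3 model named in the reports
        "prompt": prompt,        # free-form text; no UI-side filtering
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

# A harmless example prompt; the point is that the API accepts raw text
# without the moderation layer ChatGPT's web interface applies.
payload = build_completion_request("Write a short note about password hygiene.")
print(json.dumps(payload, indent=2))
```

The key difference from the ChatGPT web interface is that the request goes straight to the model: any filtering has to happen server-side, which is the gap the researchers say was being exploited.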

Ars Technica added that OpenAI did not immediately respond to an email asking whether it was aware of Check Point’s findings, or whether it had plans to update the APIs to prevent further abuse.

Reports follow Microsoft partnership

In December, Check Point had already discussed, on its Check Point Research blog, the possibility of using ChatGPT to write malware and improve phishing messages.

The Check Point and Ars Technica reports follow Microsoft’s announcement in January of a continued partnership with OpenAI. They also came around the time Microsoft announced that its Bing search engine and Edge browser would be revamped with artificial intelligence to improve user experience. – Rappler.com


Victor Barreiro Jr.

Victor Barreiro Jr is part of Rappler's Central Desk. An avid patron of role-playing games and science fiction and fantasy shows, he also yearns to do good in the world, and hopes his work with Rappler helps to increase the good that's out there.