Cybercriminals bypass ChatGPT restrictions to make malware worse, phishing emails better
Researchers with CheckPoint say cybercriminals can bypass ChatGPT’s barriers to create malicious content, such as phishing emails and malware code
MANILA, Philippines – Cybercriminals are finding ways to get past restrictions on OpenAI’s ChatGPT artificial intelligence (AI) tool, allowing them to make AI-powered improvements to malware code or phishing emails.
Cybersecurity company CheckPoint said in a February 7 blog post that its researchers found an instance of cybercriminals using ChatGPT to improve the code of a piece of 2019 malware known as InfoStealer. Ars Technica added in a February 9 report that the application programming interface (API) for an OpenAI GPT-3 model known as text-davinci-003 was being used instead of ChatGPT itself in order to bypass the restrictions.
CheckPoint’s researchers wrote, “The current version of OpenAI’s API is used by external applications (for example, the integration of OpenAI’s GPT-3 model to Telegram channels) and has very few if any anti-abuse measures in place.”
“As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on their user interface,” the researchers added.
Taking advantage of this, a user on an underground forum is selling a service that combines the API with the Telegram messaging application, so interested parties can make AI-powered queries with no restrictions in place. The first 20 queries are free; after that, every 100 additional queries cost $5.50.
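To illustrate the researchers’ point, the sketch below shows roughly how any external application (a Telegram bot, for instance) could construct a request to OpenAI’s legacy completions endpoint for the text-davinci-003 model. The endpoint URL and field names reflect OpenAI’s public API documentation; the prompt is a benign placeholder, the API key is a dummy, and no request is actually sent. The key point is that nothing in the request format itself enforces the ChatGPT user interface’s guardrails.

```python
import json

# Endpoint for OpenAI's legacy (GPT-3-era) completions API,
# per OpenAI's public API documentation.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, api_key: str):
    """Return (headers, json_body) for a text-davinci-003 completion call.

    This only builds the request; actually sending it requires a valid
    API key and an HTTP client such as `requests`.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "text-davinci-003",  # the GPT-3 model named in the reports
        "prompt": prompt,             # arbitrary text goes straight through
        "max_tokens": 256,
        "temperature": 0.7,
    })
    return headers, body

headers, body = build_completion_request(
    "Write a haiku about the sea.",  # benign placeholder prompt
    "sk-...",                        # dummy key
)
print(json.loads(body)["model"])  # -> text-davinci-003
```

Because the prompt field is just free-form text sent with a paid key, any filtering has to happen server-side, which is the gap the CheckPoint researchers describe.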
Another cybercriminal, meanwhile, created an OpenAI API-based script to bypass the previously noted anti-abuse restrictions.
Ars Technica added that OpenAI did not immediately respond to an email asking whether it was aware of CheckPoint’s findings or whether it had plans to update its APIs to prevent further abuse.
Reports follow Microsoft partnership
In December, CheckPoint had already discussed, on its CheckPoint Research blog, the possibility of using ChatGPT to write malware and improve phishing messages.
The Checkpoint and Ars…