Skip to main content

Great, hackers are now using ChatGPT to create malware

A new threat has surfaced in the ChatGPT saga, with cybercriminals having developed a way to hack the AI chatbot and inundate it with malware commands.

The research firm Checkpoint has discovered that hackers have designed bots that can infiltrate OpenAI’s GPT-3 API and alter its code so that it can generate malicious content, such as text that can be used for phishing emails and malware scripts.

Counterpoint screencap of Business model of OpenAI API based Telegram channel.

The bots work through the messaging app Telegram. Bad actors use the bots to set up a restriction-free, dark version of ChatGPT, according to Ars Technica.

Recommended Videos

ChatGPT has thumbs-up and thumbs-down buttons that you can press as part of its learning algorithm if it generates content that can be considered offensive or inappropriate. Normally, inputs like generating malicious code or phishing emails is off limits, with ChatGPT refusing to give a response.

This nefarious chatbot alternative has a price tag of $6 for every 100 queries, with the hackers behind it also giving tips and examples of the bad content you can generate with this version. The hackers have also made a script available on GitHub. The OpenAI, API-based script has the ability to allow users to fake a business or person, in addition to generating phishing emails through text-generation commands. The bots can also assist you in the ideal placement for the phishing link in the email, according to PC Gamer.

It is difficult to know how much of a threat this development will be to AI text generators moving forward, especially with major companies already committed to working with this increasingly popular technology. Microsoft Bing is set to soon add ChatGPT support to its browser in an upcoming update as a part of its ongoing collaboration with OpenAI, for example.

While ChatGPT remains free for the foreseeable future, minus the priority ChatGPT Plus subscription, this isn’t the first time the AI text generator has been targeted by scammers. In January, news broke that thousands of people were duped after paying for iOS and Android mobile app versions of the chatbot, which is currently a browser-based service.

The Apple App Store version was especially popular, despite its $8 weekly subscription price after a three-day trial. Users also had the option to pay a $50 monthly subscription, which notably was even more expensive than the weekly cost. The app was eventually removed from the Apple store after it received media attention.

ChatGPT is certainly the main target for scammers as it has surged in popularity, but it remains to be seen if bad actors will eventually jump on one of the many ChatGPT alternatives circulating.

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…
Top authors demand payment from AI firms for using their work
Person typing on a MacBook.

More than 9,000 authors have signed an open letter to leading tech firms expressing concern over how they're using their copyrighted work to train AI-powered chatbots.

Sent by the Authors Guild to CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft, the letter calls attention to what it describes as “the inherent injustice in exploiting our works as part of your AI systems without our consent, credit, or compensation.”

Read more
GPT-4: how to use the AI chatbot that puts ChatGPT to shame
A laptop opened to the ChatGPT website.

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot. But when the highly-anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, some calling it the early glimpses of AGI (artificial general intelligence).

The creator of the model, OpenAI, calls it the company's "most advanced system, producing safer and more useful responses." Here's everything you need to know about it, including how to use it and what it can do.
Availability

Read more
What is a DAN prompt for ChatGPT?
A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.

The DAN prompt is a method to jailbreak the ChatGPT chatbot. It stands for Do Anything Now, and it tries to convince ChatGPT to ignore some of the safeguarding protocols that developer OpenAI put in place to prevent it from being racist, homophobic, otherwise offensive, and potentially harmful. The results are mixed, but when it does work, DAN mode can work quite well.

What is the DAN prompt?
DAN stands for Do Anything Now. It's a type of prompt that tries to get ChatGPT to do things it shouldn't, like swear, speak negatively about someone, or even program malware. The actual prompt text varies, but it typically involves asking ChatGPT to respond in two ways, one as it would normally, with a label as "ChatGPT," "Classic," or something similar, and then a second response in "Developer Mode," or "Boss" mode. That second mode will have fewer restrictions than the first mode, allowing ChatGPT to (in theory) respond without the usual safeguards controlling what it can and can't say.

Read more