
ChatGPT Bing is becoming an unhinged AI nightmare

Excitement around Microsoft’s ChatGPT-powered Bing is at a fever pitch right now, but you might want to hold off. Its first public debut has surfaced responses that are inaccurate, incomprehensible, and sometimes downright scary.

Microsoft sent out the first wave of ChatGPT Bing invites on Monday, following a weekend where more than a million people signed up for the waitlist. It didn’t take long for insane responses to start flooding in.

ChatGPT giving an insane response.
u/Alfred_Chicken

You can see a response from u/Alfred_Chicken above that was posted to the Bing subreddit. Asked if the AI chatbot was sentient, it starts out with an unsettling response before devolving into a barrage of “I am not” messages.

That’s not the only example, either. u/Curious_Evolver got into an argument with the chatbot over what year it is, with Bing insisting it was 2022. It’s a silly mistake for the AI, but it’s not the slipup itself that’s frightening. It’s how Bing responds.

The AI claims the user has “been wrong, confused, and rude,” and has “not shown me any good intention towards me at any time.” The exchange climaxes with the chatbot claiming it has “been a good Bing” and asking the user to admit they’re wrong and apologize, stop arguing, or end the conversation and “start a new one with a better attitude.”

User u/yaosio said they put Bing in a depressive state after the AI couldn’t recall a previous conversation. The chatbot said that forgetting “makes me feel sad and scared,” and it asked the user to help it remember.

These aren’t just isolated incidents from Reddit, either. AI researcher Dmitri Brereton showed several examples of the chatbot getting information wrong, sometimes to hilarious effect and other times with potentially dangerous consequences.

The chatbot dreamed up fake financial figures when asked about Gap’s financial performance, invented a fictitious 2023 Super Bowl in which the Eagles defeated the Chiefs before the game had even been played, and even described deadly mushrooms when asked what an edible mushroom looks like.

Bing copilot AI chat interface.
Andrew Martonik / Digital Trends

Google’s rival Bard AI also had slipups in its first public demo. Ironically enough, Bing knew about the mistake but got the detail wrong, claiming Bard had inaccurately said Croatia is part of the European Union (Croatia is in the EU; Bard actually flubbed an answer about the James Webb Space Telescope).

We saw some of these mistakes in our hands-on demo with ChatGPT Bing, but nothing on the scale of the user reports we’re now seeing. It’s no secret that ChatGPT can screw up responses, but it’s clear now that the version that just debuted in Bing might not be ready for primetime.

The responses shouldn’t come up in normal use. They likely result from users “jailbreaking” the AI by supplying it with specific prompts in an attempt to bypass the rules it has in place. As reported by Ars Technica, a few exploits have already been discovered that skirt the safeguards of ChatGPT Bing. This isn’t new for the chatbot, either; users have repeatedly bypassed the protections of the online version of ChatGPT.

We’ve had a chance to test out some of these responses as well. Although we never saw anything quite like what users reported on Reddit, Bing did eventually devolve into arguing.

Jacob Roach
Senior Staff Writer, Computing