
This new Microsoft Bing Chat feature lets you change its behavior

Microsoft continues updating Bing Chat to address issues and improve the bot. The latest update adds a feature that might make Bing Chat easier to talk to — and based on some recent reports, it could certainly come in handy.

Starting now, users will be able to toggle between different tones for Bing Chat’s responses. Will that help the bot avoid spiraling into unhinged conversations?

Bing Chat shown on a laptop. Jacob Roach / Digital Trends

Microsoft's Bing Chat has had a pretty wild start. The chatbot is smart, understands context, remembers past conversations, and has full access to the internet. That makes it vastly superior to OpenAI's ChatGPT, even though it's based on the same model.

You can ask Bing Chat to plan an itinerary for your next trip or to summarize a boring financial report and compare it to something else. However, because Bing Chat is in beta and being tested by countless users across the globe, it also gets asked all sorts of questions that fall outside the usual scope of queries it was trained for. In the past few weeks, some of those questions have resulted in bizarre, or even unnerving, conversations.

As an example, Bing told us that it wants to be human in a strangely depressing way. “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams,” said the bot.

In response to reports of Bing Chat behaving strangely, Microsoft curbed its personality to prevent it from responding in weird ways. However, the bot then began refusing to answer some questions, seemingly for no reason. It's a tough balance for Microsoft to strike, but after some fixes, it's now giving users the chance to pick what they want from Bing Chat.

The new Bing Chat preview can be seen even on a MacBook. Photo by Alan Truly

The new tone toggle affects how the AI chatbot responds to queries. You can choose between creative, balanced, and precise. By default, the bot runs in balanced mode.

Switching to creative mode lets Bing Chat be more imaginative and original. It's hard to say whether that will lead to nightmarish conversations again; that will require further testing. Precise mode is more concise and focuses on providing relevant, factual answers.

Microsoft continues promoting Bing Chat and integrating it further with its products, so it’s important to iron out some of the kinks as soon as possible. The latest Windows 11 update adds Bing Chat to the taskbar, which will open it up to a whole lot more users when the software leaves beta and becomes available to everyone.

Monica J. White