How to enable or disable ChatGPT from the Windows taskbar

ChatGPT is built right into Windows 11, making it super easy to use and play around with, whether you want an alternative to Google search or a tutor for board games. If you don't like this feature, though, you can easily disable it with just a few clicks.

Re-enabling it is a cinch too. Here's how to do both.

Difficulty

Easy

Duration

5 minutes

How to disable ChatGPT in Windows 11

Windows 11 has ChatGPT built into its Bing search tool, making it easier than ever to get ChatGPT's conversational answers alongside your search results. Here's how to disable it so you can go back to the more straightforward search tool it started out as.

Step 1: Press the Windows key + I to open the Settings menu.

Step 2: Select Privacy & security from the left-hand menu.

Step 3: Select Search permissions.

Step 4: Under More settings, toggle Show search highlights to Off.

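If you'd rather script the change, the Show search highlights toggle is commonly reported to map to a per-user registry value named IsDynamicSearchBoxEnabled under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\SearchSettings. That value name comes from community documentation rather than an official API, so treat it as an assumption and verify it on your build. Here's a minimal Python sketch using the standard winreg module:

    import winreg

    # Assumed location of the "Show search highlights" toggle; verify on your build.
    KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\SearchSettings"
    VALUE_NAME = "IsDynamicSearchBoxEnabled"  # community-documented, not an official name

    # Open the current user's key with write access and set the DWORD to 0 (off).
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

You may need to sign out and back in before the taskbar search pane picks up the change; the Settings toggle above remains the supported route.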

How to enable ChatGPT in Windows 11

If ChatGPT doesn't seem to be working for you, or you previously disabled it and want to enable it once more, follow these steps.

Step 1: Press the Windows key + I to open the Settings menu.

Step 2: Select Privacy & security from the left-hand menu.

Step 3: Select Search permissions.

Step 4: Under More settings, toggle Show search highlights to On.
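The same assumed registry value can flip the feature back on from a script. This sketch reads the current state first, so you can confirm what Windows thinks the toggle is set to before changing it:

    import winreg

    KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\SearchSettings"
    VALUE_NAME = "IsDynamicSearchBoxEnabled"  # assumed name, as noted above

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
        try:
            current, _ = winreg.QueryValueEx(key, VALUE_NAME)
            print("Search highlights are currently", "on" if current else "off")
        except FileNotFoundError:
            print("Value not set; Windows falls back to its default (on)")
        # 1 = on: restore search highlights, including the Bing/ChatGPT integration.
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)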

When you do want to play around with ChatGPT again, consider trying it out on OpenAI's website. You can use plugins to have it do some incredible things, and there are Chrome extensions to further augment its abilities.
