DuckDuckGo’s new AI service keeps your chatbot conversations private

DuckDuckGo released its new AI Chat service on Thursday, letting users anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without sharing their personal information, and without the companies behind those models training their AIs on the conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use — Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude — then begin conversing with it as they normally would. DuckDuckGo connects to that chat model as an intermediary, substituting the user’s IP address with one of its own. “This way it looks like the requests are coming from us and not you,” the company wrote in a blog post.
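DuckDuckGo hasn’t published the code behind this intermediary, but the idea it describes — a relay that forwards the prompt while swapping in its own IP address and dropping identifying metadata — can be sketched roughly like this (all names here, such as `PROXY_IP` and `sanitize_request`, are hypothetical illustrations, not DuckDuckGo’s actual implementation):

```python
# Illustrative sketch only: models an anonymizing relay that strips
# user-identifying metadata from a chat request before it is forwarded
# to a model provider. Not DuckDuckGo's actual code.

PROXY_IP = "192.0.2.10"  # hypothetical egress address owned by the relay

# Header names that could identify the user and should never reach the provider.
IDENTIFYING_HEADERS = {"cookie", "user-agent", "x-forwarded-for", "referer"}

def sanitize_request(request: dict) -> dict:
    """Return a copy of the chat request with identifying metadata removed.

    The prompt text passes through unchanged; the source IP is replaced
    with the relay's own, so the provider sees the relay, not the user.
    """
    clean_headers = {
        name: value
        for name, value in request.get("headers", {}).items()
        if name.lower() not in IDENTIFYING_HEADERS
    }
    return {
        "source_ip": PROXY_IP,    # provider sees the relay's address
        "headers": clean_headers, # identifying headers stripped
        "body": request["body"],  # the prompt itself is forwarded intact
    }
```

In this sketch the provider still receives the full prompt — which is why, as noted below, the model providers may store chats temporarily — but nothing in the forwarded request ties that prompt back to an individual user.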

As with the company’s anonymized search feature, all metadata is stripped from user queries, so even though DuckDuckGo warns that “the underlying model providers may store chats temporarily,” there’s no way to personally identify users based on those chats. And, as The Verge notes, DuckDuckGo also has agreements in place with those AI companies that prevent them from using chat prompts and outputs to train their models and require them to delete any saved data within 30 days.

Data privacy is a growing concern in the AI community, even as the number of people using AI both individually and at work continues to rise. A Pew Research study from October found that roughly eight in 10 “of those familiar with AI say its use by companies will lead to people’s personal information being used in ways they won’t be comfortable with.” While most chatbots already allow users to opt out of having their data collected, those options are often buried in layers of menus, with the onus on the user to find and select them.

AI Chat is available at both duck.ai and duckduckgo.com/chat. It’s free to use “within a daily limit,” though the company is currently considering a more expansive paid option with higher usage limits and access to more advanced models. This new service follows last year’s release of DuckDuckGo’s DuckAssist, which provides anonymized, AI-generated synopses of search results, akin to Google’s Search Generative Experience (SGE).

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…