
You won’t believe how much ChatGPT costs to operate

A new report claims to know how much ChatGPT costs to run per day, including roughly how much each query costs. The popular AI chatbot may have set off an AI revolution when it launched in November 2022, but it has proven extremely expensive to maintain.

The new report comes from Dylan Patel, chief analyst at the research firm SemiAnalysis, who says it costs approximately $700,000 per day, or 36 cents per query, to keep the chatbot up and running.
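Taken together, the report's two figures imply a rough daily query volume. A quick sketch of the arithmetic (the per-day and per-query costs are from the report; the implied volume is our own back-of-envelope estimate, not a number SemiAnalysis published):

```python
# Back-of-envelope check of the SemiAnalysis figures: dividing the
# reported daily cost by the reported per-query cost gives the implied
# number of queries served per day.
daily_cost_usd = 700_000     # reported daily operating cost
cost_per_query_usd = 0.36    # reported average cost per query

implied_queries_per_day = daily_cost_usd / cost_per_query_usd
print(f"~{implied_queries_per_day:,.0f} queries per day")  # ~1,944,444
```

At those figures, the service would need to handle just under two million queries a day for the numbers to line up.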


It’s so expensive, in fact, that Microsoft might be developing its own proprietary AI chips to help OpenAI keep ChatGPT running, according to Windows Central.


In addition to quickly hitting 100 million active users in January, a feat that previously took tech brands years to achieve, ChatGPT has struggled with high traffic and capacity issues slowing down and crashing its servers. The company attempted to remedy this by introducing a paid ChatGPT Plus tier at $20 per month, but there is no word on how many users subscribe to the paid option.

OpenAI currently uses Nvidia GPUs to maintain not only its own ChatGPT processes, but also those of the brands with which it partners. Industry analysts expect the company will likely require an additional 30,000 GPUs from Nvidia to maintain its commercial performance for the remainder of 2023 alone.

With Microsoft as one of its primary collaborators and investors, OpenAI might be looking at the tech brand to assist in developing hardware to bring down the cost of operations for ChatGPT. According to Windows Central, Microsoft already has this AI chip in the works. Code-named Athena, it is currently being tested internally with the brand’s own teams. The chip is expected to be introduced next year for Microsoft’s Azure AI services.

There is no word on how or when the chip will trickle down to OpenAI and ChatGPT, but the assumption is that it will. The connection comes from the fact that ChatGPT is supported by Azure services. The AI chip might not fully replace Nvidia GPUs, but might help decrease the demand for the hardware, thus reducing the cost of running ChatGPT, Windows Central added.

Fionna Agomuoh
ChatGPT: the latest news and updates on the AI chatbot that changed everything

In the ever-evolving landscape of artificial intelligence, ChatGPT stands out as a groundbreaking development that has captured global attention. From its impressive capabilities and recent advancements to the heated debates surrounding its ethical implications, ChatGPT continues to make headlines.

Whether you're a tech enthusiast or just curious about the future of AI, dive into this comprehensive guide to uncover everything you need to know about this revolutionary AI tool.
What is ChatGPT?
ChatGPT (which stands for Chat Generative Pre-trained Transformer) is an AI chatbot, meaning you can ask it a question using natural language prompts and it will generate a reply. Unlike less sophisticated voice assistants like Siri or Google Assistant, ChatGPT is driven by a large language model (LLM). These neural networks are trained on huge quantities of information from the internet via deep learning, meaning they generate altogether new responses rather than regurgitating canned answers. They're not built for a specific purpose like the chatbots of the past, and they're a whole lot smarter. The current version of ChatGPT is based on the GPT-4 model, which was trained on all sorts of written content, including websites, books, social media, and news articles, all fine-tuned through both supervised learning and RLHF (Reinforcement Learning From Human Feedback).
When was ChatGPT released?
OpenAI released ChatGPT in November 2022. When it launched, the initial version of ChatGPT ran atop the GPT-3.5 model. In the years since, the system has undergone a number of iterative advancements, with the current version of ChatGPT using the GPT-4 model family. GPT-5 is reportedly just around the corner. GPT-3 first launched in 2020, and GPT-2 the year before that, though neither was used in the public-facing ChatGPT system.
Upon its release, ChatGPT's popularity skyrocketed overnight. It grew to host over 100 million users in its first two months, making it the most quickly adopted piece of software to date, though this record has since been beaten by the Twitter alternative Threads. ChatGPT's popularity dipped briefly in June 2023, reportedly losing 10% of its global users, but has since continued to grow.
How to use ChatGPT
First, go to chatgpt.com. If you'd like to maintain a history of your previous chats, sign up for a free account; you can use the system anonymously without a login if you prefer. Users can also opt to connect their ChatGPT login to their Google, Microsoft, or Apple accounts. At the sign-up screen, you'll see some basic rules about ChatGPT, including potential errors in its answers, how OpenAI collects data, and how users can submit feedback. If you want to get started, we have a roundup of the best ChatGPT tips.

Read more
ChatGPT’s resource demands are getting out of control

It's no secret that the growth of generative AI has demanded ever-increasing amounts of water and electricity, but a new study from The Washington Post and researchers at the University of California, Riverside shows just how many resources OpenAI's chatbot needs in order to perform even its most basic functions.

In terms of water usage, the amount needed for ChatGPT to write a 100-word email depends on the state and the user's proximity to OpenAI's nearest data center. The scarcer water is in a given region, and the cheaper its electricity, the more likely the data center is to rely on electrically powered air-conditioning units instead of water-based cooling. In Texas, for example, the chatbot consumes an estimated 235 milliliters of water to generate one 100-word email. That same email drafted in Washington, on the other hand, would require 1,408 milliliters, nearly a liter and a half, per email.
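That state-to-state gap compounds quickly at scale. A short sketch using the study's per-email figures (the one-million-email volume is a hypothetical chosen purely for illustration):

```python
# Water per 100-word email, in milliliters (estimates reported by the study).
ml_per_email = {"Texas": 235, "Washington": 1408}

emails = 1_000_000  # hypothetical email volume, for comparison only
for state, ml in ml_per_email.items():
    liters = ml * emails / 1_000  # 1,000 milliliters per liter
    print(f"{state}: {liters:,.0f} liters for {emails:,} emails")
```

A million such emails would consume roughly 235,000 liters of water in Texas versus about 1.4 million liters in Washington, a sixfold difference from location alone.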

Read more
How you can try OpenAI’s new o1-preview model for yourself

Despite months of rumored development, OpenAI's release of its Project Strawberry model last week came as something of a surprise, with many analysts believing it wouldn't be ready for weeks at least, if not until later in the fall.

The new o1-preview model and its o1-mini counterpart are already available for use and evaluation. Here's how to get access for yourself.

Read more