
2023 was the year of AI. Here are the 9 moments that defined it

ChatGPT may have launched in late 2022, but 2023 was undoubtedly the year that generative AI took hold of the public consciousness.

Not only did ChatGPT reach new highs (and lows), but a plethora of seismic changes shook the world, from incredible rival products to shocking scandals and everything in between. As the year draws to a close, we’ve taken a look back at the nine most important events in AI that took place over the last 12 months. It’s been a year like no other for AI — here’s everything that made it memorable, starting at the beginning of 2023.


ChatGPT’s rivals rush to market

Bing Chat shown on a laptop.
Jacob Roach / Digital Trends

No list like this could start anywhere but with the unparalleled rise of ChatGPT. The free-to-use chatbot from OpenAI took the world by storm, growing rapidly and capturing the imagination of everyone from tech leaders to average people on the street.

The service launched in November 2022 but really began to take off throughout the first few months of the year. The stunning success of ChatGPT left its rivals scrambling for a response. No one seemed more terrified than Google, with the company fretting that AI could make its lucrative search business all but obsolete. In February, a few months after ChatGPT went public, Google hit back with Bard, its own AI chatbot. Then, just a day later, Microsoft revealed its own attempt with Bing Chat.

Things went a little worse for Bing Chat, with the chatbot being prone to what became known as “hallucinations” — incidents where it would lie, dream up fake facts, and generally be all-round unreliable. It told one reporter that it spied on, fell in love with, and then murdered its own developer. When we tested it, it claimed to be perfect, said it wanted to be human, and argued with us relentlessly. In other words, it was unhinged.

Things went a little better for Google Bard — it wasn’t nearly so argumentative and avoided Bing Chat’s deranged use of emojis altogether — but it still had a tendency to be unreliable. If nothing else, Google’s and Microsoft’s quickfire responses to ChatGPT showed how rushed their attempts were — and how dangerous AI’s penchant for misinformation could be.

GPT-4 makes a splash

The ChatGPT website on an iPhone.
Joe Maring / Digital Trends

When ChatGPT launched, it was powered by a large language model (LLM) called GPT-3.5. This was highly capable but had some limitations, such as only being able to accept text as input. Much of that changed with GPT-4, which went public in March.

ChatGPT developer OpenAI said the new LLM was better in three key ways: creativity, visual input, and longer context. For instance, GPT-4 can use images as input and can also collaborate with users on creative projects such as music, screenplays, and literature.
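
To make the visual input point a little more concrete, here is a minimal sketch (not from the article) of how an image can be passed to a GPT-4-class model through OpenAI's Python SDK. The model name and image URL are placeholders, and the exact interface may differ depending on which GPT-4 variant you have access to.

    # Minimal sketch: asking a vision-capable GPT-4 model about an image.
    # Assumes the openai Python package (v1.x) and an OPENAI_API_KEY env var.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder: any GPT-4 model with vision support
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this photo."},
                    # Hypothetical image URL for illustration only
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
        max_tokens=200,
    )

    print(response.choices[0].message.content)

The same request against GPT-3.5 would fail, since that model only accepts text, which is exactly the gap GPT-4's multimodal input closed.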

Currently, GPT-4 is locked behind OpenAI’s ChatGPT Plus paywall, which costs $20 a month. But even with that limited reach, it has had a massive impact on AI. When Google announced its Gemini LLM in December, it claimed it was able to beat GPT-4 in most tests — yet the fact that it could do so by only a few percentage points almost a year after GPT-4 launched tells you everything you need to know about how advanced OpenAI’s model is.

AI-generated images begin to fool the public

A picture made by AI image generator Midjourney depicting Pope Francis in a large white puffer jacket.
Pablo Xavier

Few incidents better illustrated the power of AI to trick and misinform than a single image released in early 2023: that of Pope Francis in a large white puffer jacket.

The unlikely image had been created by a person named Pablo Xavier using the AI image generator Midjourney, but it was so realistic that it easily fooled huge numbers of viewers on social media, including celebrities like Chrissy Teigen. It highlighted just how convincing AI image generators can be — and how easily they can hoodwink people into believing things that aren’t real.

In fact, just the week before the image of the Pope emerged, a different set of pictures made the news for a similar reason. They depicted former president Donald Trump being arrested, fighting with police officers, and serving time in jail. When powerful image generators get mixed with sensitive topics, be they related to politics, health, war, or something else, the risks can be extreme. The Pope Francis creation was a comparatively lighthearted reminder that, as AI-generated images become ever more realistic, we’ll all need to up our media literacy game quickly.

AI-generated images have become increasingly common throughout the year, even appearing in Google search results ahead of real images.

A petition begins to sound the alarm

Apple co-founder Steve Wozniak stands against a dark background.
Apple co-founder Steve Wozniak. Gage Skidmore / Flickr

AI has been moving at such breakneck speed — and has already had such alarming consequences — that many people have been raising serious concerns over what it could unleash. In March 2023, those fears were voiced by some of the world’s most prominent tech leaders in an open letter.

The note called on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4” to give time for society at large to assess the risks. Otherwise, the full-scale push into AI could “pose profound risks to society and humanity,” including the potential destruction of jobs, obsolescence of human life, and “loss of control of our civilization.”

The letter was signed by a Who’s Who of tech leaders, from Apple co-founder Steve Wozniak and Tesla boss Elon Musk to a slate of researchers and academics. Whether any AI firms actually took notice, given how great the potential short-term profits are, is another matter — just look at Google Gemini, which its creators say outstrips GPT-4. Here’s hoping the letter doesn’t turn out to be prophetic.

ChatGPT connects to the internet

ChatGPT app running on an iPhone.
Joe Maring / Digital Trends

When ChatGPT first launched, it relied on its massive trove of training data to help provide answers to people. The problem with that, though, was that it couldn’t stay up to date and was of little use if someone wanted to use it to make a reservation at a restaurant or find a link to buy a product.

That all changed when OpenAI announced ChatGPT plugins, a slate of add-ons that let the chatbot connect to the internet. Suddenly, a new way of using AI to get things done was opened up, vastly expanding what the chatbot could do. In terms of capability and utility, it was a huge upgrade.

But it wasn’t until May that its browsing capabilities were expanded, when the Browse with Bing feature was announced at the Microsoft Build developer conference. The rollout was slow, though, and browsing didn’t become available to all ChatGPT Plus users until September.

Windows gets a new Copilot

Microsoft CEO Satya Nadella announces updates to the company's Copilot artificial intelligence (AI) tool.
Microsoft

Microsoft introduced Bing Chat at the beginning of the year, but the company didn’t stand still. It followed that move up with Copilot, a much more expansive use of AI incorporated into Microsoft’s products, which debuted as Microsoft 365 Copilot.

While Bing Chat was a simple chatbot, Copilot is more of a digital assistant. It’s woven into a bunch of Microsoft apps, like Word and Teams, as well as Windows 11 itself. It can create images, summarize meetings, find information and send it to your other devices, and a whole lot more. The idea is it saves you time and effort on lengthy tasks by automating them for you. In fact, Bing Chat was even subsumed under the Copilot umbrella in November.

By integrating Copilot so tightly into Windows 11, Microsoft not only signaled its all-in philosophy regarding AI, but threw down the gauntlet to Apple and its rival macOS operating system. So far, the advantage lies with Microsoft, especially as we look ahead to a rumored Windows 12 launching in 2024.

The academic world grapples with AI

A person sits in front of a laptop. On the laptop screen is the home page for OpenAI's ChatGPT artificial intelligence chatbot.
Viralyft / Unsplash

Given how quickly AI exploded onto the scene, you’d be forgiven for not quite understanding how it all works. But that lack of knowledge had real consequences for students at Texas A&M University–Commerce when a professor flunked an entire class for supposedly using ChatGPT to write their papers — despite there being no evidence that the students had actually done so.

The trouble began when Dr. Jared Mumm copied and pasted students’ papers into ChatGPT and asked the chatbot whether it could have generated the text, to which ChatGPT answered in the affirmative. The problem? ChatGPT can’t actually detect AI-generated text that way.

To illustrate the point, Reddit users took Dr. Mumm’s own letter to students in which he accused them of cheating, then pasted it into ChatGPT and asked the chatbot if it might have written the letter. The answer? “Yes, I wrote the content you’ve shared,” lied ChatGPT. If you want the perfect example of both an AI hallucinating and human confusion surrounding its abilities, this is it.

Hollywood vs. AI

The Hollywood sign.
Paul Deetman/Pexels

It’s clear by now that artificial intelligence is incredibly capable, but it’s for exactly that reason that huge numbers of people all over the globe are so worried about it. One report claimed, for example, that up to 300 million jobs could be affected by AI if its development goes unchecked. That kind of concern helped motivate Hollywood writers to strike, in part over fears that studio bosses would eventually replace them with AI.

The writers’ union — the Writers Guild of America (WGA) — went on strike for almost five months, with AI protections among the central issues, starting in May and ending in September, when it eventually won significant concessions from studios. These included provisions that AI would not be used to write or rewrite material, that writers’ content could not be used to train AI, and more. It was a significant victory for the affected workers, but given how quickly AI development is continuing, it likely won’t be the last clash between AI and the people it affects.

The Sam Altman saga

Sam Altman at the OpenAI developer conference.
OpenAI

Sam Altman, OpenAI’s CEO, has been the very visible face of the entire AI industry ever since ChatGPT exploded into the world’s consciousness. Yet it all came crashing down one day in November when he was unceremoniously fired from OpenAI — to his complete surprise and that of the world at large.

OpenAI’s board accused him of being “not consistently candid” in his communications with the board. Yet the backlash was swift and strong, with the majority of the company’s employees threatening to walk out if Altman was not reinstated. OpenAI investor Microsoft offered jobs to Altman and anyone else from OpenAI who wanted to join, and for a moment, it seemed the company was on the brink of collapse.

Then, just as quickly as he left, Altman was reinstated, with some members of the board expressing regret over the entire incident. The internet watched with bated breath in real time as the events unfolded, with a single question running behind the entire circus: Why? Did Altman really stumble upon a development in AI that caused serious ethical concerns? Was Project Q* about to achieve AGI? Was there a Game of Thrones-esque vying for power, or was Altman really just a bad boss?

We may never know the whole truth. But no other moment in 2023 so perfectly encapsulated the hysteria, fascination, and conspiratorial thinking that AI has provoked.
