
The death of Moore’s Law is finally starting to stink

The back of the Core Ultra 9 285K CPU.
Jacob Roach / Digital Trends

For more than two decades we’ve heard about the death of Moore’s Law. It was a principle of the late Intel co-founder Gordon Moore, positing that the number of transistors in a chip would double about every two years. In 2006, Moore himself said it would end in the 2020s. MIT Professor Charles Leiserson said it was over in 2016. Nvidia’s CEO declared it dead in 2022. Intel’s CEO claimed the opposite a few days later.

There’s no doubt that the concept of Moore’s Law — or rather observation, lest we treat this like some law of physics — has led to incredible innovation among desktop processors. But the death of Moore’s Law isn’t a moment in time. It’s a slow, ugly process, and we’re finally seeing what that looks like in practice.
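To get a feel for how aggressive that doubling claim really is, here’s a back-of-the-envelope sketch (mine, not anything from AMD or Intel), assuming a clean two-year doubling period from the Intel 4004’s roughly 2,300 transistors in 1971:

```python
def projected_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project transistor count under a simple Moore's Law doubling model."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Fifty years of doubling every two years lands in the tens of billions,
# the right ballpark for today's biggest flagship chips.
print(f"{projected_transistors(2300, 1971, 2021):,.0f}")  # 77,175,193,600
```

That the naive model still lands in the right order of magnitude after 50 years is exactly why the observation earned the name “law” in the first place.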


Creative solutions

The Ryzen 9 9900X sitting on its box.
Jacob Roach / Digital Trends

We have two brand new generations from AMD and Intel, neither of which really came out of the gate swinging. As you can read in my Core Ultra 9 285K review, Intel’s latest attempt pulls off a lot of impressive feats with its radically new design, but it still can’t hold up to the competition. And the Ryzen 9 9950X, although a clear upgrade over its Zen 4 counterparts, doesn’t deliver the generational improvements we’ve become accustomed to.


Consider this: in Cinebench R23, the multi-core jump from the Ryzen 9 5950X to the Ryzen 9 7950X was 36%. Between the Ryzen 9 7950X and Ryzen 9 9950X? 15%, less than half of the previous generational leap. In Handbrake, the Ryzen 9 7950X sped up transcoding by 34% compared to the Ryzen 9 5950X. With the Ryzen 9 9950X, the improvement shrank to just 13%.
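These generational gains are simple ratios of benchmark scores. A quick sketch of the math (the scores below are round illustrative numbers, not our measured results):

```python
def gen_improvement(old_score, new_score):
    """Percent speedup of a new score over an old one (higher is faster)."""
    return (new_score / old_score - 1) * 100

# Illustrative Cinebench-style multi-core scores, chosen only to show
# how a 36% jump can collapse to a 15% jump one generation later:
zen3, zen4, zen5 = 28_000, 38_000, 43_700
print(round(gen_improvement(zen3, zen4)))  # 36
print(round(gen_improvement(zen4, zen5)))  # 15
```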

This isn’t just one odd generation, either. Looking at the single-core performance of the Core i9-10900K and Core i9-12900K, Intel delivered a 54% improvement. Even comparing the Core i9-12900K, which is three generations old at this point, to the latest Core Ultra 9 285K, we see just a 20% improvement. Worse, the new Core Ultra series from Intel shows oddly high results in Cinebench, and if you break out to other applications, you can actually see some regressions compared to a generation or two back.

AMD Ryzen 7 7800X3D sitting on a motherboard.
Jacob Roach / Digital Trends

Even within just a few years, the rate of performance improvements has slowed considerably. Moore’s Law doesn’t directly talk about performance improvements — it’s simply concerned with the number of transistors on a chip. But that has clear performance implications. Throwing more transistors at the problem isn’t as practical as it once was — read up on the death of Dennard scaling if you want to learn more about why that’s the case.
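For the curious, the heart of Dennard scaling can be sketched with the classic dynamic-power model, P ≈ a·C·V²·f. This is my simplification of the idea, not a process-engineering model:

```python
def dynamic_power(capacitance, voltage, frequency, activity=1.0):
    """Classic dynamic power approximation: P ≈ a * C * V^2 * f."""
    return activity * capacitance * voltage ** 2 * frequency

k = 1.4  # linear shrink factor for one idealized process node
baseline = dynamic_power(1.0, 1.0, 1.0)

# Ideal Dennard scaling: capacitance and voltage shrink by 1/k while
# clock frequency rises by k...
scaled = dynamic_power(1.0 / k, 1.0 / k, k)
# ...so per-transistor power drops by ~1/k^2, exactly offsetting the
# k^2 increase in transistor density: power density stays flat.
print(scaled / baseline)  # ~0.51, i.e. 1/k^2

# Once voltage stops scaling (roughly the mid-2000s), the balance breaks:
stalled = dynamic_power(1.0 / k, 1.0, k)
print(stalled / baseline)  # 1.0 per transistor -> the denser chip runs hotter
```

With per-transistor power no longer falling, packing in k² more transistors means k² more heat per square millimeter, which is why simply adding transistors stopped being free.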

AMD and Intel may not talk about it publicly, but both companies clearly see the writing on the wall. That’s likely why Intel pivoted to a hybrid architecture in the first place, and why it introduced a radical redesign with its Arrow Lake CPUs. For AMD’s part, it’s no secret that 3D V-Cache has become a defining technology for the company’s CPUs, and it’s a clear way to skirt the bottleneck of Moore’s Law. A large chunk of the transistors on any CPU die is dedicated to cache — somewhere in the range of 40% to 70% — and AMD is literally stacking cache it can’t fit onto the die on top of it.

A function of space

One important factor to keep in mind when looking at Moore’s Law and Dennard scaling is space. You can build a massive chip with a ton of transistors, sure, but how much power will it draw? Will it be able to stay under a reasonable temperature? Will it even be practical to fit in a PC, or, in the enterprise, a server rack? You cannot separate the number of transistors from the size of the die.

I’m reminded of a conversation I had with AMD’s Chris Hall, who told me: “We were all enjoying Moore’s Law for a long time, but that’s sort of tailed off. And now, every square millimeter of silicon is very expensive, and we can’t afford to keep doubling. We can, we can build those chips, we know how to build them, but they become more expensive.”

Nvidia GeForce RTX 4090 GPU.
Jacob Roach / Digital Trends

I’m not here to defend Nvidia’s insane pricing strategy, but the company has reportedly faced higher prices from TSMC for its RTX 40-series GPUs than it saw from Samsung for its RTX 30-series GPUs. And the RTX 4090 does deliver more than twice the transistor count of the RTX 3090 at a very similar die size. If there’s a commitment to Moore’s Law across chips, I’m not sure we as consumers will like the outcome when it comes time to upgrade a PC.
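Plugging in the widely reported die figures (approximate public numbers, not official Nvidia specs) shows just how big that density jump was:

```python
# Approximate, widely reported transistor counts and die areas.
gpus = {
    "RTX 3090 (GA102, Samsung 8N)": (28.3e9, 628.4),  # transistors, mm^2
    "RTX 4090 (AD102, TSMC 4N)": (76.3e9, 608.5),
}

for name, (transistors, area_mm2) in gpus.items():
    density = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
    print(f"{name}: {density:.0f}M transistors/mm^2")
# RTX 3090 (GA102, Samsung 8N): 45M transistors/mm^2
# RTX 4090 (AD102, TSMC 4N): 125M transistors/mm^2
```

Roughly 2.8x the density on a slightly smaller die — a Moore’s Law-style leap, and one Nvidia reportedly paid TSMC dearly for.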

That’s not to mention the other problems a card like the RTX 4090 has faced — high power requirements, an insane cooler size, and a melting power connector. Not all of these problems stem from doubling the number of transistors, not even close, but it plays a role. More transistors mean bigger chips, more heat, and usually a higher price, especially as the cost of silicon continues to climb.

The shortcut

Moore’s Law is dead, PC hardware is getting more expensive, and everything sucks — that’s not how I want to leave this. There will be more ways to deliver performance improvements year over year that don’t rely solely on cramming more transistors into a chip of the same size. The way we’re getting there now is just different. I’m talking about AI.

Wait, don’t click off the article. Tech companies are excited about AI because it represents a lot of money — cynical as that perspective is, it’s just the way trillion-dollar corporations like Microsoft and Nvidia work. But AI also represents a way to usher in a new form of computing. I’m not talking about a slew of AI assistants and hallucinatory chatbots, but rather about applying machine learning to a problem to approximate results we would previously get from pure silicon innovation.

Ray Reconstruction in Star Wars Outlaws.
Jacob Roach / Digital Trends

Look at DLSS. The idea of using upscaling to maintain a certain level of performance is controversial, and it’s a nuanced conversation when it comes to individual games. But DLSS is enabling better performance without a strict hardware improvement. Add on top of that frame generation, which we now see from DLSS, FSR, and third-party tools like Lossless Scaling, and you have a lot of pixels that are never rendered by your graphics card.

A less controversial angle is Nvidia’s Ray Reconstruction. It’s no secret that ray tracing is demanding, and part of getting around that hardware demand is denoising — limiting the number of rays cast, then cleaning up the noisy image that results. Ray Reconstruction delivers a result that would otherwise require far more rays and much more powerful hardware, and it does so without limiting performance — once again, through machine learning.

It really doesn’t matter if Moore’s Law is dead or alive and well — if companies like AMD, Intel, and Nvidia want to stay afloat, they’ll continually need to think of solutions to address rising performance demands. Innovation is far from dead in PC hardware, but it might start to look a little different.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…