
Intel may already be conceding its fight against Nvidia

Two Intel Arc graphics cards on a pink background.
Jacob Roach / Digital Trends

Nvidia continues to own the top-of-the-line GPU space, and the competition just hasn’t been able to, well, compete. The announcement of the impressive-sounding RTX 40 Super cards cements the lead even further.

As a result, AMD is said to be giving up on the high-end graphics card market with its next-gen GPUs. And now, a new rumor tells us that Intel might be doing the same with Arc Battlemage, its anticipated upcoming graphics cards that are supposed to launch later this year. While this is bad news, it’s not surprising at all.


Arc Battlemage leaks

First, let’s talk about what’s new. Intel kept quiet about Arc Battlemage during CES 2024, but Intel fellow Tom Petersen later revealed in an interview that it’s alive and well. The cards might even be coming out this year, although given Intel’s track record of missing GPU deadlines, 2025 seems like a safer bet. But what kind of performance can we expect out of these new graphics cards? This is where YouTuber RedGamingTech weighs in.


RedGamingTech posted a big update on Intel Arc Battlemage specs in his latest video, and it doesn’t sound particularly good for high-end gaming enthusiasts. According to the YouTuber, the flagship chip’s specifications may differ significantly from his previous predictions. Worse still, it might never be released at all.

Initially, RedGamingTech suggested that the top Battlemage GPU would feature 56 Xe cores and a frequency of up to 3GHz. That’s still the case, but rumor has it that there’s been a big shake-up in the memory bus and cache configuration. Instead of a 256-bit bus and 116MB of L2 cache, the YouTuber now expects a 192-bit bus, 8MB of L2 cache, and a whopping 512MB of Adamantine cache.
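To put that bus-width change in perspective, peak memory bandwidth scales linearly with bus width at a given per-pin data rate. The rumor doesn’t name a memory speed, so the 20Gbps GDDR6 figure below is purely an assumption for illustration:

```python
# Rough GDDR bandwidth math: bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given data rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

# Assuming 20Gbps GDDR6 (an illustrative figure, not from the rumor):
print(bandwidth_gb_s(20, 256))  # 640.0 GB/s with the originally rumored 256-bit bus
print(bandwidth_gb_s(20, 192))  # 480.0 GB/s with the now-rumored 192-bit bus
```

That 25% bandwidth cut is presumably what the huge 512MB Adamantine cache is meant to compensate for, much like AMD’s Infinity Cache offsets narrower buses on RDNA cards.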

Adamantine cache is still largely unknown to us at this stage, although an Intel patent, detailed by PCGamer, tells us more about it. It’s essentially a Level 4 cache that’s comparable to AMD’s Infinity Cache and appears to work in a similar way.

That sounds pretty good, right? With 56 Xe cores, the card would be a huge upgrade over the Arc A770, which comes with 32 cores. However, even with this massive L4 cache, those specs already hint at a less-than-high-end flagship for Intel. With a 192-bit bus, Intel would probably stop at around 12GB of VRAM, unless it ends up feeling adventurous like AMD with the RX 7600 XT or Nvidia with the RTX 4060 Ti. (Let’s hope that it won’t.)
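The 12GB figure follows directly from the bus width: each 32-bit memory channel typically carries one 2GB GDDR6 module, unless the vendor doubles up with a clamshell layout, which is how cards like the RX 7600 XT squeeze 16GB out of a 128-bit bus. A quick sketch of that arithmetic:

```python
# VRAM capacity from bus width: one GDDR6 module (typically 2GB, 32-bit wide) per channel.
def vram_gb(bus_width_bits: int, module_gb: int = 2, clamshell: bool = False) -> int:
    channels = bus_width_bits // 32               # number of 32-bit memory channels
    modules = channels * (2 if clamshell else 1)  # clamshell mounts two modules per channel
    return modules * module_gb

print(vram_gb(192))                  # 12 -- the likely Battlemage flagship configuration
print(vram_gb(128, clamshell=True))  # 16 -- the RX 7600 XT / RTX 4060 Ti 16GB approach
```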

Regardless of whether this GPU is even real, RedGamingTech suspects that Intel may choose not to release it at all due to unsatisfactory profit margins. Instead, Intel might focus on a GPU with 40 Xe cores, a 192-bit memory bus, 18MB of L2 cache, and zero “Adamantine” cache.

Is it time for Nvidia to celebrate?

Nvidia GeForce RTX 4090 GPU.
Jacob Roach / Digital Trends

AMD is reportedly bowing out of the high-end GPU race in this next generation. Now, Intel is said to be doing the same. Where does that leave Nvidia? Right at the very top, with complete control of the enthusiast GPU market and nothing to worry about in that regard.

It’s a dream for Nvidia, but it’s not so great for us, the end users. Giving Nvidia the ability to drive up prices as much as it wishes brought us the RTX 40-series, where the prices and the performance often just don’t add up. With zero competition at the high end, the RTX 5090 might turn out to be a terrifying monstrosity with an eye-watering price tag. After all, why wouldn’t it be? It’s not like AMD or Intel are doing anything to keep Nvidia in check.

On the other hand, even if Intel chooses to focus on the mainstream segment, things won’t change too much. AMD is Nvidia’s main competitor, and even now, when it has a couple of horses in this race, it still can’t match Nvidia’s flagship RTX 4090, or even the surprisingly impressive new RTX 40 Super cards. Intel, now one generation behind (and soon to be two), wouldn’t have been able to beat Nvidia’s future flagship either.

For the mainstream market, meaning the vast majority of GPUs that are sold, it’s actually good that AMD and Intel will be there to give Nvidia some heat. Those prices might end up less inflated as a result. Meanwhile, high-end gaming will be pricier than ever, but unfortunately, Intel wouldn’t have been able to stop Nvidia there anyway, regardless of a card it might never release.

Monica J. White
Monica is a computing writer at Digital Trends, focusing on PC hardware. Since joining the team in 2021, Monica has written…