
Support for dual GPUs could be making an unexpected comeback

Intel seems to be bringing back something that Nvidia and AMD had long given up on: the ability, and the incentive, to use dual graphics cards in a single system.

Multi-GPU setups were once a big deal, but recent GPU generations abandoned the idea for a variety of reasons. However, Intel has allegedly confirmed that you’ll be able to use multiple Intel Arc GPUs at once. Will that help Intel capture some of Nvidia’s and AMD’s customer base?


Update: Intel reached out to us with a short clarification, saying: “Intel showed a Blender Cycles rendering demo at SIGGRAPH with Intel Arc graphics. Multi-GPU rendering support for Intel Arc and Intel Arc Pro graphics cards through oneAPI is supported starting in Blender 3.3. Intel Arc graphics does not support multi-GPU for gaming.”
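To see what the update describes in practice, here is a minimal sketch of enabling oneAPI multi-GPU rendering from Blender's Python console (Blender 3.3 or later). It uses Blender's standard `bpy` Cycles preferences API; the exact device names listed will depend on your system, and this only runs inside Blender itself.

```python
# Run inside Blender 3.3+ (Python console or a script).
# Selects the oneAPI backend for Cycles and enables every
# detected Intel Arc device so rendering can use all of them.
import bpy

cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "ONEAPI"  # Intel's oneAPI backend
cycles_prefs.get_devices()                   # repopulate the device list

for device in cycles_prefs.devices:
    # Enable each oneAPI GPU; leave other device types disabled
    device.use = (device.type == "ONEAPI")
    print(device.name, "enabled" if device.use else "disabled")

# Tell the current scene's Cycles settings to render on the GPU
bpy.context.scene.cycles.device = "GPU"
```

With two Arc cards installed, both should appear in the device list and Cycles will split render work across them.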

The original article follows below.

[Image: Intel Arc A750M Limited Edition graphics card on a desk. Credit: Intel]

A good few years have passed since some of us were yearning for a dual-GPU setup with one of Nvidia’s latest and greatest. Stacking Titan GPUs was something many gamers longed for, but realistically, most of us couldn’t afford it — and the performance gains weren’t quite worth it for the average player.


While the technology persists in the high-performance computing (HPC) segment and among professionals, consumers now stick to a single graphics card. Intel seems eager to shake things up in that regard.

According to TweakTown, which cites an Intel representative, the company is currently readying its oneAPI software to support multiple GPUs. In fact, Intel was allegedly planning to show off a dual-Arc system during SIGGRAPH 2022, but was unable to do so. Why? TweakTown claims that Intel couldn’t find a chassis big enough to fit two GPUs in time for the event. This checks out, seeing as Intel seemingly only had a small-form-factor NUC chassis on hand, equipped with a single Arc A770 Limited Edition GPU.

[Image: Two Intel Arc GPUs running side by side. Credit: Linus Tech Tips / Intel]

Multi-GPU support is an interesting addition to Intel Arc. At this point, it’s clear that Intel faces an uphill battle against AMD and Nvidia. Sure, the Arc lineup can trade blows with some Team Green and Team Red GPUs, but with next-gen cards from both rivals arriving in the next couple of months, Intel is at real risk of falling behind.

Using dual Intel Arc GPUs in place of a single Nvidia or AMD card could prove viable, and if the cards are priced aggressively, it might even be a decent option. On the other hand, the extra power consumption, the need for a roomy case, and the thermal concerns were among the many good reasons AMD and Nvidia stopped pushing dual-GPU setups. Intel may reveal more about the technology shortly, and perhaps then we will learn its exact plans.

Monica J. White
Monica is a computing writer at Digital Trends, focusing on PC hardware. Since joining the team in 2021, Monica has written…
This toolkit just upended Nvidia’s dominance over pro GPUs

Nvidia is the undisputed leader in professional GPU applications, and that doesn't come down solely to making the best graphics cards. A big piece of the puzzle is Nvidia's CUDA platform, which is the bedrock for everything from Blender to various AI applications. The new Scale tool, developed by Spectral Compute, aims to break down the walled garden.

Although we've seen competitors to the CUDA software stack, such as AMD ROCm, Scale is a "drop-in replacement" for CUDA. It's a compiler that allows CUDA applications to be natively compiled on AMD GPUs. Spectral Compute says Scale accepts CUDA programs as is, without the need to port to another language. In Spectral's own words, "... existing build tools and scripts just work."

This could be the reason you upgrade your GPU

Now more than ever, the best graphics cards aren't defined by their raw performance alone; they're defined by their features. Nvidia has set the stage with DLSS, which now encompasses upscaling, frame generation, and a ray tracing denoiser, and AMD is hot on Nvidia's heels with FSR 3. But what will define the next generation of graphics cards?

It's no secret that features like DLSS 3 and FSR 3 are a key factor when buying a graphics card in 2024, and I suspect AMD and Nvidia are well aware of that trend. We already have a taste of what could come in the next generation of GPUs from Nvidia, AMD, and even Intel, and it could make a big difference in PC gaming. It's called neural texture compression.
Let's start with texture compression

AMD just revealed a game-changing feature for your graphics card

AMD is set to reveal a research paper about its technique for neural texture block compression at the Eurographics Symposium on Rendering (EGSR) next week. It sounds like some technobabble, but the idea behind neural compression is pretty simple. AMD says it's using a neural network to compress the massive textures in games, which cuts down on both the download size of a game and its demands on your graphics card.

We've heard about similar tech before. Nvidia introduced a paper on Neural Texture Compression last year, and Intel followed up with a paper of its own that proposed an AI-driven level of detail (LoD) technique that could make models look more realistic from farther away. Nvidia's claims about Neural Texture Compression are particularly impressive, with the paper asserting that the technique can store 16 times the data in the same amount of space as traditional block-based compression.
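To put that 16x claim in context, here is a back-of-the-envelope size comparison in plain Python. It assumes the classic BC1 block format as the "traditional block-based compression" baseline (each 4x4 block of texels stored in a fixed 8 bytes) and simply applies Nvidia's claimed ratio on top; the numbers are illustrative, not from any paper's benchmarks.

```python
# Back-of-the-envelope texture size math.
# Baseline: uncompressed RGBA8 at 4 bytes per texel.
# BC1 block compression: each 4x4 texel block -> 8 bytes (0.5 bytes/texel).

def uncompressed_bytes(width, height, bytes_per_texel=4):
    return width * height * bytes_per_texel

def bc1_bytes(width, height):
    # 16 texels per block, 8 bytes per block
    blocks = (width // 4) * (height // 4)
    return blocks * 8

w = h = 4096  # a typical 4K texture
raw = uncompressed_bytes(w, h)   # 64 MiB uncompressed
bc1 = bc1_bytes(w, h)            # 8 MiB, an 8:1 ratio vs. raw
# Nvidia's paper claims ~16x the data in the footprint of block
# compression, i.e. roughly bc1 / 16 for equivalent texture content.
neural = bc1 / 16                # ~0.5 MiB

print(raw // 2**20, "MiB raw,", bc1 // 2**20, "MiB BC1,",
      neural / 2**20, "MiB neural (claimed)")
```

That gap between 64 MiB of raw texels and roughly half a megabyte is why both download sizes and VRAM pressure are the headline benefits.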
