AMD’s next-gen GPU might not be a multi-chiplet compute monster, but that’s better for everyone

AMD’s flagship RDNA 3 GPU is no longer expected to be a multi-chip GPU, at least not from a graphics compute standpoint. The expectation had been that AMD’s best next-generation GPU, its Navi 31 silicon, would be the first to bring the chiplet design of its latest Ryzen CPUs over to its graphics cards.

The new rumor is, honestly, a bit of a relief.

As AMD’s next-gen GPUs approach their expected October release date and begin shipping to its partners, we’re getting ever closer to the event horizon of leak hype: that place where fact and fiction start to merge, and everything becomes a frenzy of fake numbers sprinkled with a little light truth.

But that doesn’t mean things are settled now. The Twitter-versus-YouTube leak machine is already wearing thin, and AMD’s Navi 31 GPU has been given so many theoretical and fanciful spec sheets that it’s hard to keep track of where the general consensus currently sits.

Where it was once a 92 TFLOP beast with around 15,360 shaders, laid out across a pair of graphics compute dies (GCDs), those specs have since been reduced to 72 TFLOPs and 12,288 shaders. Now we’re hearing that all the fuss about a dual graphics chiplet design was wrong, and that the multi-chip reality is more about cache dies than extra compute silicon.
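For a rough sense of where those headline TFLOP numbers come from, here’s a minimal sketch using the standard peak-FP32 formula: shaders × two operations per clock (one fused multiply-add) × clock speed. The shader counts are from the rumors above; the clock speeds are back-calculated assumptions to make the math line up, not leaked figures.

```python
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPs: shaders x 2 ops (one FMA) per clock x clock speed."""
    return shaders * 2 * clock_ghz / 1000

# Earlier rumor: 15,360 shaders at an implied ~3.0 GHz
print(f"{fp32_tflops(15_360, 3.0):.1f} TFLOPs")   # 92.2 TFLOPs

# Current rumor: 12,288 shaders at an implied ~2.93 GHz
print(f"{fp32_tflops(12_288, 2.93):.1f} TFLOPs")  # 72.0 TFLOPs
```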

Red Gaming Tech’s latest video, apparently corroborated by Twitter leakers, suggests that the entire count of 12,288 shaders will be housed on a single 5nm GCD, with a total of six 6nm memory cache dies (MCDs) arranged around it, or possibly on top of it.

(Image credit: Red Gaming Tech, YouTube)

Honestly, I would love it if AMD had been able to create a GPU compute chiplet that could live in a single package alongside other GPU compute chiplets and be completely invisible to the system. But for a gaming graphics card, that’s a tall order. For data center machines and systems that run entirely compute-based workloads, such as render farms, splitting tasks across many different chips works well. When you have different pieces of silicon rendering different frames, or different parts of the same frame, in a game, well, that’s another matter.

And, so far, that’s been beyond our GPU tech overlords. We’ve had SLI and CrossFire before, but it was tough for developers to implement the technology to great effect, so even when they were able to make a game run faster across multiple GPUs, the scaling was anything but linear.

You’d spend twice as much buying two graphics cards to get frame rates maybe 30-40% higher. In certain games. Sometimes more, sometimes less. It was a lottery, a lot of hard work for developers, and eventually the industry abandoned it altogether.
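A quick back-of-the-envelope calculation shows why the value never added up. The card price and frame rate here are purely illustrative; the scaling figure is the rough 30-40% uplift mentioned above.

```python
# Illustrative numbers only: a hypothetical $500 card doing 60 fps,
# with a second identical card adding a ~35% uplift (midpoint of 30-40%).
card_price = 500
single_fps = 60
scaling = 0.35

dual_fps = single_fps * (1 + scaling)
single_value = single_fps / card_price      # fps per dollar, one card
dual_value = dual_fps / (2 * card_price)    # fps per dollar, two cards

print(f"One card:  {single_fps} fps, {single_value:.3f} fps/$")
print(f"Two cards: {dual_fps:.0f} fps, {dual_value:.3f} fps/$")
# One card:  60 fps, 0.120 fps/$
# Two cards: 81 fps, 0.081 fps/$  ->  roughly a third worse value per dollar
```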

The holy grail is to make a multi-GPU system or chip invisible, so that the operating system, and any application running on it, sees it as a single graphics card.

(Image credit: AMD)

That was the hope when we first heard rumors that Navi 31 would be an MCM GPU, rumors backed up by leaks and job listings. But I won’t pretend that hope wasn’t tinged with a little apprehension, either.

At some point, someone is going to do it, and we thought the time was now, with AMD channeling its Zen 2 chiplet skills into a GPU. But that meant AMD jumping in first, ready to take the inevitable early-adopter hit, with unexpected latency issues cropping up across different games and countless PC configurations. All the while hoping that simply throwing a whole load of shaders into the MCM mix would overcome any bottlenecks.

But with the multi-chip design now seemingly being purely about cache dies, potentially acting as memory controllers too, things should be far more straightforward from a system standpoint, limiting the unpredictability that could otherwise occur.

And it’s still going to create an incredibly powerful AMD graphics card.

Nvidia’s new Ada Lovelace GPUs will have to watch out, because this generation is shaping up to be a doozy. And hopefully we’ll actually be able to buy these cards this time around, though probably still at exorbitant prices.
