I guess it shouldn’t surprise me that I didn’t get any really interesting details about AMD’s upcoming CPU and GPU generations at Financial Analyst Day (FAD) 2022. After all, it’s more about marketing where AMD is now and where it’s going over the next couple of years, so analysts can wax lyrical about why investors should put their money behind Dr Lisa Su et al.
A FAD, then, isn’t so much about what the next generation of Ryzen CPU and Radeon GPU hardware is going to do, or how it might implement different technological sorceries to do it.
All we really took away from the different presentations – from AMD luminaries like Mark ‘The Papermaster’ Papermaster, Rick Bergman and David Wang – was a selection of marketing keywords and some weird performance-per-watt percentage gains.
Which makes it hard to get too excited about what’s coming from the red team. Although, given that the jaws of exaggerated expectation have bitten AMD in the collective ass previously, maybe that’s not such a bad thing.
There was, however, confirmation that we’re looking at a ~8% IPC increase for the new Zen 4 desktop processors, which we’d kind of figured from the combination of IPC and frequency that makes up the 15% generational gain AMD touted in its Computex keynote last month.
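As a rough back-of-envelope check (my own arithmetic, not AMD's official breakdown), you can see how an ~8% IPC uplift and the claimed 15% generational gain imply the rest comes from clock speed, if single-thread performance scales roughly as IPC × frequency:

```python
# Back-of-envelope decomposition of AMD's claimed Zen 4 gains.
# The 1.15 and 1.08 figures come from AMD's messaging; the split
# into a frequency component is my own rough estimate.
gen_gain = 1.15   # claimed ~15% single-thread generational gain
ipc_gain = 1.08   # ~8% of that attributed to IPC

# If performance ~ IPC x frequency, the remainder must be clocks.
freq_gain = gen_gain / ipc_gain
print(f"Implied frequency contribution: {freq_gain - 1:.1%}")
```

That lands at roughly a 6–7% clock speed contribution, which squares with the higher boost frequencies AMD showed off at Computex.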
We’ve been spoiled by performance improvements from AMD’s processor design teams over previous generations, to the point where a claimed single-thread gain in Cinebench R23 of 35% has been largely overlooked because the actual architectural gains of Zen 4 should be less than 10%. Perhaps the lowest fruit has already been plucked from the Zen tree.
It’s worth remembering that when things were stagnant at Intel, before it got its node game back on track, all you could expect from even the best new CPU generations was a 10% boost. So really, any real-world increase above that number, whether because AMD’s new chips can happily consume more power, or because they run at higher frequencies… well, that’s great.
I guess I foolishly expected that we would have something more officially concrete from AMD about its high-end GPUs than some vague noises about chiplet packaging and confirmation of DisplayPort 2.0 support. I want to be excited about AMD’s new cards, and not just have to second-guess the truth within the torrents of rumors, leaks, and speculation that constantly fill Twitter.
Because the GPU rumors look potentially very exciting, but now I want the truth, dammit. Yes, I’m impatient…
It’s the chiplet thing that’s the most intriguing, and something we’ve talked about for years, but it is a shame that there’s no meat on the marketing bones of these presentations. Is AMD really using a chiplet design in the same way as with Ryzen? By that I mean: will it have multiple actual GPU chiplets to massively increase the shader count?
All David Wang says about the ‘advanced chiplet packaging’ of the upcoming RDNA 3 GPU design is: “It allows us to continue to scale performance aggressively without the yield and cost concerns of large monolithic silicon.
“This allows us to deliver the best performance at the right cost.”
From a financial standpoint, that’s what you want to hear, and it’s one of the reasons why chiplet-based Ryzen CPUs have been so successful from a technical and manufacturing standpoint.
But I talked to Wang a few years ago about a possible move to MCM designs, and he laid out the inherent difficulties of using multiple GPU chiplets when it comes to gaming and graphics.
“To some extent, you’re talking about doing CrossFire on a single package. The challenge is, unless we make it invisible to the ISVs [independent software vendors], you will see the same kind of reluctance.
“We’re following this path on the CPU side,” Wang continued, “and I think on the GPU we’re always looking for new ideas. But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture and how you combine resources… A multithreaded CPU is a little easier to scale the workload across. NUMA is part of the OS support, so it’s a lot easier to deal with this multi-die thing compared to that kind of graphics workload.”
Given the confirmation of 3D chiplet packaging for the CDNA 3 GPUs destined for the Radeon Instinct side, we will likely see multiple compute chiplets there, but on the RDNA 3 side I’m not convinced multi-GPU is ready for primetime gaming in chiplet form.
So what could this chiplet configuration be? The latest rumors suggest that even the top two Navi 3x GPUs supposedly built on a chiplet design will come with only a single graphics compute die (GCD), with the actual chiplet packaging noises referring to splitting the GPU’s cache off into separate multiple cache dies (MCDs).
This would make things simple from a straightforward gaming performance point of view, and means you can devote most of a GCD’s die area to the actual graphics hardware, cutting out the space you would normally need for cache memory. And if the I/O and memory bus interconnects are shifted off the GCD too, that frees up a lot more die space.
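To make the die-area argument concrete, here’s a toy illustration of why moving cache and I/O off the GCD matters. Every number below is a made-up placeholder, not a leaked spec; the point is only the shape of the trade-off:

```python
# Hypothetical monolithic GPU die budget (all figures invented for
# illustration -- NOT real Navi 3x numbers).
monolithic_die_mm2 = {
    "shaders": 300,  # the actual graphics hardware
    "cache": 100,    # infinity-cache-style SRAM
    "io": 50,        # memory bus interconnects, PHYs
}
total = sum(monolithic_die_mm2.values())

# Split cache and I/O off onto separate MCDs, which could sit on a
# cheaper, older node, leaving the GCD as almost pure graphics logic
# on the expensive leading-edge process.
gcd = monolithic_die_mm2["shaders"]
freed = total - gcd
print(f"GCD shrinks from {total}mm2 to {gcd}mm2, "
      f"freeing {freed}mm2 of leading-edge silicon")
```

Smaller dies on the leading-edge node also tend to yield better, which is exactly the “yield and cost” benefit Wang’s quote alludes to.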
But again we’re left turning to rumors to get engaged and excited about the possibilities. I’m just impatient; I’m sure it won’t be long before AMD actually gives us something architecturally meaty to get our teeth into. At least, I hope it won’t be.
Although with RDNA 3’s expected release window not opening until October, and it reportedly being just a mid-range, monolithic kick-off show, let’s just say I’m not holding my breath.