The Zen Thread

I'm sure someone could easily find it on AMD's site... but I'm lazy this morning. :p
Thanks! This information is not on any of the 7x00X product pages. I suppose the max DP version mostly answers my question.

I was previously frustrated by reports that it drives "4K@60Hz" monitors; that's not enough information. Nvidia datasheets, for example, just give you a rundown of the number of displays supported, with exact resolutions:

[Screenshot of an Nvidia datasheet listing supported displays and resolutions]
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
PSA for Canadians, the 5800X3D has a coupon that takes it down to CA$320 at canadacomputers, or CA$20 less than a 5700X3D. The coupon code is pre-filled on the shopping cart, and clicking on "apply coupon" will reduce the price. In-store pick-up only though.

In related news, I expect to put a 5600X on the agora soon ;)
 
  • Like
Reactions: Carhole

Carhole

Ars Legatus Legionis
14,461
Subscriptor
PSA for Canadians, the 5800X3D has a coupon that takes it down to CA$320 at canadacomputers, or CA$20 less than a 5700X3D. The coupon code is pre-filled on the shopping cart, and clicking on "apply coupon" will reduce the price. In-store pick-up only though.

In related news, I expect to put a 5600X on the agora soon ;)
Awesome. 5800X3Ds are hovering around $315 in the US so that’s a great deal up yonder. The 5700X3Ds on the other hand are quite affordable here at about $245.

Congrats on the upgrade 😎
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
Awesome. 5800X3Ds are hovering around $315 in the US so that’s a great deal up yonder. The 5700X3Ds on the other hand are quite affordable here at about $245.

Congrats on the upgrade 😎
Yeah, it's the equivalent of US$236, so I didn't feel too bad about getting a Thermalright Peerless Assassin to go with it. Just have to wait for that to arrive.
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
B650 boards are going downhill:

View: https://youtu.be/naX-DnKekCM


So much for inexpensive AM5 being future-proof.

There's literally just one board, the ASRock B650M-HDV/M.2, that is worth buying at a low price ($120), and it's been on the market for a while. The new boards are all garbage. They also all tout their power delivery abilities on their product pages, despite having no power delivery to speak of.

If you're building a new AM5 system you'll want to be very careful as to which board you choose.
 

hobold

Ars Tribunus Militum
2,657
So much for inexpensive AM5 being future-proof.
"Inexpensive" and "future proof" don't often go together in personal computers. More often than not, "future proofing" is an upselling strategy from the marketing department.

Having said that, the marketing on most of these new boards is appalling. The vendors could simply state that a board is rated for a CPU TDP of 65W or 105W. But no, they all must claim 150+W, reality be damned. Lisa Su cannot possibly be amused by having AMD's reputation assassinated like this.
 

malor

Ars Legatus Legionis
16,093
"Inexpensive" and "future proof" don't often go together in personal computers. More often than not, "future proofing" is an upselling strategy from the marketing department.

Having said that, the marketing on most of these new boards is appalling. The vendors could simply state that a board is rated for a CPU TDP of 65W or 105W. But no, they all must claim 150+W, reality be damned. Lisa Su cannot possibly be amused by having AMD's reputation assassinated like this.
Not sure she would even know.
 

hobold

Ars Tribunus Militum
2,657
And now for something completely different.

I recently dusted off my SIMD programming and re-wrote a core loop in AVX-512 intrinsics. Turns out Zen 4 runs certain useless numerical computations around twelve times faster when done in 8-vectors of "double double precision"[1] as compared to scalar 128 bit multiprecision[2] fixed point. Nice.

I will definitely be wasting money on that Phoenix laptop in the near future. And be looking forward to Zen 5 towards the end of the year.


[1] This is a fairly old technique where a pair of FP values can represent one value of higher precision. See for example here.
[2] An emulation of high precision math with integer arithmetic. Just like long multiplication, but one "digit" is a 32 (or 64) bit integer value.
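
For the curious, here is a minimal sketch of the "double double" building block, assuming AVX-512 and the classic two-sum trick; it is not the actual loop from the post above, just an illustration of how eight high-precision additions can proceed in lock-step:

```cpp
#include <immintrin.h>

// One "double double" value is an unevaluated pair (hi, lo) of doubles whose
// sum carries roughly 106 bits of precision. Here eight such pairs share a
// pair of 512-bit registers, so every instruction works on 8 lanes at once.
struct dd8 { __m512d hi, lo; };

// Knuth's two-sum: computes s = a + b together with the exact rounding error,
// using only ordinary FP adds/subs (no branches, so it vectorizes trivially).
static inline dd8 two_sum(__m512d a, __m512d b)
{
    __m512d s  = _mm512_add_pd(a, b);
    __m512d bv = _mm512_sub_pd(s, a);
    __m512d e  = _mm512_add_pd(_mm512_sub_pd(a, _mm512_sub_pd(s, bv)),
                               _mm512_sub_pd(b, bv));
    return { s, e };
}
```

Whether something built from this actually beats 128-bit integer multiprecision by 12x will of course depend on the loop in question.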
 
  • Like
Reactions: Pjotr

Aeonsim

Ars Scholae Palatinae
1,057
Subscriptor++
B650 boards are going downhill:

So much for inexpensive AM5 being future-proof.

There's literally just one board, the ASRock B650M-HDV/M.2, that is worth buying at a low price ($120), and it's been on the market for a while. The new boards are all garbage. They also all tout their power delivery abilities on their product pages, despite having no power delivery to speak of.

If you're building a new AM5 system you'll want to be very careful as to which board you choose.

Seems to me to be more a case of bad marketing; manufacturers need to clearly state that the board is only good for 65W or 100W CPUs and remove the overclocking references.
If AMD keeps its current system of having X and non-X versions of each CPU, then most of these boards should happily run non-X parts like a 7900/9900 (65W), most of the Ryzen 5s, or possibly even a 7800X3D/9800X3D (120W). Buying the cheapest board available and expecting to get good overclocking out of it has often been iffy.

There are good B650 boards out there; it's just not these ones!
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
The VRM specs didn't change for the life of AM4. There's no reason to believe they will change for the life of AM5.

Most AM4 boards could support the full range of AM4 CPUs. There were a few here and there that had thermal issues on open test benches, but they weren't that common, at least while roundups were being performed.

It is depressing to see AM5 boards this early in the socket's lifecycle that do not support the full VRM spec. This isn't an Intel situation, where part of Intel's stated reason for changing sockets or breaking compatibility is VRM changes. AMD didn't do this with AM4, and they shouldn't be allowing board partners to do it for AM5.
 
  • Like
Reactions: malor

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
I'm intrigued that microATX is apparently popular enough for all these new boards to be released. But yes, just as Intel should enforce consistent power limits by default on their partner boards, AMD should ensure full-spec VRMs are on theirs. My distinctly low-end ASRock B450M Pro4 can handle a 5800X3D without throttling, and I'd expect any future AM5 board to have equivalent capability.
 
B650 boards are going downhill:

View: https://youtu.be/naX-DnKekCM


So much for inexpensive AM5 being future-proof.

‘Downhill’ meaning most B-series boards can still handle the flagship. One OK board at $120 (!!) is not a bad thing, though if you're trying to future-proof, then spending $10-$30 more than the cheapest possible board for WiFi or USB-C or whatever won't kill you.

Cheaper Intel boards are also a minefield for high-end CPUs, as upsold Dell customers learn every day, and are certainly further downhill future-proofing-wise.
 

hobold

Ars Tribunus Militum
2,657
Zen 5 rumors have been bubbling up more frequently over the past few days. The first few alleged engineering-sample benchmarks have leaked, too, but they are IMHO unreliable. Various vendors of AM5 boards have published BIOS updates with (preliminary) support for Zen 5, apparently confirming the "Ryzen 9000" naming scheme.

A few people closer to sources have made ominous remarks about more distinct variants of CPU chiplets: 1. one with 8 high-speed cores, 2. one with 16 dense lower-power cores, and, newly, 3. one with 16 dense cores optimized not for power but for retaining as much max clock frequency as possible in the smaller silicon area. Number 1 would be the traditional high-performance cores we already know; number 2 would be for high-density servers as before; but the new variant 3 would be for workstations or desktops, where there is only ever a single socket dissipating heat.

In some circles, the hype has built up to IMHO unrealistic Zen 5 performance expectations. I can believe that an isolated benchmark exists where Zen 5 is indeed 40% faster per clock than Zen 4, but IMHO that would have to be an AVX-512 benchmark - Zen 5 is said to execute some more operations at the full 512-bit width instead of double pumping 256-bit wide ALUs. But for general spaghetti code, I don't see IPC improvements anywhere near that.

Along similar lines, a few people closer to sources (presumably in the orbit of motherboard vendors) claimed that qualification samples are already available to finalize firmware and software support. But none of those chips have leaked any benchmark data yet.


In any case, the activity behind the scenes has picked up pace. Seems that Zen 5 is on track for a timely release. I would advise to not believe the hype (unless you care for AVX-512 performance). There will be improvements, but unless AMD decides to give us more than 16 cores on AM5, the jump won't be huge.
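
For anyone who wants to check the 512-bit-width question on their own machine, a crude probe (a sketch assuming an AVX-512-capable CPU and a flag like -mavx512f, not a rigorous benchmark) is to time a batch of independent 512-bit FMAs; a core that double-pumps 256-bit ALUs will sustain roughly half the per-cycle rate of one with native 512-bit datapaths:

```cpp
#include <immintrin.h>
#include <chrono>
#include <cstdio>

// Rough throughput probe: four independent 512-bit FMA chains per iteration,
// so the loop is throughput-bound rather than latency-bound.
int main()
{
    const __m512d a = _mm512_set1_pd(1.0000000001);
    __m512d acc0 = a, acc1 = a, acc2 = a, acc3 = a;
    const long iters = 100000000L;

    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < iters; ++i) {
        acc0 = _mm512_fmadd_pd(acc0, a, a);
        acc1 = _mm512_fmadd_pd(acc1, a, a);
        acc2 = _mm512_fmadd_pd(acc2, a, a);
        acc3 = _mm512_fmadd_pd(acc3, a, a);
    }
    auto t1 = std::chrono::steady_clock::now();

    // Combine and print the accumulators so the compiler cannot discard the loop.
    double tmp[8];
    _mm512_storeu_pd(tmp, _mm512_add_pd(_mm512_add_pd(acc0, acc1),
                                        _mm512_add_pd(acc2, acc3)));
    double sink = 0.0;
    for (int i = 0; i < 8; ++i) sink += tmp[i];

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::printf("%.3f ns per iteration (sink = %g)\n", ns / iters, sink);
}
```

Comparing the per-iteration cost against the same work issued as 256-bit halves gives a rough idea of how the FMA units are organized.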
 

Drizzt321

Ars Legatus Legionis
28,408
Subscriptor++
That'd be interesting: one chiplet with 8 Zen 5 cores, one chiplet with 16 Zen 5c. For consumer loads, I don't really see a general need for more than 6-8 high-performance cores for gaming etc. The extra 5c cores (which we need to remember are fully ISA- and feature-compatible) would handle most general usage, and help when you kick in some rendering or something.
 

malor

Ars Legatus Legionis
16,093
In any case, the activity behind the scenes has picked up pace. Seems that Zen 5 is on track for a timely release. I would advise to not believe the hype (unless you care for AVX-512 performance). There will be improvements, but unless AMD decides to give us more than 16 cores on AM5, the jump won't be huge.
Even if they did, the jump probably wouldn't be huge for most people. Even 32 threads is getting unrealistic for most workloads. Algorithms that go that widely multithreaded are often being offloaded to the GPU anyway.
 

hobold

Ars Tribunus Militum
2,657
Algorithms that go that widely multithreaded are often being offloaded to the GPU anyway.
It's not that simple. GPUs have stricter constraints on what they can and cannot run efficiently. Multithreading on manycore CPUs isn't trivial, but it's generally easier than GPGPU programming. The toolchain for multithreading has improved a lot in recent times, catalysed by the availability of affordable 16+ threaded machines.

It will take time for software to catch up. It will take time for all the old quad cores to be phased out. It will take time for the toolchain to default to automatic multithreading. It will take time for OSes (and programming environments in general) to default to multithreaded APIs that are nonetheless easy enough to use. Maybe we'll have to switch to a different set of programming languages for good.

I don't know how long it will take, and I don't know what the optimal number of hardware threads will be. But I don't think that humanity's demand for ever more freely programmable compute power will forever be satiated by the enthusiast CPUs we have today. There is a use for more threads beyond Cinebench scores.
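
To the point about the toolchain: the bar for spreading work across cores really has come down. A minimal sketch (plain C++17; on some toolchains the parallel policies need TBB linked in) looks like this:

```cpp
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

// One extra argument (the execution policy) asks the standard library to
// spread the loop across all available cores and vectorize within each.
// Compare that with writing, uploading and launching a GPGPU kernel.
int main()
{
    std::vector<double> data(1 << 24, 2.0);
    std::for_each(std::execution::par_unseq, data.begin(), data.end(),
                  [](double& x) { x = std::sqrt(x) * x; });
}
```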
 

Aeonsim

Ars Scholae Palatinae
1,057
Subscriptor++
Rendering, encoding, running VMs, running multiple game servers, generating ML models, compiling the Linux kernel, to name a few.
High-performance and scientific computing, too: a range of algorithms in genetics and bioinformatics scale really well with CPU cores. Photography also benefits when exporting and importing images.

In my experience, anything where each item can be treated nearly independently but needs massive amounts of memory runs really well on multi-core CPUs, especially if you can leverage SIMD operations for crunching the data plus scalar and logic operations for discarding data or skipping processing you don't actually need to do. I've found that while the GPU is really good at the SIMD part, it can be problematic to mix in the scalar/logic operations; it doesn't really matter if the GPU is 100x faster at SIMD-style data processing if it sucks when you have logic or scalar operations that can reduce the amount of data needing processing by 1000x. Also, if you are looking at hundreds of GB of data, the GPU has issues.
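
A toy sketch of that mix (hypothetical data layout, not an actual pipeline): cheap branchy logic throws away most records up front, and only the survivors see the SIMD-friendly arithmetic; the first half is exactly the part GPUs tend to handle awkwardly:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical record type: a flag word plus a block of numbers to crunch.
struct Record { unsigned flags; double payload[8]; };

double process(const std::vector<Record>& input)
{
    double sum = 0.0;
    for (const Record& r : input) {
        // Scalar/branchy part: discard records that don't need processing.
        if ((r.flags & 0x3u) != 0x3u)
            continue;
        // Dense part: regular arithmetic over contiguous data; a compiler can
        // auto-vectorize this inner loop with SSE/AVX/AVX-512.
        for (double v : r.payload)
            sum += v * v;
    }
    return sum;
}

int main()
{
    std::vector<Record> data(1000, Record{0x3u, {1, 1, 1, 1, 1, 1, 1, 1}});
    std::printf("%g\n", process(data));
}
```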
 

malor

Ars Legatus Legionis
16,093
Also, if you are looking at hundreds of GB of data, the GPU has issues.
Sure, but you've moved way out of the mainstream there. By and large, that's serious professional use territory.

There do exist problems that are best tackled by multicore CPUs, but the great majority don't scale well past a certain point. You get diminishing returns with most algorithms as you add more and more threads; locking overhead ends up overwhelming the additional productivity from more cores. Only a few algorithms are both 'embarrassingly parallel' (an actual technical term) and ill-suited to GPU parallelism.

If you've got one of those problems, then packing in a bunch more cores is likely to be a huge help. But not very many people do.

The CPU makers keep going wider and wider, but that's really not what most of us need. They're selling us on the concept because that's all they can make.
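
The arithmetic behind the diminishing-returns point is just Amdahl's law; a quick back-of-the-envelope (with an assumed 95% parallel fraction) shows how fast extra cores stop paying off:

```cpp
#include <cstdio>
#include <initializer_list>

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
// of the work that can run in parallel. Even at p = 0.95 the curve flattens
// quickly and can never exceed 1 / (1 - p) = 20x.
int main()
{
    const double p = 0.95;  // assumed parallel fraction
    for (int n : {2, 4, 8, 16, 32, 64, 128})
        std::printf("%3d threads -> %4.1fx speedup\n",
                    n, 1.0 / ((1.0 - p) + p / n));
}
```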
 

hobold

Ars Tribunus Militum
2,657
There are no grounds to conclude that GPUs render many-cored CPUs obsolete, or that they ever will. The reason GPUs are good at what they do is precisely because they are limited.
One prominent example these days is raytracing. This light simulation has ungodly amounts of parallelism, but it is very difficult to map onto the lock-step of SIMD hardware in a scalable way. So 4-wide SIMD is okay-ish, because we can put the 3-vector of a single ray into a SIMD register. But the 32-wide or 64-wide SIMD of a GPU means tracing several rays together ... except that those rays usually veer off into entirely different parts of the scene.

That doesn't mean GPUs are useless for hollywood VFX. But it does mean that CPUs will be competitive when the workload is highly parallel but not highly regular.
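
A toy model of that divergence (purely illustrative numbers, not a tracer): give 16 rays in a packet random lifetimes and see how much of the SIMD width is still doing useful work by the time the longest-lived ray finishes:

```cpp
#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

// 16 rays are traced in lock-step; each terminates after a random number of
// bounces, but the packet must keep iterating until its longest-lived ray is
// done. The wasted lane-iterations are the divergence cost described above.
int main()
{
    constexpr int lanes = 16;
    constexpr int packets = 100000;
    std::mt19937 rng(42);
    std::geometric_distribution<int> extra_bounces(0.3);  // random lifetimes

    double useful = 0.0, spent = 0.0;
    for (int p = 0; p < packets; ++p) {
        std::array<int, lanes> life{};
        int longest = 0;
        for (int& l : life) {
            l = 1 + extra_bounces(rng);
            longest = std::max(longest, l);
        }
        for (int l : life) useful += l;        // iterations doing real work
        spent += double(longest) * lanes;      // iterations the packet paid for
    }
    std::printf("average SIMD lane utilization: %.0f%%\n", 100.0 * useful / spent);
}
```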
 

hobold

Ars Tribunus Militum
2,657
The CPU makers keep going wider and wider, but that's really not what most of us need.
Programming would certainly be simpler if we could get the IPC we're used to from a single 40 GHz core instead of eight 5 GHz cores. And the user experience would be better, too, because every single piece of software would benefit from that.

But until human ingenuity (heck, we'd probably take alien ingenuity as well) can deliver that mythical CPU, we get whatever is actually possible instead. And that's what we have to make do with.

We'll improve the tools, adapt the OSes, revolutionize the programming languages, even change our habits (!), just to get more effective compute out of the machines that we can actually make.
 

steelghost

Ars Praefectus
4,975
Subscriptor++
I'm not sure saying "unless a single algorithm can scale with more CPU cores, there's no point in adding them" really makes sense. If you're running any modern OS, there's a lot of different code running 'under the hood'. If I'm running a workload that scales to, say, 8 cores, I still want the rest of my OS, my browser, documents, etc. to stay responsive while it's running.

If you're running a game that makes use of 8 cores, you don't want it to stutter just because your OS decides to re-index your music collection or something in the background.

Does that justify 16C32T consumer desktops? Not for everyone, but for those people who can make use of the additional grunt, why not? In any case, I rather suspect that the majority of AMD's CPU sales for consumer desktops aren't the really high core count parts, purely based on their relative cost.
 

hobold

Ars Tribunus Militum
2,657
I rather suspect that the majority of AMD's CPU sales for consumer desktops aren't the really high core count parts, purely based on their relative cost.
I think the Steam survey recently saw the peak move from four cores to six cores, but the average core count was already closer to eight. At the same time, enthusiast PC reviewers have begun to talk of six cores as entry level, and are recommending eight cores for a new gaming rig.

Nvidia certainly regards higher core counts as a given, and keeps offloading more driver work onto CPUs.

On the other hand, as Moore's Law keeps on slowly dying, silicon wafers are getting exponentially more expensive at the bleeding edge. So there might be economic factors favouring a lower number of cores as the sweet spot for consumer PCs. The chip vendors might still like to keep selling us more silicon, but if we cannot afford it, then chips might have no choice but to shrink.
 

steelghost

Ars Praefectus
4,975
Subscriptor++
I'd say this is already beginning to happen, what with Zen 4c cores (and P/E cores on the Intel side).

The entry level Intel CPUs (with 6 'P' cores and 4-6 'E' cores) are probably a forerunner of what we might see in future Zen parts, with 6 or 8 'big' Zen cores and a chiplet's worth of smaller cores (all according to binning, defects etc) in order to balance the competing pressures of marketing (moar cores is moar betterer), silicon cost, power efficiency, etc.
 

Demento

Ars Legatus Legionis
13,751
Subscriptor
I'd say for gaming, 6 is the minimum now. But the gains past there are also quite fleeting. Very few gaming benchmarks show much of an improvement going from a 7600X to a 7700X, and that's at the crazy benchmarking settings that maximise CPU differences. I expect 6 will be plenty for several more years. 2 went to 4 pretty quickly, but 4 was sufficient for a good decade at least.

That said, I bought a 7600 on the premise that I'll have to move it to my wife's PC once Win 10 support ends and I'll get a 9700 or whatever then.
 

hobold

Ars Tribunus Militum
2,657
Regurgitating one more nice story / rumor:

AMD seems to be preparing EPYC branded CPUs for the AM5 platform. IMHO this would be an effective way to market CPUs made with 16+ "dense" cores. There would be less consumer confusion about Ryzen still being the mainstream brand, with the highest single threaded performance on offer (as far as AMD's product catalog is concerned).

And a 16 + 16 dense EPYC model on AM5 could fill some of the gap left by Threadripper going "Pro" with astronomical pricing.

That being said, AMD's product strategy is unknown. There could well be significant barriers, such as BIOS support coming only with special mainboard models targeted somewhere other than mainstream AM5. In other words, these new small EPYCs could make AM5 more capable without necessarily jacking up prices ... but the opposite is also possible: AM5 could be extended into a decidedly high-priced new niche.

(The hypothetical 8X3D + 16C "whale milker" model isn't really covered by either Ryzen or Epyc brand, so I am undecided if this new development increases the chances of such an SKU ever appearing.)