How often for a new Build?

whm2074

Ars Centurion
459
Subscriptor
OK, late last year I upgraded my 10-year-old DIY PC from an i5-4670 w/ 16GB to a Ryzen 5 5600 w/ 32GB. I also replaced one of the 1TB SSDs with a 2TB one. Next upgrade is the dGPU...

I used to do a major upgrade every three to four years, but I've been slowing down. Lately all I'm doing is streaming videos and playing games, and not the latest ones either.

So how often do you guys build a new PC?
 

malor

Ars Legatus Legionis
16,093
I used to upgrade more often, but kept my 4790K for like 6.5 years. I bought a 5800X in 2020, put a 3070 in it six months later, and then just recently did a 5800X3D. I could easily have kept the old chip longer; it was working fine, but I was tempted by a fairly deep sale.

Sometime in the next year or two I may upgrade the graphics card. With DLSS, the 3070's hanging in there pretty well, but I'd like more VRAM to use with AI stuff. That's all pretty slow running on the CPU; I can type faster than Llama 3 runs on this system, even though I have 64GB.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
The cost of upgrades goes up, the performance gain per dollar (and relative to the last upgrade) stays flat or even drops, and the need for more performance year over year drops too, both because we age and stop doing things and because even old hardware is "fast enough" for almost everything, so upgrades happen less often. And if your income isn't going up fast enough, like most people's, you just can't afford to keep up.

I used to do major upgrades (rarely did an entire PC get built at once) basically every year once I started working a real job and earning money: a whole new CPU and motherboard a full generation up, RAM as necessary, a new-generation GPU when it came out. Then it was every couple of years, then every few years. Now that my money is very restricted and parts like GPUs are ridiculously expensive for a major change, it's small upgrades every two or three years and never a big leap: a modest CPU upgrade in the same motherboard (which is hardly even possible anymore); a single-generation GPU upgrade, to something just less outdated; adding some RAM, although next time that may have to be a complete replacement rather than just adding more.
 

Aeonsim

Ars Scholae Palatinae
1,057
Subscriptor++
New PC every 5 or so years (Ryzen 1700 to 7900 was the last one), which tends to result in noticeable changes! I've done a 3-year upgrade for my partner's PC, Ryzen 3600 to Ryzen 5700, which was again a decent change, and used the 3600 to upgrade the 1700 system for the kids.

On the AMD platform there is a lot of bang-for-buck upgrading: even with the 5700 machine I could still drop in a 5800X3D for 20-30% more gaming performance, or a 5950X for a substantial increase in multi-core performance.
 

steelghost

Ars Praefectus
4,975
Subscriptor++
"When I really need to, and sometimes when I really want to"

2012 - 3570k build, upgraded GPU and added SSD over time

2018 - Motherboard starts acting weird, "losing" ports and similar, so I sell the guts for what I can get and build a new rig around an 8700k, adding a 1080Ti to replace the 660 that has not aged well

2020 - My 8700k is doing just fine but I'm really tempted by the new Zen 3 parts and the idea of SO many cores. I build a new 5800X system, and pair the 8700k with an RX590 for a serviceable system for my boys to play games on

2021 - I managed to get a good deal on a used 5900X and sell on the 5800X

2022 - the 1080Ti is struggling a bit now since I got a high refresh 1440p monitor, so I add a 3080Ti to the 5900X, moving the 1080Ti to the boys' system, and converting it to a custom loop in the process.

2024 - The 5900X / 3080Ti combo still does everything I want of it with ease, and the 8700k/1080Ti is still doing fine for the older boy. I have also stood up a third gaming PC with a Lenovo 4th-gen i5 and a half-height RX550 for my youngest to play Fortnite.

2026? - Zen 5 is mature enough to justify the upgrade from Zen 3, I pair a 9800X3D with a used 4080S / water block combo, and pass the AM4 system to oldest son. Youngest son takes on the 8700k. Etc
 
  • Like
Reactions: Jeff3F

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
Thanks guys. And @Lord Evermore, you are right about not being able to afford to keep up. I know many people who don't even have computers due to not being able to afford them. I don't know about the rest of you, but I'm getting burned out on tech...
Another thing I thought of is the extreme level of integration now compared to 10, 20, 30 years ago. When you paid for an upgrade back then, you were only paying for that component. (I remember being annoyed when IDE controllers were integrated into the southbridge, and when audio started to be included.) Now, if you upgrade the CPU or motherboard, you've replaced 60% of the components of a PC and paid accordingly. Physical stuff like the case, power supply, and optical drives can be carried over, but the solid-state stuff is completely replaced with every upgrade.

One reason I am less enthusiastic about PC tech now is the fact that virtually everything is just commodity parts, with little choice that matters, and you just buy what comes bundled together. Varying northbridge and southbridge combinations aren't available, and the differences between chipsets are arbitrary marketing differences with vast price differences; getting one part that is the thing you need requires paying for a bunch of other parts that you don't need.

If you need an optical drive, one is exactly the same as another and has been for a long time. (They stopped being improved/changed long before they stopped being used.) Onboard sound between one mainboard and another is basically no different and is really just fine for all but the most picky people, who often can't actually tell the difference without being told it exists. Cooling solutions all work just fine even with modest overclocking to keep the CPUs and GPUs within spec and working fine for years, so it's all down to tiny differences that don't really matter to functionality. At least half of PC and component sales and marketing now is "it's got more lights" which just isn't a reason for me to spend more money or time on it.

Of course much of this is "I'm getting old" complaints. But there was still definitely a lot more ability in years past to differentiate machines based on actual capabilities and features, when PC enthusiasm required a lot more knowledge and devotion, compared to now, when it's a commodity market where everybody has one in some form and the differentiation is mostly in looks and, at best, clock speeds and core counts, which just comes down to spending more money.
 

malor

Ars Legatus Legionis
16,093
Of course much of this is "I'm getting old" complaints. But there was still definitely a lot more ability in years past to differentiate machines based on actual capabilities and features
CPUs have never been more different. You have wildly varying models, suitable for very different tasks. And the Intel side of the fence is running at insane heat levels, which requires substantial attention to make sure you're getting good performance.

Motherboards have become more commoditized, but building a PC correctly is about as complex as it ever was, if not more so. It's just that the problems you have to solve have changed.
 
  • Like
Reactions: SportivoA
I'm on the same timeline as you, last year I replaced 4th gen i5/16gb with i5-13400/32gb. Substantial upgrade, and this feels like it will be plenty fast for anything I need to do for a long time. Only reason I could see upgrading in the next few years is if there's substantial power savings.

In that same vein, I have an M1 Mac Mini I've been playing around with to see if I could handle a switch to MacOS. It's plenty powerful for my daily driver and sips power, but I've always been a Windows guy.
 

whm2074

Ars Centurion
459
Subscriptor
CPUs have never been more different. You have wildly varying models, suitable for very different tasks. And the Intel side of the fence is running at insane heat levels, which requires substantial attention to make sure you're getting good performance.
I hope Intel is working on bringing down the heat and power levels for its next-generation CPUs. This and cost are the reasons I went with the Ryzen 5 5600 when I upgraded. After I replace the dGPU I won't be upgrading for a while yet.
 

DaveB

Ars Tribunus Angusticlavius
7,274
I build new systems all the time; it's a hobby for me, and I don't treat computers as an appliance. I have a base day-to-day system I've used since 2013, based on a Xeon E5 1620 4C/8T CPU running on an mATX HP X79 motherboard. Both were bought used and still work great even though the CPU is OC'ed to 4.4 GHz. It's great for web surfing, office applications, and other standard home computing functions.

On the other hand, I build secondary systems for gaming and benchmarking all the time: more than 50 systems over the past 20 years, based on AMD and Intel platforms. Way back I focused on dual-CPU systems, since that was the only way to get multithreading, so lots of dual Opteron and dual Xeon setups. More recently I'm back to single socket, since you can get as many threads as you want in a single CPU. When AMD Ryzen/Zen came out in 2017 I went AMD for 3 years and built Zen, Zen+ and Zen 2 systems with 1600, 1600X, 1700X, 2600, 2600X, 3600, 3600X and 3900X CPUs, using motherboards from ASRock, Gigabyte, MSI, and even one mini-ITX with a BioStar X370. When Zen 3 debuted with the usual AMD AGESA nonsense, I went back to Intel and built a cheap Gen 10 i5-10400 system, followed by a Gen 12 i7-12700F and then my current Gen 13 i7-13700K. In addition, I'm always trying different AMD and Nvidia GPUs (even one Intel ARC). I have a Microcenter nearby and purchase a lot of cheap open-box motherboards and GPUs, so I don't spend a lot, since I resell the old parts for close to what I paid for them. For example, in my current system the i7-13700K cost just $262, the Asus Z790 with 32GB DDR5-6000 RAM a net of just $170, and the open-box Gigabyte RTX 4070 Ti Super just $672.

I've recently accumulated some cheap parts for a possible replacement of my daily driver - $109 for an i5-12400F CPU to go with my Asus H610/32GB DDR4-3200 combo and the ASRock A750 ARC GPU left over from my old Gen 12 system. Still posting from the old 2013 Xeon setup for now, but it can't last forever.
 

cerberusTI

Ars Tribunus Angusticlavius
6,449
Subscriptor++
I mostly build them as I see a reason to do so.

Long ago I upgraded more frequently, but that decreased as computers became essentially fast enough, and performance stopped increasing so quickly anyway.

I had a 4790K with a GTX 970 and 16GB in it for a long time. It did what I needed it to do, especially with a few SSD upgrades over the years as those became larger and faster. Even after eventually getting a 240Hz monitor when those came out, it was able to hit that in most games I played.

Eventually it became a bit unstable (after a keyboard issue which I think damaged the motherboard). I replaced it with an 11900K and a 3070 (still 16GB), which was massive overkill for anything I did with it, and which I thought I would likely hang on to for a while.

Last year, the field of AI suddenly became practically useful for a large variety of tasks, and computers are not fast enough any longer. The 11900K is now a file and DB server (after being packed with as much SSD storage as I could fit), and I added a 7950X3D with a 4090 in it and 64GB, as well as a 7800X3D with 96GB (and a very fast SSD.)

It is likely I will continue to keep up with this for a while, as there is a sizeable gap between what hardware is currently capable of, and what I wish it were capable of.
 

cerberusTI

Ars Tribunus Angusticlavius
6,449
Subscriptor++
Come to think of it, I haven't been keeping up with tech in general, let alone AI and LLMs. I just hope we humans don't let ourselves become dependent on it.
The better ones require renting time on a larger computer, or an expense beyond what anyone would see as reasonable for a desktop. There are still a lot of tasks which can be done locally and take a while, so it matters to me, but that is mostly about development rather than use.

We become dependent upon all of our technology where it is useful. I do not think it will be one of our worst in terms of being more helpful than harmful, even if not all of the uses will be good ones.
 

continuum

Ars Legatus Legionis
94,897
Moderator
I have said this a few times here but I went from 4C/8T Haswell to 16C/32T Zen 2, and I definitely felt like I waited at least a generation too long because I saw such a significant performance uplift in my daily use (probably should have gone to Coffee Lake in 2017 and then to Zen 2 in 2019/2020...).

Since it's the same socket and chipset I then went from Zen 2 to Zen 3 (still 16C/32T) and, unexpectedly, saw another significant uplift in one particular common task I do. To be fair I did not expect that amount of uplift in a single generation, and I don't expect such a single-generation gain to be repeated.

I have skipped Zen 4 so far due to the early DDR5 stability issues. I could go to Zen 4 now, but I am thinking I will hold off since Zen 5 is so close.
 

DaveB

Ars Tribunus Angusticlavius
7,274
I wonder how long the 5600 w/32GB upgrade will last me once I upgrade the video card. My last one lasted me ten years, so...
A 6C/12T CPU is considered just adequate today, so it's more like a 3-to-5-year plan. When you upgraded from the 4C/4T Haswell you should have gone to at least an 8 core CPU, such as the 5700/5700X, if you wanted to keep the system long term.
 

whm2074

Ars Centurion
459
Subscriptor
A 6C/12T CPU is considered just adequate today, so it's more like a 3-to-5-year plan. When you upgraded from the 4C/4T Haswell you should have gone to at least an 8 core CPU, such as the 5700/5700X, if you wanted to keep the system long term.
Amazon was out of stock on the 5700 that day. Otherwise I would have gotten it.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
A 6C/12T CPU is considered just adequate today, so it's more like a 3-to-5-year plan. When you upgraded from the 4C/4T Haswell you should have gone to at least an 8 core CPU, such as the 5700/5700X, if you wanted to keep the system long term.
6C/12T is far more than "just adequate" for the vast majority of users, even a power user. Average users are fine with 4C/4T even. You have to be very dedicated to doing a lot of heavy work to need more than that, or be extremely impatient. For most users it's more than enough threads, and the higher base frequency of the cores in a CPU with fewer cores is very often more useful than having more cores that aren't getting used. (The boost speeds may be equal, but may also produce more heat needlessly.) Two extra cores is not going to make that big a difference. It all comes down to the cost and the needs of the user, though. If 20% more money is going to give you a real-world improvement of 2% (zero improvement 90% of the time, a significant improvement 10% of the time; I don't feel like doing the math), then for most people it's not worth it. Right now, of course, the Ryzen 5000 prices are really good and really close to each other, so it's an easier choice, but it wasn't always that way.
 

malor

Ars Legatus Legionis
16,093
If they could manufacture them, we'd be much better off with fewer, faster cores. The original plan for the Pentium 4, for example, was to scale up to 10GHz, and then keep going. That was a lot of why it sucked so badly; it had very long pipelines, and suffered terribly from pipeline stalls. That would have been mostly invisible at 10GHz, but when Intel slammed into the difficulty cliff at 4GHz, it was stuck with that major performance problem.

We're getting a bazillion cores, now, because that's what they can make, not because it's what we really need. Instead of 6 cores at 4GHz, we'd be way happier with a single core at 24GHz. (or 2 cores at 12GHz, I suppose.) The wider you go on multicore, the less useful the extra cores become. All algorithms benefit from faster per-core performance, but only a subset benefits from additional cores. Few workloads benefit from widely multicore systems. There are definitely some uses for the monster chips, but probably even most of us here, one of the most technical audiences I know, don't benefit much past 16 threads, at least for computers at home. Regular folks mostly don't benefit at all.
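
To put rough numbers on that, here's a minimal Amdahl's-law sketch in Python; the parallel fractions are made-up illustrative values, not benchmarks:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the work and n is the number of cores.

def speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# The parallel fractions below are made-up illustrative values, not benchmarks.
workloads = [
    ("mostly serial desktop app", 0.30),
    ("well-threaded game", 0.70),
    ("embarrassingly parallel render", 0.95),
]
for name, p in workloads:
    line = ", ".join(f"{n} cores: {speedup(p, n):.1f}x" for n in (6, 16, 64))
    print(f"{name}: {line}")
```

Even the 95%-parallel case tops out around 20x no matter how many cores you throw at it, while every one of those rows would scale directly with per-core speed.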

I'm hopeful that the TSMC process improvements will mean there are a couple more big upgrades possible in the PC space, but Moore's Law has slowed down a bunch. We've gotten some major bumps in the last few years, but the whole era of massively cheaper and better computers every couple of years already seems to be firmly in the rear-view mirror. I think it's quite possible that the next one or two generations of PC may be about the limit for silicon processes. Going faster, if it can be done, may require entirely new materials.

It'll be weird to no longer be upgrading our PCs because they can't make them run faster anymore.
 

DaveB

Ars Tribunus Angusticlavius
7,274
6C/12T is far more than "just adequate" for the vast majority of users, even a power user. Average users are fine with 4C/4T even.
Your opinion but the OP is thinking of a ten-year system. And the cost of the additional two cores (currently $30 at Newegg) is negligible when compared to the total system cost. But argue on if you must.
 

Made in Hurry

Ars Praefectus
4,553
Subscriptor
My systems have varied through time for sure. Since 2011 or so I had low-end laptops for some years, even some I found at the landfill, as I was too poor to buy anything. The last one had a knife stuck in it, but I used it for three years until it just came apart.

I received a motherboard and 5700X donation last year from a very good forum friend here that will last me years and years, I think. I really do not do much gaming on it, and I can't keep up either, as things are just too expensive these days, but I will at least be able to for a few years more. You will find me on this forum scraping the barrel for usability going forward :)
 

whm2074

Ars Centurion
459
Subscriptor
If they could manufacture them, we'd be much better off with fewer, faster cores. The original plan for the Pentium 4, for example, was to scale up to 10GHz, and then keep going. That was a lot of why it sucked so badly; it had very long pipelines, and suffered terribly from pipeline stalls. That would have been mostly invisible at 10GHz, but when Intel slammed into the difficulty cliff at 4GHz, it was stuck with that major performance problem.

We're getting a bazillion cores, now, because that's what they can make, not because it's what we really need. Instead of 6 cores at 4GHz, we'd be way happier with a single core at 24GHz. (or 2 cores at 12GHz, I suppose.) The wider you go on multicore, the less useful the extra cores become. All algorithms benefit from faster per-core performance, but only a subset benefits from additional cores. Few workloads benefit from widely multicore systems. There are definitely some uses for the monster chips, but probably even most of us here, one of the most technical audiences I know, don't benefit much past 16 threads, at least for computers at home. Regular folks mostly don't benefit at all.

I'm hopeful that the TSMC process improvements will mean there are a couple more big upgrades possible in the PC space, but Moore's Law has slowed down a bunch. We've gotten some major bumps in the last few years, but the whole era of massively cheaper and better computers every couple of years already seems to be firmly in the rear-view mirror. I think it's quite possible that the next one or two generations of PC may be about the limit for silicon processes. Going faster, if it can be done, may require entirely new materials.

It'll be weird to no longer be upgrading our PCs because they can't make them run faster anymore.
I think that GPUs and storage will still show improvements during the next 5 to 10 years. Hopefully power consumption will improve overall, since that is one area where we need to see major improvements.
 

Made in Hurry

Ars Praefectus
4,553
Subscriptor
If they could manufacture them, we'd be much better off with fewer, faster cores. The original plan for the Pentium 4, for example, was to scale up to 10GHz, and then keep going. That was a lot of why it sucked so badly; it had very long pipelines, and suffered terribly from pipeline stalls. That would have been mostly invisible at 10GHz, but when Intel slammed into the difficulty cliff at 4GHz, it was stuck with that major performance problem.

We're getting a bazillion cores, now, because that's what they can make, not because it's what we really need. Instead of 6 cores at 4GHz, we'd be way happier with a single core at 24GHz. (or 2 cores at 12GHz, I suppose.) The wider you go on multicore, the less useful the extra cores become. All algorithms benefit from faster per-core performance, but only a subset benefits from additional cores. Few workloads benefit from widely multicore systems. There are definitely some uses for the monster chips, but probably even most of us here, one of the most technical audiences I know, don't benefit much past 16 threads, at least for computers at home. Regular folks mostly don't benefit at all.

I'm hopeful that the TSMC process improvements will mean there are a couple more big upgrades possible in the PC space, but Moore's Law has slowed down a bunch. We've gotten some major bumps in the last few years, but the whole era of massively cheaper and better computers every couple of years already seems to be firmly in the rear-view mirror. I think it's quite possible that the next one or two generations of PC may be about the limit for silicon processes. Going faster, if it can be done, may require entirely new materials.

It'll be weird to no longer be upgrading our PCs because they can't make them run faster anymore.
That actually makes me a bit curious. How would a semi-modern CPU running at 24 GHz compare to today's offerings? Could they even make it?
 
  • Like
Reactions: marianklux

malor

Ars Legatus Legionis
16,093
That actually makes me a bit curious. How would a semi-modern CPU running at 24 GHz compare to today's offerings? Could they even make it?
DRAM is the really dire bottleneck, and if that was the same as it is now, superfast CPUs would have to run almost entirely from onboard RAM of some kind. They try to do that now, because they're already so much faster than RAM, but the problem would become even more pressing.

If, on the other hand, we could make really fast DRAM, everything would change. Most of the complexity of modern CPUs is oriented around dealing with slow RAM, via big caches and branch prediction. It would take a near-total redesign, likely simplifying away a lot of internal complexity, but presumably they could make CPUs that had far higher throughput than what we see now, even at the same clock speeds.

In other words, right now, most CPUs have substantial latency because DRAM is slow. If you just plugged those chips into fast DRAM, not much would happen. But with fast DRAM and then CPUs that were designed to use that fast DRAM properly, the difference in compute power would probably be massive.
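
Just to illustrate the scale of the problem (the 80 ns figure is a ballpark assumption, not a spec):

```python
# How many core clock cycles one full DRAM access costs at different clock speeds.
# The 80 ns latency figure is a rough ballpark chosen for illustration, not a spec.

DRAM_LATENCY_NS = 80

for clock_ghz in (4, 12, 24):
    cycles_lost = DRAM_LATENCY_NS * clock_ghz  # GHz == cycles per nanosecond
    print(f"{clock_ghz} GHz core: one uncached DRAM access ~ {cycles_lost:.0f} cycles stalled")
```

The faster the core, the more work each miss throws away, which is why a hypothetical 24GHz part would have to live almost entirely out of on-die memory.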

Combine both fast DRAM and 25GHz CPUs, and things would change almost as much as they have since the first 32-bit systems hit the consumer market. Sadly, it looks like both those things are extraordinarily difficult, perhaps impossible.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
Your opinion but the OP is thinking of a ten-year system. And the cost of the additional two cores (currently $30 at Newegg) is negligible when compared to the total system cost. But argue on if you must.
I mentioned that NOW the prices are closer, but at the time of release it was an extra $100 to get two more cores that were slower. The same is the case with the current 7000-series, and it's arguable whether an entirely new AM4 build right now is worth it, since the platform is already out of date, might not last a decade like the last build, and even its upgrade path is questionable. That's quite a price difference for most people, for a performance boost that will only appear in a small number of situations/applications and will even be a performance loss in some. So I still say it's not "just adequate"; 6C/12T is quite functional for the majority of people and will still serve them well for many years.
 
  • Like
Reactions: whm2074

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
Instead of 6 cores at 4GHz, we'd be way happier with a single core at 24GHz.
That's probably not the case. Something like 4 cores at 18GHz could work better, but 8 cores at 12GHz might not be worthwhile in comparison. Multi-threading will always be useful because the OS can schedule different applications to different cores, so that things which don't depend on data from another thread can be done concurrently. A single thread can stall, have a cache miss, be waiting for data from RAM, etc., and on a single core there is a performance loss associated with switching to another thread. With at least a handful of cores, other work can continue at virtually all times.

It comes down to needing a balance between raw speed and multi-threading, which has to be considered based on workload. There is no absolute rule about how many cores and what speed is best. For low-end users just browsing the web or streaming TV, even 4 threads is good enough; my sister has a Ryzen 3200G and it runs perfectly fine for her, and I barely even notice the slowness compared to my 5600X when I'm there and need to look things up. What she's doing doesn't require lots of threads, and the speed is good enough to get the work done. Someone doing heavy office tasks with multiple applications running can benefit from SMT or additional real cores, but a much higher clock speed is more useful than a lot more cores. For modern games, additional cores and perhaps SMT are a greater benefit than a small clock speed increase, but there is still a balance point beyond which more cores don't do jack. The benefit from more cores in the last two cases is because there are more threads that need to be running, as I mentioned above; games are multi-threaded AND there are usually a lot of background things running, but the games themselves are really only making use of 4 or maybe 6 threads. A toy sketch of that balance point follows below.
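
Both the model and the two hypothetical parts below are purely illustrative:

```python
# Toy model: a workload with a fixed number of runnable threads gets roughly
# min(threads, cores) * per-core clock of useful work per unit time.
# This ignores SMT, stalls, scheduling and IPC differences; it only shows why
# few-threaded work favours clocks and wide work favours cores.

def toy_throughput(runnable_threads: int, cores: int, clock_ghz: float) -> float:
    return min(runnable_threads, cores) * clock_ghz

configs = [("6C @ 4.5 GHz", 6, 4.5), ("12C @ 3.8 GHz", 12, 3.8)]  # hypothetical parts
for workload, threads in [("web browsing", 2), ("modern game", 6), ("video encode", 24)]:
    results = ", ".join(f"{label}: {toy_throughput(threads, c, ghz):.1f}"
                        for label, c, ghz in configs)
    print(f"{workload} ({threads} busy threads) -> {results}")
```

The 2-thread and 6-thread rows favour the higher-clocked part; only the wide workload benefits from the extra cores.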

Of course big.little is just muddying the waters horribly with all of this as you have no idea what kind of cores will be best for running the various things you need to run.
 
  • Like
Reactions: whm2074

whm2074

Ars Centurion
459
Subscriptor
DRAM is the really dire bottleneck, and if that was the same as it is now, superfast CPUs would have to run almost entirely from onboard RAM of some kind. They try to do that now, because they're already so much faster than RAM, but the problem would become even more pressing.

If, on the other hand, we could make really fast DRAM, everything would change. Most of the complexity of modern CPUs is oriented around dealing with slow RAM, via big caches and branch prediction. It would take a near-total redesign, likely simplifying away a lot of internal complexity, but presumably they could make CPUs that had far higher throughput than what we see now, even at the same clock speeds.

In other words, right now, most CPUs have substantial latency because DRAM is slow. If you just plugged those chips into fast DRAM, not much would happen. But with fast DRAM and then CPUs that were designed to use that fast DRAM properly, the difference in compute power would probably be massive.

Combine both fast DRAM and 25GHz CPUs, and things would change almost as much as they have since the first 32-bit systems hit the consumer market. Sadly, it looks like both those things are extraordinarily difficult, perhaps impossible.
I'm thinking they could put a DRAM chip on the package with the CPU/SoC, but the amount of memory would be limited to what would fit on one or two dies.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
I think that GPUs and Storage will still show improvements during the next 5 to 10 years. Hopefully power consumption will be improved overall since that is one area that We need to see major improvements.
GPUs will run into the same limits that CPUs do, in terms of process improvements and ability to increase clock speeds, just as soon. They are having to POUR power into them in order to get high clock speeds, which the CPU vendors are not willing to do. I'm sure an i7's clock speed could ramp up pretty high if they designed them to take 500W of power, they just wouldn't be able to sell many of them. There aren't going to be many people who can even afford the GPUs in the first place, and even fewer that can subsequently afford the power requirements.
 

whm2074

Ars Centurion
459
Subscriptor
GPUs will run into the same limits that CPUs do, in terms of process improvements and ability to increase clock speeds, just as soon. They are having to POUR power into them in order to get high clock speeds, which the CPU vendors are not willing to do. I'm sure an i7's clock speed could ramp up pretty high if they designed them to take 500W of power, they just wouldn't be able to sell many of them. There aren't going to be many people who can even afford the GPUs in the first place, and even fewer that can subsequently afford the power requirements.
How many PC gamers are even actually playing at 4K, let alone 5K or 8K?
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
I'm thinking they could put a DRAM chip on the package with the CPU/SoC, but the amount of memory would be limited to what would fit on one or two dies.
Intel tried that. They managed 128MB eDRAM (embedded) on the Broadwell CPUs. It had twice the bandwidth and extremely low latency compared to main RAM, and in the right applications such as games it made a huge difference. In most other things, it didn't make any difference. It basically acted like a pretty big cache at the time, and we know from the Ryzen X3D processors that even a little bit of extra cache close to the CPU makes a big difference. But the next generation of main RAM had the same bandwidth, and as @malor explained, CPUs are designed to deal with the latency pretty well. There's just no way currently for embedded DRAM to keep up in size and speed, plus if we depended on that for the main system RAM, it would mean replacing your entire CPU to add memory (which is the way Apple Silicon devices are designed), or just letting it act like a large cache for the much bigger modules on the motherboard, but making it large just makes it cost SO much more and is only currently useful in some situations.
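
A quick way to see why it helped games but not much else is the usual average-memory-access-time formula; the numbers here are assumptions for illustration, not Broadwell or X3D measurements:

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
# All numbers are assumptions for illustration, not measured Broadwell or X3D figures.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_time_ns + miss_rate * miss_penalty_ns

CACHE_HIT_NS = 10   # assumed on-package cache hit time
DRAM_NS = 80        # assumed main-memory access time

# Game-like working set: the extra 128MB turns most former DRAM misses into hits.
print("without big cache:", amat(CACHE_HIT_NS, 0.25, DRAM_NS), "ns")  # 30.0 ns
print("with big cache   :", amat(CACHE_HIT_NS, 0.05, DRAM_NS), "ns")  # 14.0 ns

# Working set far larger than the cache: the miss rate barely moves, so neither does AMAT.
print("huge working set :", amat(CACHE_HIT_NS, 0.90, DRAM_NS), "ns either way")  # 82.0 ns
```

When the hot data fits in the extra layer the average access time drops a lot; when it doesn't, nothing changes, which matches how the eDRAM (and the X3D cache) behave.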
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
How many PC gamers are even actually playing at 4K, let alone 5K or 8K?
They still will hit the limit on process size for GPU transistors and only be able to increase clock speed via additional power, so adding cores will be the only way to make them "better" for a new generation. Eventually even the low-end GPUs might need 300W of dedicated power, to run at lower resolutions with the bloated and unoptimized modern games.
 

whm2074

Ars Centurion
459
Subscriptor
Intel tried that. They managed 128MB eDRAM (embedded) on the Broadwell CPUs. It had twice the bandwidth and extremely low latency compared to main RAM, and in the right applications such as games it made a huge difference. In most other things, it didn't make any difference. It basically acted like a pretty big cache at the time, and we know from the Ryzen X3D processors that even a little bit of extra cache close to the CPU makes a big difference. But the next generation of main RAM had the same bandwidth, and as @malor explained, CPUs are designed to deal with the latency pretty well. There's just no way currently for embedded DRAM to keep up in size and speed, plus if we depended on that for the main system RAM, it would mean replacing your entire CPU to add memory (which is the way Apple Silicon devices are designed), or just letting it act like a large cache for the much bigger modules on the motherboard, but making it large just makes it cost SO much more and is only currently useful in some situations.
How much memory can they fit on a single chip these days? I'm thinking that a GPU would benefit more anyway.
 

Lord Evermore

Ars Scholae Palatinae
1,490
Subscriptor++
How much memory can they fit on a single chip these days? I'm thinking that a GPU would benefit more anyway.
It still runs into the fact that DRAM is a relatively "big" set of transistors, given all the requirements for connections and traces and stuff. DRAM that is embedded is still physically separate from the chips, it's just on the same package. It's one step away from being a chiplet, because it's manufactured separately. Think about the chips on a DDR4 RAM module; each one of those is only say, 1GB, maybe 2. Imagine trying to stick 8 or even 4 of those onto the same CPU or GPU package as the chip that has the processing cores, just to get 8GB of RAM. This is what Apple does with the M1, M2, M3 chips, and it's expensive but brings very high performance in some but not all situations. It also limits the manufacturing significantly while adding to the cost. GPUs might get a lot of added performance from this at all times, or possibly none at all, because the GPU is still limited by the PCIe bus. I don't know enough to say whether it would actually help.

The rectangle to the right on the CPU here is the DRAM chip: https://images.anandtech.com/doci/16195/BDW CRB.jpg That was only 128MB. Think about trying to give that system 4GB of embedded DRAM (which would have been a just-okay amount for the time, not really enough for more than basic usage.) Of course RAM chips at that time had smaller capacity for the same physical size compared to now.

They did also make Skylake with eDRAM on some models. So popular I'd completely missed it.
 

malor

Ars Legatus Legionis
16,093
I think the eDRAM doesn't do much because it's still DRAM. It's a little lower latency, but it still has the long refresh cycles, because DRAM constantly leaks.

What CPUs really need on-die is static RAM, which can be clocked to very high speeds, but SRAM is much bigger than DRAM, and runs hot. That's what current CPUs use as cache, and that's why there isn't much of it. AMD, with its stacked static RAM on the X3D chips, gets major performance wins out of it, but has to clock the CPUs down because cooling through the SRAM layer on top is substantially harder.

(Weirdly, I get better temps out of my 5800X3D than I did out of my 5800X, but that might be better CPU goop or a better application on my part.)
 

hobold

Ars Tribunus Militum
2,657
A hypothetical ~20GHz processor would run into some hard limits. In a single clock cycle, light can travel ~1.5 centimeters / 0.6 inches. The speed of a relevant electrical signal in a semiconductor is usually described as "half the speed of light".

So that high-clocked CPU core would need to be rather small to avoid latency cycles due to distance. So either we need very small silicon structures, or a CPU design with fewer logic gates, or we'd have to insert many pipeline stages everywhere (like Netburst / Pentium 4 did).
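
The arithmetic behind those distances is simple enough to check (assuming the half-the-speed-of-light rule of thumb above):

```python
# Distance a signal can cover in one clock cycle, using the rule of thumb above
# that on-chip signals move at roughly half the speed of light.

C_CM_PER_S = 3.0e10   # speed of light in cm/s
SIGNAL_FRACTION = 0.5

for freq_ghz in (5, 20):
    period_s = 1.0 / (freq_ghz * 1e9)
    light_cm = C_CM_PER_S * period_s
    signal_cm = light_cm * SIGNAL_FRACTION
    print(f"{freq_ghz} GHz: light ~{light_cm:.2f} cm per cycle, signal ~{signal_cm:.2f} cm per cycle")
```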

Much smaller transistors are very expensive and take a long time to arrive (as a mass fabrication technology). Simplistic CPU designs or very deep pipelines imply a loss of IPC (instructions per cycle, i.e. work done per cycle) compared to our existing CPUs.

Alternative routes to denser packaging of transistors are possible: die stacking, backside power delivery (which is similar to a sandwich of two stacked dies), or even a full 3D design, and maybe others.


Anyway, with the currently available technologies, a 20GHz speed demon CPU would not generally be faster than what we have. If anybody could make such a CPU and consistently deliver higher performance for conventional spaghetti code programs, those machines would actually exist and be sold to all the high frequency traders who could afford them. And possibly to the military and other governmental customers.
 

whm2074

Ars Centurion
459
Subscriptor
Anyway, with the currently available technologies, a 20GHz speed demon CPU would not generally be faster than what we have. If anybody could make such a CPU and consistently deliver higher performance for conventional spaghetti code programs, those machines would actually exist and be sold to all the high frequency traders who could afford them. And possibly to the military and other governmental customers.
If you could make such a speed demon, you would still be bottlenecked by slower memory and storage speeds. And of course there are the massive cooling requirements to keep the system from cooking itself.
 

ChrisG

Ars Legatus Legionis
19,394
Personal rule of thumb is basically when you feel your existing hardware straining. I've done major upgrades about once every 5 years since 2008, with smaller interim things like a new video card whenever I felt my existing one was struggling a bit. Went from an i5 2500K in 2009 > i7 8700K in 2018 > AMD 7800X3D in late 2023. The 7800X3D is a whole new build - case, CPU, mobo, PSU, GPU, RAM etc. (although still using my SSDs, sound card and peripherals from years ago), and it's more than twice as fast overall as the i7 + 2080 I had. Admittedly it was something like 3x as expensive, including a new monitor, but you only live once, I suppose, and it should tide me over for another few years.
 
  • Like
Reactions: continuum

Made in Hurry

Ars Praefectus
4,553
Subscriptor
I still appreciate some older hardware, to be honest. My garage computer is a regular i5 2400 with 8GB running Linux Mint, and for just browsing the net, listening to music, and writing, it is overkill; I really do not feel any difference between this box and my Windows 11 5700X box.

But something also happened to Windows 11 with the recent major update; resource usage ballooned.
 
  • Like
Reactions: owdi