Next GPU releases: 2022 edition

wireframed

Ars Legatus Legionis
16,733
Subscriptor
That isn't gameplay, though. Immersion makes games more engrossing, but isn't itself gameplay.
Why should gameplay be the only thing that matters? We don't say "well, the movie was shot ugly, the lighting was bad, and the costume design atrocious, but it's a 10/10 movie because it had a good story".

We recognize that story, direction, costume design, atmosphere, effects and camera work had to come together for a movie to be great, even though a good movie can be less than stellar on some of those.

It also ignores the fact that some games use things like lighting as part of the gameplay, or use large, detailed environments as part of what makes things work.
Elder Scrolls wasn't relatively empty because that worked best; it just wasn't possible to render the environment the way they imagined it.
 

BO(V)BZ

Ars Tribunus Militum
2,082
It seems to me the "revolutionary" features that required specific hardware came along every two or three generations, and were then iterated on.

T&L was one; pixel and vertex shaders were another (the first GeForce and the GeForce 3, respectively).
RT/RTX was a big one, just because it’s been decades since we had a true revolution in how we deal with lighting, and it’s just SO expensive to do even halfway “correct”. I think we will see a LOT of evolution in this space, and perhaps separate path-traced render paths will become the norm, so older cards (with RT, since even the default path will eventually require RT hardware) aren’t locked out, but there is still a reason to upgrade to the newest cards. And of course because path tracing really is a whole other level when it comes to more accurate lighting and surfaces.
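Just to put rough numbers on how expensive "halfway correct" gets, here's a back-of-the-envelope ray budget; every figure below is an assumption on my part, not a measurement of any game or GPU:

```python
# Back-of-the-envelope ray budget for real-time path tracing at 4K60.
# All numbers are illustrative assumptions, not measurements.

width, height = 3840, 2160      # 4K frame
fps = 60
samples_per_pixel = 2           # a typical real-time budget, cleaned up by a denoiser
bounces = 3                     # indirect bounces on top of the primary hit

pixels = width * height
rays_per_frame = pixels * samples_per_pixel * (1 + bounces)   # primary + bounce rays
rays_per_second = rays_per_frame * fps

print(f"{rays_per_frame / 1e6:.0f} M rays per frame")
print(f"{rays_per_second / 1e9:.1f} G rays per second")
# ~66 M rays/frame and ~4 G rays/s with these assumptions, each needing BVH
# traversal plus shading, which is why real-time path tracing leans so hard
# on denoisers and upscalers.
```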

Now that RT is just another part of DirectX, it'll be interesting to see if there are any upcoming revolutionary changes as you mention, or if it's just iterative. Maybe AMD comes up with a new method that's 5x faster, but it would still be called through the same API AFAICT - even if it used some totally different data structure, the driver could still translate the RT calls from the game into whatever the graphics card natively understands.
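A toy sketch of what I mean by the driver translating the calls; none of this is real DXR or actual driver code, just the general pattern of a stable API sitting in front of vendor-specific internals:

```python
# Toy illustration of a stable ray tracing API in front of vendor-specific
# backends. All names here are made up; real DXR/Vulkan RT drivers do this
# translation in native code, not Python.

class RTBackend:
    def build_acceleration_structure(self, triangles): ...
    def dispatch_rays(self, width, height): ...

class BVHBackend(RTBackend):
    """Today's approach: a bounding-volume hierarchy walked by RT units."""
    def build_acceleration_structure(self, triangles):
        print(f"building a BVH over {len(triangles)} triangles")
    def dispatch_rays(self, width, height):
        print(f"tracing {width}x{height} rays via BVH traversal")

class HypotheticalFasterBackend(RTBackend):
    """Stand-in for some future scheme built on a different data structure."""
    def build_acceleration_structure(self, triangles):
        print(f"building some other structure over {len(triangles)} triangles")
    def dispatch_rays(self, width, height):
        print(f"tracing {width}x{height} rays the new way")

def render_frame(backend: RTBackend, scene):
    # The game only ever talks to this interface; swapping the backend
    # (i.e. the driver/hardware) doesn't change the calls the game makes.
    backend.build_acceleration_structure(scene)
    backend.dispatch_rays(3840, 2160)

render_frame(BVHBackend(), scene=[("tri",)] * 1000)
render_frame(HypotheticalFasterBackend(), scene=[("tri",)] * 1000)
```

Point being, as long as DXR stays the contract, whatever the vendors do under the hood shouldn't matter to the game.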

I also just expect to see more iteration and evolution. It'll be interesting to see how Blackwell and Navi 40 handle things. Hopefully AMD picks up the pace on their somewhat anemic RT, and I'm willing to bet Nvidia once more makes RT the focal point for Blackwell.

On a side note, what is the current most demanding raster-only game? If you want 4k60FPS, what's the minimum card to play that? I feel like the 4080 and above are already overkill, but I don't keep up with every game out there.
 

Xavin

Ars Legatus Legionis
30,167
Subscriptor++
The 5080 should perform like a 4090 and cost $800. It won't.
The 4090 is an outlier, so I don't expect the 5080 to get all the way there (or have 24GB VRAM). The 5070 will probably match the 4080 though. I also expect they will try to have some new hardware trick so they can add something like FG again that only works on the 50-series and up.
 
Why should gameplay be the only thing that matters?
It shouldn't. Presentation matters a ton; it's another important aspect of great games. It doesn't have to be perfect realism, either: stylized, exaggerated, or even thematically retro visuals count too. But games like Red Dead 2 would feel very different if they looked like Borderlands.

And I agree that lighting matters to gameplay, particularly in stealth games, but Thief came out close to contemporaneously with the original Xbox.

In truth the 5070 should match the 4090 (not in VRAM, obviously) for $600. That's how it used to work. But I agree that matching the 4080 for $200-300 less money is the best we can realistically expect. The 4090 was a real outlier in price, performance, and material cost; that's one huge chip. It deserved to be called a Titan.
 

malor

Ars Legatus Legionis
16,093
Why should gameplay be the only thing that matters? We don't say "well, the movie was shot ugly, the lighting was bad, and the costume design atrocious, but it's a 10/10 movie because it had a good story".
Because you don't play the graphics, you play the game. If I've learned anything about games over my uncomfortably long lifetime, it's that titles focused on graphics pizazz first and foremost rarely have legs. Great gameplay, on the other hand, lasts forever.
 

wireframed

Ars Legatus Legionis
16,733
Subscriptor
Because you don't play the graphics, you play the game. If I've learned anything about games over my uncomfortably long lifetime, it's that titles focused on graphics pizazz first and foremost rarely have legs. Great gameplay, on the other hand, lasts forever.
No one said graphics first, but you don't just "play" a game - you experience it. As good as Elder Scrolls was back in the day, you just can't tell me it was as immersive and engaging as Red Dead Redemption 2 or something like the Horizon games. Being able to bring a world truly alive, with an ecosystem, detailed environments to explore, and lighting that lets you paint rather than just dump static lighting on everything, opens up a lot of possibilities for artists.

It’s limiting to think of games only as the mechanics of the gameplay and nothing else.

And FTR, I've been playing since my friend got an Amiga, and my first 3D card was a 3dfx, so I've played and enjoyed tons of classics. I still think artistic freedom makes a difference.
 

w00key

Ars Praefectus
5,907
Subscriptor
It's another tool in the toolbox. Retro-style pixel games are still being made and are plenty fun, but Watch Dogs 2's and CP2077's lighting, transparency (glass), and reflections add another dimension to the world. Like, huh, yeah, it makes sense why game artists avoid those kinds of surfaces on game objects; shiny is really hard to get right, same for a ton of dynamic lights in a big, open, neon-lit world. Curved or weirdly textured glass is another thing that just requires RT to render.
 

hobold

Ars Tribunus Militum
2,657
Red Dead Redemption 2 is an example where I like the realism argument; I am not trying to sell a dogma here. Travelling through the mythical Wild West pretty much requires the grass to be muddy, the forest to be dark and damp, the prairie to be dusty and hazy. If it doesn't look like majestically beautiful nature almost untouched by humans, then it isn't Hollywood's dream of a particular past.

Such games are the exception, though, because RDR2 still relies on art direction more than it relies on an accurate simulation of the physics of light. But RDR2 would likely fall flat if you tried the same with a cartoony art style. You don't get to subconsciously feel the mud under your boots, smell the humidity or the dust, or feel the wind and rain in your face if the visuals aren't Hollywood accurate. The art of filmmaking delivers many effective tools.
 

Carhole

Ars Legatus Legionis
14,461
Subscriptor
@wireframed brought it up, I believe, an entire page back, but generational leaps don't seem to be slouching, which is how y'all got back into this circular conversation about lighting technologies yet again. A performance uplift of 30-50% gen on gen is pretty damned impressive. It's not a constant, but Turing to Ampere, Ampere to Ada Lovelace, and likely Ada to Blackwell will kinda moot out any bitching needed, unless we are still sore on prices, and that seems likely.

Somehow a discussion on Ars got sidetracked from what the high end of Nvidia's gaming offerings has been for eons, and that's the xx80 part. They're all good when introduced. Getting feisty and itchy about cost and availability still seems relevant at release or when massive market shifts occur, but the prosumer halo offering isn't what Joe Gamer is going for; it's what they aspire to own before likely settling for an xx60 or xx70 series mainstream card. Some of us use the halo cards for work and play, or just for work. Anyhow, I think a little bit of goalpost definition is in order here.

Yeah, all next-gen game engines are going to utilize ray and path and whatever other kind of tracing tech is introduced. Get over it. Try it for yourself; it's truly not difficult to install a game engine on your PC and build a simple level. Then try manually lighting the same scene to look as good as global illumination does. Fucking nope, unless you've got a few spare weeks.

This ship sailed years ago, and the next big things we should anticipate are technologies such as on-card LLMs for things like realtime NPC interactions, continued improvements to upscaling, and probably much better (both dev- and system-focused) asset optimization that will alleviate the need for ludicrous amounts of VRAM, or at least slow the creep to a pace more people can afford. All the while, expect rasterization to keep getting faster, sure, plus new rendering techs that nobody right now can foresee. Perhaps expect bigger things such as realtime water physics, in addition to a multitude of other fun crap that we'll all enjoy messing around with, and of course cel-shaded (cartoon, stylized, whatever) games can still exist while all of these other kickass technologies get added.
 

w00key

Ars Praefectus
5,907
Subscriptor

View: https://youtu.be/PneArHayDv4?si=_b7RCIBRGmUUzdcc


New Digital Foundry video comparing the latest DLSS 3.7, XeSS 1.3, and FSR 2. XeSS on Arc is quite impressive now, and DLSS fixed another trailing-image bug. Interesting that it has multiple models internally, like GPT with 2/3/3.1 etc., and switching from model D to C via Nvidia Profiler also removes the massive trailing in certain games, but at a cost of resolution/sharpness.

The new model E is C's better motion handling + D's sharpness, best of both worlds.

FSR, idk not much to report on I guess, still waiting for the new one.
 

wireframed

Ars Legatus Legionis
16,733
Subscriptor
Red Dead Redemption 2 is an example where I like the realism argument; I am not trying to sell a dogma here. Travelling through the mythical Wild West pretty much requires the grass to be muddy, the forest to be dark and damp, the prairie to be dusty and hazy. If it doesn't look like majestically beautiful nature almost untouched by humans, then it isn't Hollywood's dream of a particular past.

Such games are the exception, though, because RDR2 still relies on art direction more than it relies on an accurate simulation of the physics of light. But RDR2 would likely fall flat if you tried the same with a cartoony art style. You don't get to subconsciously feel the mud under your boots, smell the humidity or the dust, or feel the wind and rain in your face if the visuals aren't Hollywood accurate. The art of filmmaking delivers many effective tools.
The thing is, you could make good-looking games in the past, but you were constrained by technology a lot more than today. Borderlands 3 is an example of how powerful GPUs help realize a certain style: each game, while keeping the same "Heavy Metal" aesthetic, has progressively refined its style and look.

Sure, they didn't NEED the increased fidelity, but it makes the game more visually interesting and sells the unique style a lot better than early attempts at cel shading/graphic-novel rendering. (Though BL3 uses a lot more tools, like amazing texture work, and its edge shaders are a lot better and more consistent.)
I think Oni was an early game that tried for a cel-shaded look, and it was cool, but it also became a bit uniform and bland after the initial impression.
 

mpat

Ars Praefectus
5,951
Subscriptor
On a side note, what is the current most demanding raster-only game? If you want 4k60FPS, what's the minimum card to play that? I feel like the 4080 and above are already overkill, but I don't keep up with every game out there.
I don't know if it is the most demanding, and it certainly isn't well optimized, but a 4080 cannot hit 60 FPS at 4K in Starfield on average, at least not in the launch-day tests. The 7900 XTX does hit 64 FPS on average but only 57 at the 99th percentile, so the only card that consistently stays over 60 FPS is the 4090.
 

wireframed

Ars Legatus Legionis
16,733
Subscriptor
I don't know if it is the most demanding, and it certainly isn't well optimized, but a 4080 cannot hit 60 FPS at 4K in Starfield on average, at least not in the launch-day tests. The 7900 XTX does hit 64 FPS on average but only 57 at the 99th percentile, so the only card that consistently stays over 60 FPS is the 4090.
Starfield just doesn't look good enough to run that badly. I don't know what they're doing; it's not AI, huge environments, or incredible triangle counts.
I suspect they just built a game their engine isn't well suited for.

Currently playing Horizon: Forbidden West, and it doesn’t really use any RT, but looks pretty gorgeous. I haven’t played a ton of different games since upgrading, so I can’t compare to the games I played on the 3070, but it runs pretty well on a 4070 TiS. Apparently, later on in the game, texture load increases enough that 8GB cards start to struggle.

But at least this game looks it - the amount of detail and texture quality is pretty impressive, and there is a LOT of variety. There are textures you can walk the camera all the way up to, as close as it gets, and they still look sharp.
 

Xavin

Ars Legatus Legionis
30,167
Subscriptor++
Starfield just doesn't look good enough to run that badly. I don't know what they're doing; it's not AI, huge environments, or incredible triangle counts.
I suspect they just built a game their engine isn't well suited for.
It's just a very old engine/tooling pushing way more data than it was designed for. Based on how little they have actually changed from Fallout 4 and how the few engine things that have improved feel like something they got a contractor to add on, I assume they don't really have anyone who knows the engine internals even working there anymore. Their reaction time for simple engine and rendering issues when the game came out was not what it would be if they had an engine guy or three on staff who could fix obvious bugs.

But at least this game looks it - the amount of detail and texture quality is pretty impressive, and there is a LOT of variety. There are textures you can walk the camera all the way up to, as close as it gets, and they still look sharp.
The Decima engine had really bad issues with LoD and texture detail for a long time, so it seems like they put a bunch of effort into fixing that. My guess is that HFW will probably be one of the last big games that goes for that level of visual fidelity without some kind of RT being required. Spider-Man 2 requiring RT on the PS5 kind of cemented that as the way forward; we just haven't seen the reaction to that yet. I expect from here on out, raster-only render paths on the PC are going to be more and more of an afterthought, like software rendering quickly became after 3D GPUs showed up.
 

BO(V)BZ

Ars Tribunus Militum
2,082
I don't know if it is the most demanding, and it certainly isn't well optimized, but a 4080 cannot hit 60 FPS at 4K in Starfield on average, at least not in the launch-day tests. The 7900 XTX does hit 64 FPS on average but only 57 at the 99th percentile, so the only card that consistently stays over 60 FPS is the 4090.

I'd get chugging in Starfield in towns on my 4090, so that's certainly true, but it's definitely not a graphics showcase, unless your criterion is how a game engine with such ancient roots can continually be dragged forward into new games. I will say the textures in Starfield are solid and the props all look nice, but it's not a showcase by any means.

I will say that even if I didn't know the game engine ahead of time, I immediately recognized the 'NPC stops, turns around, and camera zooms in' thing that Bethesda's engines do when you initiate a conversation. Somebody else mentioned it elsewhere, but comparing that to conversations in Cyberpunk is completely night and day - hell, Deus Ex: Human Revolution had some interesting dynamic 'boss' conversations.
 

tiosteven

Ars Tribunus Militum
1,723
Subscriptor++
DLSS/upscaling was its own generational performance increase. I bought a 3060 Ti, and the first game was Control at 2K with ray tracing. I wasn't sure about this DLSS thing, so I stayed native. Amazing.

A couple of years later my wife surprised me with a curved, very widescreen Odyssey monitor (basically 4K, specifically twice the width of my old monitor). The first game was Control again, this time with DLSS, and the experience was sublime.
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
MLID's sources have some updates on Blackwell:

View: https://www.youtube.com/watch?v=I_2I5K5r5jE&t=759s


The top-tier mobile parts this time around look like they will be using the same silicon for both the mobile 5090 and 5080. The reasoning for this is that the heat resistance for both parts is identical. That would mean GB203 for both, which translates into laptops with 16GB of VRAM for 5080 parts. His sources also say that it's being tested with GDDR7 and PCIe 5.0.

He believes the reason for this is because AMD's Strix Halo parts will have a 256 bit bus, so they can access a very large memory pool. He doesn't think AMD will beat Blackwell in performance, but it does mean AMD won't have problems with games that go over 8GB of VRAM. nVidia needs a solution to that.

As far as desktop parts go, it's looking more likely that we only get 5090 this year, and that it should be announced at Computex and go on sale early Q4.
 

ScifiGeek

Ars Legatus Legionis
16,349
He believes the reason for this is because AMD's Strix Halo parts will have a 256 bit bus, so they can access a very large memory pool.

AFAIK they can already go to 64GB on a 128-bit bus, and theoretically more than that.

It's just common sense that they went with a 256-bit bus for the extra bandwidth needed by the big GPU.
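Rough math on why the wider bus is about bandwidth rather than capacity (the LPDDR5X speed grade below is just an assumption for illustration):

```python
# Capacity is set by DRAM density per channel; bandwidth by bus width x data rate.
# LPDDR5X-8000 is an assumed speed grade, used only to illustrate the scaling.

data_rate_mtps = 8000                      # mega-transfers per second per pin

for bus_bits in (128, 256):
    bytes_per_transfer = bus_bits / 8
    bw_gbs = data_rate_mtps * 1e6 * bytes_per_transfer / 1e9
    print(f"{bus_bits}-bit bus @ {data_rate_mtps} MT/s ~ {bw_gbs:.0f} GB/s")

# 128-bit ~ 128 GB/s, 256-bit ~ 256 GB/s. Either bus can be populated with 64GB+
# if the packages are dense enough; doubling the width mainly doubles the
# bandwidth feeding the CUs.
```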
 

hobold

Ars Tribunus Militum
2,657
This is a relatively weak GPU for those kinds of extreme memory needs. This APU is mainly going into gaming laptops.
Well, if the choice is between a few tens of thousands of shader ALUs being fed at PCIe 5.0 speed on the one hand, and a few thousand shader ALUs being fed by a 256-bit-wide DDR5 memory subsystem on the other, then there will be a few workloads where the big GPU is severely starved for data. For example, machine learning with huge models during the training phase.
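Putting some (assumed) numbers on that trade-off, just to show the order of magnitude:

```python
# How well each GPU is "fed" once the working set no longer fits in the discrete
# card's VRAM. All figures are rough assumptions for illustration only.

pcie5_x16_gbs = 64            # ~64 GB/s per direction for a PCIe 5.0 x16 link
local_256bit_gbs = 256        # ~256 GB/s for 256-bit LPDDR5X-8000 (assumed)

big_dgpu_tflops = 80          # big discrete GPU, fp32 (assumed)
apu_tflops = 15               # Strix-Halo-class iGPU, fp32 (assumed)

dgpu_bytes_per_flop = (pcie5_x16_gbs * 1e9) / (big_dgpu_tflops * 1e12)
apu_bytes_per_flop = (local_256bit_gbs * 1e9) / (apu_tflops * 1e12)

print(f"dGPU streaming over PCIe: {dgpu_bytes_per_flop:.4f} bytes/FLOP")
print(f"APU from local memory:   {apu_bytes_per_flop:.4f} bytes/FLOP")
# ~0.0008 vs ~0.017 bytes per FLOP: the much weaker GPU is far better fed once
# the dataset spills out of the discrete card's VRAM.
```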

Scalpers rejoice. :-/
 

ScifiGeek

Ars Legatus Legionis
16,349
Well, if the choice is between a few tens of thousands of shader ALUs being fed at PCIe 5.0 speed on the one hand, and a few thousand shader ALUs being fed by a 256-bit-wide DDR5 memory subsystem on the other, then there will be a few workloads where the big GPU is severely starved for data. For example, machine learning with huge models during the training phase.

Training huge models also needs big performance, so people training huge models are probably buying Nvidia pro models with massive amounts of HBM memory.
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
Training huge models also needs big performance, so people training huge models are probably buying Nvidia pro models with massive amounts of HBM memory.
They're also buying AMD's MI300X, which has more memory (192GB) than nVidia's H100 80GB and their upcoming H200 141GB.

Neither of these plays in the same space as a GPU with a local pool of memory. You're comparing apples to oranges by dragging HBM into a conversation about mobile GPUs/APUs.
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
And you are comparing a Fisher-Price toy car to a Peterbilt dump truck if you think people are going to buy APUs for massive learning models.
People, today, are using Intel Arc cards for personal AI projects at home.

There are a growing number of people who want to tinker with AI, and they may very well only have a laptop to test it out with.
 

ScifiGeek

Ars Legatus Legionis
16,349
People, today, are using Intel Arc cards for personal AI projects at home.

There are a growing number of people who want to tinker with AI, and they may very well only have a laptop to test it out with.

Sure, but I see no evidence that people at home are creating massive training models that need more than 16GB, let alone more than 64GB (which is supported by the current 128-bit bus).

So the point remains that the actual reason for the wider bus is more gaming performance for the APU.

As long as we have been discussing a bigger APU, the point has always come up that they need more bandwidth.
 

Scandinavian Film

Ars Scholae Palatinae
1,285
Subscriptor++
Sure, but I see no evidence that people at home are creating massive training models that need more than 16GB, let alone more than 64GB (which is supported by the current 128-bit bus).

So the point remains that the actual reason for the wider bus is more gaming performance for the APU.

As long as we have been discussing a bigger APU, the point has always come up that they need more bandwidth.
I agree that "the 256-bit bus of Strix Halo will allow for a larger memory pool than before" is a really strange framing of the situation. IMO the more important factor is that the wider bus makes it worthwhile to add more CUs to the APU. Originally, this was mainly for gaming, but it will also create a GPU with more RAM than previous discrete options while (potentially) having enough grunt to train >16 GB models in a reasonable amount of time.
 

hobold

Ars Tribunus Militum
2,657
I agree that "the 256-bit bus of Strix Halo will allow for a larger memory pool than before" is a really strange framing of the situation.
My words were: "or somebody wants to accelerate GPU workloads with extreme VRAM requirements".

I did not say "larger than before". Nor did I put any framing around it. So let me fix that mistake now, and add such a frame. If anybody wants to experiment with workloads that require extreme amounts of VRAM (where "extreme" is meant to be in relation to the capacities commonly offered in the same market, i.e. the consumer market), they can either:

1. acquire hardware targeted at corporate customers
2. not buy a consumer GPU (because it lacks VRAM) and give up
3. get a hypothetical(!) strix halo laptop with a suitably large LPCAMM2 module

I'd say 1 is prohibitively expensive, 2 is a no-go, and 3 is hypothetical as of yet. Therefore, 3 might be an interesting topic to think about on some tech enthusiast forum.
 

Scandinavian Film

Ars Scholae Palatinae
1,285
Subscriptor++
My words were: "or somebody wants to accelerate GPU workloads with extreme VRAM requirements".

I did not say "larger than before". Nor did I put any framing around it.
I wasn't referring to you, but to MLID:
He believes the reason for this is because AMD's Strix Halo parts will have a 256 bit bus, so they can access a very large memory pool.
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
Whelp, the 5080 will at least paper launch this year, and it's because of the DoD:

View: https://www.youtube.com/watch?v=w17I28lOL84

His contacts say that the Department of Defense has told nVidia that they will not permit the 5090 to be sold to China, and they better not try a 5090"D" this time around.

nVidia apparently has more market share in China than in other regions (yes, more than the Steam survey's ~75%). If they were to launch a part that China cannot buy, this may be perceived as a slight against China. This is part of why the 5080 launch is being pulled in - it's a sanctions buster. The estimated performance of the 5080 is just below the current export limits.
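For reference, the threshold people usually point to is the "Total Processing Performance" metric in the US export rules. A rough sketch of that math; the "5080" line is a pure placeholder, not a leaked spec:

```python
# TPP per the US export rules is roughly: peak dense TOPS x operand bit width,
# with a license required around TPP >= 4800. The 4090 figure is the commonly
# cited FP8 number; the "5080" entry is a made-up placeholder, not a real spec.

TPP_LIMIT = 4800

def tpp(dense_tops, bit_width):
    return dense_tops * bit_width

cards = {
    "RTX 4090  (FP8 dense)": (660, 8),   # ~5280, over the line
    "RTX 4090D (FP8 dense)": (590, 8),   # trimmed to come in under the limit
    "hypothetical 5080":     (590, 8),   # placeholder illustrating the idea
}

for name, (tops, bits) in cards.items():
    score = tpp(tops, bits)
    status = "restricted" if score >= TPP_LIMIT else "under the limit"
    print(f"{name:24s} TPP ~ {score}  ({status})")
```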

The other reason is that the 5080 will be close enough to the 4090 such that nVidia can kill off the 4090 for consumers and send those dies to more lucrative AI-centric products (like the ones being smuggled into China). The 5080 won't be much cheaper than the 4090 ($1200-$1500), but it will cost less to make - smaller die, less VRAM.

This will result in a small delay for the 5090 as nVidia wasn't planning to launch the 5080 as early as they now have to.
 

Scandinavian Film

Ars Scholae Palatinae
1,285
Subscriptor++
Gotta love how sensationalized everything has to be, all the time.

"Nvidia designs their product to fit within export rules" "Nvidia wants to launch the 50 series globally, instead of with a card with a limited release"-> far too reasonable.
"Nvidia is making a SANCTIONS BUSTER" "Nvidia has to be careful not to OFFEND CHINA" -> now that's more like it!

Especially when the news here isn't that the 5090 won't exist, just that it will come out after the 5080, instead of a month before like was the case for the 40 series.
 
Gotta love how sensationalized everything has to be, all the time.

"Nvidia designs their product to fit within export rules" "Nvidia wants to launch the 50 series globally, instead of with a card with a limited release"-> far too reasonable.
"Nvidia is making a SANCTIONS BUSTER" "Nvidia has to be careful not to OFFEND CHINA" -> now that's more like it!

Especially when the news here isn't that the 5090 won't exist, just that it will come out after the 5080, instead of a month before like was the case for the 40 series.

More fake drama = more clicks.
 
The same guy that wrote the SponsorBlock addon also does DeArrow. Basically it's a user-generated database of YouTube titles and thumbnails with zero clickbait. Works great, strongly recommend.

Personally I leave the thumbnail replacements off and just use it for the titles, as often I can't tell what the hell a video is about without it.