> The enjoyment you derive from the immersiveness of an environment is, I would argue, part of the gameplay, for certain types of games.

That isn't gameplay, though. Immersion makes games more engrossing, but isn't itself gameplay.
> That isn't gameplay, though. Immersion makes games more engrossing, but isn't itself gameplay.

Why should gameplay be the only thing that matters? We don't say "well, the movie was shot badly, the lighting was bad and the costume design atrocious, but it's a 10/10 movie because it had a good story".
It seems to me the "revolutionary" features that required specific hardware came along every two or three generations, and then were iterated on.
T&L was one, pixel and vertex shaders were another (the first GeForce and the GeForce 3, respectively).
RT/RTX was a big one, just because it’s been decades since we had a true revolution in how we deal with lighting, and it’s just SO expensive to do even halfway “correct”. I think we will see a LOT of evolution in this space, and perhaps separate path-traced render paths will become the norm, so older cards (with RT, since even the default path will eventually require RT hardware) aren’t locked out, but there is still a reason to upgrade to the newest cards. And of course because path tracing really is a whole other level when it comes to more accurate lighting and surfaces.
> The 5080 should perform like a 4090 and cost $800. It won't.

The 4090 is an outlier, so I don't expect the 5080 to get all the way there (or have 24GB VRAM). The 5070 will probably match the 4080 though. I also expect they will try to have some new hardware trick so they can add something like FG again that only works on the 50-series and up.
> Why should gameplay be the only thing that matters?

It shouldn't. Presentation matters a ton; it's another important aspect of great games. Not necessarily perfect realism: stylized, exaggerated, or even thematically retro visuals count too, but games like Red Dead 2 would feel very different if they looked like Borderlands.
> Why should gameplay be the only thing that matters? We don't say "well, the movie was shot badly, the lighting was bad and the costume design atrocious, but it's a 10/10 movie because it had a good story".

Because you don't play the graphics, you play the game. If I've learned anything about games over my uncomfortably long lifetime, it's that titles focused on graphics pizazz first and foremost rarely have legs. Great gameplay, on the other hand, lasts forever.
Wow the 4090 FE is still available at the NVIDIA store almost an hour after I got the NowInStock alert.
> Because you don't play the graphics, you play the game. If I've learned anything about games over my uncomfortably long lifetime, it's that titles focused on graphics pizazz first and foremost rarely have legs. Great gameplay, on the other hand, lasts forever.

No one said graphics first, but you don't just "play" a game - you experience it. As good as Elder Scrolls was in its day, you can't tell me it is as immersive and engaging as Red Dead Redemption 2, or something like the Horizon games. Being able to bring a world truly alive with an ecosystem, detailed environments to explore, and lighting that lets you paint rather than just dump static lighting on everything opens up a lot of possibilities for artists.
> Red Dead Redemption 2 is an example where I like the realism argument; I am not trying to sell a dogma here. Travelling through the mythical Wild West pretty much requires the grass to be muddy, the forest to be dark and damp, the prairie to be dusty and hazy. If it doesn't look like a majestically beautiful nature almost untouched by humans, then it isn't Hollywood's dream of a particular past.

The thing is, you could make good looking games in the past, but you were constrained by technology a lot more than today. Borderlands 3 is an example of how powerful GPUs help realize a certain style, and each game, while using the same "Heavy Metal" style, has progressively refined that style and look.
Such games are the exception, though, because RDR2 still relies on art direction more than it relies on an accurate simulation of the physics of light. But RDR2 would likely fall flat if you tried the same with a cartoony art style. You don't get to subconsciously feel the mud under your boots, smell the humidity or the dust, or feel the wind and rain in your face, if the visuals aren't Hollywood-accurate. The art of filmmaking delivers many effective tools.
> On a side note, what is the current most demanding raster-only game? If you want 4K 60 FPS, what's the minimum card to play that? I feel like the 4080 and above are already overkill, but I don't keep up with every game out there.

I don't know if it is the most demanding, and it certainly isn't well optimized, but a 4080 cannot hit 60 FPS on average at 4K in Starfield, at least not in the launch-day tests. The 7900 XTX does hit 64 FPS on average but only 57 at the 99th percentile, so the only card that consistently stays over 60 FPS is the 4090.
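For anyone wondering how those two numbers relate: the average is taken over every frame, while the "99th percentile" figure is basically the frame rate of your slowest ~1% of frames, which is why a card can average above 60 and still not consistently stay over 60. A minimal sketch of how both are typically derived from a frame-time capture (the frame times below are made up for illustration, not taken from any real Starfield benchmark):

```python
# Illustrative only: compute average FPS and 99th-percentile FPS from frame times.
import random

random.seed(1)
frame_times_ms = [random.gauss(15.5, 1.8) for _ in range(10_000)]  # hypothetical capture

def average_fps(times_ms):
    # Total frames divided by total capture time.
    return 1000.0 * len(times_ms) / sum(times_ms)

def percentile_fps(times_ms, pct=99.0):
    # Find the 99th-percentile (i.e. slowest 1%) frame time, convert it back to FPS.
    ordered = sorted(times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100.0))
    return 1000.0 / ordered[idx]

print(f"average: {average_fps(frame_times_ms):.1f} FPS, "
      f"99th percentile: {percentile_fps(frame_times_ms):.1f} FPS")
```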
> I don't know if it is the most demanding, and it certainly isn't well optimized, but a 4080 cannot hit 60 FPS on average at 4K in Starfield, at least not in the launch-day tests. The 7900 XTX does hit 64 FPS on average but only 57 at the 99th percentile, so the only card that consistently stays over 60 FPS is the 4090.

Starfield is just not a good enough looking game to run that badly. I don't know what they're doing; it's not AI, huge environments, or incredible triangle counts.
> Starfield is just not a good enough looking game to run that badly. I don't know what they're doing; it's not AI, huge environments, or incredible triangle counts.

It's just a very old engine/tooling pushing way more data than it was designed for. Based on how little they have actually changed from Fallout 4, and how the few engine things that have improved feel like something they got a contractor to add on, I assume they don't really have anyone who knows the engine internals still working there. Their reaction time for simple engine and rendering issues when the game came out was not what it would be if they had an engine guy or three on staff who could fix obvious bugs.
I suspect they just built a game their engine isn’t well suited for.
> But at least this game looks it - the amount of detail and texture quality is pretty impressive, and there is a LOT of variety. There are textures you can walk the camera all the way up to, as close as it gets, and they still look sharp.

The Decima engine had really bad issues with LoD and texture detail for a long time, so it seems like they put a bunch of effort into fixing that. My guess is that HFW will probably be one of the last big games that goes for that level of visual fidelity without some kind of RT being required. Spider-Man 2 requiring RT on the PS5 kind of cemented that as the way forward; we just haven't seen the reaction to that yet. I expect from here on out, raster-only render paths on PC are going to be more and more of an afterthought, like software rendering quickly became once 3D GPUs showed up.
He believes the reason for this is that AMD's Strix Halo parts will have a 256 bit bus, so they can access a very large memory pool.
> It's just common sense that they have a 256 bit bus for the extra bandwidth needed for the big GPU.

Or somebody wants to accelerate GPU workloads with extreme VRAM requirements ...
> This is a relatively weak GPU for those kinds of extreme memory needs. This APU is mainly going into gaming laptops.

Well, if the choice is between a few tens of thousands of shader ALUs being fed at PCIe 5.0 speed on the one hand, and a few thousand shader ALUs being fed by a 256-bit wide DDR5 memory subsystem on the other, then there will be a few workloads where the big GPU is severely starved for data. For example, machine learning with huge models during the training phase.
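To put rough numbers on the "starved for data" point (back-of-the-envelope only; the memory speed here is an assumption, since Strix Halo's final memory configuration isn't confirmed):

```python
# Rough bandwidth comparison: a big discrete GPU whose working set spills over
# PCIe vs. a smaller APU GPU reading a large pool over a wide LPDDR5X bus.
# LPDDR5X-8000 is an assumed speed, not a confirmed Strix Halo spec.

def pcie_gb_per_s(lanes=16, gt_per_s=32):
    # PCIe 5.0: 32 GT/s per lane, 1 bit per lane per transfer,
    # 128b/130b encoding overhead, divide by 8 to get bytes.
    return lanes * gt_per_s * (128 / 130) / 8

def dram_gb_per_s(bus_bits=256, mt_per_s=8000):
    # (bus width in bytes) x (transfers per second)
    return (bus_bits / 8) * mt_per_s / 1000

print(f"PCIe 5.0 x16:         ~{pcie_gb_per_s():.0f} GB/s")
print(f"256-bit LPDDR5X-8000: ~{dram_gb_per_s():.0f} GB/s")
print(f"128-bit LPDDR5X-8000: ~{dram_gb_per_s(bus_bits=128):.0f} GB/s")
```

So if the model doesn't fit in the discrete card's VRAM, the big GPU ends up pulling data at roughly a quarter of what a 256-bit APU could stream from its unified pool.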
> Training huge models also needs big performance, so people training huge models are probably buying NVIDIA pro models with massive amounts of HBM memory.

I cannot convince you; so here, take your point.
> Training huge models also needs big performance, so people training huge models are probably buying NVIDIA pro models with massive amounts of HBM memory.

They're also buying AMD's MI300X, which has more memory (192GB) than NVIDIA's H100 80GB and their upcoming H200 141GB.
You're comparing apples to oranges by dragging HBM into a conversation about mobile GPUs/APUs.
If budget and value for money were no concern, then this thread would not exist, because we'd all buy (the current equivalent of) 4090s and be done.
> And you are comparing a Fisher-Price toy car to a Peterbilt dump truck if you think people are going to buy APUs for massive learning models.

People, today, are using Intel Arc cards for personal AI projects at home.
> People, today, are using Intel Arc cards for personal AI projects at home.
There are a growing number of people who want to tinker with AI, and they may very well only have a laptop to test it out with.
I agree that "the 256-bit bus of Strix Halo will allow for a larger memory pool than before" is a really strange framing of the situation. IMO the more important factor is that the wider bus makes it worthwhile to add more CUs to the APU. Originally, this was mainly for gaming, but it will also create a GPU with more RAM than previous discrete options while (potentially) having enough grunt to train >16 GB models in a reasonable amount of time.Sure, but I see no evidence that people at home are creating massive training models, that need more than 16 GB, let alone more than 64GB (which is supported by the current 128 bit bus).
So the point remains that the actual reason for the wider bus is more gaming performance for the APU.
For as long as we have been discussing a bigger APU, the point has always come up that it needs more bandwidth.
My words were: "or somebody wants to accelerate GPU workloads with extreme VRAM requirements".I agree that "the 256-bit bus of Strix Halo will allow for a larger memory pool than before" is a really strange framing of the situation.
> My words were: "or somebody wants to accelerate GPU workloads with extreme VRAM requirements".

I wasn't referring to you, but to MLID:

> He believes the reason for this is that AMD's Strix Halo parts will have a 256 bit bus, so they can access a very large memory pool.

I did not say "larger than before". Nor did I put any framing around it.
Gotta love how sensationalized everything has to be, all the time.
"Nvidia designs their product to fit within export rules" "Nvidia wants to launch the 50 series globally, instead of with a card with a limited release"-> far too reasonable.
"Nvidia is making a SANCTIONS BUSTER" "Nvidia has to be careful not to OFFEND CHINA" -> now that's more like it!
Especially when the news here isn't that the 5090 won't exist, just that it will come out after the 5080 instead of a month before, as was the case for the 40 series.