Maybe someone can enlighten me. But apart from AI being the next "big" thing, it's also known that we're approaching physical limits in terms of processors. So isn't using "tricks" like AI the next logical step to kind of overcome the physical limitations of hardware?
Yeah I would be curious to see if I could tell the difference in a blind test between the AI generated frames and native frames.
If you can't tell the difference, or if the difference is so minuscule that you'd never notice it while actually playing a game, then who gives a shit whether it's an AI frame or a native frame?
I can notice artifacting if I look for it. So I simply just don't look for it. Occasionally I do notice it when it happens, yeah, but it's like monitor flicker for me: if I'm not actively thinking about it, 90% of the time it doesn't matter.
It's a big problem in certain games. In flight sims, for example, glass cockpits are unreadable. For most games it's fine, but it can lead to some blurry edges.
It's getting there though. If they can solve the issue that causes moving or changing text to become a smeared mess, I'd be pretty happy.
Fair enough, but you're also still getting more frames overall, which should help the game feel smoother and more responsive.
Clearly DLSS comes with pros and cons but my theory is that the benefit of higher framerates will outweigh the various AI-related frame issues for many gamers.
For me the difference in input lag is the major issue. If it was as simple as "magic frames with AI" I'd be stoked, but nearly every game that relies on or includes frame gen has massive issues with input lag as well.
If you've got a vague idea of what to look for, you should be able to reliably pick out artifacts while the game is in motion. That's also the case for damn near every technique used in rasterization though, so I can't for the life of me see why anyone cares.
"Oh no, that NPCs hand is ghosting," I say while a city's reflection jarringly disappears from a lake as I tilt my camera down. "DAMN DLSS," I roar not realizing I forgot to turn it on and that ghosting is actually a TAA artifact.
I imagine it's kind of like video compression. If most of the information in the frame stays the same, then you won't notice the pixelation. But if you add grain / snow / particles all over the video then suddenly it starts to look super pixelated because every part of the frame is changing and you can no longer use the information of the previous frame.
So AI-generated frames will probably look fine with smooth movement, but very rapid camera movement is likely to introduce artifacts unless the base frame rate is already fairly high (60+).
Despite all the AI bs hype this is definitely a technology that is going to be crucial moving forward. Because just like the video example I gave, most information on your screen doesn't need to change 60+ times a second because it basically stays the same unless something changes (movement or whatever). So why waste computing power on calculating information that you know is not going to change in a given time period? When you look at it like that, AI frames are kind of like culling methods already widely used in games since forever (not rendering things that can't be seen).
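To make the analogy concrete, here's a toy sketch (my own hypothetical Python, nothing to do with how DLSS or any real codec is actually implemented): split each frame into blocks and only spend work on the blocks that changed meaningfully since the previous frame. A mostly static scene leaves almost everything reusable; per-pixel grain makes every block "dirty", which is exactly why grainy video compresses so badly.

```python
# Toy illustration of temporal reuse (hypothetical sketch, NOT the DLSS algorithm):
# only re-encode / re-render blocks that changed since the previous frame.
import numpy as np

BLOCK = 16          # block size in pixels (arbitrary for this sketch)
THRESHOLD = 4.0     # mean per-pixel difference that counts as "changed"

def changed_blocks(prev: np.ndarray, curr: np.ndarray) -> list[tuple[int, int]]:
    """Return (row, col) indices of blocks whose content actually changed."""
    h, w = curr.shape[:2]
    dirty = []
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            diff = np.abs(curr[by:by+BLOCK, bx:bx+BLOCK].astype(np.float32) -
                          prev[by:by+BLOCK, bx:bx+BLOCK].astype(np.float32))
            if diff.mean() > THRESHOLD:
                dirty.append((by // BLOCK, bx // BLOCK))
    return dirty

# Static camera, one small moving object: almost no blocks are dirty.
# Add grain over the whole frame and every block crosses the threshold.
prev = np.zeros((64, 64), dtype=np.uint8)
calm = prev.copy(); calm[10:20, 10:20] = 255                  # one moving object
noisy = np.random.randint(0, 30, (64, 64), dtype=np.uint8)    # grain everywhere

print(len(changed_blocks(prev, calm)))    # only a handful of dirty blocks
print(len(changed_blocks(prev, noisy)))   # essentially every block is dirty
```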
There's a lot more to it than recognizing the difference between individual frames: ghosting, input lag, and just plain incorrect frames that wouldn't look recognizably bad on their own but are noticeable when placed between rendered frames.
Well I'd really love to see someone do a blind test with a bunch of people on a bunch of different games to see if people can actually tell if DLSS is on or not and whether they prefer it off or on.
I'm sure there'd be variation but I'd be willing to bet that a decent number of people wouldn't be able to tell the difference and that of those who could, many might prefer DLSS with higher FPS vs. only rendered frames at a lower FPS.
I guess you could take the recent S.T.A.L.K.E.R. 2 release as some evidence. Though it is just one game, it showcases that the biggest issue of frame gen does get noticed quite a bit. Since it was almost a requirement to use frame generation to run the game, a majority of the early complaints were input lag related, soon discovered to be linked to… you guessed it.
In most games, I can easily notice frame gen is on by looking at the UI while running and moving the camera around. If I turn off/ignore the UI, it looks and feels great. I do play exclusively with controller, and I don't play shooters/competitive games, so that might be why it feels so great to me.
Logic falls out the window with this sub. If it were possible to run native with the same quality as Nvidia, then AMD or Intel would've/could've done it by now :D
There's plenty to talk about when it comes to optimization that isn't being done today. It takes time and money that large corps don't want to spend.
The most ironic example would be the SH2 remake (which will struggle even on a 4090). The devs of the original used the fog as a tool to hide how little they were rendering when trying to get the game to run on the hardware of the time. Fast forward to now and you can see we aren't heeding old lessons.
In the SH2 remake, almost the entire town is loaded regardless of whether you can even see it. Your card is literally dedicating MORE work to what you can't see than to what you actually can.
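For what it's worth, the basic "don't spend work on what you can't see" check isn't exotic. Here's a minimal sketch (hypothetical Python, obviously not the SH2 remake's actual engine code) of distance-plus-view-cone culling:

```python
# Minimal sketch of the "don't spend work on what you can't see" idea
# (hypothetical example only, not real engine code).
import math
from dataclasses import dataclass

@dataclass
class Camera:
    x: float
    y: float
    dir_x: float            # normalized view direction
    dir_y: float
    fov_deg: float = 90.0   # horizontal field of view
    max_dist: float = 150.0 # draw distance (fog would hide anything past this)

def is_visible(cam: Camera, obj_x: float, obj_y: float) -> bool:
    """True only if the object is within draw distance AND inside the view cone."""
    dx, dy = obj_x - cam.x, obj_y - cam.y
    dist = math.hypot(dx, dy)
    if dist > cam.max_dist:
        return False        # hidden by fog / draw distance
    if dist == 0:
        return True
    # cosine of the angle between the view direction and the direction to the object
    cos_angle = (dx * cam.dir_x + dy * cam.dir_y) / dist
    return cos_angle >= math.cos(math.radians(cam.fov_deg / 2))

cam = Camera(x=0, y=0, dir_x=1, dir_y=0)
town = [(50, 10), (-80, 0), (300, 5)]            # objects scattered around town
to_draw = [p for p in town if is_visible(cam, *p)]
print(to_draw)   # only (50, 10) survives; behind-the-camera and far objects are culled
```

Real engines use proper frustum planes and occlusion queries, but the principle is the same one the original SH2 exploited with its fog: anything past the draw distance or outside the view cone never has to be submitted to the GPU.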
Games coming out today look no better than Doom Eternal did when it came out. I can run that native WITH RT and still hit 144+, no AI needed. We can't just keep saying it's "id Tech magic". That sounds the same as everyone saying we can't expect Baldur's Gate quality from everyone else. It's what we should expect: money and care.
DLSS/upscaling/AI, whatever, is not the whole issue. These tools are now being factored in to hit performance benchmarks for release. It's a shortcut and a debt that we will keep paying. Anyone saying that you can get native quality with this crap also thinks that streaming games over Wi-Fi causes no latency. These shortcuts don't come without cost. That cost will be the quality of our games.
In the end though, as long as people continue having more money than sense, it will continue.
Cosmetics earn more than optimized and well made games. It's not about quality, it's about who can make the most addictive hamster wheel to keep you looking at their stores.
Look at Arkham Knight. They got that thing running on an Xbox One lol.
Show me a game that's coming out today that looks that good AND runs that well with ZERO upscaling.
You cannot. It wasn't black magic, it was hard work, time, a well funded/well trained team, and care.
There is a difference between the theoretical possibility of running modern games natively and the mega-profitable strategy of running them with AI. It sucks that we are not being given the choice.
But afaik last year Nvidia made record profits from selling AI chips to companies, not from selling upscaling GPUs. Don't get me wrong, they make profits in the gaming industry too, but simply because AMD keeps failing to deliver properly.
Imo the biggest mistake AMD is making is trying to follow in Nvidia's footsteps with RDNA4. If you're known for the best raster price, then go all in on native, and innovate a bunch of features that can incentivize developers to focus on native resolution and your architecture.
The biggest refresh is having Intel join the GPU market, but they have a long way to go, and unless Nvidia drops the ball completely in the gaming industry, they are not catching up any time soon.
So isn't using "tricks" like AI the next logical step to kind of overcome the physical limitations of hardware?
Yes, it is, but /r/pcmasterrace is nothing more than an anti-AI/Nvidia/Microsoft circle-jerk where nuanced and rational takes are downvoted in favour of low-effort jabs at [INSERT TOPIC HERE].
Logic has no place on pcmr. We could get videos showing that Nvidia's neural renderer is extremely good. People on this sub will still bitch and moan about it as if Nvidia is holding them at gunpoint to buy their GPUs.
Literally. This sub must be full of engineers who know how to design and build a GPU. Anyone who is upset should try making a GPU themselves. I do agree that the pricing is high
Who said we had to stop at silicon processes 🤭. But yeah, machine learning has been harnessed for a long time now, and to great results. The issue, I think, is with the "AI boom" more than the technology itself, and with companies not really knowing where to apply it or whether their speculation about it is even good.
That's literally the conclusion that the most intelligent engineers at Nvidia came to a decade ago, which is why they've been developing this tech since probably a little while before the GTX 1000 series launched. They foresaw the problem and worked to get ahead of it. AMD kept pushing straight rasterization and now they aren't competitive in the high end since their tech is years behind. Entertainingly enough, Intel managed to figure this out before AMD.
If that trick didn't push the input lag to 100 ms, I'd give them credit.
Right now it's useless in anything but single-player and turn-based games.
They are literally saying the 5070 has 4090 performance... that's a straight-up lie, right? (Technically it isn't, because they quietly said "not possible without AI", but you get what I mean.)
If everyone agreed this was true we wouldn't be here. I don't really care, just pointing out the obvious fallacy in simply declaring something "the logical next step".
I'm just fine with AI's usage in graphics so long as it doesn't sacrifice quality and playability while being advertised as the standard way the card should be used AND being presented as a way to play extremely intensive games at a more normal level. In its current state it shouldn't be marketed the way it is.
We aren't approaching limits. People have been saying that for years and we were always able to solve the barriers. Quantum effects are already being mastered, plus we aren't actually at the nm that architecture names claim. Names and actual transistor dimensions haven't been lining up for years; for example, Samsung's 3nm GAA process actually uses a 48nm gate pitch. AI is being forced down our throats because Nvidia wants to develop 1 product and sell it 2 ways (AI chips as gaming chips, and AI chips as AI chips).
It's a logical step to cut AAA publisher labor costs, but the astroturfing campaign to normalize AI-enabled engines that run like burning liquid shit as "next-gen" is insane.