Gran Turismo's Future: "4K Resolution is Enough", But 240fps is the Target

The main thing that needs improving is how the game handles lag; no amount of fps is going to fix that. Larger fields will unfortunately make the effects of lag worse, and better collision physics don't mean a thing when lag is the biggest factor in how collisions play out. However, dynamic weather plus better grip physics for cold/warm/wet/clean/worn surfaces will improve immersion a lot.

I'm mostly hoping there will be a way to race in VR next gen. If the game can run at 4K/120 fps, it certainly can run at 1080p60 in VR!
 
@NosOsH 1080p per eye would be twice as good as PSVR is currently. 60 fps with doubling works very well; the frame rate is no problem in the current version of GTS' VR. The FOV could be a bit bigger.

It depends on what you find acceptable. You can roughly divide each dimension by 3 to compare it to the equivalent of a flat screen. The current single 1080p panel, at 960x1080 per eye, looks like roughly 320x360. Stereo 3D does make it look a bit higher-res than 320x360, yet it is still very low fidelity.

So yep, 1440p per eye (appearing closer to 853x480) would look better again.
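As a rough sketch of that rule of thumb (the factor of three is only the approximation used above, not a measured constant):

```python
def perceived_flat_equivalent(per_eye_width, per_eye_height, factor=3):
    """Roughly what 'flat screen' resolution a headset panel feels like."""
    return per_eye_width // factor, per_eye_height // factor

print(perceived_flat_equivalent(960, 1080))    # current PSVR: (320, 360)
print(perceived_flat_equivalent(2560, 1440))   # 1440p per eye: (853, 480)
```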



What can it handle, though? Currently it can only manage that shared 1080p for both eyes (dual 960x1080 frames at 60 fps) with only one other car on the track, and checkerboarded 1800p on a flat screen with cars.

If the PS5 can run at native 4K120 with all cars, then there's a good chance dual 1440p rendering for PSVR 2.0 should be possible for the complete game. If not at 120 fps, then certainly at 60 fps doubled to 120 Hz, as is done for the current PSVR.

I'm being conservative; the leap from PSVR to PSVR 2.0 will be huge regardless, whether it's dual 1080p, dual 1440p or a single 4K screen. It will feel like the jump from PS2 to PS3. Dual 1440p would be best; a single 4K screen (1920x2160 per eye) would probably be cheaper to produce.
 
Are you suggesting they can't provide those things at a high refresh rate? Seems perfectly feasible to me.

Capability is a budget; wasting it on resolutions and refresh rates that no screen is capable of directly takes away from the potential for other improvements. The PS5 might have more power, but it doesn't have infinite power. We have had multiple generations of GT now that have been missing gameplay features, and PD seemingly want to piss away power doing things no one will benefit from rather than focus on gameplay.
 
Capability is a budget; wasting it on resolutions and refresh rates that no screen is capable of directly takes away from the potential for other improvements.
No it isn't. And no it isn't. Necessarily.

The PS5 might have more power, but it doesn't have infinite power. We have had multiple generations of GT now that have been missing gameplay features, and PD seemingly want to piss away power doing things no one will benefit from rather than focus on gameplay.
I see, so you have no technical understanding of what's happening (or indeed, what has happened at PD historically) and are simply making an emotional argument.

The emotion is surely valid based on your experience, but your explanation is essentially wrong.

In future, just say you have little faith they will deliver what you personally would desire, based on previous disappointments. None of this attempting to assert that the man-hours put into fringe technologies come even close to the total budget for something like a single track. Thanks.
 
In future I will say as I wish, especially to someone who thinks that capability of hardware is unlimited.
Once again you demonstrate a lack of understanding. I never said that. "Hardware capability" is not one-dimensional and "budgeting" is really a matter of optimisation, which is mostly a matter of time - which PD have had plenty of in the area of coming up with multiple rendering solutions in a single release (and for floor demos and tech shows etc. - Sony, remember).

But you have yet to explain how a high resolution and / or high framerate mode takes away from those specific features you mentioned (car count, dynamic weather and physics). No, it is not self-evident. No, I do not accept "common opinion" as fact. Fundamentally, if it cannot be demonstrated to be based on best available facts, an opinion is practically worthless.

I'm not in this for the cheap victory like so many of the master debaters on the internet, and I think if you have something to say, you should say it, as opposed to retreating from an imaginary argument ("unlimited capability").


So I have a question for you:

Which GT game, in your opinion / recollection, did not sacrifice those specific features for "resolutions and refresh rates"?
 
Rendering bloke here, for some more Mythbusters-style action!

Actually, the CPU is used a fair bit for graphics, and that use could increase significantly at higher framerates. While the brunt of the load would be taken by the GPU (simply by having to handle 4x the number of pixels drawn per second), a lot of work is still handled by the CPU.



Interlacing is super-easy to see problems with; watch some footage of a fighter plane or a jungle and things start to look like they've got sawtooth/fuzzy edges! In super slow-moving, super soft images it's OK-ish.


Beyond 4k (heck even beyond 1440p) the only really super-consistent way most people are able to tell is with text - smaller text 100% looks clearer at higher resolutions (effectively higher dpi, since screen size is usually similar). That's why phones and tablets have stupidly high resolution screens but TVs have been able to go so long at lower resolutions.

As for the rest of the graphics, are you getting 4x the improvement in visuals for the ~4x increase in load on the GPU and some extra on the CPU? No, not really. There are better ways to blow the available power that are more noticeable.

A good test is this: which looks more realistic, a 4K gameplay capture or a 480p YouTube video (OF REAL FOOTAGE OF REAL THINGS FROM A CAMERA IRL)? The only reason games aren't at 480p anymore is that we need some clarity and detail you can't get at 480p! Otherwise we'd dump every trick in the book we possibly could into super-realistic rendering at a low res, then do the UI on top at full resolution.

Not particularly. The AI would sample the world-state at the start of its cycle. That would just be the latest state the physics was in at that particular moment in time. So position, velocity, angular velocity, upcoming waypoint, etc. Then it applies its determined action until the next AI thinking cycle (sorry, I'm trying to translate this into Normal, it's not easy).

Unless you're talking about having more cars on track taking up physics time (because suspension/tyre/physics-body calcs have to run for each car every physics update) - then, yes. But the 'AI-thinking' part, no, not really.


One of the big myths that gets around is that Everything has to be calculated in sync with Everything Else, and that's not only wrong, but the worst thing you can do.

You can do Important Physics (i.e. your car handling stuff) at, say, 400Hz, AI (thinking, decision making) at 50Hz, Secondary Physics (i.e. bollards, cones and anything not important) at 30Hz (sidenote: I think GT5P/GT5 was doing it at 12Hz or something crazy low), Network stuff (sending/receiving position/info in multiplayer) at 32/64Hz, and Rendering at 60Hz, and it'll all be pretty OK.

What happens if you DON'T do that is you get physics or AI or movement speed or UI or whatever that is 100% affected by framerate. You can ask any Dark Souls player how dumb an idea that was. Or the Fallout players what happens to the UI at high framerates.
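A minimal sketch of how that kind of multi-rate scheduling can be wired up, using plain fixed-timestep accumulators (the rates, names and empty step functions are illustrative placeholders, not anything from PD's actual code):

```python
import time

# Illustrative update rates in Hz, loosely following the examples above.
RATES = {
    "vehicle_physics": 400,    # car handling
    "ai": 50,                  # decision making
    "secondary_physics": 30,   # cones, bollards, other unimportant stuff
    "network": 64,             # send/receive multiplayer state
}

# Placeholder systems; each receives its own fixed timestep in seconds.
def vehicle_physics_step(dt): pass
def ai_step(dt): pass
def secondary_physics_step(dt): pass
def network_step(dt): pass
def render(): pass             # runs once per loop, i.e. at display rate

STEPS = {
    "vehicle_physics": vehicle_physics_step,
    "ai": ai_step,
    "secondary_physics": secondary_physics_step,
    "network": network_step,
}

def game_loop():
    accumulators = {name: 0.0 for name in RATES}
    previous = time.perf_counter()
    while True:  # runs forever in this sketch
        now = time.perf_counter()
        frame_time = now - previous
        previous = now

        # Each system consumes real time in its own fixed-size steps,
        # so none of them is tied to the rendering framerate.
        for name, hz in RATES.items():
            step = 1.0 / hz
            accumulators[name] += frame_time
            while accumulators[name] >= step:
                STEPS[name](step)
                accumulators[name] -= step

        render()  # draw whatever the latest state is
```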


If anyone has technical questions, just message here and ask.
Explain that comparison please! I've never seen it put that way before.
 
8k is a total meme as far as TVs and monitors go - it’s not needed, it’s past the point of diminishing returns. The human eye can’t distinguish the difference between a 4k and 8k resolution unless you’ve got your face essentially right up against the screen, where you’d have to turn your head left and right to see each side of the screen based on your ridiculously large field of view of your TV/monitor. For a normal field of view of a TV/monitor based on how far we all actually sit, 4k is perfectly good.

With only the highest-end PCs in recent times being able to run games at max settings at 4K 60fps - asking them to then render four times the number of pixels per frame (roughly 33.18 million pixels per frame for 8K vs. 8.29 million for 4K) for an improvement the human eye can't notice at the distances we actually sit from our displays is just wasteful and stupid.
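For reference, the raw pixel counts behind that roughly 4x figure (assuming the usual UHD resolutions):

```python
def megapixels(width, height):
    return width * height / 1e6

print(megapixels(3840, 2160))   # 4K UHD: ~8.29 million pixels per frame
print(megapixels(7680, 4320))   # 8K UHD: ~33.18 million, about 4x as many
```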

So I’m glad that he shares this point of view on the topic.

Once we’re at 4k - the focus for the future should be looking to boost frame rates to 60fps and above with high refresh rate screens, for increases in motion fluidity/reduction of blur/faster pixel response times.

———
All of this above is with TV/monitors considered. In terms of VR headsets - higher resolutions above 4k may and probably will be of benefit due to how that technology works.
 
I use a 36mp full frame camera for work, but that hasn't stopped Samsung bringing out a 108mp phone - if they can do it, they will push the technology regardless.

It's the whole Jeff Goldblum/Jurassic Park analogy.
 
8k is a total meme as far as TVs and monitors go - it’s not needed, it’s past the point of diminishing returns. The human eye can’t distinguish the difference between a 4k and 8k resolution unless you’ve got your face essentially right up against the screen, where you’d have to turn your head left and right to see each side of the screen based on your ridiculously large field of view of your TV/monitor. For a normal field of view of a TV/monitor based on how far we all actually sit, 4k is perfectly good.

With only the highest-end PCs in recent times being able to run games at max settings at 4K 60fps - asking them to then render four times the number of pixels per frame (roughly 33.18 million pixels per frame for 8K vs. 8.29 million for 4K) for an improvement the human eye can't notice at the distances we actually sit from our displays is just wasteful and stupid.

So I’m glad that he shares this point of view on the topic.

Once we’re at 4k - the focus for the future should be looking to boost frame rates to 60fps and above with high refresh rate screens, for increases in motion fluidity/reduction of blur/faster pixel response times.

———
All of this above is with TV/monitors considered. In terms of VR headsets - higher resolutions above 4k may and probably will be of benefit due to how that technology works.

Fully agree.

Only VR could benefit from 8K resolution, but that is a LONG, LONG, LONG way ahead... It isn't even at 4K right now, to be honest.
 
I've had it explained to me many times, but I still don't feel satisfied that there's any point to a framerate literally ten times faster than the framerate of human vision.

I understand "dropped frames" happen at 30fps - not that I think it's ever impacted me as a high impact, ultra skilled gamer in ~2 decades of kicking ass and taking names - but is ten times what's actually perceivable on a moment to moment basis actually going to achieve anything other than yet another console that's gonna cook itself in a couple of years time?

I know I worded all that like a nob but I genuinely don't know - is there really a tangible point to this or are hardware makers chasing empty numbers?
 
I've had it explained to me many times, but I still don't feel satisfied that there's any point to a framerate literally ten times faster than the framerate of human vision.

10 times? Who's talking about 1500 fps?
 
10 times? Who's talking about 1500 fps?

He's talking about 240fps.

You start perceiving smooth motion at 24 fps.
Being able to see flickering in brightness goes away at around 120 Hz.
Fighter pilots can distinguish details at up to 300 fps. However, that's based on being able to identify a plane that only flashes on screen for 1/300th of a second, which is cheating, since it creates an afterimage on your retina, giving you plenty of time to 'see' it and then identify it.

However, when panning or following an object that moves across the screen, the only real limit is moving at most 1 pixel per frame. The faster an object moves across the screen, the higher the frame rate needs to be for you to track it accurately with your eyes and collect a clear image on your retina. The bigger the steps, the harder it is to track, and the blurrier the object becomes. That's especially important in VR, where you constantly track objects and turn your head across a huge FOV.
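To put a rough number on that one-pixel-per-frame limit (the crossing speed is a made-up example, not a measurement):

```python
def fps_for_smooth_tracking(speed_px_per_s, max_px_per_frame=1):
    """Frame rate needed so a tracked object moves at most
    max_px_per_frame pixels between successive frames."""
    return speed_px_per_s / max_px_per_frame

# e.g. an object crossing a 3840-pixel-wide image in 2 seconds
print(fps_for_smooth_tracking(3840 / 2))   # 1920.0 fps to stay at 1 px/frame
```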

Higher FPS also reduces latency, with diminishing returns.
30 fps adds about 66 ms of latency (double the render time); most 30 fps games end up in the 90 to 100 ms range, input to display.
60 fps adds about 33 ms; the fastest 60 fps games still have about 66 ms overall input-to-display latency.
120 fps adds about 16 ms, which would improve the overall latency to around 46 ms.
240 fps adds about 8 ms, which would improve the overall latency to around 38 ms.
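In other words, the latency the renderer adds is roughly twice the frame time; a quick sketch of that arithmetic (the doubling is the rule of thumb used above, not a universal constant):

```python
def added_latency_ms(fps):
    """Approximate latency the render pipeline adds, using the
    'double the render time' rule of thumb from the list above."""
    return 2 * (1000.0 / fps)

for fps in (30, 60, 120, 240):
    print(f"{fps:>3} fps -> ~{added_latency_ms(fps):.0f} ms added")
# 30 -> ~67 ms, 60 -> ~33 ms, 120 -> ~17 ms, 240 -> ~8 ms
```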

Internet lag is a much bigger factor at 60 fps already.

Variable frame rates for different elements are the way to go. However, for that the output refresh rate needs to be 240 Hz. Then you can use frame rate doubling, as on PSVR, to double or quadruple the effective rate and make panning (while turning) ultra smooth while you keep rendering at 60 fps. Scenery approaching in the distance would be fine with 30 fps updates, while things close to the car can be updated at 120 fps. You still get the latency benefits this way but avoid the high cost of rendering the whole scene at 240 fps.
 
10 times? Who's talking about 1500 fps?
I think he's taking the mick about some people saying the human eye can't see past 30 fps. Yes, those people exist. I understand not being able to perceive it all that well, but straight up saying you can't see it is just... well, you know how some people are.
 
He's talking about 240fps.

You start perceiving smooth motion at 24 fps.
Being able to see flickering in brightness goes away at around 120 Hz.
Fighter pilots can distinguish details at up to 300 fps. However, that's based on being able to identify a plane that only flashes on screen for 1/300th of a second, which is cheating, since it creates an afterimage on your retina, giving you plenty of time to 'see' it and then identify it.

However, when panning or following an object that moves across the screen, the only real limit is moving at most 1 pixel per frame. The faster an object moves across the screen, the higher the frame rate needs to be for you to track it accurately with your eyes and collect a clear image on your retina. The bigger the steps, the harder it is to track, and the blurrier the object becomes. That's especially important in VR, where you constantly track objects and turn your head across a huge FOV.

Higher FPS also reduces latency, with diminishing returns.
30 fps adds about 66 ms of latency (double the render time); most 30 fps games end up in the 90 to 100 ms range, input to display.
60 fps adds about 33 ms; the fastest 60 fps games still have about 66 ms overall input-to-display latency.
120 fps adds about 16 ms, which would improve the overall latency to around 46 ms.
240 fps adds about 8 ms, which would improve the overall latency to around 38 ms.

Internet lag is a much bigger factor at 60 fps already.

Variable frame rates for different elements are the way to go. However, for that the output refresh rate needs to be 240 Hz. Then you can use frame rate doubling, as on PSVR, to double or quadruple the effective rate and make panning (while turning) ultra smooth while you keep rendering at 60 fps. Scenery approaching in the distance would be fine with 30 fps updates, while things close to the car can be updated at 120 fps. You still get the latency benefits this way but avoid the high cost of rendering the whole scene at 240 fps.

24 fps is not the framerate of human vision, so it doesn't make sense to say 240 fps is 10x that.

150 is more likely for the average person. That's why I mentioned 1500.

Maybe 240 fps is too much (I have no idea, because I've never seen it on a monitor), but 140 is smoother than 60. We can see the difference. Most of us should, anyway.
 
We don't see in "frames". We see in a multitude of asynchronous pinpoint chemical signals originating in a web or network of variable density and "function", as interpreted through temporal and spatial pattern recognition in our trainable neural networks, albeit with variable attention.

It's much more noisy and effervescent, and responsive and adaptable as a result. Renderers and displays will eventually take advantage of this stochastic and selective approach to conserving signal bandwidth for maximum performance (foveated rendering and sparse rendering / AI upscaling is a step in that direction).

But, just as with audio, there will continue to be benefits beyond that which is thought to be "perceptible". The closer to reality the better. All in good time.


The biggest hurdle to the adoption of high frame rates in recent times has been the elephantine input / output latency of modern systems basically dwarfing any improvement possible with frame rate alone. Thankfully this is now being addressed in software - and hardware will follow suit in due course. Funny how the basics get forgotten when an art form becomes an industry.
 