If your renderer can draw 40fps in the highest density scenes (lots of polygons, particles, and effects) on specific hardware, then that is the most you'll be able to guarantee to that user without sacrificing detail. That will be your "fixed frame rate".
However, in simpler scenes the same renderer is likely to output significantly more frames (even up to thousands) at the same detail level.
So in this scenario, all you're doing by setting a fixed rate is throwing away tons of frames in low density scenes. Most gamers would prefer to have the most fps possible at any given moment, even if it means variability.
The most hardcore gamers I know use 120Hz monitors, and machines that can deliver 241fps (120 * 2 + 1) in the highest detail scenes. They then set the engine to cap frames at 241fps, which eliminates tearing and negates the need for this technology. However, their gaming machines cost a LOT, so this technology would deliver similar results on a much wider range of hardware.
If you're generating thousands of frames and throwing them away, you wrote your rendering software wrong. I ask again, what is so impossible about just rendering the 60fps (just a bit under the limit of the rate at which a human is able to perceive any difference), and then not rendering any more? Instead of rendering as fast as you can, make the different trade-off of always meeting the deadline.
sigh why am I explaining this again? is it really hard to understand? why?
Your question was answered; it has nothing to do with the case where you're "generating thousands of frames and throwing them away". What you want is a rendering engine that will perform at 60fps in the worst case. What engine devs want to write is an engine that can do better (even much much better) than 60fps in the average case, and be allowed to slip in those pathological cases. Gamers want more frames. More frames than is noticeable. They want some slack so that if something totally unrelated to the game ties up the machine, the framerate drop is not noticeable. They want to be able to double it so that they can drive a 3D display but still have the same effective framerate per eye.
Having a consistent 40fps is much worse (for a gamer) than a variable framerate that will dip down to 40fps for 1% (or 10%) of the play time. Having to limit your most complex scene to what can be guaranteed rendered at 60fps is much less appealing to a developer than making sure all the likely scenes can render at 60fps.
> Having a consistent 40fps is much worse (for a gamer) than a variable framerate that will dip down to 40fps for 1%
Stuttering animations are better than smooth animations?
Stuttering is better than smooth.
gotcha.
> Your question was answered;
For someone who doesn't appear to understand what I'm asking, you have a high degree of confidence that I've been answered.
What is so terrible about having a lower complexity budget that guarantees 60fps? What if you had 60 fps no exceptions as a constraint in your hardware and software design, how far could you really go with some creativity? Think about it- is having a complexity ceiling the only possible way to ensure 60fps?
>What is so terrible about having a lower complexity budget that guarantees 60fps
>Think about it- is having a complexity ceiling the only possible way to ensure 60fps?
I'm really not following you. Are you asking what is the benefit of this technology when games come out every day at 60fps even now? This technology allows them to get the same fidelity and smoothness on less powerful hardware, with more complicated simulations, and lower latency.
>I ask again, what is so impossible about just rendering the 60fps
Are you asking why renderers can't maintain perfectly steady frame rates without going above or below 60fps, regardless of WHAT they are rendering on screen or what is happening in the simulation? You can't see how that's a 'non trivial' problem?
The most notable example of a game using dynamic tradeoffs to maintain a solid 60fps is id's Rage engine -- written by John Carmack, one of the people on stage at this very presentation, who was lauding this technology and saying he has been pushing GPU and monitor manufacturers to implement this for years.
Carmack notes that while they were able to stay at 60 with an incredible amount of work, if they had been able to target 90% of 60fps with this technology there would have been little visual difference but the gameplay and visual complexity ceiling would have been vastly higher.
Look into what is involved in modern 3D rendering of high-detail scenes. It is NON-TRIVIAL, and I can tell you this as someone who has done 3D programming for 17 years.
Pro Tip: If an entire industry of experienced people finds something very hard, and you don't know anything about the topic but you don't see why it would be hard, maybe the relevant factor here is the "you don't know."
It reminds me of my mom who said on multiple occasions "All these rockets are dangerous and they explode; I don't see why the scientists don't just use the majestic forces that keep the planets in their orbits to move the rocket."
Yes, I don't know. That's what I am saying.
I am not in this thread saying "You're wrong and I'm right", and I'm not asking you to say to me "no, you're wrong".
I am asking you to explain it, not just say "well, it's hard, and this guy and this industry say it's hard" and call it a day.
Do you understand? "Because it's hard" is not an interesting answer. It's a boring and contentless answer.
This has to do with the way the software interacts with hardware on PC.
Basically, a GPU is a very complex computer with several hierarchies of execution streams: there are vector SIMD streams that execute the same code over different data, there are threads of such streams that preempt each other, there are multiple processing units each running a set of such threads, and there are even structures of such processing units. Yet all of this is hidden from the programmer; the only API is an abstract "scene" description. E.g. you can say "first render these polygons with such and such settings, then render other polygons with other settings, then show what is rendered so far and return to the default state".
Going from such a high level description to the thousands of execution streams that GPU will execute is a very complex procedure that changes with each driver version and is not fully understood by any single person. On top of this you have other processes running on your machine while playing the game and they can and will steal CPU and the OS scheduling slots, adding a lot of variance to your frame time.
You can render the same data set several times, and sometimes it will take 10ms but other times it will take 100ms, depending on what other processes decided to do at the time, so it's impossible to guarantee a constant frame time on PC.
On consoles it's not a big deal, as you can program the GPU directly and don't compete with other processes. A great number of console games do run at a constant frame rate - it's not trivial, but it's not rocket science either.
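The scheduling variance described above is easy to observe even without touching a GPU: time an identical CPU workload many times on a desktop OS. A minimal Python sketch (`render_stub` is a hypothetical stand-in for a fixed rendering workload, not a real render pass):

```python
import time

def render_stub():
    # Hypothetical stand-in for a fixed rendering workload:
    # same code, same data, every call.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

# Time the identical workload repeatedly. On a multiprocess OS the
# wall-clock durations vary, because the scheduler and other processes
# can steal time between the two perf_counter() calls.
samples = []
for _ in range(100):
    start = time.perf_counter()
    render_stub()
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

print(f"min {min(samples):.2f} ms, max {max(samples):.2f} ms")
```

On most desktop machines min and max differ noticeably even though the work is byte-for-byte identical, which is the variance a real renderer has to absorb.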
Well, I left the start of an explanation in another reply. But the problem is that you are asking a question where it takes years to really understand the answer, and certainly hours for a commenter to write a summary. For someone to put that much effort in, they have to be motivated to put in the effort; but you are coming across in a very unpleasant way, and not offering anything in return, so why would anyone put in the effort just for you?
I have no control over how you perceive my posts. I write them in a neutral way, produce text, and it's up to you to read in a tone of voice and mood. You are free to perceive anything about them you want. You are certainly welcome to refuse to reply to them and you always were. I was apparently pleasant enough for you to give glib responses but not enough to reply with anything much of substance more than "you're asking a stupid question and you can never be as smart as me HAHAHA!!" I mean, if that's your answer then you probably are better off not responding to me at all, leaving my question hanging, and not polluting the conversation with such negativity.
As for the start of an answer, what was wrong with just... starting with that? What was the point of all the other stuff you wrote? Think about it: what is your mission here? To be informative, or just to attempt to make me feel bad for being curious?
The mission is to make you a little more self-aware of how your posts are being read and perceived. "why am I explaining this again? is it really hard to understand? why?" does not sound like a neutral tone to most people, it sounds condescending, which is confusing when (as you admit) you are the one who is asking for an explanation for something you do not understand.
Yes I don't understand something and I'm asking for an explanation. If you, or someone else does not understand my actual question, and has nothing more to contribute than "It's hard" then I am not interested in their answers or their condescension and I couldn't give two tosses if you turn around and perceive me as being condescending. It's projection. And I wasn't asking you anyway. But since I'm here, what is the point of YOUR post? I don't need more self awareness, people on hacker news need to stop answering questions they don't understand with bullshit nonsense condescension and getting pissy when the victim of their idiocy gets annoyed by it.
Seriously I will not tolerate this curiosity shaming, the ethic that one should feel embarrassed about asking questions. I do not buy the idea that it is condescending to be dissatisfied with shallow lazy nothing answers. I have no control over you perceiving it that way. It is just a flaw in your background that you should perhaps be more self aware about.
You are being rude to people that you are asking a favor from. Stop blaming everyone else; isolate the common factor. They're not shaming you because of the asking itself.
I am not being rude to people I am asking a favor from. I am being rude to people who have nothing to contribute here other than saying "oh it's too complicated for you to understand. Seriously I have been doing this for 15 years. it's just too hard to explain. You're just like my idiot mother who didn't understand how physics works. LOL". That is bullying. Not being helpful. I don't need favors from them.
No, they are being rude. There was no reason to post that verbal diarrhoea and I'm just not taking it like a doormat, and that bothers you. The common factor is them being dickheads. If you look, there are lots of other thoughtful people answering me (with real actual answers and creativity instead of condescension) that I am rewarding and conversing with appropriately.
The common factor is the culture of curiosity shaming here. And sorry, but quite frankly you and them can just get fucked. I don't care if you think I am being rude. I want to be rude about that. It is internet cancer. Other thoughtful people should be rude about it too. I want to shoo that attitude away from everywhere I can. Not walk on eggshells just in case someone might take offense. You have no interest in making me "self aware". You just see a nail sticking up and you want to hammer it down.
I was really hoping you wouldn't pull this into an argument about semantics. Since you did, I'm almost certain this will be my last post.
'smug' was a description of your mannerisms, not your motivation. And 'snapping' is only pointing out that you made your pronouncements based on three sentences. I am making no claim to psychological insight.
I just think you're acting like a petty jerk, and blaming everyone else.
It's adorable the way you tell people to get fucked and use other insults of similar vitriol and then imply that my comments are invalid for using a term like 'petty'.
You have no problem with what I was asking, just the way in which I asked it, and you perpetuated this whole thread to press into me that I wasn't abjectly deferential, humble, and thankful enough to the great overlords of HN for deigning to waste their time on an obviously worthless scumbag like me.
THAT is petty and obnoxious. And it is to that I say, "get fucked." Criticizing superficial aspects of my manner, instead of the content of my question, is the very definition of petty.
>I ask again, what is so impossible about just rendering the 60fps, and then not rendering any more?
There seem to be two ways to interpret your question:
>1.) Why can't games render 60 frames per second always?
Because some scenes are more complex than others. Rendering a complex scene can take longer than 16.7 ms, and there is no way around that.
>2.) If a game comes far below the frame completion deadline (e.g., completes a frame in 0.5 ms, where the deadline is 16.7 ms), why doesn't it simply stop doing anything until the deadline has passed?
I do not know the answer. But I can say that many games actually do this. It is usually referred to as a frame cap.
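A frame cap is usually just a sleep-until-deadline loop around the render call. A rough Python sketch, where `render_frame` is a hypothetical placeholder for the real work:

```python
import time

TARGET_FRAME_TIME = 1.0 / 60.0  # 16.7 ms budget per frame for 60 fps

def render_frame():
    pass  # placeholder for the actual rendering work

def run_capped(frames: int) -> float:
    """Render `frames` frames, sleeping away any leftover budget."""
    start = time.perf_counter()
    deadline = start
    for _ in range(frames):
        render_frame()
        deadline += TARGET_FRAME_TIME
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # burn the rest of the 16.7 ms budget
        # if remaining <= 0 we missed the deadline; just keep going
    return time.perf_counter() - start

elapsed = run_capped(30)
print(f"30 frames took {elapsed:.3f} s")  # roughly 0.5 s at a 60 fps cap
```

In a real engine you would typically busy-wait the last fraction of a millisecond, since `sleep()` granularity on most OSes is too coarse for precise frame pacing.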
Thanks, I think this is an insightful interpretation of my question. I think I have been satisfactorily answered elsewhere in the thread. It totally makes sense now, but on the other hand I still don't see how this tech helps much. It would improve things marginally, in theory, but it seems like a band-aid for what you really want, which is a steady stream of animation frames. It feels like a concession to the impossibility of creating rich, smooth realtime animation in the presence of a multiprocess operating system.
60 fps means a budget of 16.7 ms per frame. If it takes you 16.8 ms per frame, you don't get 59.5 fps - you get 30 fps when vsync is enabled. So renderers in practice have to get a good way under that time budget to get a reliable 60 fps.
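The jump straight from 60 fps to 30 fps is just the arithmetic of vsync: a finished frame can only be presented on a refresh boundary, so the interval between presented frames rounds up to a whole number of refreshes. A quick Python illustration:

```python
import math

REFRESH_MS = 1000.0 / 60.0  # the display scans out every ~16.67 ms

def effective_fps(frame_ms: float) -> float:
    # With vsync on, a frame that misses a refresh waits for the next
    # one, so presentation intervals round UP to whole refreshes.
    intervals = math.ceil(frame_ms / REFRESH_MS)
    return 1000.0 / (intervals * REFRESH_MS)

print(f"{effective_fps(10.0):.1f}")  # fits in one refresh -> 60.0
print(f"{effective_fps(16.8):.1f}")  # barely misses the budget -> 30.0
print(f"{effective_fps(34.0):.1f}")  # spills into a third refresh -> 20.0
```

This is why the achievable rates under vsync on a 60 Hz panel are 60, 30, 20, 15... and nothing in between per frame.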
Furthermore, 60 fps is not "just under the limit" for what a human can see; read e.g.:
On consoles you used to get a vsync interrupt. A reliable signal-- an event which you could control very precisely from your game software.
Nowadays when I program games, I cannot get a promise that a frame event will fire. I can get a "well, this code may run, unless the operating system needs those cycles".
So, is skipping a whole frame or halving the frame rate really the best that is possible? Why is it either/or? What are operating systems, hardware vendors and driver writers doing about it?
The problem is that you can only start sending a frame to the display at 16.67ms intervals. If you miss that deadline, you can either swap buffers now (causing tearing), or you can swap buffers at the next vertical blank interval (which results in an effective 30Hz refresh rate). These are the only two options that are supported by current displays. There's nothing you can do on the computer side to get around these constraints, because the limitation comes from display connections like VGA, HDMI, etc. that only deal in fixed refresh rates. Trying to drive such connections with a variable refresh rate would be like changing the baud rate of a serial connection on the fly without any way to let the receiving device know about the change.
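Under some simplifying assumptions (a 60 Hz panel, 1080 visible scanlines, scanout progressing linearly through the refresh, blanking intervals ignored), the two options can be made concrete: swap now and tear at whatever scanline is currently being scanned out, or wait for the next vertical blank and pay the extra latency. A Python sketch:

```python
REFRESH_MS = 1000.0 / 60.0  # one scanout every ~16.67 ms
SCANLINES = 1080            # visible lines; blanking intervals ignored

def swap_options(finish_ms: float):
    """For a frame finished at `finish_ms`, return (tear_line, wait_ms):
    the scanline where an immediate swap would tear, and the extra
    latency of waiting for the next vertical blank instead."""
    phase = finish_ms % REFRESH_MS  # how far into the current scanout we are
    tear_line = int(phase / REFRESH_MS * SCANLINES)
    wait_ms = REFRESH_MS - phase
    return tear_line, wait_ms

# A frame finished at the 20 ms mark is ~3.3 ms into the second scanout:
# swap now and tear about a fifth of the way down the screen,
# or wait ~13.3 ms for the next vblank.
tear, wait = swap_options(20.0)
print(f"tear at line {tear}, or wait {wait:.1f} ms")
```

Variable-refresh technology removes exactly this dilemma, by letting the display hold the blanking interval until the frame is actually ready.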
This is an interesting point. The point is made elsewhere in the thread that if you miss the deadline, but send what you have drawn so far anyway, with modern typical rendering software/hardware you'd get an incomplete drawing with holes and missing layers etc.
But that is not what tearing is. Tearing is: you've missed the deadline, so you finish the rendering to completion, then swap the buffer in the middle of a vertical scan. What you're saying is, you can do that, or wait until the next vertical scan before swapping the buffer. Furthermore, that can either result in simply a skipped frame, or the game switching down to a 30fps mode, perhaps based on some running statistics about frame render durations.
I'm reminded of a discussion by Carmack (was it him? or am I having a brain fart) about mitigating the tearing and framerate problem by doing per-scanline rendering instead of per-frame rendering.