Hacker News — babypuncher's comments

The push for ray tracing comes from the fact that they've reached the practical limits of scaling more conventional rendering. RT performance is where we are seeing the most gen-on-gen performance improvement, across GPU vendors.

Poor RT performance is more a developer skill issue than a problem with the tech. We've had games like Doom: The Dark Ages that flat out require RT, but the RT lighting pass only accounts for ~13% of frame time while delivering much better results than any raster GI solution could achieve with the same budget.
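As a rough sanity check on that budget claim, here is a minimal sketch of the frame-time arithmetic. The 13% figure comes from the comment above; the frame rate and the `rt_budget_ms` helper are illustrative assumptions, not measurements from the game.

```python
# Illustrative frame-time budget math (hypothetical helper; numbers are
# assumptions, not profiled measurements from Doom: The Dark Ages).
def rt_budget_ms(fps: float, rt_fraction: float) -> float:
    """Milliseconds of one frame spent on the RT lighting pass."""
    frame_ms = 1000.0 / fps
    return frame_ms * rt_fraction

# At 60 fps the whole frame is ~16.7 ms, so a 13% RT pass costs ~2.2 ms.
print(round(rt_budget_ms(60, 0.13), 2))  # -> 2.17
```

The point is just that a ~2 ms GI pass is a small slice of a 60 fps frame budget, which is why the pass itself is rarely the bottleneck.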


The literal multi-million dollar question that executives have never bothered asking: When is it enough?

Do I, as a player, appreciate the extra visual detail in new games? Sure, most of the time.

But if you asked me what I enjoy playing more? Eighty percent of the time I'd pull out a list of 10+ year-old titles that I keep coming back to, plus more that I would rather play than anything on the market today, if only they still had an active playerbase (for multiplayer titles).

Honestly, I know I'm not alone in saying this: I'd rather we had more games focused on good mechanics and story instead of visually impressive works that pile on MTX to recoup insane production costs. Maybe this is just the catalyst we need to get studios to redirect budgets toward making games fun instead of pouring them into visual quality.


Well, in the case of Doom: The Dark Ages, it's not just about fidelity but about scale and production. Making TDA's levels with the baked GI used in the previous game would have taken their artists considerably more time and resulted in a 2-3x growth in install size, all while providing less dynamic lighting. The only benefit would have been the ability to support a handful of GPUs slightly older than the listed minimum spec.

Ray tracing has real implications not just for the production pipeline, but the kind of environments designers can make for their games. You really only notice the benefits in games that are built from the ground up for it though. So far, most games with ray tracing have just tacked it on top of a game built for raster lighting, which means they are still built around those limitations.


I'm not even talking about RT, specifically, but overall production quality. Increased texture detail, higher-poly models, more shader effects, general environmental detail, the list goes on.

These massive production budgets for huge, visually detailed games are causing publishers to take fewer creative risks, and when products inevitably fail in the market, the studios get shuttered. I'd much rather go back to the smaller teams and more reasonable production values of 10+ years ago than keep getting the drivel we have now, and that's without even factoring in how expensive current hardware is.


I can definitely agree with that. AAA game production has become bloated, with out-of-control budgets and protracted development cycles, much of it due to needing to fill massive, overbuilt game worlds with an endless supply of unique, high-quality assets.

Ray tracing is a hardware feature that can help cut down on a chunk of that bloat, but only when developers can rely on it as a baseline.


I think the problem is that, until recently, there was little impetus to actually run Windows on devices where ARM actually has a meaningful advantage over x86. The Windows ARM laptops out there today don't impress, not just because of the software, but because the hardware itself isn't "better enough" than Intel or AMD to justify the transition for most people the way Apple Silicon was, especially for games. That is to say nothing of desktops, where battery life isn't even a concern.

Valve is using ARM to run Windows games on "ultra portable" devices, starting with the Steam Frame. At least right now, there isn't a competitive x86 chip that fits this use case. It also feels like more of an experiment, as Valve themselves are setting the expectation that this is a "streaming first" headset for running games on your desktop, and they've even said not to expect a great experience playing Half-Life: Alyx locally (a nearly 7-year-old title).

It will be interesting to see if Intel/AMD catch up to ARM on efficiency in time to keep handhelds like the Steam Deck and ROG Ally from jumping ship. Right now it seems Valve is hedging their bets.


> At least right now, there isn't a competitive x86 chip

I don't think there will ever be a competitive x86 chip. ARM is eating the world piece by piece. The only reason the Steam Deck runs x86 is that it's not performant enough with two translations (Windows to Linux, x86 to ARM). Valve is very wisely starting the switch with a VR headset, a far less popular device than its already niche Steam Deck. The next Steam Deck might already make the switch to ARM, given what they announced last week.

x86 is on the way out. Not in two years, perhaps not in ten years. But there will come a time when the economics no longer make sense and no one can afford to develop competitive chips for the server-plus-gamers market alone. Then x86 is truly dead.


My problem with this take is that it treats ARM > x86 as some kind of given, as if there were an inherent flaw in the x86-64 ISA that means a chip implementing it can never be competitive with ARM on power consumption.

We've already seen Intel and AMD narrow the gap considerably, in part by adopting designs pioneered by ARM manufacturers like hybrid big-little cores.

Another aspect that I think gets forgotten in the Steam Deck conversation is that AMD graphics performance is well ahead of Qualcomm's, and that is extremely important for a gaming device. I'm willing to bet the next Steam Deck goes with another custom AMD chip, but the generation after that is more of a question mark.

RISC-V is another wildcard that could end up threatening ARM's path to total dominance.


> My problem with this take is that it treats ARM > x86 as some kind of given, as if there were an inherent flaw in the x86-64 ISA that means a chip implementing it can never be competitive with ARM on power consumption.

It's a distinction without a difference. x86 is not currently competitive in anything smaller than a laptop. Even in laptops, the only reasons ARM hasn't eaten the market are that Microsoft is uninterested and Apple doesn't tell the Joker where it gets its wonderful toys.

Market forces are at play here, exactly like they were in the 90s with Intel's massive gains. ARM is making money hand over fist while x86 is getting squeezed. There will come a time where it won't make economic sense to invest in x86, technical merits be damned.


> ARM is making money hand over fist while x86 is getting squeezed

Do you have the profit margin data to back that statement up? Everything I've seen suggests that ARM is the lower-margin, less profitable hardware averaged across all chips produced. More so when you count licensing costs against the profits.


> like there is an inherent flaw with the x86-64 ISA that means a chip that provides it can never be competitive with ARM on power consumption.

This is only one of many factors, but high-performance instruction decoding doesn't scale nearly as well on x86-64 as it does on ARM, due to variable-width instructions. Any reasonably performant out-of-order (OoO) core needs to read multiple instructions ahead for the other OoO tricks to work. x86-64 decoders are typically limited to about 5 instructions per cycle, and the complexity and power required to do that do not scale linearly, since x86-64 instructions can be anywhere from 1 byte to 15 bytes, making it very hard to guess where the second instruction starts before the first has been decoded. ARM cores have at most 2 widths to deal with, and with ARMv8 I think there is only one, leading to cores like the M1's Firestorm that can read 8 instructions ahead in a single cycle. Intel's E-cores can read 3 instructions at each of two different addresses (6 total, just not sequential), which helps the core look at predicted branches but doesn't help as much in fast, optimized code with fewer branches.

So at the low end of performance, where mobile gaming sits, you really need an OoO core to keep up, and ARM has a big leg up for that use case because of its instruction encoding.
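The serial dependency described above can be sketched in a few lines. This is a toy model, not real machine code: the instruction lengths and helper names are hypothetical, and it only illustrates why variable-length decode is sequential while fixed-width decode is trivially parallelizable.

```python
# Toy model of instruction-boundary discovery (hypothetical helpers; the
# length lists stand in for encoded instructions, not real x86 bytes).

def decode_variable(lengths):
    """Variable-width ISA (x86-64 style, 1-15 byte instructions): each
    instruction's start offset depends on the lengths of ALL previous
    instructions, so the walk is inherently sequential."""
    starts, offset = [], 0
    for n in lengths:
        starts.append(offset)
        offset += n
    return starts

def decode_fixed(count, width=4):
    """Fixed-width ISA (AArch64 style, 4-byte instructions): every start
    offset is computable independently, so N decoders can begin at once."""
    return [i * width for i in range(count)]

print(decode_variable([1, 15, 3, 7]))  # -> [0, 1, 16, 19]
print(decode_fixed(4))                 # -> [0, 4, 8, 12]
```

Real x86 decoders dodge some of this with predecode bits and speculative length guessing, but the dependency chain sketched here is the structural cost the comment is pointing at.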


> x86-64 is typically limited to about 5 instructions

Intel's Lion Cove decodes 8 instructions per cycle and can retire 12. Intel's Skymont triple decoder can even do 9 instructions per cycle, and that's without a µop cache.

AMD's Zen 5, on the other hand, has a 6K-entry op cache for instruction decoding, allowing for 8 instructions per cycle, but still only a 4-wide decoder for each hyper-thread.

And yet AMD is still ahead of Intel in both performance and performance-per-watt. So maybe this whole instruction decode thing is not as important as people are saying.


> like there is an inherent flaw with the x86-64 ISA that means a chip that provides it can never be competitive with ARM on power consumption.

It doesn't matter whether there's an inherent, fundamental flaw in the ISA if Intel can't, for whatever reason, develop an x86 chip that actually beats ARM on performance-per-watt in a broadly applicable way.


I sure hope it takes a bit longer than that. It would not be fun having only Qualcomm to choose from as a CPU vendor. Either that, or Intel/AMD start making their own ARM chips.

There are rumours that Intel might be the fab for Apple's base M7 chip. That's the future.

Oh, how the mighty have fallen.

I wonder if Apple's GPTK (Game Porting Toolkit) could be added to the macOS Steam client as a compatibility tool, like Proton is in the Linux client.

GPTK is mostly a bunch of developer tools for converting to Metal, and the closest it gets to anything like Proton is an "evaluation environment" that is nothing close to Proton's performance. Proton is mostly Wine, and Wine on macOS uses MoltenVK, so it's probably easier to just port Proton.

Direct3D -> Vulkan -> Metal is quite the translation layer sandwich, I wonder if that would have a meaningful impact on performance

The D3D -> VK layer actually seems to speed things up, so maybe we'll just end up back where we started :)

Apple's GPTK only supports D3D12 -> Metal. In addition, it's ambiguous whether third parties can distribute the D3DMetal dylib, as there's no license.

> These things do not prevent cheating at all.

Yes they do. They don't stop all cheating, but they raise the barrier to entry which means fewer cheaters.

I don't like arguments that sound like "well you can't stop all crime so you may as well not even try"


Ok, they prevent known cheats that the company has found online behind some subscription site run out of a basement in Jersey. True, they do raise the bar, but they aren't the barrier.

> If you are in the higher skill levels, you might end up playing too many cheaters who are impossible to beat.

It's almost worse than that. If you are in the higher skill levels, you end up getting matched with cheaters who lack the fundamental understanding of the game that you have and make up for it with raw mechanical skill conferred by cheats.

So you get players who don't understand things like positioning, target priority, or team composition, which makes them un-fun to play with, while the aimbots and wallhacks make them un-fun to play against.

And as a skilled player, you are much better equipped to identify genuine cheaters in your games, whereas at low skill levels cheaters may appear almost indistinguishable from players with real talent, so long as they aren't flat out ragehacking with an aimbot or autotrigger.


You're saying it's not a problem when cheaters completely ruin the experience for top-10% players?

I hated wasting a whole half hour server hopping until I found one that didn't suck

PunkBuster and later VAC were commonplace. Anti-cheat middleware is not new by any stretch.

Community-run servers weren't a magic bullet, and they had a lot of other problems that modern matchmaking systems solve more effectively.

Maybe we can make a deal with the government. In exchange for making the development of open source software a tax exempt charitable work, we remove private jets from the list of purchases that can be deducted from income taxes. Seems like a win-win.


Why would the government wish to remove private jets from the list of purchases that can be deducted from income taxes? Why would they be unable to do this without making a deal with people who want open source software development to be designated a charitable purpose? How would making a deal with people who want open source software development fix this?


> Why would the government wish to remove private jets from the list of purchases that can be deducted from income taxes?

To bring in tax revenue to pay for things we actually need.

> Why would they be unable to do this without making a deal with people who want open source software development to be designated a charitable purpose? How would making a deal with people who want open source software development fix this?

Because my comment is this thing we call a joke. It was meant to highlight the absurdity that some obviously charitable work gets taxed while toys for billionaires are tax-exempt because...reasons?

