
> This is definitely very useful (and is the norm if you want to do something like, say, high quality scanning), but I failed to see how it warrants a new "format".

This warrants a separate answer. Cameras are getting to the point where they can capture far more information than we can display. Hence, we need a lot of bit depth to accurately store this added precision. But adding bits to the data signal requires a lot of extra bandwidth.
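To put rough numbers on that bandwidth cost, here's a quick back-of-the-envelope sketch (uncompressed 4K RGB at 30fps; the resolution and frame rate are just illustrative):

    # Rough uncompressed data rates for 4K/30fps RGB at different bit depths
    w, h, fps, channels = 3840, 2160, 30, 3
    for bits in (8, 10, 16):
        gbps = w * h * channels * bits * fps / 1e9
        print(f"{bits}-bit: {gbps:.1f} Gbit/s")  # ~6.0, ~7.5 and ~11.9 Gbit/s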

In principle, we could just store all of this as 16/32-bit floating point, and many modern NLEs do use such a pipeline internally. But by applying a non-linear curve before quantizing to integers, we can compress the signal and fine-tune where the precision goes. That way we can get away with the 8-12 bit range, which helps a lot with storage. With log curves, 12 bits is probably overkill given current sensor capabilities.
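To illustrate the idea in Python (a toy curve with made-up constants, not any vendor's actual transfer function like S-Log3 or C-Log):

    import numpy as np

    def log_encode(linear, bits=10):
        # Toy log curve: spread ~18 stops of scene-linear light evenly
        # across the integer code range. Real log formats each define
        # their own constants.
        signal = (np.log2(linear + 1e-4) + 14) / 18   # normalize to ~[0, 1]
        return np.round(np.clip(signal, 0, 1) * (2**bits - 1)).astype(np.uint16)

    def log_decode(code, bits=10):
        signal = code.astype(np.float64) / (2**bits - 1)
        return 2.0 ** (signal * 18 - 14) - 1e-4

    linear = np.array([0.001, 0.18, 1.0, 8.0])  # scene-linear, 0.18 = mid grey
    code = log_encode(linear)
    print(code, log_decode(code))  # round-trips within quantization error

Because a log curve spends roughly the same number of codes on each stop of light, it matches how we perceive brightness far better than a linear integer encoding does, which is why 10 bits of log go a lot further than 10 bits of linear.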

There's a plethora of log-formats out there, typically one for each camera brand/sensor. They aren't meant for delivery, but for capture. If you want to deliver, you'd typically transform to a color space such as rec.709 (assuming standard SDR, HDR is a different beast). The log-formats give you a lot of post-processing headroom while doing your color grading work.
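Continuing the sketch above, the delivery step might look like this (gamut mapping and real grading LUTs omitted; this is only the Rec. 709 tone curve from ITU-R BT.709, applied to decoded scene-linear values):

    def rec709_oetf(linear):
        # Rec. 709 opto-electronic transfer function
        L = np.clip(linear, 0.0, 1.0)
        return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

    display = rec709_oetf(log_decode(code))  # log footage -> SDR-ready signal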



> Cameras are getting to the point where they can capture far more information than we can display.

Haven't professional-grade microphones been in a similar situation for decades now, or is it the magic of remastering that keeps recordings from the 50s sounding so good on modern speaker systems?


> Haven't professional-grade microphones been in a similar situation for decades now

Not really the microphones themselves: microphones today and decades ago alike deliver an analog signal which contains far more information than our ears can process (plus some amount of noise, which may or may not be audible).

The technology difference is in the analog-to-digital conversion (ADC), which converts that analog signal to a stream of integers.

The difference between audio and video is that essentially since the dawn of digital audio, devices have been able to produce as much information as our ears are able to distinguish. The standard digital sample rate since CDs first shipped is 44.1 kHz, which by the Nyquist theorem can represent frequencies all the way up to 22.05 kHz, beyond the range that almost all people are able to hear. The standard bit depth of 16 bits can likewise represent as much dynamic range as humans are able to distinguish.
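The arithmetic behind those two numbers (the Nyquist limit and the roughly-6-dB-per-bit rule):

    import math

    sample_rate = 44_100         # CD audio, samples per second
    nyquist = sample_rate / 2    # highest representable frequency: 22,050 Hz

    bit_depth = 16
    dynamic_range_db = 20 * math.log10(2 ** bit_depth)  # ~96.3 dB
    print(nyquist, round(dynamic_range_db, 1))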

(Hi-fi enthusiasts may argue with these claims but I consider that whole area to be almost entirely snake-oil and magical thinking. Actual scientific studies show that 44k 16-bit audio is indistinguishable from higher sample rates or bit depths.)

People working with audio may want higher sample rates and bit depths because, just like with the color grading in the article, they give more leeway to manipulate the audio while still producing a final result that covers the full frequency and dynamic range. But for end listeners, 44.1 kHz/16-bit is fine and has always been fine.
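A quick way to see why that headroom matters during processing: attenuate a signal by 30 dB and bring it back up, once staying in 16-bit integers and once in floating point (synthetic noise stands in for audio here):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.uniform(-1, 1, 100_000)   # stand-in for an audio signal

    def to_int16(x):
        return np.round(np.clip(x, -1, 1) * 32767).astype(np.int16)

    gain = 10 ** (-30 / 20)                # -30 dB
    restored_int = to_int16(to_int16(signal) / 32767 * gain) / 32767 / gain
    restored_float = (signal * gain) / gain

    err_int = np.sqrt(np.mean((restored_int - signal) ** 2))
    err_float = np.sqrt(np.mean((restored_float - signal) ** 2))
    print(err_int, err_float)  # the 16-bit path loses roughly 5 bits of precision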

Video is very different. Our eyeballs can capture a monumental amount of input using a very complex, adaptive system. Eyes don't have a single well-defined "resolution" or "framerate", but digital video has long been noticeably lower in both than what we're able to perceive, and is only recently starting to approach perceptual limits.


Why would you still assume SDR? Aren't we talking about amateur photography here?

But yeah, I've been wondering for a while now why non-linear formats would use integer values?!


I'm suggesting rec.709 because it's the currently expected default for a screen. In a typical setup, your working color space is something like ACEScct or DWG, so you can map to several possible output formats with a little extra work if needed.

The integer values are nice because existing video formats encode things as integers. So we can just stuff our signal inside a codec we already have rather than reinventing the wheel on the codec side as well. Re-purposing existing toolchains for new uses tends to get far more traction than building new ones from scratch, even if the newly built toolchain is far better.
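For instance, 10-bit log frames can be handed straight to an off-the-shelf encoder. A sketch assuming ffmpeg with libx265 is installed and a little-endian host (the frame contents here are random placeholders):

    import subprocess
    import numpy as np

    w, h, n_frames = 1920, 1080, 24
    frames = np.random.randint(0, 1024, (n_frames, h, w), dtype=np.uint16)

    proc = subprocess.Popen(
        ["ffmpeg", "-y",
         "-f", "rawvideo", "-pix_fmt", "gray16le",
         "-s", f"{w}x{h}", "-r", "24",
         "-i", "-",                                     # raw frames on stdin
         "-c:v", "libx265", "-pix_fmt", "yuv420p10le",  # 10-bit HEVC output
         "out.mp4"],
        stdin=subprocess.PIPE)
    proc.stdin.write((frames << 6).tobytes())  # left-justify 10 bits in 16
    proc.stdin.close()
    proc.wait()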


Aren't most phone screens "HDR" these days, and have been for years now? (And hasn't Apple had wide gamut with excellent OS compatibility on computers for even longer?)

Yes, but then why are existing formats still like this? We have been through quite a lot of new formats, especially video formats, in recent years...



