
Most recently, microphones/recorders have started using it for recording sound.


From my understanding, the ADCs are still fixed-point and linear. Two (or more) then run in parallel over different signal levels, and their outputs are combined to produce the 32-bit float output.
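
A minimal sketch of what that dual-path combination might look like, assuming two ideal 24-bit linear converters and made-up gain values (real devices presumably calibrate and blend the crossover rather than hard-switching):

    import numpy as np

    # Hypothetical dual-gain front end: the same signal feeds a high-gain
    # and a low-gain path, each digitised by an ordinary linear 24-bit ADC.
    HIGH_GAIN = 1.0          # illustrative values, not from any real device
    LOW_GAIN = 1.0 / 1024.0  # ~60 dB below the high-gain path
    FULL_SCALE = 2**23 - 1   # max code of a signed 24-bit converter

    def combine(high_code: int, low_code: int) -> np.float32:
        """Merge two linear integer samples into one 32-bit float sample."""
        if abs(high_code) < FULL_SCALE:                       # high-gain path not clipped
            return np.float32(high_code / FULL_SCALE / HIGH_GAIN)
        return np.float32(low_code / FULL_SCALE / LOW_GAIN)   # fall back to the low-gain path

The float output can then represent values well above 1.0 (0 dBFS), which is where the "headroom" comes from.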

Encoding audio with log-scale companding has been around for some time too (since the 1970s) with A-law and mu-law in G.711.
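
For illustration, here is the continuous mu-law companding curve (G.711 itself uses a segmented piecewise-linear approximation of this, so treat it as a sketch of the idea, not the codec):

    import numpy as np

    MU = 255.0  # mu-law parameter used by G.711 (North America/Japan)

    def mu_law_encode(x):
        """Compress a signal in [-1, 1] onto a log-like scale."""
        x = np.clip(x, -1.0, 1.0)
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_decode(y):
        """Inverse of mu_law_encode."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU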


It doesn't really matter HOW they do it, as long as you get the advantages of float encoding (practically infinite headroom). Of course if you zoom in enough there will be something in there that uses integers, but this would be true for e.g. a floating point adder as well.


It should matter that the "practically infinite headroom" comes from the raw samples having 64 bits of dynamic range, rather than from the output format being float.

(Does that mean there is a crossover point in the middle of the amplitude range where the LSBs of one of the ADCs poke through, at least in a hypothetical ultra-naive implementation?)


>the advantages of float encoding (practically infinite headroom)

The best implementations have a dynamic range of about 140-150 dBA. Floating point is not needed to achieve that and it isn't always used (look at Stagetec products).


I'm not entirely sure what you mean by floating point for an ADC.

From a super high level, all ADCs do is quantize an analog signal. They take in a voltage from, say, 0-1.8V and quantize that onto a 12-bit range, returning a value from 0-4095. You could build one that scales this range with non-linear steps, but this doesn't add any value. We won't get more accuracy at smaller steps. Our noise and accuracy problems won't be solved by this, as they are due to thermal noise or mismatch; quantization noise is not the problem. (We already build segmented ADCs to try and do this.)
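
The quantization step described above is just this (an idealised sketch; the 1.8V reference and 12-bit width are the example numbers from the comment, and real converters add thermal noise, mismatch, and INL/DNL on top):

    def ideal_adc(voltage, vref=1.8, bits=12):
        """Ideal linear quantiser: map 0..vref volts to codes 0..2**bits - 1."""
        code = int(voltage / vref * (2**bits - 1) + 0.5)  # round to nearest code
        return max(0, min(2**bits - 1, code))             # clamp to the valid range

    print(ideal_adc(0.9))   # 2048, mid-scale
    print(ideal_adc(1.8))   # 4095, full scale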


That was in reference to the overall ADC stage in the abstract, not to a specific component. As you note, quantisation still maps to integers over some range of the input signal.

It's not my area, so I would love a correction from someone who is deep in the space. My current perception is that the 32-bit float hype in the audio capture world is the marketing reality distortion field in effect. Having that representation extend further upstream than the DSP or DAW makes sense, but it's not magic. Even in 32-bit float there are only 24 bits of precision (assuming IEEE 754).
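
That precision limit is easy to demonstrate (a quick check using NumPy; the 24 bits are the 23 stored significand bits plus the implicit leading bit):

    import numpy as np

    a = np.float32(2**24)
    print(a + np.float32(1) == a)      # True: the +1 is lost above 2**24
    print(np.finfo(np.float32).nmant)  # 23 explicitly stored mantissa bits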

What is interesting, useful, and lost in that noise is that devices have refined the multi-ADC design to make full use of that precision, matched to the overall dynamic range of the analogue front end. Previously the ADC would be the bottleneck, but that has now shifted to the upstream circuitry or transducer.



