
> double precision (i.e. 32 bit) math

Is this standard nomenclature anywhere? IME 'double precision' generally refers to 64-bit floating-point values, and 32-bit is called 'single precision'.



It should also be noted that "double" in this context has nothing to do with floating-point numbers; Forth implementations often do not even have the functions (called "words") to manipulate FP numbers. Instead I believe it refers to the double-number word set, words like "d+" and "d-". See <http://lars.nocrew.org/forth2012/double.html>.

Forth often uses different, or slightly odd, terminology for common computer-science terms because the language evolved independently of universities and research labs. For example, "meta-compiler" is used where "cross-compiler" would be more appropriate; instead of functions Forth has "words", which are combined into a "dictionary"; and because the term "word" is already taken, "cell" is used when referring to a machine's word width.

Edit: Grammar.


If you look at e.g. x86 assembly manuals, you have WORD (16 bits), DWORD (double word, 32 bits), and QWORD (quadword, 64 bits). So even though it's nowadays a 64-bit CPU, the nomenclature from the 16-bit days sticks.

Double precision usually refers to 64-bit floating point, like you say.

I would agree that this usage is not standard.


> Double precision usually refers to 64-bit floating point, like you say.

Is it? Doesn't `double` in C refer to a 32-bit value?

EDIT:

So, it seems I've not dealt with this in much too long and am misremembering, and therefore wrong.

    #include <stdio.h>
    int main() {
      /* sizeof yields a size_t, so use %zu rather than %d */
      printf("sizeof(float) = %zu\n", sizeof(float));
      printf("sizeof(double) = %zu\n", sizeof(double));
      return 0;
    }
yields

    sizeof(float) = 4
    sizeof(double) = 8
on an Intel(R) Core(TM) i5-6200U (more or less a run-of-the-mill 64-bit x86-family core). I don't have a 32-bit processor handy to test, but I don't believe it'd change the results.


Not usually; `float` is 32 bits and `double` is 64 bits on virtually every common platform (maybe not on some DSP chips or certain embedded chips?). But the C++ standard (and probably the C one) only requires that `double` have at least as much precision as a `float`, so it's conceivable you could have an implementation with 32-bit `float` and 32-bit `double`, or 16-bit `float` and 32-bit `double`.


C never required that the size of int or double be the same across compilers. Even on the same machine they can have different sizes.


"F.2 Types

The C floating types match the IEC 60559 formats as follows:

— The float type matches the IEC 60559 single format.

— The double type matches the IEC 60559 double format.

— The long double type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.

Any non-IEC 60559 extended format used for the long double type shall have more precision than IEC 60559 double and at least the range of IEC 60559 double.

Recommended practice

The long double type should match an IEC 60559 extended format."

ISO/IEC 9899:1999, Annex F


float == 32 bits, double == 64 bits.


It used to be the case that "double precision" meant two words, which fits this 16-bit CPU. That usage is fairly rare these days, now that we care more about portability.


Since it's a 16-bit CPU, double precision implies 32 bits.



