Recently in Number Category

So far we have discussed floating point arithmetic, fixed point arithmetic, arbitrary precision arithmetic, rational arithmetic and computer algebra. With the notable exception of the infinite precision that the last of these seems to promise, but which is unfortunately impossible to achieve, they all suffer from the very problems for which floating point arithmetic is so often criticised.

Full text...

We have seen how fixed point and rational arithmetic do not completely solve the supposed deficiencies of floating point arithmetic, so the question still remains as to whether we can do any better.

Full text...

With fixed point arithmetic, arbitrary precision or otherwise, failing to solve the perceived problems of floating point, that is to say rounding errors, the question remains as to whether we can do any better. A tempting approach is to look to the rationals, rather than the integers, as a basis for our numeric calculations.
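As a minimal sketch of the rational approach, using Python's standard fractions module (the variable names here are my own, for illustration only): sums of fractions are exact, but repeated arithmetic can make the denominators grow without bound, which hints at the cost of this representation.

```python
from fractions import Fraction

# Exact rational arithmetic: 1/10 + 2/10 really is 3/10,
# unlike the binary floating point sum 0.1 + 0.2.
total = Fraction(1, 10) + Fraction(2, 10)
print(total == Fraction(3, 10))  # True

# The cost: denominators can grow rapidly under repeated arithmetic.
x = Fraction(1, 3)
for _ in range(5):
    x = x * x + Fraction(1, 7)
print(x.denominator)  # already a very large integer after 5 steps
```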

Full text...

Fixed point arithmetic's susceptibilities to rounding error and overflow both stem from the limited number of decimal digits available to represent its values, as discussed in the previous post. A common solution to this problem is to use arbitrary precision integers, also known as bignums, which can hold as many digits as a calculation requires.
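Python's built-in integers happen to be bignums already, so they serve as a convenient sketch of the idea; the scale of 10^20 below is an arbitrary choice of mine. Note that while overflow disappears, rounding error in division does not.

```python
# Python's int is an arbitrary precision integer (a bignum), so a scaled
# fixed point value can carry as many digits as we like without overflow.
SCALE = 10 ** 20

x = 123456789 * SCALE * SCALE // SCALE  # huge intermediates are fine
print(x == 123456789 * SCALE)           # True: no overflow occurred

# Rounding error, however, does not go away: one third is still inexact.
one_third = SCALE // 3
print(3 * one_third == SCALE)           # False: off by one unit in the last place
```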

Full text...

Fixed point arithmetic is perhaps the simplest alternative to floating point. Fixed point numbers maintain a fixed number of digits after the point, rather than a fixed number of digits of precision. Typically they are represented by an integer with the assumption that some constant number of the least significant base 10 digits fall after the decimal point. For example, assuming 2 decimal places, we would represent π with 314.
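The two-decimal-place convention described above can be sketched in a few lines of Python; the helper names are mine rather than anything from the post, and multiplication shows where rescaling, and hence rounding, creeps in.

```python
SCALE = 100  # two decimal places, as in the example above

def to_fixed(x):
    """Round a real value to the nearest scaled integer."""
    return round(x * SCALE)

def fixed_mul(a, b):
    """Multiply two scaled integers, rescaling with round-half-up."""
    return (a * b + SCALE // 2) // SCALE

pi = to_fixed(3.14159)               # 314, the representation given above
print(pi)                            # 314
print(fixed_mul(pi, to_fixed(2.0)))  # 628, i.e. 6.28
```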

Full text...

Floating point arithmetic is a much maligned beast. Too many times have I heard programmers express the opinion that any program using floating point numbers is broken by design. Even Douglas Crockford, whose advice I have largely followed on matters of JavaScript, is guilty of it, placing them in the Awful Parts appendix of his JavaScript: The Good Parts.

Binary floating-point numbers are inept at handling decimal fractions, so 0.1 + 0.2 is not equal to 0.3. This is the most frequently reported bug in JavaScript, and it is an intentional consequence of having adopted the IEEE Standard for Binary Floating Point Arithmetic (IEEE 754). This standard is well-suited for many applications, but it violates most of the things you learned about numbers in middle school.
- Douglas Crockford, JavaScript: The Good Parts

The IEEE 754 floating point standard is, in fact, incredibly well designed and anyone who considers it broken because of the behaviour he describes has thoroughly misplaced their blame.
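The behaviour Crockford describes is easy to reproduce, and it belongs to binary floating point itself rather than to JavaScript; Python's IEEE 754 doubles behave identically, and a tolerance-based comparison is the usual remedy.

```python
import math

# Neither 0.1 nor 0.2 has an exact binary representation, so their
# rounded sum differs from the rounded representation of 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Comparing within a tolerance sidesteps the representation error.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```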

Full text...
