Is the decimal arithmetic ‘significance’ arithmetic?
Absolutely not. Every operation is carried out as though an infinitely precise result were possible, and the result is rounded only if the destination does not have enough space for the coefficient. This means that results are exact where possible, and the arithmetic can be used for converging algorithms (such as Newton-Raphson approximation), too.

To see how this differs from significance arithmetic, and why it is different, here's a mini-tutorial…

First, it is quite legitimate (and common practice) for people to record measurements in a form in which, by convention, the number of digits used – and, in particular, the units of the least-significant digit – indicate in some way the likely error in a measurement. With this scheme, 1.23m might indicate a measurement which is believed to be 1.230000000… meters, plus or minus 0.005m (or "to the nearest 0.01m"), and 1.234m might indicate one that is ±0.0005m ("to the nearest millimeter"), and so on. (Of course, the error will sometimes be outside these bounds.)
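As a concrete sketch of this behavior (assuming Python's decimal module, which follows the General Decimal Arithmetic specification), the snippet below shows an addition whose exact result fits the context precision and is therefore not rounded, a division whose exact result cannot fit and so is rounded, and a Newton-Raphson square-root iteration. The newton_sqrt helper is hypothetical, written only for illustration.

```python
from decimal import Decimal, getcontext

getcontext().prec = 9   # the "destination" holds at most 9 coefficient digits

# Exact where possible: the exact result fits, so no rounding occurs.
print(Decimal("1.23") + Decimal("4.567"))   # -> 5.797

# Rounded only when the infinitely precise coefficient would not fit.
print(Decimal(1) / Decimal(3))              # -> 0.333333333

def newton_sqrt(x: Decimal) -> Decimal:
    """Newton-Raphson square root: repeatedly average g with x/g."""
    g = x / 2                # crude initial guess
    for _ in range(50):      # far more iterations than needed to converge
        g = (g + x / g) / 2
    return g

print(newton_sqrt(Decimal(2)))              # -> 1.41421356 at 9 digits
```

Because rounding happens only when a result reaches the precision boundary, the iteration settles on the correctly rounded 9-digit value instead of shedding "significant" digits on each step, which is exactly why significance arithmetic would behave differently here.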
Related Questions
- What disadvantages are there in using decimal arithmetic?
- Which programming languages support decimal arithmetic?
- What rounding modes are needed for decimal arithmetic?