On the Significance of Digits in Interval Notation

Author: Van Emden, Maarten
Source: Reliable Computing; February 2004, Vol. 10, Issue 1, pp. 45-58, 14 pp.
Abstract: We review the various interval notations as a starting point for analyzing the significance of the decimals used for interval bounds. Since intervals are used to indicate that a number is known only with a certain degree of uncertainty, we also review the rules used in physics to ensure the significance of the decimals recorded for a measurement. It turns out that, according to these rules, commonly used interval notation carries too many decimals. To investigate whether rules similar to those used in physics can improve interval notation, we use information theory to determine the information content of the last decimals of the numerals denoting an interval's bounds. We introduce the Law of One Tenth, which states that in many situations this content diminishes on average by a factor of ten for each successive decimal. The law is especially useful in conjunction with a novel and little-used way of writing intervals that we call “factored notation.” It implies that it is usually futile to write more than two or three decimals inside the brackets of factored notation.
Database: Supplemental Index
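
The “factored notation” mentioned in the abstract can be made concrete with a minimal sketch. The Python function below (a hypothetical helper, assuming the bounds are supplied as decimal strings sharing a common prefix) factors the shared leading decimals out of the two bounds and keeps only the differing trailing digits inside brackets; by the Law of One Tenth, two or three bracketed decimals usually suffice. This is an illustration under those assumptions, not the paper's formal definition.

```python
def factored(lo: str, hi: str) -> str:
    """Illustrative sketch: write the interval [lo, hi] in factored
    notation by pulling the decimals common to both bounds out in
    front and placing the differing tails inside brackets,
    e.g. ("2.3345", "2.3362") -> "2.33[45,62]".
    """
    # Length of the longest common prefix of the two bound numerals.
    n = 0
    while n < min(len(lo), len(hi)) and lo[n] == hi[n]:
        n += 1
    return f"{lo[:n]}[{lo[n:]},{hi[n:]}]"


print(factored("2.3345", "2.3362"))  # -> 2.33[45,62]
print(factored("0.7588", "0.7593"))  # -> 0.75[88,93]
```

In both examples only two decimals remain inside the brackets, which is consistent with the abstract's claim that writing more is usually futile.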