Introduction to Numerical Methods/Measuring Errors

Measuring Errors

In this lesson we will learn how to quantify errors.

Learning objectives

  • identify true and relative true errors
  • identify approximate and relative approximate errors
  • explain the relationship between the absolute relative approximate error and the number of significant digits
  • identify significant digits

Reference

Chapter 1 of Holistic Numerical Methods

True and Relative True Errors

A true error (E_t) is defined as the difference between the true (exact) value and an approximate value. This type of error can be measured only when the true value is available. You might wonder why we would use an approximate value instead of the true value. One example is when the true value cannot be represented exactly because of the notational system or the limits of the physical storage we use.

true error: E_t = true value - approximate value

A true error by itself does not signify how important the error is. For instance, a 0.1 pound error is a very small error when measuring a person's weight, but the same error can be disastrous when measuring the dosage of a medicine. The relative true error (ε_t) is defined as the ratio of the true error to the true value.

relative true error: ε_t = true error / true value
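As a small illustration of the two definitions above, the sketch below approximates e^0.5 with a three-term Taylor series (a hypothetical example; any approximation would do) and computes E_t and ε_t against the value from the math library, taken here as the "true" value:

```python
import math

# "True" value and a deliberately rough approximation of e^0.5
true_value = math.exp(0.5)
approx_value = 1 + 0.5 + 0.5**2 / 2   # 1 + x + x^2/2! with x = 0.5

true_error = true_value - approx_value           # E_t
relative_true_error = true_error / true_value    # epsilon_t

print(f"E_t       = {true_error:.6f}")
print(f"epsilon_t = {relative_true_error:.4%}")
```

Note that both errors here can be computed only because a trusted true value is available; the next section deals with the common case where it is not.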

Approximate and Relative Approximate Errors

Oftentimes the true value is unknown to us, especially in numerical computing. In this case we have to quantify errors using approximate values only. When an iterative method is used, we get an approximate value at the end of each iteration. The approximate error (E_a) is defined as the difference between the present approximation and the previous approximation (i.e. the change between iterations).

approximate error: E_a = present approximation - previous approximation

Similarly, we can calculate the relative approximate error (ε_a) by dividing the approximate error by the present approximation.

relative approximate error: ε_a = approximate error / present approximation
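The following sketch shows these quantities for a concrete iterative method. It uses Newton's method for the square root of 2 (x_new = (x + 2/x) / 2, a hypothetical choice of problem) and prints E_a and ε_a at each iteration; note that no true value is needed:

```python
prev = 1.0  # initial guess for sqrt(2)
for k in range(1, 5):
    curr = (prev + 2.0 / prev) / 2.0       # one Newton iteration
    approx_error = curr - prev             # E_a: change between iterations
    rel_approx_error = approx_error / curr # epsilon_a
    print(f"iter {k}: x = {curr:.10f}, |epsilon_a| = {abs(rel_approx_error):.2e}")
    prev = curr
```

The absolute relative approximate error shrinks rapidly from one iteration to the next, which is what the stopping criterion in the next section exploits.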

Relative Approximate Error and Significant Digits

Assume our iterative method yields a better approximation as the iterations go on. Often we can set an acceptable tolerance and stop the iteration when the relative approximate error is small enough. We often specify the tolerance in terms of the number of significant digits, i.e. the digits that carry meaning contributing to a number's precision. This corresponds to the number of digits used to write the number's significand (mantissa) in scientific notation.

A common stopping rule is as follows: if the absolute relative approximate error, expressed as a percentage, satisfies |ε_a| ≤ 0.5 × 10^(2−m) %, then the result is correct to at least m significant digits and no more iterations are required. Conversely, given the absolute relative approximate error, we can solve the same inequality for m to find the least number of digits that are guaranteed to be significant.
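A minimal sketch of this stopping rule, again using Newton's method for sqrt(2) as a hypothetical iterative method, assuming the tolerance form |ε_a| ≤ 0.5 × 10^(2−m) % from the text's reference:

```python
import math

m = 4                          # desired number of significant digits
tol = 0.5 * 10 ** (2 - m)      # tolerance, in percent

prev = 1.0                     # initial guess for sqrt(2)
while True:
    curr = (prev + 2.0 / prev) / 2.0                 # one Newton iteration
    abs_rel_err_pct = abs((curr - prev) / curr) * 100
    prev = curr
    if abs_rel_err_pct <= tol:                       # stopping criterion met
        break

# Solving |epsilon_a| <= 0.5 * 10**(2 - m) % for m gives the least number
# of digits guaranteed significant by the final error:
m_achieved = math.floor(2 - math.log10(abs_rel_err_pct / 0.5))
print(f"stopped at |epsilon_a| = {abs_rel_err_pct:.2e} %, "
      f"at least {m_achieved} significant digits")
```

Because Newton's method converges quadratically here, the achieved number of significant digits typically overshoots the requested m by the time the loop stops.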