(Numerical analysis:)

A way of measuring error in floating-point calculations. The absolute error in a floating-point result depends strongly on the exponent, so it is desirable to have measures of error that do not, such as relative error or ulps. The *relative error* is the ratio of the absolute error to the true value.

Say we use a base of 10 and a mantissa with 3 digits of precision. If we express Avogadro's number as 6.03×10^{23} instead of the "true" value 6.022×10^{23}, we've made an absolute error of 0.008×10^{23}=8×10^{20}. Dividing by the true value gives a relative error of (8/6.022)×10^{-3}≈1.33×10^{-3}. If we'd used a value of 6.02×10^{23} (the best we can do with these parameters), the relative error would be ≈0.33×10^{-3}.
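The arithmetic above can be checked directly; a minimal sketch (the helper name is my own, not a standard function):

```python
def relative_error(approx, true):
    """Ratio of the absolute error to the true value."""
    return abs(approx - true) / abs(true)

TRUE = 6.022e23  # Avogadro's number, to 4 significant digits

# Two 3-digit-mantissa approximations from the example:
print(relative_error(6.03e23, TRUE))  # ≈ 1.33e-3
print(relative_error(6.02e23, TRUE))  # ≈ 0.33e-3
```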

Relative error is distinct from ulps, but each can be bounded in terms of the other.
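For binary64 Python floats, `math.ulp` (Python 3.9+) reports the ulp of a value, so the connection can be sketched concretely: a half-ulp error at x corresponds to a relative error of 0.5·ulp(x)/x, which wobbles by a factor of the base as x moves across a binade.

```python
import math

x = 1.0
ulp = math.ulp(x)          # spacing from x to the next representable float
assert ulp == 2.0 ** -52   # for binary64, ulp(1.0) is 2^-52

# Relative error corresponding to an absolute error of half an ulp at x.
# At x = 1.0 this is 2^-53, the unit roundoff for binary64; near the top
# of the binade (x close to 2) it is about half that.
rel = 0.5 * ulp / x
print(rel)
```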