Propagation of Error
Propagation of Error (or Propagation of Uncertainty) describes how the uncertainties of individual variables affect the uncertainty of a function of those variables. It is a calculus-derived statistical calculation that combines uncertainties from multiple variables to give an accurate estimate of the uncertainty in a result.
Introduction
Every measurement carries some uncertainty, and not all uncertainties are equal. Therefore, the ability to properly combine uncertainties from different measurements is crucial. Uncertainty in measurement arises in a variety of ways: instrument variability, different observers, sample differences, time of day, etc. Typically, error is reported as the standard deviation (\(\sigma_x\)) of a measurement.
Any time a calculation requires more than one measured variable, propagation of error is necessary to determine the uncertainty properly. For example, let's say we are using a UV-Vis spectrophotometer to determine the molar absorptivity of a molecule via Beer's Law, \(A = \epsilon l c\). Since at least two of the variables have an uncertainty based on the equipment used, a propagation of error formula must be applied to obtain a more accurate estimate of the uncertainty in the molar absorptivity. This example is continued below, after the derivation.
Derivation of Exact Formula
Suppose a certain experiment requires multiple instruments to carry out. These instruments each have different variability in their measurements. The results of each instrument are given as: a, b, c, d, ... (For simplicity, only the variables a, b, and c will be used throughout this derivation.) The desired end result is \(x\), which depends on a, b, and c. It can be written that \(x\) is a function of these variables:
\[x=f(a,b,c) \label{1}\]
Because each measurement has an uncertainty about its mean, it can be written that the uncertainty \(dx_i\) of the ith measurement of \(x\) depends on the uncertainties of the ith measurements of a, b, and c:
\[dx_i=f(da_i,db_i,dc_i)\label{2}\]
The total deviation of \(x\) is then obtained from the total differential of \(x\), i.e., the sum of the partial derivatives of \(x\) with respect to each variable, each multiplied by the deviation in that variable:
\[dx=\left(\dfrac{\delta{x}}{\delta{a}}\right)_{b,c}da + \left(\dfrac{\delta{x}}{\delta{b}}\right)_{a,c}db + \left(\dfrac{\delta{x}}{\delta{c}}\right)_{a,b}dc \label{3}\]
A relationship between the standard deviations of x and a, b, c, etc... is formed in two steps:
- by squaring Equation \ref{3}, and
- taking the total sum from \(i = 1\) to \(i = N\), where \(N\) is the total number of measurements.
In the first step, two unique terms appear on the right hand side of the equation: square terms and cross terms.
Square Terms
\[\left(\dfrac{\delta{x}}{\delta{a}}\right)^2(da)^2,\; \left(\dfrac{\delta{x}}{\delta{b}}\right)^2(db)^2, \; \left(\dfrac{\delta{x}}{\delta{c}}\right)^2(dc)^2\label{4}\]
Cross Terms
\[2\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{b}}\right)da\;db,\;2\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)da\;dc,\;2\left(\dfrac{\delta{x}}{\delta{b}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)db\;dc\label{5}\]
Square terms, due to the nature of squaring, are always positive, and therefore never cancel each other out. By contrast, cross terms may cancel each other out, because each term may be positive or negative. If \(da\), \(db\), and \(dc\) represent random and independent uncertainties, about half of the cross terms will be negative and half positive (this is primarily due to the fact that the variables represent uncertainty about a mean). In effect, the sum of the cross terms should approach zero, especially as \(N\) increases. However, if the variables are correlated rather than independent, the cross terms may not cancel out.
Assuming the cross terms do cancel out, then the second step - summing from \(i = 1\) to \(i = N\) - would be:
\[\sum{(dx_i)^2}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sum(db_i)^2 + \left(\dfrac{\delta{x}}{\delta{c}}\right)^2\sum(dc_i)^2\label{6}\]
Dividing both sides by \(N - 1\):
\[\dfrac{\sum{(dx_i)^2}}{N-1}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\dfrac{\sum(da_i)^2}{N-1} + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\dfrac{\sum(db_i)^2}{N-1} + \left(\dfrac{\delta{x}}{\delta{c}}\right)^2\dfrac{\sum(dc_i)^2}{N-1}\label{7}\]
The previous step put Equation \ref{7} into the same form as the standard deviation equation. This is desired, because it creates a statistical relationship between the variable \(x\) and the other variables \(a\), \(b\), \(c\), etc., as follows:
The standard deviation equation can be rewritten as the variance (\(\sigma_x^2\)) of \(x\):
\[\dfrac{\sum{(dx_i)^2}}{N-1}=\dfrac{\sum{(x_i-\bar{x})^2}}{N-1}=\sigma^2_x\label{8}\]
Rewriting Equation \ref{7} using the statistical relationship created yields the Exact Formula for Propagation of Error:
\[\sigma^2_x=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sigma^2_a+\left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sigma^2_b+\left(\dfrac{\delta{x}}{\delta{c}}\right)^2\sigma^2_c\label{9}\]
Thus, the end result is achieved. Equation \ref{9} shows a direct statistical relationship between multiple variables and their standard deviations. In the next section, derivations for common calculations are given, with an example of how the derivation was obtained.
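To make Equation \ref{9} concrete, here is a minimal Python sketch (added for illustration, not part of the original text) that applies the exact formula to an arbitrary function by estimating the partial derivatives with finite differences. The function, values, and step size are illustrative assumptions, and the measured variables are assumed to be independent.

```python
import math

def propagate_error(f, values, sigmas, h=1e-6):
    """Approximate sigma_x for x = f(*values) using the exact formula (Eq. 9).

    Partial derivatives are estimated with central finite differences;
    the measured variables are assumed independent (cross terms dropped).
    """
    variance = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        step = h * max(abs(v), 1.0)
        plus, minus = list(values), list(values)
        plus[i], minus[i] = v + step, v - step
        dfdv = (f(*plus) - f(*minus)) / (2 * step)  # partial derivative of x with respect to variable i
        variance += dfdv**2 * s**2                  # square term of Eq. (9)
    return math.sqrt(variance)

# Illustrative use: x = a*b/c with made-up values and standard deviations
sigma_x = propagate_error(lambda a, b, c: a * b / c, [2.0, 3.0, 4.0], [0.1, 0.2, 0.05])
print(sigma_x)
```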
In the following calculations \(a\), \(b\), and \(c\) are measured variables from an experiment and \(\sigma_a\), \(\sigma_b\), and \(\sigma_c\) are the standard deviations of those variables.
Addition or Subtraction
If \(x = a + b - c\) then
\[\sigma_x= \sqrt{ {\sigma_a}^2+{\sigma_b}^2+{\sigma_c}^2} \label{10}\]
Multiplication or Division
If \(x = \dfrac{ a \times b}{c}\) then
\[ \dfrac{\sigma_x}{x}=\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}\label{11} \]
Exponential
If \(x = a^y\) then
\[\dfrac{\sigma_x}{x}=y \left(\dfrac{\sigma_a}{a}\right) \label{12}\]
Logarithmic
If \(x = \log(a)\) then
\[\sigma_x=0.434 \left(\dfrac{\sigma_a}{a}\right) \label{13}\]
Anti-logarithmic
If \(x = \text{antilog}(a)\) then
\[\dfrac{\sigma_x}{x}=2.303({\sigma_a}) \label{14}\]
Addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations.
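As a quick numerical illustration (added here, not part of the original text), the addition/subtraction and multiplication/division rules can be applied directly in Python; the values below are made up for the example.

```python
import math

# Made-up measured values and standard deviations
a, sigma_a = 10.0, 0.2
b, sigma_b = 5.0, 0.1
c, sigma_c = 2.0, 0.05

# Addition/subtraction (Eq. 10): absolute standard deviations add in quadrature
x_sum = a + b - c
sigma_sum = math.sqrt(sigma_a**2 + sigma_b**2 + sigma_c**2)

# Multiplication/division (Eq. 11): relative standard deviations add in quadrature
x_prod = a * b / c
sigma_prod = x_prod * math.sqrt((sigma_a / a)**2 + (sigma_b / b)**2 + (sigma_c / c)**2)

print(f"a + b - c = {x_sum:.2f} ± {sigma_sum:.2f}")
print(f"a * b / c = {x_prod:.2f} ± {sigma_prod:.2f}")
```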
Derivation of Arithmetic Example
The Exact Formula for Propagation of Error in Equation \(\ref{9}\) can be used to derive the arithmetic examples noted above. Starting with a simple equation:
\[x = a \times \dfrac{b}{c} \label{15}\]
where \(x\) is the desired result with a given standard deviation, and \(a\), \(b\), and \(c\) are experimental variables, each with a different standard deviation. Taking the partial derivative of \(x\) with respect to each experimental variable, \(a\), \(b\), and \(c\):
\[\left(\dfrac{\delta{x}}{\delta{a}}\right)=\dfrac{b}{c} \label{16a}\]
\[\left(\dfrac{\delta{x}}{\delta{b}}\right)=\dfrac{a}{c} \label{16b}\]
and
\[\left(\dfrac{\delta{x}}{\delta{c}}\right)=-\dfrac{ab}{c^2}\label{16c}\]
Plugging these partial derivatives into Equation \(\ref{9}\) gives:
\[\sigma^2_x=\left(\dfrac{b}{c}\right)^2\sigma^2_a+\left(\dfrac{a}{c}\right)^2\sigma^2_b+\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c\label{17}\]
Dividing Equation \(\ref{17}\) by Equation \(\ref{15}\) squared yields:
\[\dfrac{\sigma^2_x}{x^2}=\dfrac{\left(\dfrac{b}{c}\right)^2\sigma^2_a}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(\dfrac{a}{c}\right)^2\sigma^2_b}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c}{\left(\dfrac{ab}{c}\right)^2}\label{18}\]
Canceling out terms and square-rooting both sides yields Equation \ref{11}:
\[\dfrac{\sigma_x}{x}={\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}} \nonumber\]
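For readers who want to check this algebra symbolically, the short SymPy sketch below (added for illustration; SymPy is assumed to be available) reproduces Equations \ref{16a} through \ref{18} and simplifies the result to the form of Equation \ref{11}.

```python
import sympy as sp

a, b, c, sa, sb, sc = sp.symbols('a b c sigma_a sigma_b sigma_c', positive=True)
x = a * b / c

# Partial derivatives (Equations 16a-16c)
dxda, dxdb, dxdc = sp.diff(x, a), sp.diff(x, b), sp.diff(x, c)

# Exact formula (Equation 17)
var_x = dxda**2 * sa**2 + dxdb**2 * sb**2 + dxdc**2 * sc**2

# Divide by x**2 and simplify (Equation 18); the result is Equation 11 squared
rel_var = sp.simplify(var_x / x**2)
print(rel_var)  # sigma_a**2/a**2 + sigma_b**2/b**2 + sigma_c**2/c**2
```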
Continuing the example from the introduction (where we are calculating the molar absorptivity of a molecule), suppose we have a concentration of 13.7 (±0.3) mol/L, a path length of 1.0 (±0.1) cm, and an absorbance of 0.172807 (±0.000008). The molar absorptivity is given by Beer's law:
\[ε = \dfrac{A}{lc}. \nonumber\]
Solution
Since Beer's Law deals with multiplication/division, we'll use Equation \ref{11}:
\[\begin{align*} \dfrac{\sigma_{\epsilon}}{\epsilon} &={\sqrt{\left(\dfrac{0.000008}{0.172807}\right)^2+\left(\dfrac{0.1}{1.0}\right)^2+\left(\dfrac{0.3}{13.7}\right)^2}} \\[4pt] &=0.10237 \end{align*}\]
As stated in the note above, Equation \ref{11} yields a relative standard deviation, i.e., a fraction of the value of ε. Using Beer's Law, ε = 0.012614 L mol\(^{-1}\) cm\(^{-1}\). Therefore, the \(\sigma_{\epsilon}\) for this example would be 10.237% of ε, which is 0.001291.
Accounting for significant figures, the final answer would be:
\[\epsilon = 0.013 \pm 0.001 \,\text{L mol}^{-1}\,\text{cm}^{-1} \nonumber\]
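The arithmetic in this example can be reproduced with a few lines of Python (a sketch added for illustration; the variable names are arbitrary):

```python
import math

# Measured quantities from the example above
A, sigma_A = 0.172807, 0.000008   # absorbance
l, sigma_l = 1.0, 0.1             # path length, cm
c, sigma_c = 13.7, 0.3            # concentration, mol/L

epsilon = A / (l * c)             # Beer's law: epsilon = A / (l c)

# Equation (11): relative standard deviations add in quadrature
rel_sigma = math.sqrt((sigma_A / A)**2 + (sigma_l / l)**2 + (sigma_c / c)**2)
sigma_epsilon = epsilon * rel_sigma

print(f"epsilon = {epsilon:.6f} ± {sigma_epsilon:.6f} L mol^-1 cm^-1")
# prints epsilon = 0.012614 ± 0.001291, i.e. 0.013 ± 0.001 after rounding
```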
If you are given an equation that relates two different variables and the relative uncertainty of one of them, it is possible to determine the relative uncertainty of the other variable by using calculus. In problems, the uncertainty is usually given as a percent. Let's say we measure the radius of a very small object. The problem might state that there is a 5% uncertainty when measuring this radius.
To actually use this percentage to calculate unknown uncertainties of other variables, we must first define what uncertainty is. Uncertainty, in calculus, is defined as:
\[\left(\dfrac{dx}{x}\right) = \left(\dfrac{∆x}{x}\right) = \text{uncertainty} \nonumber\]
Let's look at the example of the radius of an object again. If we know the uncertainty of the radius to be 5%, the uncertainty is defined as
\[\left(\dfrac{dx}{x}\right)=\left(\dfrac{∆x}{x}\right)= 5\% = 0.05.\nonumber\]
Now we are ready to use calculus to obtain the unknown uncertainty of another variable. Let's say we measure the radius of an artery and find that the uncertainty is 5%. What is the uncertainty of the measurement of the volume of blood passing through the artery? Let's say the equation relating radius and volume is:
\[V(r) = c(r^2) \nonumber\]
where \(c\) is a constant, \(r\) is the radius and \(V(r)\) is the volume.
Solution
The first step to finding the uncertainty of the volume is to understand our given information. Since we are given the radius has a 5% uncertainty, we know that (∆r/r) = 0.05. We are looking for (∆V/V).
The next step is to take the derivative of this equation with respect to \(r\) to obtain:
\[\dfrac{dV}{dr} = \dfrac{∆V}{∆r}= 2cr \nonumber\]
We can now multiply both sides of the equation by \(∆r\) to obtain:
\[∆V = 2cr(∆r) \nonumber\]
Since we are looking for (∆V/V), we divide both sides by V to get:
\[\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{V} \nonumber\]
We are given the equation of the volume to be \(V = c(r)^2\), so we can plug this back into our previous equation for \(V\) to get:
\[\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{c(r)^2} \nonumber \]
Now we can cancel variables that are in both the numerator and denominator to get:
\[\dfrac{∆V}{V} = \dfrac{2∆r}{r} = 2 \left(\dfrac{∆r}{r}\right) \nonumber \]
We have now narrowed down the equation so that ∆r/r is left. We know the value of uncertainty for ∆r/r to be 5%, or 0.05. Plugging this value in for ∆r/r we get:
\[\dfrac{∆V}{V} = 2 (0.05) = 0.1 = 10\% \nonumber\]
The uncertainty of the volume is 10%. This method can be used in chemistry as well, not just the biological example shown above.
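The same arithmetic can be checked numerically with the short Python sketch below (added for illustration; the value chosen for the constant \(c\) is arbitrary and cancels out of the relative uncertainty).

```python
# Relative uncertainty of V = c * r**2 given a 5% relative uncertainty in r
c_const = 4.2            # arbitrary constant; it cancels in dV/V
r, rel_r = 1.5, 0.05     # radius (arbitrary units) with 5% relative uncertainty

V = c_const * r**2

# Derivative approach from above: dV = 2*c*r*dr, so dV/V = 2*(dr/r)
rel_V = 2 * rel_r
print(rel_V)             # 0.1, i.e. a 10% uncertainty in the volume

# Cross-check by propagating the absolute uncertainty directly
sigma_r = rel_r * r
sigma_V = abs(2 * c_const * r) * sigma_r
print(sigma_V / V)       # also 0.1 (up to floating-point rounding)
```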
- Error propagation assumes that the relative uncertainty in each quantity is small.\(^3\)
- Error propagation is not advised if the uncertainty can be measured directly (as variation among repeated experiments).
- Uncertainty never decreases with calculations, only with better measurements.
Disadvantages of Propagation of Error Approach
In an ideal case, the propagation of error estimate above will not differ from the estimate made directly from the measurements. However, in complicated scenarios, they may differ because of:
- unsuspected covariances
- errors that affect the reported value rather than the elementary measurements themselves (usually a result of mis-specification of the model)
- mistakes in propagating the error through the defining formulas (calculation error)
Treatment of Covariance Terms
Covariance terms can be difficult to estimate if measurements are not made in pairs. Sometimes, these terms are omitted from the formula. Guidance on when this is acceptable practice is given below:
- If the measurements of a and b are independent, the associated covariance term is zero.
- Generally, reported values of test items from calibration designs have non-zero covariances that must be taken into account if b is a summation such as the mass of two weights, or the length of two gage blocks end-to-end, etc.
- Practically speaking, covariance terms should be included in the computation only if they have been estimated from sufficient data. See Ku (1966) for guidance on what constitutes sufficient data.\(^2\)
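For completeness (this expression is not derived in the text above, but it is the standard generalization of Equation \ref{9}), when two measured variables \(a\) and \(b\) are correlated, the cross term no longer cancels and appears as a covariance term \(\sigma_{ab}\):

\[\sigma^2_x=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sigma^2_a+\left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sigma^2_b+2\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{b}}\right)\sigma_{ab} \nonumber\]

When the measurements of \(a\) and \(b\) are independent, \(\sigma_{ab} = 0\) and Equation \ref{9} is recovered.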
References
- Skoog, D.; Holler, J.; Crouch, S. Principles of Instrumental Analysis, 6th ed.; Thomson Brooks/Cole: Belmont, 2007.
- Ku, H. H. (1966). Notes on the Use of Propagation of Error Formulas. Journal of Research of the National Bureau of Standards, Section C: Engineering and Instrumentation, 70C(4), 263–273.
- Neuhauser, C. Calculus for Biology and Medicine, 3rd ed.; Pearson: Boston, 2011.