3.14: Where Does the N - 1 Come from?


    If we know \(\mu\) and we have a set of \(N\) data points, the best estimate we can make of the variance is

    \[\sigma^2=\int^{u_{max}}_{u_{min}}{\left(u-\mu \right)}^2\left(\frac{df}{du}\right)du \approx \sum^N_{i=1}{\left(u_i-\mu \right)}^2\left(\frac{1}{N}\right) \nonumber \]
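    As a concrete check, here is a minimal Python sketch of this known-mean estimator (the distribution, its parameters \(\mu = 10\) and \(\sigma = 2\), and the use of numpy are illustrative assumptions, not part of the text):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical distribution with known mean and variance (illustrative values).
    mu, sigma = 10.0, 2.0
    N = 1000

    u = rng.normal(mu, sigma, size=N)   # N draws of the random variable u

    # Best estimate of the variance when mu is known: divide by N.
    var_known_mu = np.sum((u - mu) ** 2) / N
    print(var_known_mu)                 # close to sigma**2 = 4.0
    ```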

    We have said that if we must use \(\overline{u}\) to approximate the mean, the best estimate of \(\sigma^2\), usually denoted \(s^2\), is

    \[\text{estimated } \sigma^2 = s^2 =\sum^N_{i=1}{\left(u_i-\overline{u}\right)}^2\left(\frac{1}{N-1}\right) \nonumber \]
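    In numpy, this correction corresponds to the ddof ("delta degrees of freedom") argument of np.var, which divides by \(N - \text{ddof}\); a sketch, again with hypothetical data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.normal(10.0, 2.0, size=1000)   # hypothetical data

    N = len(u)
    u_bar = np.mean(u)

    # The N - 1 denominator, written out explicitly...
    s2_manual = np.sum((u - u_bar) ** 2) / (N - 1)

    # ...and numpy's equivalent, which divides by N - ddof.
    s2_numpy = np.var(u, ddof=1)

    print(s2_manual, s2_numpy)             # identical; both estimate sigma**2
    ```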

    The use of \(N-1\), rather than \(N\), in the denominator is distinctly non-intuitive; so much so that this equation often causes great irritation. Let us see how this equation comes about.

    Suppose that we have a distribution whose mean is \(\mu\) and variance is \(\sigma^2\). Suppose that we draw \(N\) values of the random variable, \(u\), from the distribution. We want to think about the expected value of \({\left(u-\mu \right)}^2\). Let us write \(\left(u-\mu \right)\) as

    \[\left(u-\mu \right)=\left(u-\overline{u}\right)+\left(\overline{u}-\mu \right). \nonumber \]

    Squaring this gives

    \[{\left(u-\mu \right)}^2={\left(u-\overline{u}\right)}^2+{\left(\overline{u}-\mu \right)}^2+2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right). \nonumber \]
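    Because this is a purely algebraic identity, it can be verified symbolically; a quick check with sympy (the use of sympy here is an assumption of this sketch):

    ```python
    import sympy as sp

    u, u_bar, mu = sp.symbols('u u_bar mu')

    lhs = (u - mu) ** 2
    rhs = (u - u_bar) ** 2 + (u_bar - mu) ** 2 + 2 * (u - u_bar) * (u_bar - mu)

    # Expanding both sides gives the same polynomial, so the difference is zero.
    print(sp.expand(lhs - rhs))   # prints 0
    ```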

    From our definition of expected value, we can write:

    \[ \begin{aligned} \text{expected value of } {\left(u-\mu \right)}^2 &= \text{expected value of } {\left(u-\overline{u}\right)}^2 \\ &\quad + \text{expected value of } {\left(\overline{u}-\mu \right)}^2 \\ &\quad + \text{expected value of } 2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right) \end{aligned} \nonumber \]

    From our discussion above, we can recognize each of these expected values:

    • The expected value of \({\left(u-\mu \right)}^2\) is the variance of the original distribution, which is \(\sigma^2\). Since this is a definition, it is exact.
    • The best possible estimate of the expected value of \({\left(u-\overline{u}\right)}^2\) is \[\sum^N_{i=1}{{\left(u_i-\overline{u}\right)}^2\left(\frac{1}{N}\right)} \nonumber \]
    • The expected value of \({\left(\overline{u}-\mu \right)}^2\) is the variance of averages of \(N\) random variables drawn from the original distribution. That is, the expected value of \({\left(\overline{u}-\mu \right)}^2\) is what we would get if we repeatedly drew \(N\) values from the original distribution, computed the average of each set of \(N\) values, and then found the variance of this new distribution of average values. By the central limit theorem (and, more directly, because the variances of independent random variables add, so that the variance of the average of \(N\) draws is \({\sigma^2}/{N}\)), this variance is \({\sigma^2}/{N}\). Thus, the expected value of \({\left(\overline{u}-\mu \right)}^2\) is exactly \({\sigma^2}/{N}\).
    • Since \(\left(\overline{u}-\mu \right)\) is a constant for any one set of \(N\) values, the expected value of \(2\left(u-\overline{u}\right)\left(\overline{u}-\mu \right)\) is \[2\left(\overline{u}-\mu \right)\left[\frac{1}{N}\sum^N_{i=1}{\left(u_i-\overline{u}\right)}\right] \nonumber \] which is equal to zero, because \[\sum^N_{i=1}{\left(u_i-\overline{u}\right)} = \left(\sum^N_{i=1}{u_i}\right)-N\overline{u}=0 \nonumber \] by the definition of \(\overline{u}\). (Both this result and the \({\sigma^2}/{N}\) result above are checked numerically in the sketch following this list.)
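    Both of these last two claims are easy to confirm by simulation. A minimal sketch, assuming a hypothetical normal distribution with \(\mu = 10\) and \(\sigma = 2\):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    mu, sigma = 10.0, 2.0       # hypothetical distribution parameters
    N = 25                      # size of each sample
    trials = 100_000            # number of repeated samples

    samples = rng.normal(mu, sigma, size=(trials, N))
    means = samples.mean(axis=1)

    # Expected value of (u_bar - mu)^2: approaches sigma**2 / N = 0.16.
    print(np.mean((means - mu) ** 2))

    # The cross term vanishes: sum_i (u_i - u_bar) is zero for every sample
    # by the definition of u_bar (up to floating-point rounding).
    residual_sums = (samples - means[:, None]).sum(axis=1)
    print(np.max(np.abs(residual_sums)))   # on the order of 1e-12
    ```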

    Substituting, our expression for the expected value of \({\left(u-\mu \right)}^2\) becomes:

    \[\sigma^2\approx \sum^N_{i=1} \left(u_i-\overline{u}\right)^2\left(\frac{1}{N}\right)+\frac{\sigma^2}{N} \nonumber \]

    Subtracting \({\sigma^2}/{N}\) from both sides and collecting terms gives

    \[\sigma^2\left(1-\frac{1}{N}\right)=\sigma^2\left(\frac{N-1}{N}\right)\approx \sum^N_{i=1} \frac{\left(u_i-\overline{u}\right)^2}{N} \nonumber \]

    and

    \[\sigma^2 \approx \sum^N_{i=1} \frac{\left(u_i-\overline{u}\right)^2}{N-1} \nonumber \]

    That is, as originally stated, when we must use \(\overline{u}\) rather than the true mean, \(\mu\), in the sum of squared differences, the best possible estimate of \(\sigma^2\), usually denoted \(s^2\), is obtained by dividing by \(N-1\), rather than by \(N\).
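    The bias that the \(N-1\) denominator removes is easy to see numerically. In the following sketch (same hypothetical distribution as above, with a deliberately small \(N\) so the effect is visible), dividing by \(N\) underestimates \(\sigma^2\) by the factor \(\left(N-1\right)/N\), while dividing by \(N-1\) does not:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    mu, sigma = 10.0, 2.0       # hypothetical distribution parameters
    N = 5                       # small N makes the bias easy to see
    trials = 200_000

    samples = rng.normal(mu, sigma, size=(trials, N))
    dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2

    # Dividing by N systematically underestimates sigma**2 = 4.0
    # by the factor (N - 1)/N = 0.8 ...
    print(np.mean(dev2.sum(axis=1) / N))          # ~3.2

    # ...while dividing by N - 1 gives the unbiased estimate.
    print(np.mean(dev2.sum(axis=1) / (N - 1)))    # ~4.0
    ```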


    This page titled 3.14: Where Does the N - 1 Come from? is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen.