16.1: Probability and Statistics


    A random variable \(X\) can take more than one value \(x\) as an outcome. Which value the variable takes in a particular case is a matter of chance and cannot be predicted, other than that we associate a probability with each outcome. A probability \(p\) is a number between 0 and 1 that indicates the likelihood that the variable \(X\) has a particular outcome \(x\). The set of outcomes and their probabilities forms a probability distribution. There are two kinds of distributions:

    1. discrete ones
    2. continuous ones

    The total probability must always add up to unity.

    Discrete Distributions

    A good example of a discrete distribution is a fair coin. The random variable \(X\) can take two values:

    1. heads (0)
    2. tails (1)

    Both outcomes have equal probability, and as the sum must equal unity, the probability of each must be ½. 'The probability that \(X\) = heads' is written formally as:

    \[\Pr(X=\text{heads}) = \Pr(X=0) = 0.5 \nonumber \]

    The random function is written as a combination of three statements:

    • Pr(X=0) = ½
    • Pr(X=1) = ½
    • elsewhere Pr = 0
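
    As a quick sanity check, here is a minimal Python sketch that flips a simulated fair coin many times; the empirical frequencies should approach ½ each:

    ```python
    import random
    from collections import Counter

    # Flip a simulated fair coin many times; 0 = heads, 1 = tails.
    N = 100_000
    flips = Counter(random.choice([0, 1]) for _ in range(N))

    for outcome in (0, 1):
        print(f"Pr(X={outcome}) ~ {flips[outcome] / N:.3f}")  # both come out near 0.5
    ```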

    Continuous Distributions

    Now consider a spherical die. One could say it has an infinite number of facets that it can land on. Thus the number of outcomes is \(n = ∞\), which makes each individual probability

    \[ p = \frac{1}{\infty} = 0. \nonumber \]

    This creates a bit of a mathematical problem: how can we get a total probability of unity by adding up zeros? Also, if we divide the sphere into a northern and a southern hemisphere, the probability that the die lands on a point in, say, the north should clearly be ½. Still, \(p = 0\) for every individual point. We therefore introduce a new concept: probability density, over which we integrate rather than sum. We assign an equal density to each point of the sphere and make sure that if we integrate over a hemisphere we get ½. (This involves two angles θ and φ and an integration over them, which I won't go into here.)
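
    A small Monte Carlo sketch makes the paradox concrete (it sidesteps the angular integration by normalizing Gaussian 3-vectors, a standard way to draw points uniformly on a sphere): every individual point has probability zero, yet half of all samples land in the northern hemisphere.

    ```python
    import random

    # Draw points uniformly on the unit sphere by normalizing Gaussian 3-vectors.
    def random_point_on_sphere():
        while True:
            x, y, z = (random.gauss(0, 1) for _ in range(3))
            r = (x * x + y * y + z * z) ** 0.5
            if r > 0:  # guard against an (astronomically unlikely) zero vector
                return x / r, y / r, z / r

    N = 100_000
    north = sum(1 for _ in range(N) if random_point_on_sphere()[2] > 0)
    print(north / N)  # ~0.5: each single point has p = 0, yet a hemisphere carries 1/2
    ```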

    A simpler example of a continuous distribution than the spherical die is the 1D uniform distribution. It is the distribution that the Excel function =RAND() produces to a good approximation. Its probability density is defined as

    • f(x) = 1 for 0 < x < 1
    • f(x) = 0 elsewhere

    (Figure: a bivariate uniform distribution.)

    The probability that the outcome is smaller than 0.5 is written as \(\Pr(X<0.5)\) and is found by integrating \(f(x)\) from 0 to 0.5:

    \[ \Pr(X<0.5) = \int_0^{0.5} f(x)\,dx = \int_0^{0.5} 1\,dx = \Big[x\Big]_0^{0.5} = 0.5 \nonumber \]
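
    The same number can be checked numerically; random.random() is the Python analogue of Excel's =RAND(). A minimal sketch, estimating \(\Pr(X<0.5)\) both by a crude Riemann sum and by sampling:

    ```python
    import random

    # Riemann sum for the integral of f(x) = 1 from 0 to 0.5 ...
    n = 10_000
    dx = 0.5 / n
    riemann = sum(1.0 * dx for _ in range(n))

    # ... and a direct sampling estimate; random.random() plays the role of =RAND().
    samples = 100_000
    sampled = sum(random.random() < 0.5 for _ in range(samples)) / samples

    print(riemann)  # 0.5 (up to rounding)
    print(sampled)  # ~0.5
    ```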

    Notice that each individual outcome \(b\) indeed has probability zero, because an integral from \(b\) to \(b\) is always zero, even if the probability density \(f(b)\) is not zero. Clearly, probability and probability density are not the same thing. Unfortunately, the distinction between them is often not properly made in the physical sciences. Moments (see below) can also be computed for continuous distributions by integrating over the probability density.

    Another well-known continuous distribution is the normal (or Gaussian) distribution, defined as:

    \[ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{1}{2} \left[ \frac{x-\mu}{\sigma} \right]^2 \right) \nonumber \]

    (Notice the normalization factor \(1/[\sqrt{2\pi}\,\sigma]\).)
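
    A quick numerical check of that normalization factor, assuming SciPy is available for the integration (the values of \(\mu\) and \(\sigma\) below are arbitrary example choices):

    ```python
    import math
    from scipy.integrate import quad

    MU, SIGMA = 1.0, 2.0  # arbitrary example parameters

    def gaussian(x):
        return math.exp(-0.5 * ((x - MU) / SIGMA) ** 2) / (math.sqrt(2 * math.pi) * SIGMA)

    total, _err = quad(gaussian, -math.inf, math.inf)
    print(total)  # ~1.0 for any mu and sigma: the density is normalized
    ```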

    We can also compute moments of a continuous distribution (moments are treated in more detail for discrete distributions below). Instead of using a summation, we now have to evaluate an integral:

    \[ \langle X \rangle = \int f(x)\, x \,dx \nonumber \]

    \[ \langle X^2 \rangle = \int f(x)\, x^2 \,dx \nonumber \]

    For the normal distribution, \(\langle X \rangle = \mu\).
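
    As a numerical illustration of these integrals (again assuming SciPy, with arbitrary example values for \(\mu\) and \(\sigma\)), the first moment of a normal density indeed comes out as \(\mu\):

    ```python
    import math
    from scipy.integrate import quad

    MU, SIGMA = 1.5, 0.5  # arbitrary example parameters

    def f(x):
        return math.exp(-0.5 * ((x - MU) / SIGMA) ** 2) / (math.sqrt(2 * math.pi) * SIGMA)

    mean, _ = quad(lambda x: f(x) * x, -math.inf, math.inf)         # <X>
    second, _ = quad(lambda x: f(x) * x ** 2, -math.inf, math.inf)  # <X^2>

    print(mean)    # ~1.5, i.e. <X> = mu
    print(second)  # ~2.5, i.e. <X^2> = mu^2 + sigma^2
    ```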

    Exercise

    Compute \(\langle X^2 \rangle\) and \(\langle X^3 \rangle\) for the uniform distribution.

    Indistinguishable Outcomes

    When flipping two coins we can get four outcomes: two heads (0), heads plus tails (1), tails plus heads (1), and two tails (2).

    Each outcome is equally likely, which implies a probability of ¼ for each:

    • Xtot = X1 + X2 = 0 + 0 = 0, with p = ¼
    • Xtot = X1 + X2 = 0 + 1 = 1, with p = ¼
    • Xtot = X1 + X2 = 1 + 0 = 1, with p = ¼
    • Xtot = X1 + X2 = 1 + 1 = 2, with p = ¼

    The probability of a particular outcome is often abbreviated simply to p. If the coins are indistinguishable, the two middle outcomes collapse into one with p = ¼ + ¼ = ½. We will see that this concept has very important consequences in statistical thermodynamics.

    If we cannot distinguish the two outcomes leading to Xtot = 1, we get the following random function (enumerated programmatically in the sketch after this list):

    • Pr(Xtot=0) = ¼
    • Pr(Xtot=1) = ½
    • Pr(Xtot=2) = ¼
    • elsewhere Pr = 0
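
    A minimal sketch that lists the four distinguishable outcomes and collapses them by their sum, reproducing the random function above with exact fractions:

    ```python
    from collections import Counter
    from fractions import Fraction
    from itertools import product

    # The four distinguishable outcomes, each with p = 1/4, collapsed by X_tot = X1 + X2.
    dist = Counter()
    for x1, x2 in product([0, 1], repeat=2):
        dist[x1 + x2] += Fraction(1, 4)

    print(dict(dist))  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
    ```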

    Notice that it is quite possible to have a distribution where the probabilities differ from outcome to outcome. Often the p values are given as f(x), a function of x. An example:

    X3, defined as:

    • f(x) = (x+1)/6 for x = 0, 1, 2
    • f(x) = 0 elsewhere

    The factor 1/6 makes sure the probabilities add up to unity: (0+1)/6 + (1+1)/6 + (2+1)/6 = 6/6 = 1. Such a factor is known as a normalization factor. Again, this concept is of prime importance in statistical thermodynamics.

    Another example of a discrete distribution is a die. If it has 6 sides (the most common kind of die) there are six outcomes, each with p = 1/6. There are also dice with n = 4, 12, or 20 sides; each outcome then has p = 1/n.

    Moments of Distributions

    An important aspect of probability distributions is the moments of the distribution. They are values computed by summing over the whole distribution.

    The zeroth-order moment is simply the sum of all p, and that is unity:

    \[ \langle X^0 \rangle = \sum X^0\,p = \sum 1\cdot p = 1 \nonumber \]

    The first moment multiplies each outcome by its probability and sums over all outcomes:

    \[ \langle X \rangle = \sum X\,p \nonumber \]

    This moment is known as the average or mean. (It is what we have done to your grades for years...)

    For one coin \(\langle X \rangle\) is ½; for two coins \(\langle X \rangle\) is 1. (Exercise: verify this.)

    The second moment is computed by summing the product of the square of X and p:

    \[ \langle X^2 \rangle = \sum X^2\,p \nonumber \]

    For one coin we have \(\langle X^2 \rangle\) = ½.
    For two coins \(\langle X^2 \rangle\) = 0·¼ + 1·½ + 4·¼ = 1.5.
    What is \(\langle X^2 \rangle\) for X3?
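
    A small helper makes such sums easy to check. The sketch below computes the n-th moment \(\langle X^n \rangle = \sum X^n p\) of a discrete distribution and verifies the one- and two-coin values quoted above (X3 is left as the exercise):

    ```python
    from fractions import Fraction

    def moment(dist, n):
        """n-th moment <X^n> = sum over outcomes of x^n * p(x)."""
        return sum(Fraction(x) ** n * p for x, p in dist.items())

    one_coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}
    two_coins = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

    print(moment(one_coin, 1), moment(one_coin, 2))    # 1/2 1/2
    print(moment(two_coins, 1), moment(two_coins, 2))  # 1 3/2
    ```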

    The notation \(\langle \cdots \rangle\) is used a lot in quantum mechanics, often in the form \(\langle \psi^*\psi \rangle\) or \(\langle \psi^*|h|\psi \rangle\). The \(\langle\)-part is known as the bra, the \(\rangle\)-part as the ket. (Together: bra(c)ket.)

    Intermezzo: The strange employer

    You have a summer job, but your employer likes games of chance. At the end of every day he rolls a die and pays you the square of the outcome in dollars per hour. So on a lucky day you'd make $36 per hour, but on a bad day only $1. Is this a bad deal? What would you make on average over a longer period?

    To answer this we must compute the second moment \(\langle X^2 \rangle\) of the distribution:

    \[ \langle X^2 \rangle = \tfrac{1}{6}\,[1+4+9+16+25+36] = \tfrac{91}{6} \approx \$15.17 \text{ per hour} \nonumber \]

    (I have taken p = 1/6 out of the brackets because the value is the same for all six outcomes.)

    As you see in the intermezzo, the value of the second moment is in this case what you expect to be making in the long run. Moments are examples of what is known as expectation values. Another term you may run into is that of a functional: a number computed by some operation (such as summation or integration) over a whole function. Moments are clearly an example of that too.
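
    A Monte Carlo version of the intermezzo (the number of simulated workdays is an arbitrary choice) shows the long-run hourly wage settling near 91/6:

    ```python
    import random

    # Roll a fair die once per "day" and get paid the square of the outcome.
    days = 100_000  # arbitrary number of simulated days
    wage = sum(random.randint(1, 6) ** 2 for _ in range(days)) / days

    print(wage)  # ~15.17 dollars per hour, the second moment 91/6
    ```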


    16.1: Probability and Statistics is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.