
4.3: Some Important Properties of Events


    If we know the probabilities of the possible outcomes of a trial, we can calculate the probabilities for combinations of outcomes. These calculations are based on two rules, which we call the laws of probability. If we partition the outcomes into exhaustive and mutually exclusive events, the laws of probability also apply to events. Since, as we define them, “events” is a more general term than “outcomes,” we call these laws the law of the probability of alternative events and the law of the probability of compound events. These laws are valid so long as three conditions are satisfied. We have already discussed the first two of these conditions, which are that the outcomes possible in any individual trial must be exhaustive and mutually exclusive. The third condition is that, if we make more than one trial, the outcomes must be independent; that is, the outcome of one trial must not be influenced by the outcomes of the others.

    We can view the laws of probability as rules for inferring information about combinations of events. The law of the probability of alternative events applies to events that belong to the same distribution. The law of the probability of compound events applies to events that can come from one or more distributions. An important special case occurs when the compound events are \(N\) successive samplings of a given distribution that we identify as the parent distribution. If the random variable is a number, and we average the numbers that we obtain from \(N\) successive samplings of the parent distribution, these “averages-of-\(N\)” themselves constitute a distribution. If we know certain properties of the parent distribution, we can calculate corresponding properties of the “distribution of averages-of-\(N\) values obtained by sampling the parent distribution.” These calculations are specified by the central limit theorem, which we discuss in Section 3.11.
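    To make this concrete, here is a minimal Python sketch (our illustration, assuming a fair die as the parent distribution) that builds the distribution of averages-of-\(N\) by repeated sampling. The averages cluster around the parent mean with a spread that narrows as \(N\) grows, which is the behavior the central limit theorem describes.

        import random
        import statistics

        random.seed(1)  # reproducible sampling for the illustration

        def average_of_N(N):
            """Sample the parent distribution (a fair die) N times and average."""
            return statistics.mean(random.randint(1, 6) for _ in range(N))

        # The "averages-of-N" themselves constitute a distribution; sample it.
        N = 10
        averages = [average_of_N(N) for _ in range(10_000)]

        print(statistics.mean(averages))   # close to the parent mean, 3.5
        print(statistics.stdev(averages))  # narrower than the parent spread, ~1.71/sqrt(N)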

    In general, when we combine events from two distributions, we can view the result as an event that belongs to a third distribution. At first encounter, the idea of combining events and distributions may seem esoteric. A few examples serve to show that what we have in mind is very simple.

    Since an event is a set of outcomes, an event occurs whenever any of the outcomes in the set occurs. Partitioning the outcomes of tossing a die into “even outcomes” and “odd outcomes” illustrates this idea. The event “even outcome” occurs whenever the outcome of a trial is \(2\), \(4\), or \(6\). The probability of an event can be calculated from the probabilities of the underlying outcomes. We call the rule for this calculation the law of the probability of alternative events. (We create the opportunity for confusion here because we are illustrating the idea of alternative events by using an example in which we call the alternatives “alternative outcomes” rather than “alternative events.” We need to remember that “event” is a more general term than “outcome.” One possible partitioning is that which assigns every outcome to its own event.) We discuss the probabilities of alternative events further below.
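    To make the bookkeeping explicit, a minimal Python sketch (ours, assuming a fair die) represents each event as a set of outcomes and computes its probability by summing the probabilities of the outcomes it contains:

        from fractions import Fraction

        # Outcome probabilities for a fair die: exhaustive and mutually exclusive.
        outcome_probability = {face: Fraction(1, 6) for face in range(1, 7)}

        def event_probability(event):
            """An event is a set of outcomes; sum the probabilities of its outcomes."""
            return sum(outcome_probability[outcome] for outcome in event)

        print(event_probability({2, 4, 6}))  # event "even outcome": 1/2
        print(event_probability({1, 3, 5}))  # event "odd outcome":  1/2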

    To illustrate the idea of compound events, let us consider a first distribution that comprises “tossing a coin” and a second distribution that comprises “drawing a card from a poker deck.” The first distribution has two possible outcomes; the second distribution has \(52\) possible outcomes. If we combine these distributions, we create a third distribution that comprises “tossing a coin and drawing a card from a poker deck.” The third distribution has \(104\) possible outcomes. If we know the probabilities of the outcomes of the first distribution and the probabilities of the outcomes of the second distribution, and these probabilities are independent of one another, we can calculate the probability of any outcome that belongs to the third distribution. We call the rule for this calculation the law of the probability of compound events. We discuss it further below.
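    A short Python sketch of this combination (the coin and deck representations are our own illustration) enumerates the compound outcomes and assigns each pair the product of the component probabilities:

        from fractions import Fraction
        from itertools import product

        coin = ["H", "T"]  # two outcomes, each with probability 1/2
        ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
        suits = ["clubs", "diamonds", "hearts", "spades"]
        deck = list(product(ranks, suits))  # 52 outcomes, each with probability 1/52

        # The third distribution: every (coin, card) pair is one compound outcome.
        compound = list(product(coin, deck))
        print(len(compound))  # 104

        # With independent trials, each compound outcome has probability 1/2 * 1/52.
        print(Fraction(1, 2) * Fraction(1, 52))  # 1/104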

    A similar situation occurs when we consider the outcomes of tossing two coins. We assume that we can tell the two coins apart. Call them coin \(1\) and coin \(2\). We designate heads and tails for coins \(1\) and \(2\) as \(H_1\), \(T_1\), \(H_2\), and \(T_2\), respectively. There are four possible outcomes in the distribution we call “tossing two coins”: \(H_1H_2\), \(H_1T_2\), \(T_1H_2\), and \(T_1T_2\). (If we could not tell the coins apart, \(H_1T_2\) would be the same thing as \(T_1H_2\); there would be only three possible outcomes.) We can view the distribution “tossing two coins” as a combination of the two distributions that we can call “tossing coin \(1\)” and “tossing coin \(2\).” We can also view the distribution “tossing two coins” as a combination of two distributions that we call “tossing a coin a first time” and “tossing a coin a second time.” That is, the distribution “tossing two coins” is equivalent to the distribution “tossing one coin twice.” This is an example of repeated trials, a frequently encountered type of distribution. In general, we call such a distribution a “distribution of events from a trial repeated \(N\) times,” and we view this distribution as being completely equivalent to \(N\) simultaneous trials of the same kind. Chapter 19 considers the distribution of outcomes when a trial is repeated many times. Understanding the properties of such distributions is the single most essential element in understanding the theory of statistical thermodynamics. The central limit theorem relates properties of the repeated-trials distribution to properties of the parent distribution.
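    A minimal Python sketch of this enumeration (our illustration) shows the four ordered outcomes for distinguishable coins, and how merging \(H_1T_2\) with \(T_1H_2\) leaves three outcomes when the coins cannot be told apart:

        from itertools import product

        # Distinguishable coins: four ordered outcomes, H1H2, H1T2, T1H2, T1T2.
        ordered = list(product("HT", repeat=2))
        print(len(ordered))  # 4

        # Indistinguishable coins: ('H', 'T') and ('T', 'H') merge into one outcome.
        unordered = {tuple(sorted(pair)) for pair in ordered}
        print(len(unordered))  # 3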

    The Probability of Alternative Events

    If we know the probability of each of two mutually exclusive events that belong to an exhaustive set, the probability that one or the other of them will occur in a single trial is equal to the sum of the individual probabilities. Let us call these mutually exclusive events \(A\) and \(B\), and represent their probabilities as \(P(A)\) and \(P(B)\), respectively. The probability that one of these events occurs is the same thing as the probability that either \(A\) occurs or \(B\) occurs. We can represent this probability as \(P(A\ or\ B)\). The probability of this combination of events is the sum \(P(A)+P(B)\). That is,

    \[P\left(A\ or\ B\right)=P\left(A\right)+P(B) \nonumber \]

    Above we define \(Y\) as the event that a single toss of a die comes up either \(1\) or \(3\). Because each of these outcomes is one of six mutually exclusive, equally likely outcomes, the probability of either of them is \({1}/{6}\): \(P\left(tossing\ a\ 1\right)=P\left(tossing\ a\ 3\right)={1}/{6}\). From the law of the probability of alternative events, we have

    \[\begin{align*} P\left(event\ Y\right) &=P\left(tossing\ a\ 1\ or\ tossing\ a\ 3\right) \\[4pt] &=P\left(tossing\ a\ 1\right)+P\left(tossing\ a\ 3\right) \\[4pt] &= {1}/{6}+{1}/{6} \\[4pt] &={2}/{6} \end{align*} \]

    We define \(X\) as the event that a single toss of a die comes up even. From the law of the probability of alternative events, we have

    \[\begin{align*} P\left(event\ X\right) &=P\left(tossing\ 2\ or\ 4\ or\ 6\right) \\[4pt] &=P\left(tossing\ a\ 2\right)+P\left(tossing\ a\ 4\right)+P\left(tossing\ a\ 6\right) \\[4pt] &={1}/{6}+{1}/{6}+{1}/{6} \\[4pt] &={3}/{6} \end{align*} \]

    We define \(Z\) as the event that a single toss comes up \(5\).

    \[P\left(event\ Z\right)=P\left(tossing\ a\ 5\right)=1/6 \nonumber \]

    If there are \(\omega\) mutually exclusive events (denoted \(E_1,E_2,\dots ,E_i,\dots ,E_{\omega }\)), the law of the probability of alternative events asserts that the probability that one of these events will occur in a single trial is

    \[ \begin{align*} P\left(E_1\ or\ E_2\ or\dots E_i\dots or\ E_{\omega }\right) &=P\left(E_1\right)+P\left(E_2\right)+\dots +P\left(E_i\right)+\dots +P\left(E_{\omega }\right) \\[4pt] &=\sum^{\omega }_{i=1} P\left(E_i\right) \end{align*} \]

    If these \(\omega\) mutually exclusive events encompass all of the possible outcomes, the sum of their individual probabilities must be unity.
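    As a numerical check, a short Monte Carlo sketch in Python (ours, assuming a fair die and the events \(X\), \(Y\), and \(Z\) defined above) estimates each event's probability and confirms that the estimates for this exhaustive, mutually exclusive partition sum to unity:

        import random

        random.seed(1)
        trials = 100_000
        counts = {"X": 0, "Y": 0, "Z": 0}

        for _ in range(trials):
            face = random.randint(1, 6)
            if face % 2 == 0:        # event X: the toss comes up even
                counts["X"] += 1
            elif face in (1, 3):     # event Y: the toss comes up 1 or 3
                counts["Y"] += 1
            else:                    # event Z: the toss comes up 5
                counts["Z"] += 1

        for event, count in counts.items():
            print(event, count / trials)  # near 3/6, 2/6, and 1/6, respectively

        # The partition is exhaustive and mutually exclusive, so the total is unity.
        print(sum(counts.values()) / trials)  # 1.0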

    Figure 1. A simple case that illustrates the laws of probability.

    The Probability of Compound Events

    Let us now suppose that we make two trials in circumstances where event \(A\) is possible in the first trial and event \(B\) is possible in the second trial. We represent the probabilities of these events by \(P\left(A\right)\) and \(P(B)\) and stipulate that they are independent of one another; that is, the probability that \(B\) occurs in the second trial is independent of the outcome of the first trial. Then, the probability that \(A\) occurs in the first trial and \(B\) occurs in the second trial, \(P(A\ and\ B)\), is equal to the product of the individual probabilities.

    \[P\left(A\ and\ B\right)=P\left(A\right)\times P(B) \nonumber \]

    To illustrate this using outcomes from die-tossing, let us suppose that event \(A\) is tossing a \(1\) and event \(B\) is tossing a \(3\). Then, \(P\left(A\right)={1}/{6}\) and \(P\left(B\right)={1}/{6}\). The probability of tossing a \(1\) in a first trial and tossing a \(3\) in a second trial is then

    \[\begin{align*} P\left( \text{tossing a 1 first and tossing a 3 second}\right) &=P\left(\text{tossing a 1}\right)\times P\left(\text{tossing a 3}\right) \\[4pt] &={1}/{6}\times {1}/{6} \\[4pt] &={1}/{36} \end{align*} \]
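    Enumerating the \(36\) equally likely ordered outcomes of two tosses gives the same result; the following Python sketch (ours, assuming a fair die) counts the favorable outcomes directly:

        from fractions import Fraction
        from itertools import product

        # All 36 equally likely ordered outcomes of two independent tosses.
        two_tosses = list(product(range(1, 7), repeat=2))

        # P(1 first and 3 second): one favorable outcome among 36.
        favorable = [pair for pair in two_tosses if pair == (1, 3)]
        print(Fraction(len(favorable), len(two_tosses)))  # 1/36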

    If we want the probability of getting one \(1\) and one \(3\) in two tosses, we must add to this the probability of tossing a \(3\) first and a \(1\) second.
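    Carrying out this addition, the two orderings are mutually exclusive alternatives, so the law of the probability of alternative events gives

    \[P\left(\text{one 1 and one 3 in two tosses}\right)={1}/{36}+{1}/{36}={2}/{36}={1}/{18} \nonumber \]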

    If there are \(\omega\) independent events (denoted \(E_1,E_2,\dots ,E_i,\dots ,E_{\omega }\)), the law of the probability of compound events asserts that the probability that \(E_1\) will occur in a first trial, and \(E_2\) will occur in a second trial, etc., is

    \[\begin{align*} P\left(E_1\ and\ E_2\ and\dots E_i\dots and\ E_{\omega }\right) &=P\left(E_1\right)\times P\left(E_2\right)\times \dots \times P\left(E_i\right)\times \dots \times P\left(E_{\omega }\right)\\[4pt] &=\prod^{\omega }_{i=1}{P(E_i)} \end{align*} \]
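    In a Python sketch (ours), this product is a single call to math.prod; here we compute the probability that three independent tosses of a fair die give a hypothetical target sequence of faces in order:

        import math
        from fractions import Fraction

        # Probability that three independent tosses of a fair die give the
        # (hypothetical) target faces 1, 3, 5 in that order.
        sequence = [1, 3, 5]
        probabilities = [Fraction(1, 6) for _ in sequence]
        print(math.prod(probabilities, start=Fraction(1, 1)))  # 1/216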


    This page titled 4.3: Some Important Properties of Events is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.