17: Equilibrium and the Second Law of Thermodynamics
Foundation
We have observed and defined phase transitions and phase equilibrium. We have also observed equilibrium in a variety of reaction systems. We will assume an understanding of the postulates of the Kinetic Molecular Theory and of the energetics of chemical reactions.
Goals
We have developed an understanding of the concept of equilibrium, both for phase equilibrium and reaction equilibrium. As an illustration, at normal atmospheric pressure, we expect to find \(\ce{H_2O}\) in solid form below \(0^\text{o} \text{C}\), in liquid form between \(0^\text{o} \text{C}\) and \(100^\text{o} \text{C}\), and in gaseous form above \(100^\text{o} \text{C}\). What changes as we move from low temperature to high temperature cause these transitions in which phase is observed? Viewed differently, if a sample of gaseous water at \(120^\text{o} \text{C}\) is cooled to below \(100^\text{o} \text{C}\), virtually all of the water vapor spontaneously condenses to form the liquid:
\[\ce{H_2O} \left( g \right) \rightarrow \ce{H_2O} \left( l \right) \: \: \text{spontaneous below } 100^\text{o} \text{C}\]
By contrast, very little of liquid water at \(80^\text{o} \text{C}\) spontaneously converts to gaseous water:
\[\ce{H_2O} \left( l \right) \rightarrow \ce{H_2O} \left( g \right) \: \: \text{not spontaneous below } 100^\text{o} \text{C}\]
We can thus rephrase our question as, what determines which processes are spontaneous and which are not? What factors determine what phase is "stable"?
As we know, at certain temperatures and pressures, more than one phase can be stable. For example, at \(1 \: \text{atm}\) pressure and \(0^\text{o} \text{C}\),
\[\ce{H_2O} \left( s \right) \rightleftharpoons \ce{H_2O} \left( l \right) \: \: \text{equilibrium at } 0^\text{o} \text{C}\]
Small variations in the amount of heat applied to or extracted from the liquid-solid equilibrium cause shifts towards liquid or solid without changing the temperature of the two phases at equilibrium. Therefore, when the two phases are at equilibrium, neither direction of the phase transition is spontaneous at \(0^\text{o} \text{C}\). We therefore need to understand what factors determine when two or more phases can coexist at equilibrium.
This analysis leaves unanswered a series of questions regarding the differences between liquids and gases. The concept of a gas phase or a liquid phase is not a characteristic of an individual molecule. In fact, it does not make any sense to refer to the "phase" of an individual molecule. The phase is a collective property of large numbers of molecules. Although we can discuss the importance of molecular properties regarding liquid and gas phases, we have not discussed the factors which determine whether the gas phase or the liquid phase is most stable at a given temperature and pressure.
These same questions can be applied to reaction equilibrium. When a mixture of reactants and products is not at equilibrium, the reaction will occur spontaneously in one direction or the other until the reaction achieves equilibrium. What determines the direction of spontaneity? What is the driving force towards equilibrium? How does the system know that equilibrium has been achieved? Our goal will be to understand the driving forces behind spontaneous processes and the determination of the equilibrium point, both for phase equilibrium and reaction equilibrium.
Observation 1: Spontaneous Mixing
We begin by examining common characteristics of spontaneous processes, and for simplicity, we focus on processes not involving phase transitions or chemical reactions. A very clear example of such a process is mixing. Imagine putting a drop of blue ink in a glass of water. At first, the blue dye in the ink is highly concentrated. Therefore, the molecules of the dye are closely congregated. Slowly but steadily, the dye begins to diffuse throughout the entire glass of water, so that eventually the water appears as a uniform blue color. This occurs more readily with agitation or stirring but occurs spontaneously even without such effort. Careful measurement shows that this process occurs without a change in temperature, so there is no energy input or release during the mixing.
We conclude that, although there is no energetic advantage to the dye molecules dispersing themselves, they do so spontaneously. Furthermore, this process is irreversible in the sense that, without considerable effort on our part, the dye molecules will never return to form a single localized drop. We now seek an understanding of how and why this mixing occurs.
Consider the following rather abstract model for the dye molecules in the water. For the glass, we take a row of ten small boxes, each one of which represents a possible location for a molecule, either of water or of dye. For the molecules, we take marbles, clear for water and red for ink. Each box will accommodate only a single marble, since two molecules cannot be in the same place at the same time. Since we see a drop of dye when the molecules are congregated, we model a "drop" as three red marbles in consecutive boxes. Notice that there are only eight ways to have a "drop" of dye, assuming that the three dye "molecules" are indistinguishable from one another. Two possibilities are shown in Figure 17.1a and Figure 17.1b. It is not difficult to find the other six.
Figure 17.1: Arrangement of Three Ink Molecules. (a) An unmixed state. (b) Another unmixed state. (c) A mixed state. (d) Another mixed state.
By contrast, there are many more ways to arrange the dye molecules so that they do not form a drop, i.e., so that the three molecules are not together. Two possibilities are shown in Figure 17.1c and Figure 17.1d. The total number of such possibilities is 112. (The total number of all possible arrangements can be calculated as follows: there are 10 possible locations for the first red marble, 9 for the second, and 8 for the third. This gives 720 possible arrangements, but many of these are identical, since the marbles are indistinguishable. The number of duplicates for each arrangement is 6, calculated from three choices for the first marble, two for the second, and one for the third. The total number of non-identical arrangements of the molecules is \(\frac{720}{6} = 120\).) We conclude that, if we randomly place the 3 marbles in the tray of 10 boxes, the chances are only 8 out of 120 (or 1 out of 15) of observing a drop of ink.
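The counting in this model is small enough to verify directly. The following short Python sketch (our illustration, not part of the original study) enumerates every placement of three indistinguishable marbles in ten boxes and counts the "drop" arrangements:

```python
# A minimal sketch of the marble model: place 3 indistinguishable marbles
# in 10 boxes and count how many arrangements form a "drop" (3 consecutive).
from itertools import combinations

boxes, marbles = 10, 3
arrangements = list(combinations(range(boxes), marbles))

# A "drop" means the occupied positions are consecutive: for three sorted
# positions, that is exactly when the span from first to last is 2.
drops = [a for a in arrangements if a[-1] - a[0] == marbles - 1]

print(len(arrangements))               # 120 distinct arrangements
print(len(drops))                      # 8 "unmixed" drop states
print(len(arrangements) - len(drops))  # 112 "mixed" states
```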
Now, in a real experiment, there are many, many times more ink molecules and many, many times more possible positions for each molecule. To see how this comes into play, consider a row of 500 boxes and 5 red marbles. (The mole fraction of ink is thus 0.01.) The total number of distinct configurations of the red marbles in these boxes is approximately \(2 \times 10^{11}\). The number of these configurations which have all five ink marbles together in a drop is 496. If the arrangements are sampled randomly, the chances of observing a drop of ink with all five molecules together are thus about one in 500 million. The possibilities are remote even for observing a partial "droplet" consisting of fewer than all five dye molecules. The chance for four of the molecules to be found together is about one in 800,000. Even if we define a droplet to be only three molecules together, the chances of observing one are less than one in 1600.
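These larger counts follow from the same combinatorics and can be checked without enumeration; a minimal sketch using Python's `math.comb`:

```python
# Scaling the model up: 5 ink marbles in 500 boxes. Exhaustive enumeration
# is no longer practical, but the counts follow directly from combinatorics.
from math import comb

boxes, marbles = 500, 5
total = comb(boxes, marbles)   # number of distinct arrangements
drops = boxes - marbles + 1    # 496 ways to place 5 consecutive marbles

print(f"{total:.3e}")          # 2.552e+11, i.e. roughly 2 x 10^11
print(f"odds of a drop: 1 in {total // drops:,}")  # about 1 in 500 million
```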
We could, with some difficulty, calculate the probability for observing a drop of ink when there are \(10^{23}\) molecules. However, it is reasonably deduced from our small calculations that the probability is essentially zero for the ink molecules, randomly distributed into the water molecules, to be found together. We conclude from this that the reason why we observe ink to disperse in water is that the probability is infinitesimally small for randomly distributed dye molecules to be congregated in a drop.
Interestingly, however, when we set up the real ink and water experiment, we did not randomly distribute the ink molecules. Rather, we began initially with a drop of ink in which the dye molecules were already congregated. We know that, according to our kinetic theory, the molecules are in constant random motion. Therefore, they must be constantly rearranging themselves. Since these random motions do not energetically favor any one arrangement over any other one arrangement, we can assume that all possible arrangements are equally probable. Since most of the arrangements do not correspond to a drop of ink, then most of the time we will not observe a drop. In the case above with five red marbles in 500 boxes, we expect to see a drop only once in every 500 million times we look at the "glass". In a real glass of water with a real drop of ink, the chances are very much smaller than this.
We draw two very important conclusions from our model. First, the random motions of molecules make every possible arrangement of these molecules equally probable. Second, mixing occurs spontaneously simply because there are vastly many more arrangements which are mixed than which are not. The first conclusion tells us "how" mixing occurs, and the second tells us "why". On the basis of these observations, we deduce the following preliminary generalization: a spontaneous process occurs because it produces the most probable final state.
Probability and Entropy
There is a subtlety in our conclusion to be considered in more detail. We have concluded that all possible arrangements of molecules are equally probable. We have further concluded that mixing occurs because the final mixed state is overwhelmingly probable. Placed together, these statements appear to be openly contradictory. To see why they are not, we must analyze the statements carefully. By an "arrangement" of the molecules, we mean a specification of the location of each and every molecule. We have assumed that, due to random molecular motion, each such arrangement is equally probable. In what sense, then, is the final state "overwhelmingly probable"?
Recall the system illustrated in Figure 17.1, where we placed three identical red marbles into ten spaces. We calculated before that there are 120 unique ways to do this. If we ask for the probability of the arrangement in Figure 17.1a, the answer is thus \(\frac{1}{120}\). This is also the probability for each of the other possible arrangements, according to our model. However, if we now ask instead for the probability of observing a "mixed" state (with no drop), the answer is \(\frac{112}{120}\), whereas the probability of observing an "unmixed" state (with a drop) is only \(\frac{8}{120}\). Clearly, the probabilities are not the same when considering the less specific characteristics "mixed" and "unmixed".
In chemistry, we are virtually never concerned with microscopic details, such as the locations of specific individual molecules. Rather, we are interested in more general characteristics, such as whether a system is mixed or not, or what the temperature or pressure is. These properties of interest to us are macroscopic. As such, we refer to a specific arrangement of the molecules as a microstate, and each general state (mixed or unmixed, for example) as a macrostate. All microstates have the same probability of occurring, according to our model. However, the macrostates have widely differing probabilities.
We come to an important result: the probability of observing a particular macrostate (e.g., a mixed state) is proportional to the number of microstates with that macroscopic property. For example, from Figure 17.1, there are 112 arrangements (microstates) with the "mixed" macroscopic property. As we have discussed, the probability of observing a mixed state is \(\frac{112}{120}\), which is obviously proportional to 112. Thus, one way to measure the relative probability of a particular macrostate is by the number of microstates \(W\) corresponding to that macrostate. \(W\) stands for "ways", i.e., there are 112 "ways" to get a mixed state in Figure 17.1.
Now we recall our conclusion that a spontaneous process always produces the outcome with greatest probability. Since \(W\) measures this probability for any substance or system of interest, we could predict, using \(W\), whether the process leading from a given initial state to a given final state was spontaneous by simply comparing probabilities for the initial and final states. For reasons described below, we instead define a function of \(W\),
\[S \left( W \right) = k \text{ln} \left( W \right)\]
called the entropy, which can be used to make such predictions about spontaneity. (The \(k\) is a proportionality constant which gives \(S\) appropriate units for our calculations.) Notice that the more microstates there are, the greater the entropy is. Therefore, a macrostate with a high probability (e.g. a mixed state) has a large entropy. We now modify our previous deduction to say that a spontaneous process produces the final state of greatest entropy. (With the modifications developed below, this statement will become the Second Law of Thermodynamics.)
It would seem that we could use \(W\) for our calculations and that the definition of the new function \(S\) is unnecessary. However, the following reasoning shows that \(W\) is not a convenient function for calculations. We consider two identical glasses of water at the same temperature. We expect that the value of any extensive physical property for the water in two glasses is twice the value of that property for a single glass. For example, if the enthalpy of the water in each glass is \(H_1\), then it follows that the total enthalpy of the water in the two glasses together is \(H_\text{total} = 2H_1\). Thus, the enthalpy of a system is proportional to the quantity of material in the system: if we double the amount of water, we double the enthalpy. In direct contrast, we consider the calculation involving \(W\) for these two glasses of water. The number of microstates of the macroscopic state of one glass of water is \(W_1\), and likewise the number of microstates in the second glass of water is \(W_1\). However, if we combine the two glasses of water, the number of microstates of the total system is found from the product \(W_\text{total} = W_1 \times W_1\), which does not equal \(2W_1\). In other words, \(W\) is not proportional to the quantity of material in the system. This is inconvenient, since the value of \(W\) thus depends on whether the two systems are combined or not. (If it is not clear that we should multiply the \(W\) values, consider the simple example of rolling dice. The number of states for a single die is 6, but for two dice the number is \(6 \times 6 = 36\), not \(6 + 6 = 12\).)
We therefore need a new function \(S \left( W \right)\), so that, when we combine the two glasses of water, \(S_\text{total} = S_1 + S_1\). Since \(S_\text{total} = S \left( W_\text{total} \right)\), \(S_1 = S \left( W_1 \right)\), and \(W_\text{total} = W_1 \times W_1\), then our new function \(S\) must satisfy the equation
\[S \left( W_1 \times W_1 \right) = S \left( W_1 \right) + S \left( W_1 \right)\]
The only function \(S\) which will satisfy this equation is the logarithm function, which has the property that \(\text{ln} \left( x \times y \right) = \text{ln} \left( x \right) + \text{ln} \left( y \right)\). We conclude that an appropriate state function which measures the number of microstates in a particular macrostate is the entropy, as given by the equation stated previously.
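As a quick numerical illustration (ours, not the original study's), the following sketch checks that entropies defined by \(S = k \text{ln} \left( W \right)\) add when the \(W\) values multiply, using the dice example above; the value of \(k\) (Boltzmann's constant) is the standard one:

```python
# A numerical check that S = k*ln(W) is additive where W is multiplicative,
# using the dice example from the text: one die has W = 6, two dice W = 36.
from math import log, isclose

k = 1.380649e-23  # Boltzmann's constant, J/K

def S(W):
    return k * log(W)

print(S(36))                      # entropy of the two-dice "system"
print(S(6) + S(6))                # sum for the two individual dice
print(isclose(S(36), 2 * S(6)))   # True: entropies add when W's multiply
```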
Observation 2: Absolute Entropies
It is possible, though exceedingly difficult, to calculate the entropy of any system under any conditions of interest from the equation \(S = k \text{ln} \left( W \right)\). It is also possible, using more advanced theoretical thermodynamics, to determine \(S\) experimentally by measuring heat capacities and enthalpies of phase transitions. Values of \(S\) determined experimentally, often referred to as "absolute" entropies, have been tabulated for many materials at many temperatures, and a few examples are given in Table 17.1. We treat these values as observations and attempt to understand these in the context of the entropy equation.
Table 17.1: Absolute Entropies of Selected Substances

Substance | \(T \: \left( ^\text{o} \text{C} \right)\) | \(S \: \left( \frac{\text{J}}{\text{mol K}} \right)\)
---|---|---
\(\ce{H_2O} \left( g \right)\) | 25 | 188.8
\(\ce{H_2O} \left( l \right)\) | 25 | 69.9
\(\ce{H_2O} \left( l \right)\) | 0 | 63.3
\(\ce{H_2O} \left( s \right)\) | 0 | 41.3
\(\ce{NH_3} \left( g \right)\) | 25 | 192.4
\(\ce{HN_3} \left( l \right)\) | 25 | 140.6
\(\ce{HN_3} \left( g \right)\) | 25 | 239.0
\(\ce{O_2} \left( g \right)\) | 25 | 205.1
\(\ce{O_2} \left( g \right)\) | 50 | 207.4
\(\ce{O_2} \left( g \right)\) | 100 | 211.7
\(\ce{CO} \left( g \right)\) | 25 | 197.7
\(\ce{CO} \left( g \right)\) | 50 | 200.0
\(\ce{CO_2} \left( g \right)\) | 25 | 213.7
\(\ce{CO_2} \left( g \right)\) | 50 | 216.9
\(\ce{Br_2} \left( l \right)\) | 25 | 152.2
\(\ce{Br_2} \left( g \right)\) | 25 | 245.5
\(\ce{I_2} \left( s \right)\) | 25 | 116.1
\(\ce{I_2} \left( g \right)\) | 25 | 260.7
\(\ce{CaF_2} \left( s \right)\) | 25 | 68.9
\(\ce{CaCl_2} \left( s \right)\) | 25 | 104.6
\(\ce{CaBr_2} \left( s \right)\) | 25 | 130
\(\ce{C_8H_{18}} \left( l \right)\) | 25 | 361.1
There are several interesting generalities observed in Table 17.1. First, in comparing the entropy of the gaseous form of a substance to either its liquid or solid form at the same temperature, we find that the gas always has a substantially greater entropy. This is easy to understand from the entropy equation: the molecules in the gas phase occupy a very much larger volume. There are very many more possible locations for each gas molecule and thus very many more arrangements of the molecules in the gas. It is intuitively clear that \(W\) should be larger for a gas, and therefore the entropy of a gas is greater than that of the corresponding liquid or solid.
Second, we observe that the entropy of a liquid is always greater than that of the corresponding solid. This is understandable from our kinetic molecular view of liquids and solids. Although the molecules in the liquid occupy a comparable volume to that of the molecules in the solid, each molecule in the liquid is free to move through this entire volume. The molecules in the solid are relatively fixed in location. Therefore, the number of arrangements of molecules in the liquid is significantly greater than that in the solid, so the liquid has greater entropy by the entropy equation.
Third, the entropy of a substance increases with increasing temperature. The temperature is, of course, a measure of the average kinetic energy of the molecules. In a solid or liquid, then, increasing the temperature increases the total kinetic energy available to the molecules. The greater the energy, the more ways there are to distribute this energy amongst the molecules. Although we have previously only referred to the range of positions for a molecule as affecting \(W\), the range of energies available for each molecule similarly affects \(W\). As a result, as we increase the total energy of a substance, we increase \(W\) and thus the entropy.
Fourth, the entropy of a substance whose molecules contain many atoms is greater than that of a substance composed of smaller molecules. The more atoms there are in a molecule, the more ways there are to arrange those atoms. With greater internal flexibility, \(W\) is larger when there are more atoms, so the entropy is greater.
Fifth, the entropy of a substance with a high molecular weight is greater than that of a substance with a low molecular weight. This result is harder to understand, as it arises from the distribution of the momenta of the molecules rather than the positions and energies of the molecules. It is intuitively clear that the number of arrangements of the molecules is not affected by the mass of the molecules. However, even at the same temperature, the range of momenta available for a heavier molecule is greater than for a lighter one. To see why, recall that the momentum of a molecule is \(p = mv\) and the kinetic energy is \(KE = \frac{mv^2}{2} = \frac{p^2}{2m}\). Therefore, the maximum momentum available at a fixed total kinetic energy \(KE\) is \(p = \sqrt{2mKE}\). Since this is larger for larger mass molecules, the range of momenta is greater for heavier particles, thus increasing \(W\) and the entropy.
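To make this concrete, here is a brief sketch comparing \(p = \sqrt{2mKE}\) for a light and a heavy gas at the same kinetic energy; the choice of helium and xenon, and the use of the average kinetic energy \(\frac{3}{2} kT\) at room temperature, are our illustrative assumptions:

```python
# The maximum momentum at fixed kinetic energy, p = sqrt(2*m*KE), grows
# with mass: a quick comparison of helium and xenon at the same average
# kinetic energy (3/2 kT at 298 K).
from math import sqrt

k = 1.380649e-23        # Boltzmann's constant, J/K
KE = 1.5 * k * 298.15   # average kinetic energy per molecule, J

amu = 1.66054e-27       # kg per atomic mass unit
for name, m in (("He", 4.00 * amu), ("Xe", 131.29 * amu)):
    p = sqrt(2 * m * KE)
    print(f"{name}: p_max = {p:.2e} kg m/s")  # larger for the heavier gas
```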
Observation 3: Condensation and Freezing
We have concluded from our observations of spontaneous mixing that a spontaneous process always produces the final state of greatest probability. A few simple observations reveal that our deduction needs some thoughtful refinement. For example, we have observed that the entropy of liquid water is greater than that of solid water. This makes sense in the context of the entropy equation, since the kinetic theory indicates that liquid water has a greater value of \(W\). Nevertheless, we observe that liquid water spontaneously freezes at temperatures below \(0^\text{o} \text{C}\). This process clearly displays a decrease in entropy and therefore evidently a shift from a more probable state to a less probable state. This appears to contradict directly our conclusion.
Similarly, we expect to find condensation of water droplets from steam when steam is cooled. On days of high humidity, water spontaneously liquefies from the air on cold surfaces such as the outside of a glass of ice water or the window of an air conditioned building. In these cases, the transition from gas to liquid is clearly from a higher entropy phase to a lower entropy phase, which does not seem to follow our reasoning thus far.
Our previous conclusions concerning entropy and probability increases were compelling, however, and we should be reluctant to abandon them. What we have failed to take into consideration is that these phase transitions involve changes of energy and thus heat flow. Condensation of gas to liquid and freezing of liquid to solid both involve evolution of heat. This heat flow is of consequence because our observations also revealed that the entropy of a substance can be increased significantly by heating. One way to preserve our conclusions about spontaneity and entropy is to place a condition on their validity: a spontaneous process produces the final state of greatest probability and entropy provided that the process does not involve evolution of heat. This is an unsatisfying result, however, since most physical and chemical processes involve heat transfer. As an alternative, we can force the process not to evolve heat by isolating the system undergoing the process: no heat can be released if there is no sink to receive the heat, and no heat can be absorbed if there is no source of heat. Therefore, we conclude from our observations that a spontaneous process in an isolated system produces the final state of greatest probability and entropy. This is one statement of the Second Law of Thermodynamics.
Free Energy
How can the Second Law be applied to a process in a system that is not isolated? One way to view the lessons of the previous observations is as follows: in analyzing a process to understand why it is or is not spontaneous, we must consider both the change in entropy of the system undergoing the process and the effect of heat released or absorbed during the process on the entropy of the surroundings. Although we cannot prove it here, the entropy increase of a substance due to heat \(q\) at temperature \(T\) is given by \(\Delta S = \frac{q}{T}\). From another study, we can calculate the heat transfer for a process occurring under constant pressure from the enthalpy change, \(\Delta H\). By conservation of energy, the heat flow into the surroundings must be \(-\Delta H\). Therefore, the increase in the entropy of the surroundings due to heat transfer must be \(\Delta S_\text{surr} = -\frac{\Delta H}{T}\). Notice that, if the reaction is exothermic, \(\Delta H < 0\) so \(\Delta S_\text{surr} > 0\). According to our statement of the Second Law, a spontaneous process in an isolated system is always accompanied by an increase in the entropy of the system. If we want to apply this statement to a non-isolated system, we must include the surroundings in our entropy calculation. We can say then that, for a spontaneous process,
\[\Delta S_\text{total} = \Delta S_\text{sys} + \Delta S_\text{surr} > 0\]
Since \(\Delta S_\text{surr} = -\frac{\Delta H}{T}\), then we can write that \(\Delta S - \frac{\Delta H}{T} > 0\). This is easily rewritten to state that, for a spontaneous process:
\[\Delta H - T \Delta S < 0\]
This equation is really just a different form of the Second Law of Thermodynamics. However, this form has the advantage that it takes into account the effects on both the system undergoing the process and the surroundings. Thus, this new form can be applied to non-isolated systems.
This equation reveals why the temperature affects the spontaneity of processes. Recall that the condensation of water vapor occurs spontaneously at temperatures below \(100^\text{o} \text{C}\) but not above. Condensation is an exothermic process; to see this, consider that the reverse process, evaporation, obviously requires heat input. Therefore \(\Delta H < 0\) for condensation. However, condensation clearly results in a decrease in entropy, so \(\Delta S < 0\) also. Examining the above equation, we can conclude that \(\Delta H - T \Delta S\) will be less than zero for condensation only if the temperature is not too high. At high temperature, the term \(-T \Delta S\), which is positive, becomes larger in magnitude than \(\Delta H\), so \(\Delta H - T \Delta S > 0\) for condensation at high temperatures. Therefore, condensation only occurs at lower temperatures.
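This temperature dependence is easy to see numerically. The sketch below uses the \(25^\text{o} \text{C}\) values from this section (\(\Delta H \approx -44.0 \: \text{kJ}\) and \(\Delta S \approx -118.9 \: \frac{\text{J}}{\text{K}}\) per mole for condensation); since both quantities vary somewhat with temperature, the predicted crossover near \(370 \: \text{K}\) lands close to, but not exactly at, the actual boiling point of \(373 \: \text{K}\):

```python
# Spontaneity criterion dG = dH - T*dS for the condensation
# H2O(g) -> H2O(l), using the 25 C values quoted in this section.
dH = -44000.0   # J/mol
dS = -118.9     # J/(mol K)

def dG(T):
    return dH - T * dS

for T in (298.15, 350.0, 400.0):
    sign = "spontaneous" if dG(T) < 0 else "not spontaneous"
    print(f"T = {T:6.2f} K: dG = {dG(T)/1000:+6.2f} kJ/mol ({sign})")

print(f"crossover at T = dH/dS = {dH/dS:.0f} K")  # ~370 K, near 373 K
```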
Because of the considerable practical utility of the above equation in predicting the spontaneity of physical and chemical processes, it is desirable to simplify the calculation of the quantity on the left side of the inequality. One way to do this is to define a new quantity \(G = H - TS\), called the free energy. If we calculate from this definition the change in the free energy which occurs during a process at constant temperature, we get
\[\Delta G = G_\text{final} - G_\text{initial} = H_\text{final} - TS_\text{final} - \left( H_\text{initial} - TS_\text{initial} \right) = \Delta H - T \Delta S\]
and therefore a simplified statement of the Second Law of Thermodynamics is that
\[\Delta G < 0\]
for any spontaneous process. Thus, in any spontaneous process, the free energy of the system decreases. Note that \(G\) is a state function, since it is defined in terms of \(H\), \(T\), and \(S\), all of which are state functions. Since \(G\) is a state function, then \(\Delta G\) can be calculated along any convenient path. As such, the methods used to calculate \(\Delta H\) in another study can be used just as well to calculate \(\Delta G\).
Thermodynamic Description of Phase Equilibrium
As we recall, the entropy of vapor is much greater than the entropy of the corresponding amount of liquid. A look back at Table 17.1 shows that, at \(25^\text{o} \text{C}\), the entropy of one mole of liquid water is \(69.9 \: \frac{\text{J}}{\text{K}}\), whereas the entropy of one mole of water vapor is \(188.8 \: \frac{\text{J}}{\text{K}}\). Our first thought, based on our understanding of spontaneous processes and entropy, might well be that a mole of liquid water at \(25^\text{o} \text{C}\) should spontaneously convert into a mole of water vapor, since this process would greatly increase the entropy of the water. We know, however, that this does not happen. Liquid water will exist in a closed container at \(25^\text{o} \text{C}\) without spontaneously converting entirely to vapor. What have we left out?
The answer, based on our discussion of free energy, is the energy associated with evaporation. The conversion of one mole of liquid water into one mole of water vapor results in absorption of \(44.0 \: \text{kJ}\) of energy from the surroundings. Recall that this loss of energy from the surroundings results in a significant decrease in entropy of the surroundings. We can calculate the amount of entropy decrease in the surroundings from \(\Delta S_\text{surr} = -\frac{\Delta H}{T}\). At \(25^\text{o} \text{C}\), this gives \(\Delta S_\text{surr} = \frac{-44.0 \: \text{kJ}}{298.15 \: \text{K}} = -147.6 \: \frac{\text{J}}{\text{K}}\) for a single mole. This entropy decrease is greater in magnitude than the entropy increase of the water, \(188.8 \: \frac{\text{J}}{\text{K}} - 69.9 \: \frac{\text{J}}{\text{K}} = 118.9 \: \frac{\text{J}}{\text{K}}\). Therefore, the entropy of the universe decreases when one mole of liquid water converts to one mole of water vapor at \(25^\text{o} \text{C}\). We can repeat this calculation in terms of the free energy change:
\[\begin{align} \Delta G &= \Delta H - T \Delta S \\ &= 44000 \: \frac{\text{J}}{\text{mol}} - \left( 298.15 \: \text{K} \right) \left( 118.9 \: \frac{\text{J}}{\text{K mol}} \right) \\ &= 8.55 \: \frac{\text{kJ}}{\text{mol}} > 0 \end{align}\]
Since the free energy increases in the transformation of one mole of liquid water to one mole of water vapor, we predict that the transformation will not occur spontaneously. This is something of a relief, because we have correctly predicted that the mole of liquid water is stable at \(25^\text{o} \text{C}\) relative to the mole of water vapor.
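This prediction amounts to a one-line calculation; a minimal sketch using the Table 17.1 entropies:

```python
# Verifying the free energy change computed above for evaporation of one
# mole of water at 25 C to 1.00 atm of vapor, using dG = dH - T*dS.
dH = 44000.0        # J, for one mole evaporating
dS = 188.8 - 69.9   # J/K: S(vapor, 1 atm) - S(liquid), both at 25 C
T = 298.15          # K

dG = dH - T * dS
print(f"dG = {dG/1000:+.2f} kJ")  # +8.55 kJ: not spontaneous at 1.00 atm
```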
We are still faced with our perplexing question, however. Why does any water evaporate at \(25^\text{o} \text{C}\)? How can this be a spontaneous process?
The answer is that we have to be careful about interpreting our prediction. The entropy of one mole of water vapor at \(25^\text{o} \text{C}\) and \(1.00 \: \text{atm}\) pressure is \(188.8 \: \frac{\text{J}}{\text{K}}\). We should clarify our prediction to say that one mole of liquid water will not spontaneously evaporate to form one mole of water vapor at \(25^\text{o} \text{C}\) and \(1.00 \: \text{atm}\) pressure. This prediction is in agreement with our observation, because we have found that the water vapor formed spontaneously above liquid water at \(25^\text{o} \text{C}\) has pressure \(23.8 \: \text{torr}\), well below \(1.00 \: \text{atm}\).
Assuming that our reasoning is correct, then the spontaneous evaporation of water at \(25^\text{o} \text{C}\) when no water vapor is present initially must have \(\Delta G < 0\). And, indeed, as water vapor forms and the pressure of the water vapor increases, evaporation must continue as long as \(\Delta G < 0\). Eventually, evaporation stops in a closed system when we reach the vapor pressure, so we must reach a point where \(\Delta G\) is no longer less than zero, that is, evaporation stops when \(\Delta G = 0\). This is the point where we have equilibrium between liquid and vapor.
We can actually determine the conditions under which this is true. Since \(\Delta G = \Delta H - T \Delta S\), then when \(\Delta G = 0\), \(\Delta H = T \Delta S\). We already know that \(\Delta H = 44.0 \: \text{kJ}\) for the evaporation of one mole of water. Therefore, the pressure of water vapor at which \(\Delta G = 0\) at \(25^\text{o} \text{C}\) is the pressure at which \(\Delta S = \frac{\Delta H}{T} = 147.6 \: \frac{\text{J}}{\text{K}}\) for a single mole of water evaporating. This is larger than the value of \(\Delta S\) for evaporation of one mole of water to vapor at \(1.00 \: \text{atm}\) pressure, which as we calculated was \(118.9 \: \frac{\text{J}}{\text{K}}\). Evidently, \(\Delta S\) for evaporation changes as the pressure of the water vapor changes. We therefore need to understand why the entropy of the water vapor depends on the pressure of the water vapor.
Recall that 1 mole of water vapor occupies a much smaller volume at \(1.00 \: \text{atm}\) of pressure than it does at the considerably lower vapor pressure of \(23.8 \: \text{torr}\). In the larger volume at lower pressure, the water molecules have a much larger space to move in, and therefore the number of microstates for the water molecules must be larger in a larger volume. Therefore, the entropy of one mole of water vapor is larger in a larger volume at lower pressure. The entropy change for evaporation of one mole of water is thus greater when the evaporation occurs to a lower pressure. With a greater entropy change to offset the entropy loss of the surroundings, it is possible for the evaporation to be spontaneous at lower pressure. And this is exactly what we observe.
To find out how much the entropy of a gas changes as we decrease the pressure, we assume that the number of microstates \(W\) for the gas molecule is proportional to the volume \(V\). This would make sense, because the larger the volume, the more places there are for the molecules to be. Since the entropy is given by \(S = k \text{ln} \left( W \right)\), then \(S\) must also be proportional to \(\text{ln} \left( V \right)\). Therefore, we can say that
\[\begin{align} S \left( V_2 \right) - S \left( V_1 \right) &= R \: \text{ln} \left( V_2 \right) - R \: \text{ln} \left( V_1 \right) \\ &= R \: \text{ln} \left( \frac{V_2}{V_1} \right) \end{align}\]
We are interested in the variation of \(S\) with pressure, and we remember from Boyle's Law that, for a fixed temperature, volume is inversely related to pressure. Thus, we find that
\[\begin{align} S \left( P_2 \right) - S \left( P_1 \right) &= R \: \text{ln} \left( \frac{P_1}{P_2} \right) \\ &= - \left( R \: \text{ln} \left( \frac{P_2}{P_1} \right) \right) \end{align}\]
For water vapor, we know that the entropy at \(1.00 \: \text{atm}\) pressure is \(188.8 \: \frac{\text{J}}{\text{K}}\) for one mole. We can use this and the equation above to determine the entropy at any other pressure. For a pressure of \(23.8 \: \text{torr} = 0.0313 \: \text{atm}\), this equation gives that \(S \left( 23.8 \: \text{torr} \right)\) is \(217.6 \: \frac{\text{J}}{\text{K}}\) for one mole of water vapor. Therefore, at this pressure, \(\Delta S\) for evaporation of one mole of water is \(217.6 \: \frac{\text{J}}{\text{K}} - 69.9 \: \frac{\text{J}}{\text{K}} = 147.6 \: \frac{\text{J}}{\text{K}}\). We can use this to calculate that, for evaporation of one mole of water at \(25^\text{o} \text{C}\) to a water vapor pressure of \(23.8 \: \text{torr}\), \(\Delta G = \Delta H - T \Delta S = 44.0 \: \text{kJ} - \left( 298.15 \: \text{K} \right) \left( 147.6 \: \frac{\text{J}}{\text{K}} \right) = 0.00 \: \text{kJ}\). This is the condition we expected for equilibrium.
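Putting the pieces together, a short sketch (ours; it assumes, as the text does, ideal-gas behavior and a pressure-independent \(\Delta H\)) reproduces these numbers and solves \(\Delta G = 0\) directly for the predicted vapor pressure:

```python
# Pressure dependence of the vapor entropy, S(P) = S(1 atm) - R*ln(P),
# and the resulting prediction of the equilibrium vapor pressure at 25 C.
from math import log, exp

R = 8.314           # J/(mol K)
T = 298.15          # K
dH = 44000.0        # J, evaporation of one mole of water
S_liq = 69.9        # J/K, one mole of liquid water at 25 C
S_gas_1atm = 188.8  # J/K, one mole of water vapor at 25 C, 1.00 atm

def S_gas(P_atm):
    # From S(P2) - S(P1) = -R*ln(P2/P1) with P1 = 1.00 atm.
    return S_gas_1atm - R * log(P_atm)

P = 23.8 / 760.0                # 23.8 torr expressed in atm
dS = S_gas(P) - S_liq           # ~147.7 J/K (147.6 in the rounded text)
dG = dH - T * dS                # ~0 on the scale of dH: equilibrium
print(f"dS = {dS:.1f} J/K, dG = {dG:.0f} J")

# Solving dG = 0 for the pressure gives the predicted vapor pressure:
P_eq = exp(-(dH / T - (S_gas_1atm - S_liq)) / R)
print(f"predicted vapor pressure: {P_eq * 760:.1f} torr")  # ~24 torr vs. 23.8 observed
```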
We can conclude that the evaporation of water when no vapor is present initially is a spontaneous process with \(\Delta G < 0\), and the evaporation continues until the water vapor has reached its equilibrium vapor pressure, at which point \(\Delta G = 0\).
Thermodynamic Description of Reaction Equilibrium
Having developed a thermodynamic understanding of phase equilibrium, it proves to be even more useful to examine the thermodynamic description of reaction equilibrium to understand why the reactants and products come to equilibrium at the specific values that are observed.
Recall that \(\Delta G = \Delta H - T \Delta S < 0\) for a spontaneous process, and \(\Delta G = \Delta H - T \Delta S = 0\) at equilibrium. From these relations, we would predict that most (but not all) exothermic processes with \(\Delta H < 0\) are spontaneous, because all such processes increase the entropy of the surroundings when they occur. Similarly, we would predict that most (but not all) processes with \(\Delta S > 0\) are spontaneous.
We try applying these conclusions to the synthesis of ammonia
\[\ce{N_2} \left( g \right) + 3 \ce{H_2} \left( g \right) \rightarrow 2 \ce{NH_3} \left( g \right)\]
at \(298 \: \text{K}\), for which we find that \(\Delta S^0 = -198 \: \frac{\text{J}}{\text{mol K}}\). Note that \(\Delta S^0 < 0\) because the reaction reduces the total number of gas molecules during ammonia synthesis, thus reducing \(W\), the number of ways of arranging the atoms in these molecules. \(\Delta S^0 < 0\) suggests that ammonia synthesis should not occur at all. However, \(\Delta H^0 = -92.2 \: \frac{\text{kJ}}{\text{mol}}\). Overall, we find that \(\Delta G^0 = -33.0 \: \frac{\text{kJ}}{\text{mol}}\) at \(298 \: \text{K}\), which suggests that the synthesis of ammonia is spontaneous.
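As a quick check on these numbers (small differences from the quoted \(-33.0 \: \frac{\text{kJ}}{\text{mol}}\) reflect rounding in the tabulated inputs), a one-line calculation in Python:

```python
# Checking the quoted ammonia synthesis values at 298 K: dG0 = dH0 - T*dS0.
dH0 = -92200.0   # J/mol
dS0 = -198.0     # J/(mol K)
T = 298.0        # K

dG0 = dH0 - T * dS0
print(f"dG0 = {dG0/1000:.1f} kJ/mol")  # -33.2 kJ/mol: spontaneous at standard conditions
```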
Given this analysis, we are now pressed to ask: if ammonia synthesis is predicted to be spontaneous, why does the reaction come to equilibrium without fully consuming all of the reactants? The answer lies in a more careful examination of the values given: \(\Delta S^0\), \(\Delta H^0\), and \(\Delta G^0\) are the values for this reaction at standard conditions, which means that all of the gases in the reactants and products are taken to be at \(1 \: \text{atm}\) pressure. Thus, the fact that \(\Delta G^0 < 0\) for the synthesis of ammonia at standard conditions means that, if all three gases are present at \(1 \: \text{atm}\) pressure, the reaction will spontaneously produce an increase in the amount of \(\ce{NH_3}\). Note that this will reduce the pressure of the \(\ce{N_2}\) and \(\ce{H_2}\) and increase the pressure of the \(\ce{NH_3}\). This changes the value of \(\Delta S\) and thus of \(\Delta G\), because as we already know the entropies of all three gases depend on their pressures. As the pressure of \(\ce{NH_3}\) increases, its entropy decreases, and as the pressures of the reactant gases decrease, their entropies increase. The result is that \(\Delta S\) becomes increasingly negative. The reaction creates more \(\ce{NH_3}\) until the value of \(\Delta S\) is sufficiently negative that \(\Delta G = \Delta H - T \Delta S = 0\).
From this analysis, we can say by looking at \(\Delta S^0\), \(\Delta H^0\), and \(\Delta G^0\) that, since \(\Delta G^0 < 0\) for ammonia synthesis, reaction equilibrium results in production of more product and less reactant than at standard conditions. Moreover, the more negative \(\Delta G^0\) is, the more strongly favored are the products over the reactants at equilibrium. By contrast, the more positive \(\Delta G^0\) is, the more strongly favored are the reactants over the products at equilibrium.
Thermodynamic Description of the Equilibrium Constant
Thermodynamics can also provide a quantitative understanding of the equilibrium constant. Recall that the condition for equilibrium is that \(\Delta G = 0\). As noted before, \(\Delta G\) depends on the pressures of the gases in the reaction mixture, because \(\Delta S\) depends on these pressures. Though we will not prove it here, it can be shown by application of the relationship between entropy and pressure to a reaction that the relationship between \(\Delta G\) and the pressures of the gases is given by the following equation:
\[\Delta G = \Delta G^0 + RT \text{ln} \left( Q \right)\]
(Recall again that the superscript \(^0\) refers to standard pressure of \(1 \: \text{atm}\). \(\Delta G^0\) is the difference between the free energies of the products and reactants when all gases are at \(1 \: \text{atm}\) pressure.) In this equation, \(Q\) is a quotient of partial pressures of the gases in the reaction mixture. In this quotient, each product gas appears in the numerator with an exponent equal to its stoichiometric coefficient, and each reactant gas appears in the denominator also with its corresponding exponent. For example, for the reaction
\[\ce{H_2} \left( g \right) + \ce{I_2} \left( g \right) \rightarrow 2 \ce{HI} \left( g \right)\]
\[Q = \frac{P^2_{HI}}{P_{H_2} P_{I_2}}\]
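To illustrate how \(Q\) enters the free energy expression, the following sketch evaluates \(\Delta G = \Delta G^0 + RT \text{ln} \left( Q \right)\) for this reaction; the partial pressures and the value of \(\Delta G^0\) below are made-up illustrative numbers, not measured data:

```python
# Reaction quotient Q and dG = dG0 + RT*ln(Q) for H2(g) + I2(g) -> 2 HI(g).
from math import log

R = 8.314   # J/(mol K)
T = 298.0   # K

def Q(P_HI, P_H2, P_I2):
    # Products in the numerator, reactants in the denominator,
    # each raised to its stoichiometric coefficient; pressures in atm.
    return P_HI**2 / (P_H2 * P_I2)

def dG(dG0, q):
    # Free energy change (J/mol) at reaction quotient q.
    return dG0 + R * T * log(q)

dG0 = -10000.0   # hypothetical standard free energy change, J/mol
print(dG(dG0, Q(0.5, 1.0, 1.0)))    # Q < 1: dG more negative than dG0
print(dG(dG0, Q(10.0, 0.1, 0.1)))   # Q >> 1: dG can turn positive
```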
However, if the pressures in \(Q\) are the equilibrium partial pressures, then \(Q\) has the same value as \(K_p\), the equilibrium constant, by definition. Moreover, if the pressures are at equilibrium, we know that \(\Delta G = 0\). Substituting both conditions into the equation above, we can conclude that
\[\Delta G^0 = - \left( RT \text{ln} \left( K_p \right) \right)\]
This is an exceptionally important relationship, because it relates two very different observations. To understand this significance, consider first the case where \(\Delta G^0 < 0\). We have previously reasoned that, in this case, the reaction equilibrium will favor the products. From the above equation we can note that, if \(\Delta G^0 < 0\), it must be that \(K_p > 1\). Furthermore, if \(\Delta G^0\) is a large negative number, \(K_p\) is a very large number. By contrast, if \(\Delta G^0\) is a large positive number, \(K_p\) will be a very small (though positive) number much less than 1. In this case, the reactants will be strongly favored at equilibrium.
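Since \(\Delta G^0 = -RT \text{ln} \left( K_p \right)\) inverts to \(K_p = e^{-\Delta G^0 / RT}\), these statements are easy to check numerically; a minimal sketch, using the ammonia value \(\Delta G^0 \approx -33.0 \: \frac{\text{kJ}}{\text{mol}}\) from earlier in this section:

```python
# The equilibrium constant from the standard free energy change:
# Kp = exp(-dG0/(R*T)), with dG0 in J/mol.
from math import exp

R = 8.314   # J/(mol K)
T = 298.0   # K

def Kp(dG0):
    return exp(-dG0 / (R * T))

print(f"{Kp(-33000.0):.2e}")   # ~6e+05: products strongly favored
print(f"{Kp(+33000.0):.2e}")   # ~2e-06: reactants strongly favored
print(f"{Kp(0.0):.2e}")        # 1.00e+00: neither favored
```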
Note that the thermodynamic description of equilibrium and the dynamic description of equilibrium are complementary. Both predict the same equilibrium. In general, the thermodynamic arguments give us an understanding of the conditions under which equilibrium occurs, and the dynamic arguments help us understand how the equilibrium conditions are achieved.
Review and Discussion Questions
Each possible sequence of the 52 cards in a deck is equally probable. However, when you shuffle a deck and then examine the sequence, the deck is never ordered. Explain why in terms of microstates, macrostates, and entropy.
Assess the validity of the statement, "In all spontaneous processes, the system moves toward a state of lowest energy." Correct any errors you identify.
In each case, determine whether spontaneity is expected at low temperature, high temperature, any temperature, or no temperature:
\(\Delta H^0 > 0\), \(\Delta S^0 > 0\)
\(\Delta H^0 < 0\), \(\Delta S^0 > 0\)
\(\Delta H^0 > 0\), \(\Delta S^0 < 0\)
\(\Delta H^0 < 0\), \(\Delta S^0 < 0\)
Using thermodynamic equilibrium arguments, explain why a substance with weaker intermolecular forces has a greater vapor pressure than one with stronger intermolecular forces.
Why does the entropy of a gas increase as the volume of the gas increases? Why does the entropy decrease as the pressure increases?
For each of the following reactions, calculate the value of \(\Delta S^0\), \(\Delta H^0\), and \(\Delta G^0\) at \(T = 298 \: \text{K}\) and use these to predict whether equilibrium will favor products or reactants at \(T = 298 \: \text{K}\). Also calculate \(K_p\).
\(2 \ce{CO} \left( g \right) + \ce{O_2} \left( g \right) \rightarrow 2 \ce{CO_2} \left( g \right)\)
\(\ce{O_3} \left( g \right) + \ce{NO} \left( g \right) \rightarrow \ce{NO_2} \left( g \right) + \ce{O_2} \left( g \right)\)
\(2 \ce{O_3} \left( g \right) \rightarrow 3 \ce{O_2} \left( g \right)\)
Predict the sign of the entropy for the reaction
\[2 \ce{H_2} \left( g \right) + \ce{O_2} \left( g \right) \rightarrow 2 \ce{H_2O} \left( g \right)\]
Give an explanation, based on entropy and the Second Law, of why this reaction occurs spontaneously.
For the reaction \(\ce{H_2} \left( g \right) \rightarrow 2 \ce{H} \left( g \right)\), predict the sign of both \(\Delta H^0\) and \(\Delta S^0\). Should this reaction be spontaneous at high temperature or at low temperature? Explain.
For each of the reactions listed above, predict whether increases in temperature will shift the reaction equilibrium more towards products or more towards reactants.
Using the general definition of \(\Delta G\) and the definition of \(Q\), show that for a given set of initial partial pressures where \(Q\) is larger than \(K_p\), the reaction will spontaneously create more reactants. Also show that if \(Q\) is smaller than \(K_p\), the reaction will spontaneously create more products.
Contributors and Attributions
John S. Hutchinson (Rice University; Chemistry)