7. Thermodynamic equilibrium
7.0 Second Derivatives of Thermodynamic Potentials
Postulate 2 implies that \(S\) is maximized at constant \(U\): \(dS=0\), \(d^2S < 0\). The second inequality ensures that the point where the slope is zero has negative curvature, so it is indeed a maximum.
Postulate 2’ similarly implies that U is minimized at constant \(S: dU=0, d^2U > 0\). The second inequality ensures that U has a positive curvature where its slope vanishes, yielding a minimum.
Similarly, \(d^2G >0\) at constant \(T\) and \(P\), \(d^2A >0\) at constant \(T\), and \(d^2H >0\) at constant \(P\). These potentials are derived from energy and minimize \(U_{tot}\) for the open system with the appropriate intensive variable. For Massieu functions, \(d^2f < 0\), as they are derived from entropy.
There is a bewildering array of second derivatives of all these potentials. Fortunately, across all the possible potentials only three of them (not counting derivatives with respect to mole numbers) are independent. So \((\partial P/ \partial T)_S\), \(( \partial \mu/ \partial n)_V\), \(( \partial G/ \partial S)_V\), \(( \partial U/ \partial T)_P\), etc. cannot all be independent of one another.
The reason is that thermodynamic potentials are state functions. For a potential \(z\) with \(dz=X\,dx+Y\,dy+\sum_i \mu_i dn_i\), where \(X=(\partial z/ \partial x)_y\) and \(Y=(\partial z/ \partial y)_x\), only the following are independent second derivatives:
 \(\dfrac{\partial X}{\partial x} = \dfrac{\partial^2 z}{\partial x^2} \)
 \(\dfrac{\partial Y}{\partial y} = \dfrac{\partial^2 z}{\partial y^2} \)
 \(\dfrac{\partial X}{\partial y} = \dfrac{\partial^2 z}{\partial x \partial y} \)
\(\dfrac{\partial Y}{\partial x} = \dfrac{\partial^2 z}{\partial y \partial x} = \dfrac{\partial^2 z}{\partial x \partial y} \) is not independent of the third derivative above, because mixed partials of a state function commute. Thus there is no fourth independent second derivative. In addition, Maxwell's relations and Jacobians can be used to calculate the derivatives of any other potential, once three have been picked for potential \(z\). This is so because other potentials are related to \(z\) by Legendre transforms, and do not contain independent information about stability. They are merely alternative formulations of \(U_{tot}\) for the closed system, cleverly written in terms of only variables of the open subsystem of interest.
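The commuting of mixed partials, and the Maxwell relations it generates, can be checked numerically for a model state function. A minimal sketch, assuming the pressure-dependent part of an ideal-gas Gibbs energy \(G(T,P)=nRT\ln(P/P^{(0)})\) as the model (the Maxwell relation tested is \((\partial V/\partial T)_P = -(\partial S/\partial P)_T\), both equal to \(\partial^2 G/\partial T\,\partial P\)):

```python
# Numerical check of a Maxwell relation, (dV/dT)_P = -(dS/dP)_T, both of
# which are the mixed partial d^2G/dTdP. Model state function: the
# P-dependent part of an ideal-gas Gibbs energy, G(T,P) = nRT ln(P/P0).
import math

n, R, P0 = 1.0, 8.314, 1.0e5        # mol, J/(mol K), reference pressure in Pa
hT, hP = 1e-3, 1.0                  # finite-difference steps scaled to T and P

def G(T, P):
    return n * R * T * math.log(P / P0)

def V(T, P):                        # V = (dG/dP)_T by central difference
    return (G(T, P + hP) - G(T, P - hP)) / (2 * hP)

def S(T, P):                        # S = -(dG/dT)_P by central difference
    return -(G(T + hT, P) - G(T - hT, P)) / (2 * hT)

T, P = 300.0, 2.0e5
dV_dT = (V(T + hT, P) - V(T - hT, P)) / (2 * hT)
dS_dP = (S(T, P + hP) - S(T, P - hP)) / (2 * hP)

print(dV_dT, -dS_dP)                # the two mixed partials agree (= nR/P)
```

Differentiating in either order gives the same number, which is the point: there is only one mixed second derivative.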
In practice, chemical problems usually involve constant \(T\) and \(P\), so the derivatives of the Gibbs free energy are usually used:
1.
\[\left(\dfrac{\partial S}{\partial T}\right)_P =  -\dfrac{\partial^2 G}{\partial T^2}\]
\[ \Rightarrow \dfrac{T}{n} \left( \dfrac{\partial S}{\partial T}\right)_P = \dfrac{1}{n}\left(\dfrac{dq_{rev}}{dT}\right)_P = c_P \; \text{(molar heat capacity at constant P)}\]
2.
\[ \dfrac{ \partial V}{\partial P} = \dfrac{\partial^2G}{\partial P^2}\]
\[ \Rightarrow -\dfrac{1}{V} \left( \dfrac{\partial V}{ \partial P}\right)_T = \kappa \; \text{(isothermal compressibility)}\]
3.
\[ \dfrac{\partial V}{ \partial T} = \dfrac{\partial ^2G}{\partial P \partial T}\]
\[ \Rightarrow \dfrac{1}{V} \left( \dfrac{\partial V}{\partial T} \right)_P = \alpha \; \text{(isobaric thermal expansion coefficient)}\]
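For an ideal gas these three response functions take simple closed forms (\(\alpha = 1/T\), \(\kappa = 1/P\)), which makes a quick numerical sanity check possible. A sketch using central finite differences on \(V(T,P)=nRT/P\):

```python
# For an ideal gas V(T,P) = nRT/P, the response functions reduce to
# alpha = 1/T and kappa = 1/P (the minus sign in kappa's definition
# makes it positive). Central-finite-difference sketch:
R, n = 8.314, 1.0          # J/(mol K), mol

def V(T, P):               # ideal-gas equation of state
    return n * R * T / P

T, P = 300.0, 1.0e5        # K, Pa
hT, hP = 1e-3, 1.0         # step sizes scaled to each variable

alpha = (V(T + hT, P) - V(T - hT, P)) / (2 * hT) / V(T, P)   # (1/V)(dV/dT)_P
kappa = -(V(T, P + hP) - V(T, P - hP)) / (2 * hP) / V(T, P)  # -(1/V)(dV/dP)_T

print(alpha * T, kappa * P)   # both ≈ 1 for an ideal gas
```

Note that \(\kappa\) comes out positive only because of the minus sign in its definition; \((\partial V/\partial P)_T\) itself is negative.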
7.1 Thermodynamic Equilibrium
Theorem: Thermodynamic Equilibrium 

\(c_P > 0\), \(c_V > 0\), and \(\kappa > 0\) in a simple closed system at equilibrium. 
Proof 

For convenience let's introduce a notation: \[ u_{ss}= \left( \dfrac{\partial^2 u}{\partial s^2} \right)_v = \left( \dfrac{\partial T}{\partial s} \right)_v, \qquad u_{sv}= \dfrac{\partial^2 u}{\partial s\, \partial v} = \left( \dfrac{\partial T}{\partial v} \right)_s, \qquad u_{vv}= \left( \dfrac{\partial^2 u}{\partial v^2} \right)_s = -\left( \dfrac{\partial P}{\partial v} \right)_s.\] We need to show that \( \kappa = -\dfrac{1}{V} \left(\dfrac{\partial V}{\partial P} \right)_T > 0\). Let \[ d^2u = \dfrac{1}{2!} d(u_sds + u_v dv) = \dfrac{1}{2!} ( u_{ss}ds^2 + 2u_{sv} ds\, dv + u_{vv} dv^2),\] or in matrix form \[ d^2u = \dfrac{1}{2} \begin{pmatrix} ds & dv \end{pmatrix} \begin{pmatrix} u_{ss} & u_{sv} \\ u_{sv} & u_{vv} \end{pmatrix} \begin{pmatrix} ds \\ dv \end{pmatrix}.\] For this to be a minimum, the two eigenvalues of the matrix must be positive (otherwise, \(d^2u\) will not have upward curvature in both dimensions \(ds\) and \(dv\)): \[ \lambda = \dfrac{u_{ss}+u_{vv}}{2} \pm \dfrac{1}{2} \{ (u_{ss} - u_{vv})^2 + 4u_{sv}^2\}^{1/2}.\] Positivity of both eigenvalues \(\lambda\) implies \(u_{ss} > 0\), \(u_{vv} > 0\), and \(u_{ss}u_{vv} - u_{sv}^2 > 0\).
Now, \[ u_{ss} = \left(\dfrac{\partial T}{\partial S}\right)_V = \dfrac{T}{c_v} > 0\] (so \(c_v > 0\)), and \[ u_{ss}u_{vv} - u_{sv}^2 = \dfrac{T}{c_v\kappa V} > 0, \] therefore \(\kappa >0\) (see below for a detailed derivation). We already demonstrated that \(c_P = c_v+\dfrac{Tv \alpha^2}{\kappa}\), so \(c_P > 0\) also. QED. 
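The eigenvalue criterion above can be sanity-checked numerically: for a \(2\times2\) symmetric matrix, both eigenvalues are positive exactly when \(u_{ss}>0\) and the determinant \(u_{ss}u_{vv}-u_{sv}^2>0\). A sketch with hypothetical curvature values chosen only for illustration:

```python
# Stability check for the quadratic form d^2u: both eigenvalues of
# [[u_ss, u_sv], [u_sv, u_vv]] are positive exactly when u_ss > 0 and
# u_ss*u_vv - u_sv**2 > 0. Numbers are hypothetical, for illustration.
import math

def eigenvalues(u_ss, u_sv, u_vv):
    # closed-form eigenvalues of a 2x2 symmetric matrix
    mean = 0.5 * (u_ss + u_vv)
    half_gap = math.sqrt((0.5 * (u_ss - u_vv))**2 + u_sv**2)
    return mean - half_gap, mean + half_gap

def stable(u_ss, u_sv, u_vv):
    return u_ss > 0 and u_ss * u_vv - u_sv**2 > 0

lo, hi = eigenvalues(3.0, 1.0, 2.0)          # det = 5 > 0: stable
print(lo > 0 and hi > 0, stable(3.0, 1.0, 2.0))
lo, hi = eigenvalues(3.0, 3.0, 2.0)          # det = -3 < 0: unstable
print(lo > 0, stable(3.0, 3.0, 2.0))
```

The second case shows how a large off-diagonal coupling \(u_{sv}\) destroys stability even though both diagonal curvatures are positive.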
Alternative Proof 

Again, \[ d^2u =\dfrac{1}{2!} (u_{ss}\,ds^2 + 2u_{sv} ds\,dv + u_{vv} \,dv^2).\] Using \(dT = u_{ss}\,ds + u_{sv}\,dv\) to eliminate \(ds\) (completing the square), \[ d^2u = \dfrac{1}{2} \left( \dfrac{1}{u_{ss}} dT^2 - \dfrac{2u_{sv}}{u_{ss}} dT\,dv + \dfrac{u_{sv}^2}{u_{ss}} dv^2 + \dfrac{2u_{sv}}{u_{ss}} dT\,dv - \dfrac{2u_{sv}^2}{u_{ss}}dv^2 + u_{vv} dv^2 \right) = \dfrac{1}{2}\left( \dfrac{1}{u_{ss}}dT^2 + \left( u_{vv} - \dfrac{u_{sv}^2}{u_{ss}}\right) dv^2 \right)\] \( \Rightarrow u_{ss} > 0\) and \(u_{vv} - \dfrac{u_{sv}^2}{u_{ss}} > 0\), as demonstrated already using the matrix method. Looking now at the second combination of derivatives in more detail, \[ u_{vv} - \dfrac{u_{sv}^2}{u_{ss}} = -\left( \dfrac{\partial P}{\partial v}\right)_T = \dfrac{1}{v\kappa} > 0 \;\Rightarrow\; \kappa > 0.\]
7.2 Le Châtelier’s principle
The principle: A system is in stable equilibrium if a perturbation induces processes that restore equilibrium (\(d^2S < 0\) or \(d^2U > 0\)).
Example 7.1 

Let a small amount of heat be added to or subtracted from one side of a diathermal rigid wall. The resulting subsystem “2” is now slightly out of equilibrium: \(T_2 = T_1 + \delta T \neq T_1\). We also know that \[ dS = dS_1 + dS_2 = \dfrac{dU_1}{T_1} + \dfrac{dU_2}{T_2} = \left( \dfrac{1}{T_1} - \dfrac{1}{T_2}\right) dU_1\] will be the change of entropy as equilibrium is reestablished. In the above formula, we used \(dU_1 = -dU_2\) by postulate 1 (energy conservation), and \(dS_i = dU_i/T_i\); the latter equality holds if \(\delta T\) is small enough. 
Clearly, \(dS \neq 0\) for any small energy flow \(dU_1\), so the system is out of equilibrium. Equilibrium is restored spontaneously when the temperatures \(T_1\) and \(T_2\) are equalized again. This occurs through heat flow across the diathermal membrane. What is the direction of that heat flow? To satisfy Postulate 2 of thermodynamics, \(dS>0\).
\[ dS = \left( \dfrac{1}{T_1} - \dfrac{1}{T_2} \right) dU_1 = \dfrac{T_2 - T_1}{T_1 T_2}\, dU_1 \approx \dfrac{\delta T}{T^2}\, dU_1 > 0,\] and since \(T^2 > 0\), \(dU_1\) must have the same sign as \(\delta T\).
If \(\delta T > 0\), then \(dU_1 > 0\) and energy flows from 2 to 1 to increase the energy in “1” so \(T_1\) increases towards \(T_2\). If \(\delta T < 0\), then \(dU_1 < 0\) and energy flows from 1 to 2 to increase the energy in “2” so \(T_2\) increases towards \(T_1\).
Therefore,
 after a perturbation, changes in the system oppose the perturbation.
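The heat-flow argument can be checked with a small numeric example (heat capacities and temperatures below are hypothetical): when two identical blocks at slightly different temperatures exchange heat from hot to cold, the total entropy rises and the temperature difference shrinks, exactly as Le Châtelier's principle demands.

```python
# Numeric sanity check of the heat-flow argument: two identical blocks of
# heat capacity C (hypothetical values) exchange heat q from hot to cold;
# the total entropy rises and the temperatures converge.
import math

C = 10.0                 # J/K per block (assumed)
T1, T2 = 298.0, 302.0    # cold ("1") and hot ("2") sides, K
q = 1.0                  # J of heat flowing from 2 to 1

T1_new, T2_new = T1 + q / C, T2 - q / C
dS = C * math.log(T1_new / T1) + C * math.log(T2_new / T2)
print(dS > 0)                              # entropy increases: True
print(abs(T2_new - T1_new) < T2 - T1)      # temperatures converge: True
```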
7.3 The Low Temperature Limit
The Nernst Postulate 3 (Planck version) states that \( \lim_{T\rightarrow 0} S =0\). Microscopically this implies only a single microstate is populated, so \(k_B\ln W = k_B \ln(1)=0\) (a choice of several microstates introduces disorder).
Ideally, this may be true: if two states nominally have the same energy \(E\), some small coupling \(V\) to the environment will lower the energy of one of them, and as \(T \to 0\), only \(E_1\), not \(E_2\), is populated.† In practice, the Nernst postulate is not appropriate for systems with barriers \(> k_BT\): the system may be trapped in a state of higher energy, \(E_2\). Even though a lower-energy state \(E_1\) exists, it cannot be reached from \(E_2\). Glasses are a good example of such a situation: the glass structure is not the energy minimum (the crystal usually is), but the dynamics of units within the glass are so slow that equilibrium cannot be reached. Thus \(S > 0\) even when \(T = 0\), especially because things do not move very fast at low temperature.
Often we can get away with treating the system as though Nernst's postulate were true anyhow. If an experiment is done on a time scale much shorter than the barrier crossing time, one may take the minimum one happens to be in as “the one.” Effectively, the experiment knows nothing about the lower state on the other side of the barrier.
Because \(S(T) = \int_0^T \dfrac{c_V}{T'}\,dT'\) at constant volume, \(c_V \to 0\) as \(T \to 0\) (otherwise \(S\) is not finite).
Similarly, \(c_P \to 0\) as \(T \to 0\).
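The divergence argument can be made concrete numerically: with a constant \(c_V\) the integral \(\int (c_V/T')\,dT'\) grows without bound as the lower limit approaches zero, while a Debye-like \(c_V \propto T^3\) keeps it finite. A crude sketch in hypothetical units:

```python
# Why c_V must vanish as T -> 0: S(T) = integral of (c_V/T') dT' diverges
# logarithmically if c_V stays constant, but converges for a Debye-like
# solid with c_V ~ T^3. Midpoint-rule sketch with a shrinking lower cutoff.
def entropy(c_of_T, T_lo, T_hi, n=100000):
    # midpoint Riemann sum of c(T')/T' from T_lo to T_hi
    dT = (T_hi - T_lo) / n
    return sum(c_of_T(T_lo + (i + 0.5) * dT) / (T_lo + (i + 0.5) * dT) * dT
               for i in range(n))

const_c = lambda T: 1.0            # constant c_V (unphysical at low T)
debye_c = lambda T: 1.0 * T**3     # Debye-like c_V

cutoffs = (1e-2, 1e-4, 1e-6)
s_const = [entropy(const_c, T_lo, 1.0) for T_lo in cutoffs]
s_debye = [entropy(debye_c, T_lo, 1.0) for T_lo in cutoffs]
print(s_const)   # keeps growing ~ ln(1/T_lo): the integral diverges
print(s_debye)   # converges to 1/3: S stays finite
```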
If \(S \to 0\) as \(T \to 0\), then the entropy change \(\Delta S\) of any isothermal process must also go to zero as \(T \to 0\):
\[ \lim_{T \to 0} \Delta S = 0.\]
Use these consequences of postulate 3 with caution. If you know that your system is not in a minimal energy state as \(T\) approaches 0, at least make sure it really cannot get there, so that for all practical purposes your system is in its minimal energy state.
7.4 Chemical equilibrium at constant temperature and pressure
Consider a chemical reaction such as \(aA + bB \rightarrow cC + dD\), where we define the stoichiometric coefficients \(\nu_i < 0\) for reactants and \(\nu_i > 0\) for products. This definition allows us to write the reaction as:
\[ \sum_i \nu_i X_i = 0; \quad \{n_i^{(0)}\}.\]
The curly brackets provide the initial condition, e.g. particle number, pressure, or concentration of all the reactants and products.
Because of mass balance, the changes in the \(n_i\) are not independent: \[ dn_i = \nu_i\, d\xi.\] Here, \(\xi\) is a “progress parameter” or “reaction coordinate.” As \(\xi\) increases or decreases from 0 (the reaction begins), the concentrations vary in proportion to the stoichiometric coefficients. Integrating, \[ n_i = n_i^{(0)} + \nu_i \xi.\]
Thus our initial condition can be written as \(\xi = 0\).
At constant \((T,P)\), equilibrium occurs when \(G\) is minimized:
\[ \left( \dfrac{\partial G}{\partial \xi} \right)_{T,P} = 0.\]
Note that the chemical potentials also vary as the reaction progresses, i.e. they depend on \(\xi\). For example, the partial pressures \(P_i\) of gaseous reactants and products change with \(\xi\) as the reaction proceeds, and this changes their chemical potentials, even for ideal gases. Differentiating the equation for \(G\), we have in the Gibbs ensemble
\[ dG = \sum_i \mu_i\, dn_i = \sum_i \nu_i \mu_i\, d\xi = 0\]
at equilibrium and constant \(T\), \(P\). It therefore follows that
\[ \Delta G \equiv \left( \dfrac{\partial G}{\partial \xi} \right)_{T,P} = \sum_i \nu_i \mu_i = 0\]
at equilibrium. This is illustrated in the figure below. Note that our beloved \(\Delta G\) is not the free energy of the system. It is the derivative of the free energy, or the change in free energy as the reaction proceeds by a unit amount \(\Delta \xi = 1\). Hence \(\Delta G \sim dG/d\xi = 0\) at equilibrium, while \(G\) is minimized and certainly not necessarily 0.
Figure 7.1: Relationship between free energy \(G\) as a function of reaction coordinate \(\xi\), and the derivative of (or change in) the free energy with respect to \(\xi\), usually called \(\Delta G(\xi) = (\partial G/\partial \xi)_{T,P}\). When \(G\) is at a minimum (equilibrium has been reached), its derivative equals zero (\(\Delta G = 0\)).
For our particle number \(n_i\), partial pressure \(P_i\), or concentration \(c_i\), we must substitute the activity \(a_i\) under nonideal conditions. But under ideal conditions, we have
\[ \mu_i = \mu_i^{(0)} + RT \ln \left( \dfrac{c_i}{c^{(0)}} \right),\]
as proved earlier in chapter 4. If needed, \(V\) can of course be calculated from the free energy as \((\partial G/\partial P)_{T,n_i}\). Inserting the concentration into our definition of \(\Delta G\), we get
\[ \Delta G = \sum_i \nu_i \mu_i = \sum_i \nu_i \mu_i^{(0)} + RT \sum_i \nu_i \ln \left( \dfrac{c_i}{c^{(0)}} \right)\]
\[ \Rightarrow \Delta G = \Delta G^{(0)} + RT \ln \prod_i \left( \dfrac{c_i}{c^{(0)}} \right)^{\nu_i} = \Delta G^{(0)} + RT \ln Q.\]
For given \(T\), \(P\), \(c_i\), the derivative \(\Delta G\) gives the change in free energy per mole when we allow the reaction to proceed by a small amount. The above is known as the mass action law, and describes how the derivative of the free energy changes with concentration (or pressure, etc.). When the derivative reaches 0, equilibrium has been established at constant \(T\), \(P\).
In the figure, initially \(\Delta G \neq 0\). At equilibrium, \(\Delta G = 0\)
\[ \Rightarrow \Delta G^{(0)} = -RT \ln K, \qquad K = \prod_i \left( \dfrac{c_{i,eq}}{c^{(0)}} \right)^{\nu_i}\]
defines the equilibrium constant. Because \(\Delta G^{(0)}\) is additive for different reactions, \(\ln K\) is additive also.
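The relation between \(\Delta G\), \(Q\), and \(K\) can be sketched numerically. The reaction and its \(\Delta G^{(0)}\) below are hypothetical, chosen only to show that \(\Delta G\) vanishes at \(Q = K\) and changes sign on either side:

```python
# Mass action sketch: Delta_G = Delta_G0 + RT ln Q and K = exp(-Delta_G0/RT),
# for a hypothetical reaction with Delta_G0 = -10 kJ/mol at 298 K.
import math

R, T = 8.314, 298.0
dG0 = -10_000.0                       # J/mol, assumed value

K = math.exp(-dG0 / (R * T))          # equilibrium constant

def dG(Q):                            # reaction free energy at quotient Q
    return dG0 + R * T * math.log(Q)

print(abs(dG(K)) < 1e-6)   # at Q = K, Delta_G = 0: True
print(dG(0.1 * K) < 0)     # Q < K: forward direction spontaneous: True
print(dG(10.0 * K) > 0)    # Q > K: reverse direction favored: True
```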
Because \(G = H - TS\),
\[ \Delta G = \Delta H - T \Delta S.\]
This formula relates the change of free energy as the reaction progresses to the changes of enthalpy and entropy. We can integrate this equation on both sides to obtain:
\[ \Delta G_{i \to f} = \int_i^f \Delta G\, d\xi = \int_i^f (\Delta H - T\Delta S)\, d\xi.\]
Integration allows you to get the free energy difference for a finite progress of the reaction. For example, let's say \(\Delta G = -20\) kJ/mol at the initial condition of a reaction. If the reaction is allowed to proceed by some very small amount, say \(\Delta\xi = 0.01\) moles, then the free energy \(G\) will indeed drop by –0.2 kJ. But this cannot continue: as equilibrium is approached, \(\Delta G \to 0\), so the free energy will drop less and less until it stops changing. To get the actual change in free energy \(\Delta G_{i \to f}\), \(\Delta G\) needs to be integrated from “i” to “f”.
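The claim that the minimum of \(G(\xi)\) falls exactly where \(\Delta G = 0\) can be checked numerically for an ideal \(A \rightleftharpoons B\) mixture, where the minimum is analytically at \(x_B = K/(1+K)\). The \(\mu^{(0)}\) values below are hypothetical:

```python
# The minimum of G(xi) for an ideal A <-> B mixture falls exactly where
# Delta_G(xi) = mu_B - mu_A = 0, i.e. at x_B = K/(1+K). The mu0 values
# are hypothetical; energies in J/mol.
import math

R, T = 8.314, 298.0
muA0, muB0 = 0.0, -2000.0

def G(xi):   # free energy per total mole; x_A = 1 - xi, x_B = xi
    return ((1 - xi) * (muA0 + R * T * math.log(1 - xi))
            + xi * (muB0 + R * T * math.log(xi)))

# brute-force scan for the minimum over xi in (0, 1)
xis = [i / 10000 for i in range(1, 10000)]
xi_min = min(xis, key=G)

K = math.exp(-(muB0 - muA0) / (R * T))
print(xi_min, K / (1 + K))   # numerical minimum matches the analytic result
```

As the text says, \(G\) keeps dropping as \(\xi\) advances, but by less and less, until the slope \(\Delta G\) vanishes at the minimum.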
Since the free energy contains the full information about the system, other thermodynamic quantities can be obtained from it. For example, the entropy can be obtained from
\[ \Delta S = -\left( \dfrac{\partial \Delta G}{\partial T} \right)_P.\]
Similarly, the enthalpy can be obtained from
\[ \Delta H = \Delta G + T \Delta S = \Delta G - T \left( \dfrac{\partial \Delta G}{\partial T} \right)_P.\]
Important: note that \(\Delta S\) and \(\Delta H\) derived this way are not expressed in terms of their natural variables (\(U\) and \(V\) for the entropy, \(S\) and \(P\) for the enthalpy). Thus their derivatives are not independent when they are written in this form. You have obtained equations of state, not fundamental relations.
Example 7.2 

The cooperative transition. This is a unimolecular reaction \(A \rightleftharpoons B\) where the free energy depends on a tunable parameter \(d\). An example would be the unfolding of a protein when the denaturant urea is added to the protein solution. Expand the dependence of the free energy in a Taylor series, \[ \Delta G^{(0)}(d) = \Delta G^{(0)}(0) - m\,d,\] where \(d\) is a thermodynamic variable (e.g. \(T\), \(P\), or a solute concentration such as [urea]). Because only A or B are present, the mole fractions are \(x_A = n_A/n\) and \(x_B = n_B/n\); also \(x_A + x_B = 1\). We can thus write the equilibrium constant as \[ K = \dfrac{x_B}{x_A} = \dfrac{x_B}{1-x_B}.\] Since \(K = e^{-\Delta G^{(0)}/RT}\), we can rewrite this as \[ x_B = \dfrac{1}{1 + e^{\Delta G^{(0)}(d)/RT}} = \dfrac{1}{1 + e^{(\Delta G^{(0)}(0) - m d)/RT}}.\] The figure below shows a plot of this equation, which is a sigmoid curve. It makes a fairly sharp transition, with slope \(m/4RT\) for \(x_B\) (equivalently \(-m/4RT\) for \(x_A\)), at a transition value \(d_0 = \Delta G^{(0)}(0)/m\). The curve shows us how the concentration of folded vs. unfolded protein switches as denaturant is added.

Fig 7.2 Titration of reversible reaction with parameter \(d\) (e.g. temperature, urea concentration, pH, etc.)
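The sigmoid and its midpoint slope can be verified numerically; the \(\Delta G^{(0)}(0)\) and \(m\) values below are hypothetical, at a rough protein-unfolding scale:

```python
# Sketch of the two-state titration curve x_B(d) = 1/(1 + exp((dG0 - m d)/RT)).
# dG0 and m are hypothetical, at a rough protein-unfolding scale.
import math

R, T = 8.314, 298.0
dG0 = 20_000.0       # J/mol at d = 0 (assumed)
m = 5_000.0          # J/mol per unit of d (assumed)
d0 = dG0 / m         # transition midpoint d_0 = dG0/m

def xB(d):
    return 1.0 / (1.0 + math.exp((dG0 - m * d) / (R * T)))

h = 1e-5
slope = (xB(d0 + h) - xB(d0 - h)) / (2 * h)   # central difference at midpoint
print(xB(d0))                                 # 0.5 at the midpoint
print(abs(slope - m / (4 * R * T)) < 1e-6)    # slope m/(4RT) there: True
```

A larger \(m\) sharpens the transition; this is the sense in which the reaction is "cooperative."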
Example 7.4: Van't Hoff Relation for Reference Enthalpy 

Because the temperature dependences of the entropy and enthalpy terms in the free energy are different, we can use the total derivative of the equilibrium constant \(K\) with respect to temperature to obtain information about the enthalpy directly. This is a case where a total derivative, instead of a partial derivative, can be useful:
\[ \dfrac{d \ln K}{dT} = -\dfrac{d}{dT}\left( \dfrac{\Delta G^{(0)}}{RT} \right) = \dfrac{\Delta G^{(0)}}{RT^2} - \dfrac{1}{RT}\dfrac{d \Delta G^{(0)}}{dT}.\]
Note that \(G\), and hence \(K\), are functions of \(T\), \(P\), and \(n_i\) as independent variables, so in the second term above we do not take further derivatives like \(\partial P/\partial T\) (\(P\) is an independent variable). Because \(\Delta S^{(0)} = -d\Delta G^{(0)}/dT\) is also true at the reference concentration, we can substitute this equation relating \(\Delta S^{(0)}\) and \(\Delta G^{(0)}\) to remove the derivative in the equation for \(\ln K\), obtaining
\[ \dfrac{d \ln K}{dT} = \dfrac{\Delta G^{(0)} + T \Delta S^{(0)}}{RT^2} = \dfrac{\Delta H^{(0)}}{RT^2}. \]
If we know \(K(T,P)\), we immediately know \(\Delta H^{(0)}\). Integrating this derivative (treating \(\Delta H^{(0)}\) as approximately temperature independent) we obtain
\[ \ln K(T_2) - \ln K(T_1) = -\dfrac{\Delta H^{(0)}}{R}\left( \dfrac{1}{T_2} - \dfrac{1}{T_1} \right).\]
Changes in \(K(T)\) can be obtained from \(\Delta H^{(0)}\) alone, without explicit use of \(\Delta S^{(0)}\) or \(\Delta G^{(0)}\). This is a nice bonus when you are doing calorimetry, because enthalpy is much easier to measure calorimetrically than free energy or entropy.
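The van't Hoff relation can be verified against an exact \(\ln K\) built from a (hypothetical) constant \(\Delta H^{(0)}\) and \(\Delta S^{(0)}\):

```python
# Van't Hoff sketch: with Delta_H0 approximately constant,
# ln K(T2) - ln K(T1) = -(Delta_H0/R) * (1/T2 - 1/T1).
# Hypothetical exothermic reaction.
import math

R = 8.314
dH0 = -50_000.0                   # J/mol (assumed, T-independent)
dS0 = -100.0                      # J/(mol K) (assumed)

def lnK(T):                       # exact ln K from dG0(T) = dH0 - T*dS0
    return -(dH0 - T * dS0) / (R * T)

T1, T2 = 298.0, 310.0
vant_hoff = -(dH0 / R) * (1 / T2 - 1 / T1)
print(lnK(T2) - lnK(T1), vant_hoff)    # the two agree when dH0 is constant
```

For this exothermic example \(K\) falls as \(T\) rises, as Le Châtelier's principle predicts.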
Example 7.3 

Consider a reaction \(A \rightleftharpoons B\) such as a transition from phase A to phase B. We will assume that pressure is constant (hence \(c_P\)), but we want to determine the temperature dependence of the equilibrium constant. Phase transitions often involve a change in heat capacity between the states. Using the superscript \(^{(P)}\) to indicate a constant reference pressure (usually 1 atm), we can write for the enthalpy and entropy
\[ \Delta H^{(P)}(T) = \Delta H^{(P)}(T_0) + \int_{T_0}^{T} \Delta c_P\, dT', \qquad \Delta S^{(P)}(T) = \Delta S^{(P)}(T_0) + \int_{T_0}^{T} \dfrac{\Delta c_P}{T'}\, dT'.\]
If \(\Delta c_P\) is temperature independent, we can integrate these two equations and combine them to obtain the free energy change:
\[ \Delta G^{(P)}(T) = \Delta H^{(P)}(T_0) + \Delta c_P (T - T_0) - T \left[ \Delta S^{(P)}(T_0) + \Delta c_P \ln \dfrac{T}{T_0} \right].\]
Note: \(\Delta S^{(P)}\) can indeed be derived from \(\Delta G^{(P)}\) by partial derivative with respect to \(T\), \(\Delta S^{(P)} = -(\partial \Delta G^{(P)}/\partial T)_P\), consistent with the free energy minimum condition.
Note: as in the case of \(\Delta G\), \(\Delta c_P\) is not the heat capacity of the reaction mixture after reaction minus the heat capacity before reaction, unless the reaction starts with pure A and goes to pure B. Rather, it is a derivative with respect to the reaction coordinate, just like in the case of \(\Delta G\). Consider it in detail for the simple reaction \(A \rightleftharpoons B\). Let's take as reaction coordinate \(n_B\), the number of moles of B. Thus \(n_A = n - n_B\). The heat capacity of the reaction mix is \[ C_P = n_A c_{P,A} + n_B c_{P,B} = (n - n_B)\, c_{P,A} + n_B\, c_{P,B}\] (ideal mixture case). Taking the derivative, \[ \Delta c_P = \dfrac{\partial C_P}{\partial n_B} = c_{P,B} - c_{P,A}. \]
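The combined \(\Delta G^{(P)}(T)\) expression can be checked for internal consistency: its temperature derivative should reproduce \(-\Delta S^{(P)}(T)\). A sketch with hypothetical reference values at \(T_0 = 298\) K:

```python
# Temperature dependence of Delta_G with a constant Delta_c_P
# (hypothetical reference values at T0 = 298 K):
#   dH(T) = dH0 + dcp*(T - T0),  dS(T) = dS0 + dcp*ln(T/T0),
#   dG(T) = dH(T) - T*dS(T).
import math

T0 = 298.0
dH0, dS0, dcp = 30_000.0, 80.0, 500.0   # J/mol, J/(mol K), J/(mol K); assumed

def dH(T): return dH0 + dcp * (T - T0)
def dS(T): return dS0 + dcp * math.log(T / T0)
def dG(T): return dH(T) - T * dS(T)

# consistency check: dS = -(d dG / dT), by central difference
T, h = 350.0, 1e-3
numeric = -(dG(T + h) - dG(T - h)) / (2 * h)
print(numeric, dS(T))   # the two agree
```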
Example 7.5 

Interpretation of \(\Delta G\) for system and environment
The Gibbs free energy is minimized for an open system in contact with a \(T, P\) reservoir:
\[ (dG)_{T,P} \leq 0.\]
For a moment, let us explicitly write “sys” for the system thermodynamic variables:
\[ dG_{sys} = dH_{sys} - T\, dS_{sys}.\]
For a quasistatic heat release at constant pressure, \(dS_{env} = \dfrac{dq_{env}}{T} = -\dfrac{dH_{sys}}{T}\). We can combine the two entropies into
\[ dS_{tot} = dS_{sys} + dS_{env} = dS_{sys} - \dfrac{dH_{sys}}{T} = -\dfrac{dG_{sys}}{T}.\]
As we showed earlier in conjunction with the Helmholtz potential (Ch. 4), when the system is connected to a bath that keeps intensive parameters \(I_{ij}\) fixed (e.g. \(I_{US} = T = (\partial U/\partial S)_{V,n}\)), then the potential whose natural variables are those \(I_{ij}\) is minimized when the total energy is minimized (equivalently, when the total entropy is maximized):
\[ dS_{tot} \geq 0 \;\Leftrightarrow\; dG_{sys} = -T\, dS_{tot} \leq 0.\]
Note that postulate P2 (the “Second Law”) is not violated if virtually all the available free energy is converted to work: if \(W \to -\Delta G\), \(dS_{tot}\) remains slightly positive. Therefore \(-\Delta G\) is the maximum amount of work available from a constant \(T, P\) reaction per mole of reaction. \(W > -\Delta G\) would imply \(dS_{tot} < 0\), violating postulate 2. 
7.5 Partial molar quantities: derivatives with respect to \(n_i\) instead of \(\xi\)
The thermodynamic potentials, including the Gibbs potential, are rather unusual functions because not only is
\[dG=-S\,dT+V\,dP + \mu\, dn,\]
but in fact
\[G=U-TS+PV = \mu n\]
for the case of a simple system. It is true that (from the product rule)
\[\dfrac{dG}{dn}=\dfrac{\partial \mu}{\partial n} n + \mu \dfrac{ \partial n}{\partial n}=n\dfrac{\partial \mu}{\partial n}+\mu,\]
but this is irrelevant from the point of view of thermodynamic partial derivatives. The relevant partial derivative is
\[ \dfrac{\partial G}{\partial n}=\mu(T,P,n)\]
and it is also a function of \(T\), \(P\), and \(n\) in the Gibbs ensemble.
We saw when deriving the Gibbs free energy by Legendre transform that for a multicomponent system \(G=\sum \mu_i n_i\). From this follows for each component at constant temperature and pressure:
\[dG=\sum_i \mu_i dn_i\,\]
not
\[dG=\sum_i \mu_i dn_i + \sum_i n_i d\mu_i\]
(because of the Gibbs–Duhem relation, \(\sum_i n_i\, d\mu_i = 0\)!), and
\[ \left( \dfrac{\partial G}{\partial n_i} \right)_{T,P,n_{j \neq i}} = \mu_i.\]
The partial molar derivative of the Gibbs free energy is the chemical potential of component “i”, and the Gibbs free energy can be obtained simply by adding these chemical potentials multiplied by the mole numbers.
We can similarly define
\[ \left( \dfrac{\partial V }{\partial n_i}\right)_{T,P} = v_i\]
\[\left( \dfrac{\partial S}{\partial n_i}\right)_{T,P} = s_i\]
\[\left( \dfrac{\partial H}{\partial n_i}\right)_{T,P} = h_i = \mu_i +Ts_i \]
These are the partial molar volumes, entropies and enthalpies. It is important to note that while \( G=\sum \mu_i n_i\), and Euler's theorem at constant \(T\) and \(P\) likewise gives \(V=\sum v_i n_i\), the partial molar volumes \(v_i\) depend on composition and are generally not the molar volumes of the pure substances, so you cannot simply add up pure-component molar volumes to get \(V\). Think about it physically: if you mix two liquids, nonideal behavior could actually cause the total liquid volume to shrink (strong attraction between the two substances), so the partial molar volume of the substance added would be negative under that condition! Clearly, adding up pure-component molar volumes could never capture that. Rather, the partial molar volume tells you how much, at a given composition, addition of an infinitesimal amount of substance \(i\) changes the total volume – including a possible decrease in volume.
We can rewrite partial molar quantities by using Maxwell relations. For example,
\[ \left( \dfrac{\partial S}{\partial n_i} \right)_{T,P,n_{j \neq i}} = -\left( \dfrac{\partial \mu_i}{\partial T}\right)_{P,n} \]
or
\[ \left( \dfrac{\partial V}{\partial n_i} \right)_{T,P,n_{j \neq i}} = \left( \dfrac{\partial \mu_i}{\partial P}\right)_{T,n} \]
Thus the partial molar quantities can all be derived from the chemical potential without explicit derivatives with respect to the \(n_i\). Only the chemical potential itself needs a derivative of \(G\) with respect to the \(n_i\). The total variation of a variable \(Y\) with respect to mole numbers at constant \(T\), \(P\) can always be written as
\[ \left( \dfrac{\partial Y}{\partial n_i} \right)_{T,P,n_{j \neq i}} = \pm \left( \dfrac{\partial \mu_i}{\partial X} \right),\]
where \(X\) and \(Y\) are conjugate variables (in some cases, the conjugate variable, such as \(P\) in \(dU = T\,dS - P\,dV\), enters the fundamental relation with a \((-)\) sign, so do not forget the \((-)\) sign in such cases).
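As a numeric illustration of getting a partial molar quantity from \(\mu_i\) alone, consider the partial molar volume of an ideal gas, \(v_i = (\partial \mu_i/\partial P)_T = RT/P\). In the sketch below \(\mu_i^{(0)}(T)\) is set to zero, since it drops out of the pressure derivative anyway:

```python
# Partial molar volume from the chemical potential alone: v_i = (d mu_i/dP)_T.
# For an ideal gas mu_i = mu_i0(T) + RT ln(P/P0), so v_i = RT/P; mu_i0 is
# set to zero here because it drops out of the pressure derivative.
import math

R, T, P0 = 8.314, 298.0, 1.0e5

def mu(P):
    return R * T * math.log(P / P0)

P, h = 2.0e5, 1.0
v = (mu(P + h) - mu(P - h)) / (2 * h)   # central difference
print(v, R * T / P)                      # both ≈ 1.24e-2 m^3/mol
```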
Example 7.x: Entropy of Mixing for two Ideal Gases 

For an ideal gas, \[ \mu_i = \mu_i^{(0)} + RT \ln P_i \Rightarrow s_i = -\left (\dfrac{\partial \mu_i}{\partial T} \right)_P = -\left (\dfrac{\partial \mu_i^{(0)}}{\partial T} \right) - R\ln P_i = s_i^{(0)} - R\ln P_i .\] For two components, the total change in entropy upon a small change in mole numbers is \[ dS = s_1\, dn_1 + s_2\, dn_2.\]
For an ideal gas, \(s_i\) is independent of \(n_i\) and we can integrate. Consider a 2-component ideal gas initially in two containers of volumes \(V_1\) and \(V_2\), each at the same total pressure \(P\). Before and after mixing,
\[ S_{before} = n_1\left( s_1^{(0)} - R\ln P \right) + n_2\left( s_2^{(0)} - R\ln P \right)\]
\[ S_{after} = n_1\left( s_1^{(0)} - R\ln P_1 \right) + n_2\left( s_2^{(0)} - R\ln P_2 \right),\]
where \(P_i = x_i P\) are the partial pressures once each gas fills the combined volume \(V_1 + V_2\).
The difference between these two entropies is the entropy of mixing:
\[ \Delta S_{mix} = S_{after} - S_{before} = -n_1 R \ln \dfrac{P_1}{P} - n_2 R \ln \dfrac{P_2}{P} = -R\left( n_1 \ln x_1 + n_2 \ln x_2 \right).\]
This is always greater than zero: the arguments of the logarithms are less than 1 (so the logarithms are negative), the \(n_i\) are positive, and \(R\) is positive, so the overall minus sign makes \(\Delta S_{mix}\) positive. Unmixing is therefore not a spontaneous process. 
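The mixing entropy formula is easy to evaluate; for equal moles it reduces to the familiar \(2nR\ln 2\). A short sketch (mole numbers are arbitrary illustrative values):

```python
# Entropy of mixing for two ideal gases, dS_mix = -R (n1 ln x1 + n2 ln x2),
# evaluated for a couple of compositions (mole numbers are hypothetical):
import math

R = 8.314   # J/(mol K)

def dS_mix(n1, n2):
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -R * (n1 * math.log(x1) + n2 * math.log(x2))

print(dS_mix(1.0, 1.0))        # equal moles: 2 R ln 2 ≈ 11.53 J/K
print(dS_mix(0.3, 0.7) > 0)    # positive for any composition: True
```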
A bonus question to think about: let us say we discover, after we pull out the thin membrane between our two gases, that the two gases were of the same atom, e.g. Ar. Ar atoms are identical particles, so mixing the left and right sides would seem to change nothing detectable about the overall system: even if the left ones moved right and vice versa, you would never be able to tell. Yet the above formula still says \(\Delta S_{mix} > 0\). Granted, the two kinds of particles counted by \(n_1\) and \(n_2\) are now the same, but aren’t you allowing them to move around more, thus increasing the entropy? The answer is no.
Fittingly, this situation was discovered by Gibbs himself, and is known as the Gibbs paradox. A satisfactory resolution, first provided by John von Neumann, requires the proper quantum mechanical treatment of identical particles, either Bosons or Fermions. We’ll deal with this when we do statistical mechanics. Suffice it to say for now that the classical description with the wall neglects the possibility of quantum mechanical tunneling and exchange of identical particles. Classical mechanics always (and incorrectly) treats every particle as unique and distinguishable.
† In practice, this could be calculated by diagonalizing the \(2 \times 2\) matrix \(\begin{pmatrix} E & V \\ V & E \end{pmatrix}\) that couples the two states. One of the resulting eigenvalues, \(E_1\), will indeed be lower than the other, \(E_2\).