# 3. Basic properties of U, S, and their differentials


## 3.1 Energy minimum principle

\(S\) can be written as \(S(U,\bar{X})\), where \(\bar{X}\) is a vector of all independent internal extensive variables (e.g., all but one of the \(U_k\), and all other \(X_i\)). Because \(S\) is monotonic in \(U\) and continuous, we can invert to \(U(S,\bar{X})\). This relation is fully equivalent to the fundamental relation. Because of the shape of \(S(U,\bar{X})\) or \(U(S,\bar{X})\), as shown in the figure, maximizing the entropy at constant \(U\) is equivalent to minimizing the energy at constant \(S\).

This is the familiar version from mechanics, where system properties are usually formulated in terms of energies, instead of entropies.

| Postulate 2': Minimizing Energy |
|---|
| The internal energy of a composite system at constant \(S\) is minimized at equilibrium. |

## 3.2 Intensive parameters: Temperature

Working for now with \(U\) for a simple system, \(U(S,\vec{X})\), we can write

\[dU=\left(\dfrac{\partial U}{\partial S}\right)_{\vec{X}}dS + \left(\dfrac{\partial U}{\partial \vec{X}}\right)_S \cdot d\vec{X}\]

with appropriate constraints on each \(\dfrac{\partial U}{\partial X_i}\) derivative:

\[dU = TdS + \vec{I} \cdot d\vec{X}\]

where \(T \equiv \left( \dfrac{\partial U}{\partial S} \right)_{\vec{X}} > 0\) by *postulate three*.

By construction, \(T\) and the \(\{I\}\) are intensive variables. For example,

\[ U \rightarrow \lambda U \]

and

\[ S \rightarrow \lambda S \Rightarrow T \rightarrow \left( \dfrac{\partial \lambda U}{\partial \lambda S} \right) =T\]

We consider in detail the properties of the energy derivative \(T\), and then, briefly and by analogy, the other intensive variables \(\{I_i\}\). Set all the \(d\vec{X}\) (such as \(dV\), \(dn_i\), \(dM\), etc.) equal to zero: no mechanical macroscopic variables are being altered, only the energy. It then follows that \(dU = dq\) because \(dw = 0\). Therefore

\[dq= TdS\]

For small (quasistatic) changes in heat, the change in system entropy is linearly proportional to the heat increment. Thus as we add energy to the uncontrollable degrees of freedom of our system, entropy increases, in accord with the notion that entropy is disorder. Furthermore, we can rewrite this as

\[ dS = \dfrac{dq}{T}.\]

When \(T\) is larger, the entropy increases less for a given heat input.

What is this quantity \(T\)? Consider a closed composite system \(\{S\}\) of two subsystems \(\{S_1\}\) and \(\{S_2\}\) separated by a diathermal wall. A diathermal wall allows only heat flow, so \(d\vec{X}=0\) again.

At equilibrium,

\[ dS = 0 = \left( \dfrac{\partial S_1}{\partial U_1} \right)_{\vec{X}} dU_1 + \left(\dfrac{\partial S_2}{\partial U_2} \right)_{\vec{X}} dU_2\]

according to P2 or

\[dS = \dfrac{1}{T_1} dU_1 + \dfrac{1}{T_2} dU_2.\]

But \(dU = 0\) for a closed system by P1, from which it follows that

\[dU_2 = -dU_1\]

or

\[ dS = \left ( \dfrac{1}{T_1} - \dfrac{1}{T_2} \right) dU_1.\]

At equilibrium, \(dS = 0\) for any variation of \(dU_1\), which can only be true if

\[ \left ( \dfrac{1}{T_1} - \dfrac{1}{T_2} \right) = 0 \Rightarrow T_1=T_2\]

Thus, \(T\) *is the quantity that is equalized between two subsystems when heat is allowed to flow between them*.

This is the most straightforward definition of temperature: the thing that becomes equal when heat stops flowing from one place to another. We can thus identify the intensive variable \(T\) as the temperature of the system. Temperature is always guaranteed to be positive by P3 because entropy is a monotonically increasing function of energy.
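This equalization can be illustrated numerically. Below is a minimal sketch (not from the text; the heat capacities and initial temperatures are made-up illustrative values) of two subsystems with constant heat capacities, \(U_i = C_i T_i\), so that \(dS_i = dq_i/T_i\) integrates to \(S_i = C_i \ln T_i\) up to a constant. Heat flows through the diathermal wall from hot to cold until \(T_1 = T_2\), and the total entropy never decreases along the way:

```python
# Sketch: two subsystems exchanging heat through a diathermal wall.
# Assumed model (illustrative only): U_i = C_i * T_i, S_i = C_i * ln(T_i) + const.
import math

C1, C2 = 2.0, 3.0          # heat capacities (J/K), illustrative values
T1, T2 = 400.0, 300.0      # initial temperatures (K)

def total_entropy(T1, T2):
    return C1 * math.log(T1) + C2 * math.log(T2)

S = total_entropy(T1, T2)
for _ in range(100000):
    dq = 1e-3 * (T1 - T2)      # small heat increment flows from hot to cold
    T1 -= dq / C1              # dU1 = -dq = C1 dT1
    T2 += dq / C2              # dU2 = +dq = C2 dT2
    S_new = total_entropy(T1, T2)
    assert S_new >= S - 1e-12  # entropy never decreases (Postulate 2)
    S = S_new

# At equilibrium T1 = T2 = (C1*400 + C2*300)/(C1 + C2) = 340 K by energy conservation
print(round(T1, 6), round(T2, 6))
```

The final temperature follows from \(C_1 T_1 + C_2 T_2\) being conserved, exactly as \(dU_2 = -dU_1\) requires above.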

Finally, if \(T = (\partial U/ \partial S)_X\), we can rewrite the third postulate as

\[ \lim_{T \rightarrow 0} S =0,\]

more commonly known as the *“third law of thermodynamics.”* As all the energy is removed from a system by lowering its temperature, the system becomes completely ordered. It is worth noting that there are systems (glasses) where reaching this limit takes an inordinate amount of time. A very general principle of quantum mechanics guarantees that the third law holds even in those cases, if we can actually get the system to equilibrium: a coordinate or spin Hamiltonian always has a single ground state of \(A_1\) symmetry. This is the state any system reaches as \(T \rightarrow 0\). In practice, this state may simply not be reachable, even approximately, in glasses, and heuristic replacements of the third law have been developed for this case, which is really a non-equilibrium case.

### To summarize

\[\Delta S _{closed} > 0\]

always by postulate P2

\[dS = \dfrac{dq}{T}\]

by P2 for a quasistatic process when no work is done

\[T>0\]

always by postulate P3

\[T_1=T_2\]

for two systems in thermal equilibrium

\[\lim _{T \rightarrow 0} S =0\]

always by P3, difficult to reach even approximately in some cases

Thus \(T\) and \(S\) have all the intuitive characteristics of temperature and disorder, and we can take them as representing temperature and disorder. The latter can be justified even more deeply by making use of statistical mechanics in later chapters, where the second postulate follows from microscopic properties of the system.

A note on units: \(TS\) must have units of energy. It would be convenient to let \(T\) have units of energy (as an “energy per unit size of the system”) and to let \(S\) be unitless, but for historical reasons, \(T\) has arbitrary units of Kelvin and S has units of Joules/Kelvin to compensate.

## 3.3 Other Extensive-Intensive Variable Pairs

The more complex a composite system becomes, the more extensive variables it requires beyond \(U\), leading to additional intensive variables. For example:

### Pressure

\(V\) (volume) leads to an energy change

\[ dU_V = \left( \dfrac{\partial U}{\partial V} \right)_{\vec{X}} dV \equiv -PdV.\]

The intensive derivative is called the pressure of the system. \(PV\) has units of Joules, so \(P\) must have units of J/m\(^3\) or N/m\(^2\). Thus \(P\) certainly has the units we normally associate with pressure, or force per unit area. Usually \( \partial U/ \partial V < 0\) because squeezing a system increases its energy, so \(P\) is generally a positive quantity, again in accord with our intuition. Note, however, that no postulate says \(P\) must be positive. In fact, we can bring systems to negative pressure by pulling on the system, or putting it under tension.

Is \(P\) in fact pressure? It is easy to see that it is, by applying the minimum energy principle to a *diathermal* flexible wall, in analogy to what was done for temperature above:

\[ dU=0 = dU_1 +dU_2 \]

by Postulate 1

\[ dU= T_1dS_1 - P_1dV_1 + T_2dS_2 -P_2dV_2\]

by Postulate 2'

\[ dU = (T_1-T_2)dS_1 - (P_1 - P_2) dV_1\]

In the third line, we assume a *closed* system and a *reversible* process, so the total \(dV = dV_1 + dV_2 = 0\) and \(dS = dS_1 + dS_2 = 0\). When the energy has reached equilibrium, the equation must hold for any small perturbation of the entropy or volume of subsystem 1, which can only be satisfied if \(T_1 = T_2\) (again), and \(P_1 = P_2\).

Thus, *P is the quantity which is the same in two subsystems when they are connected by a flexible wall*. This is the most straightforward definition of pressure: the thing that is equalized between two systems when the volume can change freely. \(P\) is a pressure not just in units; it also agrees with our intuitive notion of what a pressure should be.

### Surface Area

\(A\) (area in surface system)

\[\Rightarrow dU_A = \left( \dfrac{\partial U}{\partial A} \right)_{\vec{X}} dA \equiv \Gamma dA\]

where \(\Gamma\) has units of (N/m) and is therefore the surface tension.

### Magnetization

\(M\) (magnetization)

\[ \Rightarrow dU_M = \left(\dfrac{\partial U}{\partial M}\right)_{\vec{X}} dM \equiv HdM \]

where \(H\) is the externally applied magnetic field.

### Mole Number

\(n_i\) (mole number)

\[ \Rightarrow dU_{n_i} = \left(\dfrac{\partial U}{\partial n_i}\right)_{\vec{X}} dn_i \equiv \mu_i dn_i \]

where \(\mu_i\) is the chemical potential equalized when particles are allowed to flow.

### Length

\(L\) (length)

\[ \Rightarrow dU_L = \left(\dfrac{\partial U}{\partial L}\right)_{\vec{X}} dL \equiv FdL \]

where \(F\) is the linear tension force.

### In general

Many more conjugate pairs of extensive and intensive variables are possible, but this gives the general picture. For an arbitrary variation in \(U\) we have

\[ dU = TdS + \vec{I} \cdot d\vec{X},\]

where \( \vec{I}\) is the vector of all intensive variables except temperature. Often, we will use

\[ dU = TdS - PdV + \mu dn\]

as an example, when dealing with a simple 3-dimensional 1-component system.

## 3.4 First order homogeneity

Consider \(S\) for a closed system. Because \(S\) is extensive, \(S(\lambda U,\lambda \vec{X}) = \lambda S(U, \vec{X})\). This agrees with the intuitive notion that 2 identical disordered systems amount to twice as much disorder as a single one. Similarly, \( U(\lambda S,\lambda \vec{X}) = \lambda U(S, \vec{X})\). Differentiating both sides with respect to \(\lambda\) yields

\[ \left( \dfrac{\partial U}{\partial (\lambda S)}\right)_{\lambda\vec{X}} \left( \dfrac{\partial (\lambda S)}{\partial \lambda }\right) + \left( \dfrac{\partial U}{\partial (\lambda \vec{X})}\right)_{\lambda S} \cdot \left( \dfrac{\partial (\lambda \vec{X})}{\partial \lambda}\right) = U(S,\vec{X}) \]

or

\[ \left( \dfrac{\partial U}{\partial (\lambda S)}\right)_{\lambda\vec{X}} S + \left( \dfrac{\partial U}{\partial (\lambda \vec{X})}\right)_{\lambda S} \cdot \vec{X} =U(S,\vec{X})\]

When \(\lambda = 1\), this yields

\[ \left( \dfrac{\partial U}{\partial S}\right)_{\vec{X}} S + \left( \dfrac{\partial U}{\partial \vec{X}}\right)_{S} \cdot \vec{X} = U\]

or

\[ U=TS + \vec{I} \cdot \vec{X}\]

Thus the energy has a surprisingly simple form: it is simply a bilinear function of the intensive and extensive parameters, known as **the Euler form**. The formula for the energy looks like the formula for \(dU\) with the differentials removed. For example, \( U=TS-PV + \mu n\) for a simple one-component system.
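Euler's theorem for first-order homogeneous functions can be checked symbolically. The sketch below (using sympy; the sample function \(U = aS^2/n + SV/n\) is a hypothetical first-order homogeneous form chosen only for illustration, not a physical fundamental relation) verifies that summing each first derivative times its extensive variable reproduces \(U\):

```python
# Sketch: for any first-order homogeneous U(S, V, n), Euler's theorem gives
# U = (dU/dS) S + (dU/dV) V + (dU/dn) n, i.e. U = T S + I . X.
import sympy as sp

S, V, n, a = sp.symbols('S V n a', positive=True)

# Hypothetical sample satisfying U(lam*S, lam*V, lam*n) = lam * U(S, V, n)
U = a * S**2 / n + S * V / n

T = sp.diff(U, S)        # T    = (dU/dS)_{V,n}
negP = sp.diff(U, V)     # -P   = (dU/dV)_{S,n}
mu = sp.diff(U, n)       # mu   = (dU/dn)_{S,V}

# The bilinear Euler form reproduces U exactly
assert sp.simplify(T*S + negP*V + mu*n - U) == 0
```

Any other first-order homogeneous trial function passes the same check; a non-homogeneous one (e.g. adding a constant) fails it.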

Solving for \(S\) yields an analogous formula in the entropy representation,

\[ S=\left( \dfrac{1}{T} \right) U - \left(\dfrac{\vec{I}}{T} \right) \cdot \vec{X}\]

for example

\[ S=\left( \dfrac{1}{T} \right) U + \left(\dfrac{P}{T} \right) V - \dfrac{\mu}{T} n \]

The entropy is also a *simple bilinear function* of its intensive and extensive parameters.

## 3.5 Gibbs-Duhem relation

The differential of \(U\) combined with first order homogeneity requires that not all intensive parameters be independent. For a completely arbitrary variation of \(U\),

\[dU =TdS + SdT + \vec{I} \cdot d\vec{X} + \vec{X} \cdot d\vec{I}\]

But we know from earlier that

\[dU =TdS + \vec{I} \cdot d\vec{X} \Rightarrow SdT + \vec{X} \cdot d\vec{I} = 0\]

Using this *Gibbs-Duhem relation*, one intensive parameter can be expressed in terms of the others. For example, consider a simple multicomponent system:

\[ U = TS-PV+ \sum_{i=1}^r \mu_in_i \Rightarrow SdT - VdP + \sum_{i=1}^r n_id\mu_i = 0\]

\[\Rightarrow d\mu_1 = \left( \dfrac{V}{n_1}\right) dP- \left( \dfrac{S}{n_1}\right) dT - \sum_{i=2}^r {\dfrac{n_i}{n_1} d\mu_i}\]

One chemical potential change can be expressed in terms of pressure, temperature, and the other chemical potentials. In general, an \(r\)-component simple 3-D system has only \(2 + (r-1) = r+1\) degrees of freedom. This will be useful for multi-phase systems. For example, let two phases of the same substance be at equilibrium, with particle flow allowed from one phase to the other. Then \(\mu_1 = \mu_2\) (or particles would flow to the phase of lower chemical potential), and to remain at equilibrium when the chemical potential changes, \(d\mu_1 = d\mu_2\). Combining the Gibbs-Duhem relations for each phase,

\[ S_1dT - V_1dP =-d\mu_1 \]

and

\[S_2dT - V_2dP =-d\mu_2\]

\[ \overset{d\mu_1=d\mu_2} {\longrightarrow} (S_1-S_2)dT=(V_1-V_2)dP\]

or

\[ \dfrac{dP}{dT}=\dfrac{\Delta S_{12}}{\Delta V_{12}}\]

Thus letting \(d\mu_1 = d\mu_2\) traces out the \(T\), \(P\) conditions where the two phases are at equilibrium. This is known as the *Clapeyron* equation.
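A quick numerical illustration of the Clapeyron slope, assuming textbook-typical values for water at its normal boiling point (these numbers are not given in this chapter):

```python
# Sketch: dP/dT = DeltaS_12 / DeltaV_12 for liquid water <-> vapor at 1 atm.
# Assumed inputs (approximate, illustrative): dH_vap ~ 40.7 kJ/mol at 373.15 K.
R = 8.314          # J/(mol K)
T = 373.15         # K, normal boiling point of water
P = 101325.0       # Pa
dH_vap = 40.7e3    # J/mol, molar enthalpy of vaporization (approximate)

dS = dH_vap / T    # DeltaS_12 per mole, since dq = T dS for the reversible transition
dV = R * T / P     # DeltaV_12 per mole, neglecting the liquid volume (ideal-gas vapor)

dPdT = dS / dV     # Clapeyron slope in Pa/K
print(dPdT)        # on the order of a few kPa per kelvin
```

The slope of a few kPa/K says that raising the pressure modestly raises the boiling point only slightly, consistent with everyday pressure-cooker experience.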

## 3.6 Equations of State and the Fundamental Relation

Often we do not know the fundamental relation \(U(S,\vec{X})\) or \(S(U,\vec{X})\); instead we know equations involving intensive variables, known as *equations of state*. For example,

\[ U = U(S,X) \Rightarrow T=\left(\dfrac{\partial U}{\partial S}\right)_X=T(S,X).\]

Similarly, the derivative with respect to any other \(X\) yields the corresponding equation of state \(I(S, X)\). These are called *equations of state in normal form*, and express one intensive variable in terms of all the extensive variables. There are as many equations of state as there are extensive variables for the system (e.g. \(r+2\) for a simple \(r\)-component system). Note that an equation of state does not contain the same amount of information as the original fundamental relation: it can be integrated only up to a constant of integration that depends on all the extensive variables except the one involved in the derivative, and that part of \(U\) (or \(S\), if we derive equations of state from \(S(U, X)\)) cannot simply be left out.

If all the equations of state in normal form are known, we can reconstruct the fundamental relation by using the Euler form from section 3.4,

\[ U= T(S,\vec{X})S+\vec{I}(S,\vec{X})\cdot \vec{X}\]

this is also solvable for \(S\) because of P3. If they are not known in normal form, we may also be able to obtain the fundamental relation by integrating a differential form, such as

\[ dS =\left(\dfrac{1}{T} \right) dU - \left(\dfrac{\vec{I}}{T}\right) \cdot d\vec{X}.\]

If needed, we can compute one intensive variable from the Gibbs-Duhem relation, so we need one less equation of state (only \(r+1\) for a simple \(r\)-component system) to evaluate the fundamental relation. Finally, equations of state may also be substituted into one another, yielding equations that depend on more than one intensive variable. These are also referred to as equations of state, but they are not in normal form.

Let us consider two examples of how to determine a fundamental relation. We start with the fundamental relation for a rubber band, where we can write down reasonable guesses for both equations of state needed.

\[ dU = TdS + FdL \Rightarrow dS = \dfrac{1}{T} dU - \dfrac{F}{T} dL\]

We need equations of state so \(T\) and \(F\) can be eliminated to yield \(S(U,L)\):

a) \(F=c_1T(L-L_0)\); \(L_0\) is the relaxed length of the rubber band, and we are treating it like a linear spring once stretched. An unusual feature is that \(F\) increases with \(T\): at higher \(T\), polymer chains wrinkle into more random coils, causing shrinkage and increasing the tension at fixed length.

b) \(U=c_2L_0T\), which is allowed as long as \(F\) depends only linearly on \(T\), so that \(F/T\) is a function of \(L\) only. The reason is that

\[ \dfrac{\partial^2 S(U,L)}{\partial U \partial L} =\dfrac{\partial}{\partial U} \left( \dfrac{-F}{T} \right) = \dfrac{\partial}{\partial L} \left( \dfrac{1}{T} \right) =0 \]

so \(\dfrac{1}{T}\) can be any single-valued function of \(U\) as long as it is independent of \(L\); for simplicity we pick \(U \sim T\), as for an ideal gas.

We can now insert the two equations of state into the differential form, and integrate it

\[ dS = \dfrac{c_2L_0}{U} dU -c_1(L-L_0)dL \Rightarrow\]

\[ S=S_0 + c_2L_0 \ln \dfrac{U}{U_0} - \dfrac{c_1}{2} (L-L_0)^2\]

The constant can be determined by invoking the third law. However, note that this can lead to singularities if the equations of state themselves are not correct at low temperature, as is the case in this example. Moreover, note that \(c_2\) must be intensive, and \(c_1^{-1}\) must be extensive, so that \(S\) is extensive. From the fundamental relation we can calculate any desired properties of the rubber band.
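As a consistency check, differentiating the integrated \(S(U,L)\) should recover the two assumed equations of state. A short sympy sketch:

```python
# Sketch: verify that S(U, L) = S0 + c2*L0*ln(U/U0) - (c1/2)(L - L0)^2
# reproduces equations of state (a) and (b) for the rubber band.
import sympy as sp

U, L, U0, L0, c1, c2, S0 = sp.symbols('U L U0 L0 c1 c2 S0', positive=True)

S = S0 + c2*L0*sp.log(U/U0) - sp.Rational(1, 2)*c1*(L - L0)**2

invT = sp.diff(S, U)          # (dS/dU)_L = 1/T
F_over_T = -sp.diff(S, L)     # -(dS/dL)_U = F/T

# Equation of state (b): 1/invT = T should equal U/(c2*L0), i.e. U = c2 L0 T
assert sp.simplify(1/invT - U/(c2*L0)) == 0
# Equation of state (a): F/T = c1*(L - L0), i.e. F = c1 T (L - L0)
assert sp.simplify(F_over_T - c1*(L - L0)) == 0
```

Both first derivatives land exactly on the assumed equations of state, confirming the integration.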

Alternatively, we could try to obtain the fundamental relation in terms of \(U = TS + FL\), but then we would need \(T(S,L)\) and \(F(S,L)\) instead of \(\frac{1}{T}(U,L)\) and \(\frac{F}{T}(U,L)\), which were not available. Similarly, to plug into \(S = U/T - FL/T\), we would need \(T(U, L)\) and \(F(U, L)\); we have the former, but not the latter: the equation of state \(F=c_1T(L-L_0)\) is in terms of another intensive variable, and not in the basic form required for the Euler form.

Note that plugging

\[T = \dfrac{U}{c_2L_0}\]

into \(F(T, L)\) to get a \(F(U, L)\) will not help either because this does not yield an equation of state *in normal form* as it would have been obtained by taking the derivative of \(S\).

As another example, consider the fundamental relation for an ideal monatomic gas. In this case, we will derive one of the equations of state from the others, so that all three equations of state are in normal form, before inserting all three to obtain the fundamental relation. The gas has 1 component, so we need \(r+1=2\) equations of state to get started:

\[Pv=RT \tag{1}\]

\[u=\dfrac{3}{2}RT \tag{2}\]

Here the two well-known equations of state for an ideal gas are written in terms of the molar variables \(u = U/n\) and \(v = V/n\). Again the first equation depends on two intensive variables and is not in normal form. We can bring both equations into normal form as follows:

\[ P=\dfrac{R}{v}T=\dfrac{R}{v} \left(\dfrac{2u}{3R}\right) = \dfrac{2}{3}\dfrac{u}{v} = -\left( \dfrac{\partial u}{\partial v} \right)_{s} \tag{1'}\]

\[ T=\dfrac{2u}{3R} = \left( \dfrac{\partial u}{\partial s}\right)_{v} \tag{2'}\]

We now need \(\mu(u,v)\) as the third equation of state. Proceeding with the Gibbs-Duhem relation,

\[ d\mu = -sdT +vdP.\]

We must eliminate \(s\) since we formulated \(P\) and \(T\) as functions of \(u\), not \(s\). Using the bilinear form of \(s\),

\[ d\mu = -\left( \dfrac{u}{T} + \dfrac{Pv}{T} -\dfrac{\mu}{T} \right) dT + vdP.\]

Next we eliminate \(P\) and \(T\) by using equations 1’ and 2’:

\[d\mu = -du -\dfrac{2}{3} du + \mu \dfrac{du}{u} + \dfrac{2}{3}du - \dfrac{2}{3}u\dfrac{dv}{v}.\]

We then divide by \(u\) on both sides, rearrange, and integrate:

\[ \dfrac{d\mu}{u}- \mu \dfrac{du}{u^2} = d\left(\dfrac{\mu}{u}\right) = -\dfrac{du}{u} - \dfrac{2}{3} \dfrac{dv}{v}\]

\[ \int_0^{final} d\left(\dfrac{\mu}{u}\right) = \left(\dfrac{\mu}{u}\right)-\left(\dfrac{\mu}{u}\right)_0= -\ln \dfrac{u}{u_0}-\dfrac{2}{3}\ln \dfrac{v}{v_0}\]

or

\[ \mu = -u \ln \dfrac{u}{u_0} - \dfrac{2}{3} u \ln \dfrac{v}{v_0} + u\left(\dfrac{\mu}{u}\right)_0\]

This is the third equation of state, for the chemical potential. We now have all intensive parameters as normal-form equations of state, to construct the fundamental relation \(s(u,v)\) or \(u(s,v)\). (Of course, the homogeneous first-order property means that to get \(S\) and \(U\), we just multiply by \(n\).) Doing \(s\), for example,

\[ s = \dfrac{1}{T} u + \dfrac{P}{T} v - \dfrac{\mu}{T}\]

\[ = \dfrac{3R}{2u} u + \dfrac{2u}{3v}\dfrac{3R}{2u} v - \dfrac{3R}{2u} u \left\{ -\ln \dfrac{u}{u_0} - \dfrac{2}{3} \ln \dfrac{v}{v_0} + \left( \dfrac{\mu}{u}\right)_0 \right\}\]

\[ = \dfrac{5}{2}R - \dfrac{3}{2} R \left(\dfrac{\mu}{u} \right)_0 +\dfrac{3}{2}R \ln \dfrac{u}{u_0} + R\ln \dfrac{v}{v_0}\]

\[ =\dfrac{3}{2}R\ln u + R\ln v +c\]

Note that this equation of state violates Postulate 3:

\[\left(\dfrac{\partial U}{\partial S} \right)_V=T=\dfrac{2u}{3R}\]

so \( T \rightarrow 0 \) implies \( u \rightarrow 0\); but \(s\) does not approach \(0\) as \(T \rightarrow 0\): it approaches \(-\infty\). Thus either \(PV=nRT\), or \(U=\dfrac{3}{2} nRT\), or both must be high-temperature approximations that fail as \(T \rightarrow 0\). At low \(T\), excluded-volume effects, particle interactions, and quantum effects come into play. The ideal gas equation would have to be replaced by a more accurate equation, such as the van der Waals equation, to satisfy the third law closer to \(T = 0\). In that sense, thermodynamics can alert us when approximate equations of state break down.
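As a consistency check on the derivation above, differentiating \(s(u,v)=\tfrac{3}{2}R\ln u + R\ln v + c\) should recover both original equations of state. A sympy sketch:

```python
# Sketch: verify that the molar fundamental relation s(u, v) for the
# monatomic ideal gas reproduces u = (3/2) R T and P v = R T.
import sympy as sp

u, v, R, c = sp.symbols('u v R c', positive=True)

s = sp.Rational(3, 2)*R*sp.log(u) + R*sp.log(v) + c

invT = sp.diff(s, u)        # (ds/du)_v = 1/T
P_over_T = sp.diff(s, v)    # (ds/dv)_u = P/T

T = 1/invT                  # T = 2u/(3R)
P = P_over_T * T            # P = (2/3) u/v

assert sp.simplify(T - sp.Rational(2, 3)*u/R) == 0   # u = (3/2) R T
assert sp.simplify(P*v - R*T) == 0                   # P v = R T
```

So the fundamental relation is internally consistent; its only defect is the \(T \rightarrow 0\) behavior discussed above.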

## 3.7 Stability and Second Derivatives

The first derivatives (intensive parameters) are very useful because they correspond to quantities that are equalized among equilibrated subsystems. However, the first order relationship \(dS=0\), although necessary by Postulate 2 at equilibrium, is not sufficient. The extremum in \(S\) must be a *maximum*:

\[ d^2S < 0\]

or, according to Postulate 2':

\[d^2U > 0\]

Extrema with \( d^2S > 0\) or \( d^2S = 0\) are also possible (minima, saddles, degenerate points). However, thermodynamics cannot make statements about such points without further assumptions that go beyond the postulates. This suggests that the study of second derivatives will be fruitful, to ensure that one is working near a stable equilibrium point. Three of these second derivatives, encountered later, are

\[ \alpha =\dfrac{1}{V} \left(\dfrac{\partial V}{\partial T}\right)_{P,n_i}\]

\[\kappa = -\dfrac{1}{V} \left( \dfrac{\partial V}{\partial P} \right)_{T,n_i}\]

\[ c_p = \dfrac{T}{n} \left ( \dfrac{\partial S}{\partial T} \right)_P = \dfrac{1}{n}\left( \dfrac{dq}{dT} \right)_P\]

For a simple system, only three second derivatives are linearly independent if we exclude ones based on \(\dfrac{\partial }{\partial n_i}\). The reason is that the energy \(U = TS - PV + \ldots\), viewed as a function of \(S\) and \(V\) at constant composition, has only three distinct second derivatives,

- \[ \left( \dfrac{\partial T}{\partial S} \right)_V = \dfrac{\partial^2 U}{\partial S^2}\]
- \[ \left( \dfrac{\partial P}{\partial V} \right)_S = \dfrac{\partial^2 U}{\partial V^2}\]
- \[ \left( \dfrac{\partial T}{\partial V} \right)_S = -\left( \dfrac{\partial P}{\partial S} \right)_V=\dfrac{\partial^2 U}{\partial V \partial S}\]

where the last equality holds because \(dU\) is a perfect differential, so mixed partial derivatives are equal. Rather than picking those three, we will usually work with the first independent set above, corresponding to quantities with more obvious physical interpretations to chemists working at constant pressure and temperature. We consider the corresponding fundamental relations in the next chapter.
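For the monatomic ideal gas, the three response functions defined above can be computed explicitly. A sympy sketch, using \(V = nRT/P\) and the molar entropy from section 3.6 rewritten in terms of \(T\) and \(P\):

```python
# Sketch: alpha, kappa, and c_p for the monatomic ideal gas.
import sympy as sp

T, P, n, R, c = sp.symbols('T P n R c', positive=True)

V = n*R*T/P
alpha = sp.simplify(sp.diff(V, T) / V)      # (1/V)(dV/dT)_{P,n}
kappa = sp.simplify(-sp.diff(V, P) / V)     # -(1/V)(dV/dP)_{T,n}

# Molar entropy s(T, P) from section 3.6, using u = (3/2) R T and v = R T / P
s = sp.Rational(3, 2)*R*sp.log(sp.Rational(3, 2)*R*T) + R*sp.log(R*T/P) + c
cp = sp.simplify(T * sp.diff(s, T))         # c_p = (T/n)(dS/dT)_P per mole

assert sp.simplify(alpha - 1/T) == 0              # alpha = 1/T
assert sp.simplify(kappa - 1/P) == 0              # kappa = 1/P
assert sp.simplify(cp - sp.Rational(5, 2)*R) == 0 # c_p = 5R/2
```

The familiar results \(\alpha = 1/T\), \(\kappa = 1/P\), and \(c_p = \tfrac{5}{2}R\) fall out directly, a useful sanity check before the more general treatment in the next chapter.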