# 10. Postulates of statistical mechanics

Thermodynamics puts constraints on the behavior of macroscopic systems without referencing the underlying microscopic properties. In particular, it does not provide a quantitative connection to the origin of its fundamental quantities $$U$$ and $$S$$. For $$U$$, this is less of a problem because we know from mechanics that

$U=\sum_i \dfrac{1}{2} m_i v_i^2 + V(\{x_i\}),$

and the macroscopic formula arises by integrating over most coordinates and velocities. Somehow the thermal motions end up as $$TS$$, and the mechanical and electrical motions end up as terms such as $$-PV+\mu n$$.

Statistical mechanics makes the macro-micro connection and provides a quantitative description of $$U$$ and $$S$$ in terms of microscopic quantities. For large systems (except near the critical point), its results agree with thermodynamics: one can derive thermodynamic postulates 0-3 from statistical mechanics. For systems undergoing large fluctuations (small systems, or systems near a critical point), its predictions are different and more accurate.

In addition, as the word 'mechanics' implies, statistical mechanics can deal with time-varying systems and systems out of equilibrium. Averages are taken over the positions x(t) and momenta p(t) = mv(t) of the microscopic particles, but not in such a way that all time-dependent information is lost, as it is in thermodynamics.

Unlike mechanics, statistical mechanics is not intended to describe the time dependence of an isolated particle. Rather, the time-dependent (e.g. diffusion coefficient) and time-independent properties of whole systems of particles, and the averaged properties of whole ensembles of such systems, are of interest.

We begin with an introduction to important facts from mechanics and statistics, then proceed to the postulates of statistical mechanics, consider in detail equilibrium systems, and finally non-equilibrium systems.

## Goal of statistical mechanics

• We have a system of many particles with positions $$x_i$$ and velocities $$v_i$$ (or, quantum mechanically, a wavefunction).
• We want the values $$A(t)$$ of any observable as the particles move about, averaged over all microstates of the system consistent with constraints such as energy = U = constant, or V = constant.

Goal of thermodynamics: find relations among extensive observables $$X$$ and their derivatives at equilibrium only.

The postulates of statistical mechanics and connection to thermodynamics:

Postulate I: Extension of Microscopic Laws

Hamiltonian dynamics applies to the density operator $$\hat{\rho}$$ of any finite closed system; such a system is fully specified by its extensive constraint parameters and the Hamiltonian $$\hat{H}$$.

Postulate II: Principle of Equal Probabilities

The principle of equal probabilities holds in its ensemble (weak) form and is assumed in its time (strong) form:

i) weak form: all $$W$$ microscopic realizations of a system satisfying postulate I have equal probability. The ensemble density matrix is therefore given by $$\hat{\rho}=\frac{1}{W}\sum_{i=1}^{W}|\psi_i\rangle\langle\psi_i|.$$ The ensemble of these $$W$$ systems is the microcanonical ensemble.
ii) strong form: for any ensemble satisfying i) at equilibrium, $$\bar{A}=\lim_{\tau\to\infty}\frac{1}{\tau}\int_0^{\tau}A(t)\,dt=\langle A\rangle$$ (ergodic principle). This states that averaging over time is equivalent to averaging over the ensemble of $$W$$ microstates.

Postulate III: Entropy

The entropy of an ensemble of systems satisfying postulates I and II.i) is given by $$S=-k_B\,\mathrm{Tr}(\hat{\rho}\ln\hat{\rho}).$$

Before we use them, these postulates require some explanation.

1. This is a strong statement; the system usually has a >$$10^{20}$$-dimensional phase space, and we assume that the dynamics are the same as for a few degrees of freedom! Classically, a microstate corresponds to a specific trajectory; quantum mechanically, to a specific initial condition of the system. Among the extensive variables fixed in a closed finite system: U (always, by postulate I). Other constrained variables: V (or L, A, $$N_i$$ = particle number, ... depending on the system).

Note that if $$\hat{H}$$ is independent of time, the system is closed, and U is therefore constant (as it needs to be for 'full' specification of the system); P1 of thermodynamics is then automatically satisfied.

1. This is the postulate that lets us perform macroscopic averages over the individual density matrices, so we can derive properties for the energy-conserving (microcanonical) ensemble.

1. Classically, this says that as long as a trajectory satisfies the constraints in I (has the specific energy U), we can combine it with equal weight with all other such trajectories to obtain $$\rho(x_i,p_i)$$, the classical density function.

Quantum-mechanically, this means that all the linearly independent pure density matrices characterizing a system with the same extensive parameters (i.e. all the members of the microcanonical ensemble) can be averaged with equal weights to obtain the ensemble density matrix.

Example 10.1

Example: consider a state of energy U that can be realized in W ways (W-fold degenerate, or W microstates). One set of initial conditions would be $$\hat{\rho}_i=|\psi_i\rangle\langle\psi_i|,\quad i=1\ldots W.$$ These are pure states. All of these are equally likely because they have the same energy (and volume, etc.), so $$\hat{\rho}=\frac{1}{W}\sum_{i=1}^{W}|\psi_i\rangle\langle\psi_i|.$$ This is a 'mixed' state of constant energy U.
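The equal-weight mixture in this example is easy to verify numerically. Below is a minimal sketch (the dimension W = 4 and the choice of orthonormal basis states are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical W-fold degenerate level: take W orthonormal microstates
# as the standard basis vectors of a W-dimensional subspace.
W = 4
pure_states = [np.eye(W)[:, i] for i in range(W)]

# Each pure state corresponds to a projector |psi_i><psi_i|.
projectors = [np.outer(v, v.conj()) for v in pure_states]

# Equal-weight average over the degenerate microstates:
rho = sum(projectors) / W

print(np.allclose(rho, np.eye(W) / W))  # mixed state rho = (1/W) * identity
print(np.trace(rho))                    # normalization Tr(rho) = 1
```

The same construction works for any orthonormal set spanning the degenerate subspace; the mixture is always the identity on that subspace divided by W.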

Note that there is a potentially embarrassing problem with this: a finite quantum system (e.g. a particle in a box) for which all extensive parameters (e.g. U, or L for particles in a 1-D box) have been specified has a discrete energy spectrum $$\{E_i\}$$. For a large system, the level spacing may be very narrow, but it is nonetheless discrete. Thus, at almost every U we pick, there is likely to be no state, so we have nothing to average!

In practice, this is resolved by allowing an energy window $$\delta U$$, and by considering all W levels within it. As discussed in more detail in III below, as the number of degrees of freedom N = 6n of the system approaches infinity, the size of $$\delta U$$ rigorously has no effect on the result.

1. This says we could take a single trajectory, or a single initial condition, propagate it in time, and all the possible microscopic states will be visited in turn to yield again $$\rho(x_i,p_i)$$ (classically) or $$\hat{\rho}$$ (quantum mechanically). This is a much stronger statement than i): the full ensemble of W microstates by definition includes all realizations i of the macroscopic system compatible with H and the constraints; ii), on the other hand, says a single microstate will, in time, evolve to explore all the others, or at least come arbitrarily close to them. This property is known as 'ergodicity.' In practice, ergodicity cannot really be satisfied, but we can use ii) for 'all practical purposes.'

Example showing why ergodicity cannot be satisfied:

We will use a discrete system to illustrate. Consider a box divided into M cells, filled with $$N \le M$$ particles, at most one per cell. The dynamics is that the particles hop randomly to unoccupied neighboring cells at each time step $$\Delta t$$. This model is called a lattice ideal gas. The number of arrangements of N identical particles among M cells is $$W=\frac{M!}{N!(M-N)!}.$$ Large factorials N! (or gamma functions $$\Gamma(N+1)=N!$$) can be approximated by Stirling's formula $$\ln N!\approx N\ln N-N$$.

Thus, $$\ln W\approx M\ln M-N\ln N-(M-N)\ln(M-N).$$
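The accuracy of this approximation is easy to check. A minimal sketch (the sizes M and N are illustrative choices, far smaller than the text's astronomical numbers):

```python
import math

# Lattice ideal gas: W = M! / (N! (M-N)!) arrangements of N identical
# particles among M cells. Compare the exact ln W (via log-gamma, which
# avoids overflow) with Stirling's approximation ln n! ~ n ln n - n.
def ln_W_exact(M, N):
    return math.lgamma(M + 1) - math.lgamma(N + 1) - math.lgamma(M - N + 1)

def ln_W_stirling(M, N):
    s = lambda n: n * math.log(n) - n
    return s(M) - s(N) - s(M - N)

M, N = 10_000, 2_500
exact, approx = ln_W_exact(M, N), ln_W_stirling(M, N)
print(exact, approx)  # relative error shrinks further as M, N grow
```

Already at these modest sizes the two agree to better than a tenth of a percent, which is why Stirling's formula is used freely for macroscopic particle numbers.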

Let us plug realistic numbers into this for a macroscopic gas: the number of microstates W that can possibly be visited during the lifetime of the universe is a mere $$\sim 10^{30}$$, negligible compared to the actual number of microstates $$W_{\text{actual}}$$ at constant energy.

Clearly, not even a warm gas, a system about as random as conceivable, comes close to sampling the full microcanonical degeneracy W. Although the a priori probability of microstates (classically: of trajectories) may be the same (i), they simply cannot all be sampled in finite time. As seen in III, this provides a practical solution to the quantum dilemma outlined in I.

Why assume ii) at all? In real life it is the time average $$\bar{A}$$ that is always observed, but it is difficult to compute; W or $$\hat{\rho}$$ are often much easier to compute. Although ii) fails by an enormous factor, it surprisingly still works in most situations: most microstates in the ensemble of W possible microstates are indistinguishable (e.g. the gas atoms in the room right now vs. 10 seconds from now), so leaving many of them out of the average still yields the same average; sampling only a vanishingly small fraction of them still gives the same result as true ensemble averaging.

There are cases where this reasoning fails: in glasses, members of the ensemble can interconvert so slowly, and be so different from one another, that $$\bar{A}$$ is not at all like $$\langle A\rangle$$ unless very special care is taken.

1. This definition of the entropy was made plausible in our mathematical review, on grounds of information content: a system with many microstates has a greater potential for disorder than a system with few microstates. But instead of measuring disorder multiplicatively, we want an additive (extensive) quantity. This postulate directly yields the microscopic definition of the thermodynamic entropy, $$S=-k_B\,\mathrm{Tr}(\hat{\rho}\ln\hat{\rho})$$. Just as the energy is microscopically defined as $$U=\mathrm{Tr}(\hat{\rho}\hat{H})$$, so $$S=-k_B\,\mathrm{Tr}(\hat{\rho}\ln\hat{\rho})$$ gives the thermodynamic entropy S in terms of the equilibrium density matrix. We must have $$\mathrm{Tr}(\hat{\rho})=1$$, and by postulate II.i), all elements of $$\hat{\rho}$$ must be of equal size if we are in the microcanonical (constant energy U) ensemble. This is satisfied only by $$\hat{\rho}=\frac{1}{W}\hat{1}_W,$$ where $$\hat{1}_W$$ is a diagonal W×W unit matrix. Inserting this into S and evaluating the trace in the eigenfunction basis of $$\hat{\rho}$$ (and $$\hat{H}$$), which we can call $$\{|\psi_i\rangle\}$$: $$S=-k_B\sum_{i=1}^{W}\frac{1}{W}\ln\frac{1}{W}=k_B\ln W.$$ This is Boltzmann's famous formula for the entropy. Postulate III is more general, but at equilibrium Boltzmann's formula holds. It secures for S all the properties in postulates 2 and 3 of thermodynamics, and provides a microscopic interpretation for S:
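The reduction of $$-\mathrm{Tr}(\hat{\rho}\ln\hat{\rho})$$ to $$\ln W$$ for the microcanonical density matrix can be checked directly. A minimal sketch, in units of $$k_B$$ (the dimension W = 6 is an arbitrary illustrative choice):

```python
import numpy as np

# Microcanonical density matrix: rho = (1/W) * 1_W, a diagonal matrix
# with equal weights. Its entropy -Tr(rho ln rho) should equal ln W.
W = 6
rho = np.eye(W) / W

# Evaluate the trace in the eigenbasis of rho: S = -sum_i p_i ln p_i.
eigvals = np.linalg.eigvalsh(rho)
S = -np.sum(eigvals * np.log(eigvals))

print(S, np.log(W))  # both equal ln W
```

Any other normalized set of weights gives a smaller entropy, which is one way to see that the equal-weight microcanonical mixture maximizes S at fixed W.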

W specifies the disorder in a system: the more microstates correspond to the same macrostate, the more disorder the system has. For two independent systems, the microstate counts multiply: $$W=W_1 W_2$$. However, thermodynamic entropy has the property of additivity: $$S=S_1+S_2$$. The function that uniquely effects the transformation from multiplication to addition is the logarithm (within a constant factor), so $$S=k_B\ln W$$ must hold for both relations at the beginning of this paragraph to be satisfied. The constant factor $$k_B$$ is provided to match the energy and temperature scales, which were defined independently in the early 19th century, when the equivalence of temperature and average energy was not yet understood.

Consider a system divided into subsystems by constraints, with $$W=W_0$$. When the constraints are removed at t = 0, then by II.ii) the system explores additional ensemble members as time goes on. Thus $$W(t>0)=W_1>W_0$$. If macroscopic equilibrium is reached, $$S_1=k_B\ln W_1>k_B\ln W_0=S_0.$$

Thus S in stat mech postulate III satisfies all requirements of postulate P2 of thermodynamics, which is a simple consequence of the fact that microscopic degrees of freedom tend to explore more of the available states (= all the available phase space in classical mechanics) as time goes on.

Also, because W is monotonic in U (at higher energy U, there are always more quantum states in a multidimensional system) and because S is monotonic in W (a property of the ln function), S is monotonic in U. Finally, we shall see in detail later that when $$T\to 0$$, only the ground state is populated, so $$W\to 1$$ and $$S\to 0$$. Thus, the third postulate is also satisfied as long as the ground state is singly degenerate and the system can reach it during the experiment. (Glasses again would be a problem here!)

The error of thermodynamics: it identifies the most probable value of a quantity with its average, by assuming the spread is negligible. We will derive examples of this spread later on. Thermodynamic limit: $$N\to\infty$$ while N/V, or any other ratio of extensive quantities, remains constant.

To conclude this chapter, we turn to the problem of computing W in the quantum case. A closed finite quantum system has a discrete spectrum $$E_i$$. The figure below shows the number of states below energy U as a function of U. Because of quantum mechanics, the density of states $$\Omega(U)=\sum_i\delta(U-E_i)$$ is discontinuous, and the integrated density of states (= total number of states up to energy U) has steps in it. At any randomly picked U, $$\Omega(U)$$ is most likely zero, so W = 0 also!

However, because W increases so enormously rapidly with energy, the states are very (understatement!) closely spaced in energy for any system with even just a few particles. If a system is observed for a finite time $$\delta t$$, the states are broadened by the uncertainty principle: $$\delta(U-E_i)\to L(U-E_i),$$ where L indicates a broadened profile of finite width $$\delta U\sim\hbar/\delta t$$ that replaces the delta function. L still counts a single state, so it is taken to have unit area, e.g. a Lorentzian $$L(U-E_i)=\frac{1}{\pi}\frac{\delta U/2}{(U-E_i)^2+(\delta U/2)^2}.$$ Thus, $$\Omega(U)$$ can be taken as a smooth function, and its value tells us how many states contribute to the degeneracy at energy U.

Figure 10.1 The density of states $$\Omega$$ quantum-mechanically is a sum of delta functions located at the energies $$E_i$$ of the quantum system. Integrating it, we obtain $$W(U)$$, the sum of all states from energy $$U=0$$ up to $$U$$. Because of the uncertainty principle, we can never define the energies perfectly, as shown by the 'broadened' version of $$\Omega$$ below it. Integrating this, we obtain a smoothed version equal to W to extremely high accuracy. Note that W rises so fast (the figure does not do it justice) that the number of states contained in a tiny energy interval at energy $$U$$, of width given by the uncertainty principle, is essentially the same as the number of all states up to that energy.

It is clear from the above that if the broadening $$\delta U$$ is greater than the spacing of adjacent levels, then W(U) is indeed independent of the choice of $$\delta U$$. This is guaranteed by the astronomical number of states for a macroscopic system (see the example in II.ii)). Because W in the above figure grows so fast, $$W(U,\delta U)\approx W(U)$$, as illustrated in the bottom right panel of the figure.
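The broadening argument can be made concrete with a toy spectrum (the 200 random levels and the width chosen below are illustrative assumptions, not from the text): replacing each delta function by a unit-area Lorentzian gives a smooth $$\Omega(U)$$ whose integral still counts the states.

```python
import numpy as np

# Toy discrete spectrum: 200 levels scattered over [0, 10], so the mean
# level spacing is 0.05. Broaden each delta(U - E_i) into a unit-area
# Lorentzian of half-width gamma >> spacing, then integrate.
E = np.sort(np.random.default_rng(1).uniform(0.0, 10.0, size=200))
gamma = 0.2
U = np.linspace(-5.0, 15.0, 4001)

lorentzian = lambda u: (gamma / np.pi) / (u**2 + gamma**2)
Omega = sum(lorentzian(U - Ei) for Ei in E)    # smooth density of states
W_smooth = np.cumsum(Omega) * (U[1] - U[0])    # integrated density of states

print(W_smooth[-1])  # close to 200, the total state count (small tail loss)
```

The integrated smooth curve tracks the exact staircase count to within the slowly decaying Lorentzian tails, and the agreement only improves as the level density grows.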

Another way to look at it is in state space (classically: action space). It has N coordinates for N degrees of freedom. W(U) is the number of states under the surface U = constant. If $$U\gg N\varepsilon$$ (where $$\varepsilon$$ is the average characteristic energy step for one degree of freedom), then $$W(U)\sim (U/\varepsilon)^N.$$ Letting $$\delta U$$ now be an uncertainty in $$U$$ instead of in individual energy levels, the number of states in the interval $$(U-\delta U,U]$$ is $$W(U)-W(U-\delta U)=W(U)\left[1-\left(1-\frac{\delta U}{U}\right)^{N}\right].$$ Because $$N\sim 10^{20}$$, as long as $$\delta U<U$$ (even if only by a small amount!), the number of states in any shell of width $$\delta U$$ is the same as the total number of states up to $$U$$:

In a hyperspace of $$10^{20}$$ dimensions, all states lie near the surface.

Thus $$W(U,\delta U)\approx W(U)$$ to extreme precision. This topic will be taken up once more in the examples of microcanonical calculations given in the next chapter.
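As a closing numerical illustration of the shell argument: with $$W(U)\sim(U/\varepsilon)^N$$, the fraction of all states lying in a shell of relative width $$\delta U/U$$ is $$1-(1-\delta U/U)^N$$. A minimal sketch (the 1% shell width and the values of N are illustrative choices):

```python
import math

# Fraction of the states below U that lie in the thin shell (U - dU, U],
# for W(U) ~ U**N. Even a 1% shell holds essentially all states once N
# is large, because (1 - dU/U)**N collapses to zero.
def shell_fraction(N, dU_over_U=0.01):
    return 1.0 - (1.0 - dU_over_U) ** N

for N in (10, 1_000, 100_000):
    print(N, shell_fraction(N))
```

For $$N\sim 10^{20}$$ degrees of freedom the fraction is indistinguishable from 1 for any shell width a computer can represent, which is the content of the statement above.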