# 2.1: The Ensemble Concept (Heuristic Definition)


For a typical macroscopic system, the total number of particles is \(N \sim 10^{23}\). Since an essentially infinite amount of precision is needed to specify the initial conditions (errors in this specification grow exponentially rapidly), the amount of information required to specify a trajectory is essentially infinite. Even if we contented ourselves with quadruple precision, each number would occupy 128 bits \(= 16 \sim 10^1\) bytes, and a single phase space point consists of \(6N\) numbers (three position and three momentum components per particle), so holding just one point would require about \(16 \times 6 \times 10^{23} \sim 10^{25}\) bytes \(\sim 10^{16}\) Gbytes. The largest computers we have today have perhaps \(10^3\) Gbytes of memory, so we are off by roughly 13 orders of magnitude just to specify one point in phase space.
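This back-of-the-envelope estimate can be checked with a few lines of arithmetic. The sketch below assumes three position and three momentum components per particle and takes quadruple precision to mean 128 bits (16 bytes) per number:

```python
# Storage needed for a single phase space point of a macroscopic system
# (a rough sketch; particle count and precision are assumptions stated above).
N = 10**23                   # particles in a macroscopic sample
numbers_per_point = 6 * N    # 3 position + 3 momentum components per particle
bytes_per_number = 16        # quadruple precision: 128 bits = 16 bytes

total_bytes = numbers_per_point * bytes_per_number
total_gbytes = total_bytes / 10**9   # 1 Gbyte = 10^9 bytes

print(f"one phase space point: {total_gbytes:.1e} Gbytes")
```

Even a single microstate, stored once, dwarfs any realistic memory capacity, which is the point of the estimate.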

Do we need all this detail?

###### Yes

There are plenty of chemically interesting phenomena for which we really would like to know how individual atoms are moving as a process occurs. Experimental techniques such as ultrafast laser spectroscopy can resolve short time scale phenomena and, thus, obtain important insights into such motions. From a theoretical point of view, although we cannot follow \(10^{23}\) particles, there is some hope that we could follow the motion of a system containing \(10^4\) or \(10^5\) particles, which might capture most of the features of true macroscopic matter. Thus, by solving Newton's equations of motion numerically on a computer, we have a kind of window into the microscopic world. This is the basis of what are known as *molecular dynamics calculations*.
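To make the idea of solving Newton's equations numerically concrete, here is a minimal sketch of the kind of integration step used in molecular dynamics. It uses the velocity Verlet algorithm (a standard MD integrator, though the text does not name a specific one) on a hypothetical single particle in a harmonic potential \(U(x) = \tfrac{1}{2}kx^2\) in arbitrary units, rather than a \(10^4\)-particle system:

```python
# Velocity Verlet integration of one particle in a harmonic potential,
# a toy stand-in for the interatomic forces of a real MD simulation.

def force(x, k=1.0):
    """Harmonic restoring force F = -dU/dx for U = 0.5*k*x**2."""
    return -k * x

def velocity_verlet(x, v, dt, m=1.0, steps=1000):
    """Advance position x and velocity v by `steps` time steps of size dt."""
    a = force(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt**2   # position update
        a_new = force(x) / m            # force at the new position
        v += 0.5 * (a + a_new) * dt     # velocity update with averaged force
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x, v = velocity_verlet(x0, v0, dt=0.01)

# Total energy E = kinetic + potential should be nearly conserved,
# since velocity Verlet is a symplectic integrator.
e0 = 0.5 * v0**2 + 0.5 * x0**2
e = 0.5 * v**2 + 0.5 * x**2
print(f"energy drift after 1000 steps: {abs(e - e0):.2e}")
```

The same update rule, applied to every particle with forces computed from an interatomic potential, is the core loop of a molecular dynamics calculation.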

###### No

Intuitively, we would expect that if we were to follow the evolution of a large number of systems, all described by the same set of forces but starting from different initial conditions, these systems would have essentially the same macroscopic characteristics (e.g., the same temperature and pressure), even though the detailed microscopic evolution of each system in time would be very different. This suggests that the microscopic details are largely unimportant.

Since, from the point of view of macroscopic properties, precise microscopic details are largely unimportant, we might imagine employing a construct known as the *ensemble concept*, in which a large number of systems with different microscopic characteristics but similar macroscopic characteristics is used to "wash out" the microscopic details via an averaging procedure. This idea was developed by Gibbs, Maxwell, and Boltzmann.

###### Ensemble

Consider a large number of systems each described by the same set of microscopic forces and sharing some common macroscopic property (e.g. the same total energy). Each system is assumed to evolve under the microscopic laws of motion from a different initial condition so that the time evolution of each system will be different from all the others. Such a collection of systems is called an **ensemble**. The ensemble concept then states that macroscopic observables can be calculated by performing averages over the systems in the ensemble. For many properties, such as temperature and pressure, which are time-independent, the fact that the systems are evolving in time will not affect their values, and we may perform averages at a particular instant in time. Thus, let \(A\) denote a macroscopic property and let \(a\) denote a microscopic function that is used to compute \(A\). An example of \(A\) would be the temperature, and \(a\) would be the kinetic energy (a microscopic function of velocities). Then, \(A\) is obtained by calculating the value of \(a\) in each system of the ensemble and performing an average over all systems in the ensemble:

\[ A = \frac {1}{N} \sum _{\lambda = 1}^N a_{\lambda} \nonumber \]

where \(N\) is the total number of members in the ensemble and \(a_{\lambda}\) is the value of \(a\) in the \(\lambda\)th system.
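The average above can be illustrated with the temperature/kinetic-energy example from the text. The sketch below draws each ensemble member's velocity components from the Maxwell-Boltzmann distribution (with mass and \(k_B T\) set to 1 in reduced units, an assumption for illustration) and averages the kinetic energy over the ensemble; equipartition predicts an average of \(\tfrac{3}{2} k_B T\):

```python
# Ensemble average A = (1/N) * sum over members of a microscopic function a.
# Here a is the kinetic energy of one particle; each ensemble member has
# velocity components drawn from the Maxwell-Boltzmann distribution
# (reduced units: m = 1, k_B*T = 1, so each component is standard normal).
import random

random.seed(0)
N = 100_000  # number of systems in the ensemble

def kinetic_energy():
    """Microscopic function a: kinetic energy of one ensemble member."""
    v = [random.gauss(0.0, 1.0) for _ in range(3)]  # vx, vy, vz
    return 0.5 * sum(vi * vi for vi in v)

# A = (1/N) * sum_{lambda=1}^{N} a_lambda
A = sum(kinetic_energy() for _ in range(N)) / N

print(f"<KE> = {A:.3f}  (equipartition predicts 3/2 k_B T = 1.5)")
```

The scatter of individual members about the mean shrinks as \(N\) grows, which is why macroscopic observables come out sharp even though each microscopic system is different.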

The questions that naturally arise are:

- How do we construct an ensemble?
- How do we perform averages over an ensemble?
- How many systems will an ensemble contain?
- How do we distinguish time-independent from time-dependent properties in the ensemble picture?

Answering these questions will be our main objective in Statistical Mechanics.