# 1.9: The Physical Relevance of Wavefunctions, Operators and Eigenvalues

Quantum mechanics has a set of 'rules' that link operators, wavefunctions, and eigenvalues to physically measurable properties. These rules have been formulated not in some arbitrary manner nor by derivation from some higher subject. Rather, the rules were designed to allow quantum mechanics to mimic the experimentally observed facts as revealed in mother nature's data. The extent to which these rules seem difficult to understand usually reflects the presence of experimental observations that do not fit in with our common experience base.

The structure of quantum mechanics (QM) relates the wavefunction $$\Psi$$ and operators F to the 'real world' in which experimental measurements are performed through a set of rules. Some of these rules have already been introduced above. Here, they are presented in total as follows:

## 1: The Time Evolution

The time evolution of the wavefunction $$\Psi$$ is determined by solving the time-dependent Schrödinger equation (see pp 23-25 of EWK for a rationalization of how the Schrödinger equation arises from the classical equation governing waves, Einstein's $$E=h\nu$$, and deBroglie's postulate that $$\lambda = \frac{h}{p}$$)

$i\hbar \dfrac{\partial \Psi}{\partial t}= \textbf{H}\Psi,$

where H is the Hamiltonian operator corresponding to the total (kinetic plus potential) energy of the system. For an isolated system (e.g., an atom or molecule not in contact with any external fields), H consists of the kinetic and potential energies of the particles comprising the system. To describe interactions with an external field (e.g., an electromagnetic field, a static electric field, or the 'crystal field' caused by surrounding ligands), additional terms are added to H to properly account for the system-field interactions.

If H contains no explicit time dependence, then separation of space and time variables can be performed on the above Schrödinger equation $$\Psi = \psi e^{\dfrac{-iEt}{\hbar}}$$ to give

$\textbf{H}\psi = E\psi.$

In such a case, the time dependence of the state is carried in the phase factor $$e^{\frac{-iEt}{\hbar}}$$; the spatial dependence appears in $$\psi(q_j)$$.

The so called time independent Schrödinger equation $$\textbf{H} \psi=E\psi$$ must be solved to determine the physically measurable energies $$E_k$$ and wavefunctions $$\psi_k$$ of the system. The most general solution to the full Schrödinger equation

$i\hbar\frac{\partial \Psi}{\partial t} = \textbf{H}\Psi$

is then given by applying $$e^{\frac{-i\textbf{H}t}{\hbar}}$$ to the wavefunction at some initial time (t=0)

$\Psi =\sum\limits_kC_k\psi_k$

to obtain

$\Psi(t)=\sum\limits_kC_k\psi_ke^{\frac{-itE_k}{\hbar}}.$

The relative amplitudes $$C_k$$ are determined by knowledge of the state at the initial time; this depends on how the system has been prepared in an earlier experiment. Just as Newton's laws of motion do not fully determine the time evolution of a classical system (i.e., the coordinates and momenta must be known at some initial time), the Schrödinger equation must be accompanied by initial conditions to fully determine $$\Psi(q_j ,t)$$.
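This propagation rule can be sketched numerically. The eigenvalues and amplitudes below are arbitrary illustrative choices (in units where $$\hbar = 1$$), not taken from any system in the text; the point is that only the phases of the $$C_k$$ evolve, so the populations $$|C_k|^2$$ are constant in time.

```python
import numpy as np

# Hypothetical two-state example: eigenenergies E_k and initial amplitudes C_k
# (the C_k are fixed by how the state was prepared at t = 0; hbar = 1 here)
E = np.array([1.0, 2.5])            # assumed eigenvalues of H
C = np.array([0.6, 0.8])            # assumed expansion coefficients; sum |C_k|^2 = 1

def amplitudes(t, hbar=1.0):
    """C_k exp(-i E_k t / hbar): the time-evolved expansion amplitudes."""
    return C * np.exp(-1j * E * t / hbar)

# The populations |C_k|^2 do not change; only the relative phases evolve.
p_initial = np.abs(amplitudes(0.0))**2
p_later = np.abs(amplitudes(3.7))**2
assert np.allclose(p_initial, p_later)
```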

Example $$\PageIndex{1}$$:

Using the results of Problem 11 of this chapter to illustrate, the sudden ionization of $$N_2$$ in its v=0 vibrational state to generate $$N_2^+$$ produces a vibrational wavefunction

$\Psi_0 = \left( \dfrac{\alpha}{\pi}\right)^{1/4}e^{\dfrac{-\alpha x^2}{2}}, \text{ with normalization constant } \left( \dfrac{\alpha}{\pi}\right)^{1/4} = 3.53333\: Å^{-1/2},$

that was created by the fast ionization of $$N_2$$. Subsequent to ionization, this $$N_2$$ function is not an eigenfunction of the new vibrational Schrödinger equation appropriate to $$N_2^+.$$ As a result, this function will time evolve under the influence of the $$N_2^+$$ Hamiltonian. The time evolved wavefunction, according to this first rule, can be expressed in terms of the vibrational functions {$$\Psi_v$$} and energies {$$E_v$$} of the $$N_2^+$$ ion as

$\Psi(t) = \sum\limits_vC_v\Psi_ve^{\dfrac{-iE_vt}{\hbar}}.$

The amplitudes $$C_v$$, which reflect the manner in which the wavefunction is prepared (at t=0), are found by projecting out the component of each $$\Psi_v$$ in the function $$\Psi$$ at t=0. To do this, one uses

$\int\Psi_{v'}^{\text{*}}\Psi(t=0) d\tau = C_{v'},$

which is easily obtained by multiplying the above summation by $$\Psi^{\text{*}}_{v'}$$, integrating, and using the orthonormality of the {$$\Psi_v$$} functions.

For the case at hand, this result shows that $$C_{v=0}$$ is obtained by forming the overlap integral of the $$N_2$$ v=0 function $$\Psi(t=0)$$ with the $$N_2^+$$ v=0 function:

$C_{v=0} = \int\limits_{-\infty}^{\infty}3.47522e^{-229.113(r-1.11642)^2}\: 3.53333e^{-244.83(r-1.09769)^2} dr.$

As demonstrated in Problem 11, this integral reduces to 0.959. This means that the $$N_2$$ v=0 state, subsequent to sudden ionization, can be represented as containing $$|0.959|^2 = 0.92$$ fraction of the v=0 state of the $$N_2^+$$ ion.
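This overlap can be checked by direct numerical quadrature of the two Gaussians quoted above; the integration window below is an assumption chosen wide enough to contain both functions.

```python
import numpy as np

# v=0 Gaussians quoted in the text (r in Angstroms):
#   N2+ : 3.47522 exp(-229.113 (r - 1.11642)^2)
#   N2  : 3.53333 exp(-244.83  (r - 1.09769)^2)
r = np.linspace(0.6, 1.6, 200001)      # assumed integration window
dr = r[1] - r[0]
psi_n2_plus = 3.47522 * np.exp(-229.113 * (r - 1.11642)**2)
psi_n2      = 3.53333 * np.exp(-244.83  * (r - 1.09769)**2)

overlap = float(np.sum(psi_n2_plus * psi_n2) * dr)
print(round(overlap, 3))               # -> 0.959, matching the value quoted above
```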

Example $$\PageIndex{1}$$ relates to the well-known Franck-Condon principle of spectroscopy in which squares of 'overlaps' between the initial electronic state's vibrational wavefunction and the final electronic state's vibrational wavefunctions allow one to estimate the probabilities of populating various final-state vibrational levels.

In addition to initial conditions, solutions to the Schrödinger equation must obey certain other constraints in form. They must be continuous functions of all of their spatial coordinates and must be single valued; these properties allow $$\Psi^{\text{*}}\Psi$$ to be interpreted as a probability density (i.e., the probability of finding a particle at some position can not be multivalued nor can it be 'jerky' or discontinuous). The derivative of the wavefunction must also be continuous except at points where the potential function undergoes an infinite jump (e.g., at the wall of an infinitely high and steep potential barrier). This condition relates to the fact that the momentum must be continuous except at infinitely 'steep' potential barriers where the momentum undergoes a 'sudden' reversal.

## 2: Measurements are Eigenvalues

An experimental measurement of any quantity (whose corresponding operator is F) must result in one of the eigenvalues $$f_j$$ of the operator F. These eigenvalues are obtained by solving

$\textbf{F} \phi_j = f_j \phi_j,$

where the $$\phi_j$$ are the eigenfunctions of F and the $$f_j$$ are the corresponding eigenvalues. Once the measurement of F is made, for that subpopulation of the experimental sample found to have the particular eigenvalue $$f_j$$, the wavefunction becomes $$\phi_j$$.

The equation $$\textbf{H}\psi_k = E_k\psi_k$$ is but a special case; it is an especially important case because much of the machinery of modern experimental chemistry is directed at placing the system in a particular energy quantum state by detecting its energy (e.g., by spectroscopic means). The reader is strongly urged to also study Appendix C to gain a more detailed and illustrated treatment of this and subsequent rules of quantum mechanics.

## 3: Operators that correspond to Measurables are Hermitian

The operators F corresponding to all physically measurable quantities are Hermitian; this means that their matrix representations obey (see Appendix C for a description of the 'bra' $$\langle \: |$$ and 'ket' $$| \: \rangle$$ notation used below):

$\langle \chi_j|\textbf{F}|\chi_k\rangle = \langle \chi_k|\textbf{F}|\chi_j\rangle^{\text{*}} = \langle \textbf{F}\chi_j|\chi_k\rangle$

in any basis {$$\chi_j$$} of functions appropriate for the action of F (i.e., functions of the variables on which F operates). As expressed through equality of the first and third elements above, Hermitian operators are often said to 'obey the turn-over rule'. This means that F can be allowed to operate on the function to its right or on the function to its left if F is Hermitian.

Hermiticity assures that the eigenvalues {$$f_j$$} are all real, that eigenfunctions {$$\chi_j$$} having different eigenvalues are orthogonal and can be normalized $$\langle \chi_j|\chi_k\rangle =\delta_{j,k},$$ and that eigenfunctions having the same eigenvalues can be made orthonormal (these statements are proven in Appendix C).

## 4: Stationary states do not have varying Measurables

Once a particular value $$f_j$$ is observed in a measurement of F, this same value will be observed in all subsequent measurements of F as long as the system remains undisturbed by measurements of other properties or by interactions with external fields. In fact, once $$f_j$$ has been observed, the state of the system becomes an eigenstate of F (if it already was, it remains unchanged):

$\textbf{F}\Psi = f_j\Psi.$

This means that the measurement process itself may interfere with the state of the system and even determines what that state will be once the measurement has been made.

Example $$\PageIndex{2}$$:

Again consider the v=0 $$N_2$$ ionization treated in Problem 11 of this chapter. If, subsequent to ionization, the $$N_2^+$$ ions produced were probed to determine their internal vibrational state, a fraction of the sample equal to $$|\langle \Psi (N_2; \nu =0) | \Psi(N_2^+; \nu=0)\rangle |^2 = 0.92$$ would be detected in the v=0 state of the $$N_2^+$$ ion. For this sub-sample, the vibrational wavefunction becomes, and remains from then on,

$\Psi(t) = \Psi(N_2^+; \nu=0)e^{\dfrac{-it E^+_{\nu=0}}{\hbar}},$

where $$E^+_{\nu=0}$$ is the energy of the $$N_2^+$$ ion in its $$\nu=0$$ state. If, at some later time, this subsample is again probed, all species will be found to be in the $$\nu=0$$ state.

## 5: Probability of observing a specific Eigenvalue

The probability $$P_k$$ of observing a particular value $$f_k$$ when F is measured, given that the system wavefunction is $$\Psi$$ prior to the measurement, is given by expanding $$\Psi$$ in terms of the complete set of normalized eigenstates of F

$\Psi = \sum\limits_j|\phi_j\rangle \langle \phi_j|\Psi\rangle$

and then computing $$P_k = |\langle \phi_k|\Psi\rangle |^2.$$ For the special case in which $$\Psi$$ is already one of the eigenstates of F (i.e., $$\Psi=\phi_k$$), the probability of observing $$f_j$$ reduces to $$P_j =\delta_{j,k}$$. The set of numbers $$C_j = \langle \phi_j|\Psi\rangle$$ are called the expansion coefficients of $$\Psi$$ in the basis of the {$$\phi_j$$}. These coefficients, when collected together in all possible products as $$D_{j,i} = C_i^{\text{*}} C_j$$ form the so-called density matrix $$D_{j,i}$$ of the wavefunction $$\Psi$$ within the {$$\phi_j$$} basis.
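These definitions can be sketched numerically; the three coefficients below are arbitrary illustrative values chosen so that the state is normalized.

```python
import numpy as np

# Assumed 3-component expansion of |Psi> in an orthonormal eigenbasis {phi_j} of F
C = np.array([0.5 + 0.5j, 0.5, 0.5])      # C_j = <phi_j|Psi>; sum |C_j|^2 = 1
P = np.abs(C)**2                           # probabilities P_k = |<phi_k|Psi>|^2
assert np.isclose(P.sum(), 1.0)

# Density matrix D_{j,i} = C_i^* C_j: the outer product of the coefficients
D = np.outer(C, C.conj())
assert np.allclose(np.diag(D).real, P)     # diagonal elements are the probabilities
assert np.isclose(np.trace(D).real, 1.0)   # trace equals the total probability
```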

Example $$\PageIndex{3}$$:

If F is the operator for momentum in the x-direction and $$\Psi(x,t)$$ is the wave function for x as a function of time t, then the above expansion corresponds to a Fourier transform of $$\Psi$$

$\Psi(x,t) = \dfrac{1}{2\pi} \int e^{ikx}\int e^{-ikx'}\Psi(x',t) dx' dk.$

Here $$\sqrt{\frac{1}{2\pi}}e^{ikx}$$ is the normalized eigenfunction of $$\textbf{F} = -i\hbar \frac{\partial}{\partial x}$$ corresponding to momentum eigenvalue $$\hbar k$$. These momentum eigenfunctions are orthonormal in the Dirac $$\delta$$-function sense and complete:

$\dfrac{1}{2\pi}\int e^{-ikx}e^{ikx'}dk = \delta(x-x')$

because F is a Hermitian operator. The function $$\int e^{-ikx'} \Psi(x',t) dx'$$ is called the momentum-space transform of $$\Psi(x,t)$$ and is denoted $$\Psi(k,t)$$; it gives, when used as $$\Psi^{\text{*}}(k,t)\Psi(k,t)$$, the probability density for observing momentum values $$\hbar k$$ at time t.

Example $$\PageIndex{4}$$:

Take the initial $$\psi$$ to be a superposition state of the form

$\psi = a (2p_0 + 2p_{-1} - 2p_1) + b(3p_0 - 3p_{-1}),$

where the a and b are amplitudes that describe the admixture of 2p and 3p functions in this wavefunction. Then:

a. If $$\textbf{L}^2$$ were measured, the value $$2\hbar^2$$ would be observed with probability $$3|a|^2 + 2|b|^2 = 1$$, since all of the functions in $$\psi$$ are p-type orbitals. After said measurement, the wavefunction would still be this same $$\psi$$ because this entire $$\psi$$ is an eigenfunction of $$\textbf{L}^2$$.

b. If $$\textbf{L}_z$$ were measured for this

$\psi = a(2p_0 + 2p_{-1} - 2p_1) + b(3p_0 - 3p_{-1}),$

the values $$0\hbar$$, $$1\hbar$$, and $$-1\hbar$$ would be observed (because these are the only m values present with non-zero $$C_m$$ coefficients in this $$\psi$$) with respective probabilities $$|a|^2 + |b|^2$$, $$|a|^2$$, and $$|a|^2 + |b|^2$$.

c. After $$L_z$$ was measured, if the sub-population for which $$-1\hbar$$ had been detected were subjected to measurement of $$\textbf{L}^2$$, the value $$2\hbar^2$$ would certainly be found because the new wavefunction

$\psi ' = \left[ a\: 2p_{-1} - b\: 3p_{-1} \right] \dfrac{1}{\sqrt{|a|^2 + |b|^2}}$

is still an eigenfunction of $$\textbf{L}^2$$ with this eigenvalue.

d. Again after $$\textbf{L}_z$$ were measured, if the sub-population for which $$-1\hbar$$ had been observed and for which the wavefunction is now

$\psi ' = \left[ a\: 2p_{-1} - b\: 3p_{-1} \right] \dfrac{1}{\sqrt{|a|^2 + |b|^2}}$

were subjected to measurement of the energy (through the Hamiltonian operator), two values would be found. With probability $$\frac{|a|^2}{|a|^2 + |b|^2}$$, the energy of the $$2p_{-1}$$ orbital would be observed; with probability $$\frac{|b|^2}{|a|^2 + |b|^2}$$, the energy of the $$3p_{-1}$$ orbital would be observed.
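The bookkeeping in parts a and b of this example can be checked numerically. The value of a below is an arbitrary choice; b is then fixed by the normalization condition $$3|a|^2 + 2|b|^2 = 1$$.

```python
import numpy as np

# Coefficients of psi in the orthonormal basis (2p0, 2p-1, 2p1, 3p0, 3p-1);
# a is an assumed value, b fixed so that 3|a|^2 + 2|b|^2 = 1
a = 0.4
b = np.sqrt((1 - 3 * a**2) / 2)
coeffs = {('2p', 0): a, ('2p', -1): a, ('2p', 1): -a,
          ('3p', 0): b, ('3p', -1): -b}

# Probability of each L_z outcome m*hbar: sum |c|^2 over basis functions with that m
P = {m: sum(abs(c)**2 for (orb, mm), c in coeffs.items() if mm == m)
     for m in (0, 1, -1)}
assert np.isclose(sum(P.values()), 1.0)
assert np.isclose(P[0], a**2 + b**2)       # from 2p0 and 3p0
assert np.isclose(P[1], a**2)              # from -a * 2p1
assert np.isclose(P[-1], a**2 + b**2)      # from 2p-1 and 3p-1
```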

If $$\Psi$$ is a function of several variables (e.g., when $$\Psi$$ describes more than one particle in a composite system), and if F is a property that depends on a subset of these variables (e.g., when F is a property of one of the particles in the composite system), then the expansion $$\Psi=\sum\limits_j |\phi_j\rangle \langle \phi_j|\Psi\rangle$$ is viewed as relating only to $$\Psi$$'s dependence on the subset of variables related to F. In this case, the integrals $$\langle \phi_k|\Psi\rangle$$ are carried out over only these variables; thus the probabilities $$P_k = |\langle \phi_k|\Psi\rangle |^2$$ depend parametrically on the remaining variables.

Suppose that $$\Psi(r,\theta)$$ describes the radial (r) and angular ($$\theta$$) motion of a diatomic molecule constrained to move on a planar surface. If an experiment were performed to measure the component of the rotational angular momentum of the diatomic molecule perpendicular to the surface $$\left( \textbf{L}_z = -i\hbar\frac{\partial}{\partial \theta}\right)$$, only values equal to $$m\hbar$$ (m = 0, ±1, ±2, ±3, ...) could be observed, because these are the eigenvalues of $$\textbf{L}_z$$:

$\textbf{L}_z \phi_m = -i\hbar\dfrac{\partial}{\partial \theta}\phi_m = m\hbar \phi_m,$

where

$\phi_m = \sqrt{\dfrac{1}{2\pi}}e^{im \theta}.$

The quantization of $$\textbf{L}_z$$ arises because the eigenfunctions $$\phi_m(\theta)$$ must be periodic in $$\theta$$:
$\phi(\theta + 2\pi) = \phi(\theta).$

Such quantization (i.e., constraints on the values that physical properties can realize) will be seen to occur whenever the pertinent wavefunction is constrained to obey a so-called boundary condition (in this case, the boundary condition is $$\phi(\theta + 2\pi) = \phi (\theta )$$).

Expanding the $$\theta$$-dependence of $$\Psi$$ in terms of the $$\phi_m$$

$\Psi = \sum\limits_m \langle \phi_m|\Psi\rangle \phi_m(\theta)$

allows one to write the probability that $$m\hbar$$ is observed if the angular momentum $$\textbf{L}_z$$ is measured as follows:

$P_m = |\langle \phi_m|\Psi \rangle |^2 = |\int \phi_m^\text{*}(\theta ) \Psi (r,\theta) d\theta |^2.$

If one is interested in the probability that $$m\hbar$$ be observed when $$L_z$$ is measured regardless of what bond length r is involved, then it is appropriate to integrate this expression over the r variable about which one does not care. This, in effect, sums contributions from all r values to obtain a result that is independent of the r variable. As a result, the probability reduces to:

$P_m = \int \phi_m^{\text{*}}(\theta ') \left[ \int \Psi^{\text{*}}(r,\theta ') \Psi(r,\theta)\: r\: dr \right]\phi_m (\theta) d\theta ' d\theta,$

which is simply the above result integrated over r with a volume element r dr for the two-dimensional motion treated here. If, on the other hand, one were able to measure $$L_z$$ values when r is equal to some specified bond length (this is only a hypothetical example; there is no known way to perform such a measurement), then the probability would equal:

$P_m\: r\: dr = r\: dr\int \phi_m^{\text{*}}(\theta ')\Psi^{\text{*}}(r,\theta ')\Psi(r,\theta )\phi_m (\theta) d\theta 'd \theta = |\langle \phi_m|\Psi\rangle |^2 r\: dr.$
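The projection $$P_m = |\langle \phi_m|\Psi\rangle |^2$$ can be sketched numerically for an assumed angular function. Here the sample function $$\Psi(\theta) \propto \cos\theta$$ is an equal mixture of m = +1 and m = -1, so each of those outcomes should carry probability 1/2.

```python
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dtheta = theta[1] - theta[0]

# Assumed sample angular wavefunction: Psi(theta) = cos(theta)/sqrt(pi), normalized
Psi = np.cos(theta) / np.sqrt(np.pi)

def P_m(m):
    """P_m = |<phi_m|Psi>|^2 with phi_m = exp(i m theta)/sqrt(2 pi)."""
    phi_m = np.exp(1j * m * theta) / np.sqrt(2 * np.pi)
    c = np.sum(np.conj(phi_m) * Psi) * dtheta
    return abs(c)**2

print(round(P_m(1), 3), round(P_m(-1), 3), round(P_m(0), 3))   # -> 0.5 0.5 0.0
```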

## 6. Commuting Operators

Two or more properties F, G, J whose corresponding Hermitian operators **F**, **G**, **J** commute

$\textbf{FG}-\textbf{GF}=\textbf{FJ}-\textbf{JF}=\textbf{GJ}-\textbf{JG}= 0$

have complete sets of simultaneous eigenfunctions (the proof of this is treated in Appendix C). This means that the set of functions that are eigenfunctions of one of the operators can be formed into a set of functions that are also eigenfunctions of the others:

$\textbf{F}\phi_j = f_j\phi_j \Longrightarrow \textbf{G}\phi_j = g_j\phi_j \Longrightarrow \textbf{J}\phi_j=j_j\phi_j.$

Example $$\PageIndex{5}$$:

The $$p_x$$, $$p_y$$, and $$p_z$$ orbitals are eigenfunctions of the $$\textbf{L}^2$$ angular momentum operator with eigenvalues equal to $$L(L+1) \hbar^2 = 2 \hbar^2$$. Since $$\textbf{L}^2$$ and $$\textbf{L}_z$$ commute and act on the same (angle) coordinates, they possess a complete set of simultaneous eigenfunctions.

Although the $$p_x$$, $$p_y$$, and $$p_z$$ orbitals are not eigenfunctions of $$\textbf{L}_z$$, they can be combined to form three new orbitals: $$p_0 = p_z$$, $$p_1= \frac{1}{\sqrt{2}} [p_x + ip_y]$$, and $$p_{-1}= \frac{1}{\sqrt{2}} [p_x - ip_y]$$ that are still eigenfunctions of $$\textbf{L}^2$$ but are now eigenfunctions of $$\textbf{L}_z$$ also (with eigenvalues $$0\hbar$$, $$1\hbar$$, and $$-1\hbar$$, respectively).
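These statements can be verified with the standard 3×3 matrix representations of the angular momentum operators in the Cartesian {$$p_x, p_y, p_z$$} basis; the sketch below works in units where $$\hbar = 1$$.

```python
import numpy as np

hbar = 1.0
# Standard l = 1 angular momentum matrices in the Cartesian {p_x, p_y, p_z} basis
Lx = hbar * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Ly = hbar * np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Lz = hbar * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

# Every p function has L^2 = 2 hbar^2, and [L^2, L_z] = 0
assert np.allclose(L2, 2 * hbar**2 * np.eye(3))
assert np.allclose(L2 @ Lz - Lz @ L2, 0)

# Diagonalizing L_z yields the combinations p_{-1}, p_0, p_{+1}: the eigenvalues
# are m*hbar, and each eigenvector remains an eigenvector of L^2
evals, evecs = np.linalg.eigh(Lz)
assert np.allclose(evals, [-hbar, 0.0, hbar])
assert np.allclose(L2 @ evecs, 2 * hbar**2 * evecs)
```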

It should be mentioned that if two operators do not commute, they may still have some eigenfunctions in common, but they will not have a complete set of simultaneous eigenfunctions. For example, the $$L_z$$ and $$L_x$$ components of the angular momentum operator do not commute; however, a wavefunction with L=0 (i.e., an S-state) is an eigenfunction of both operators.

The fact that two operators commute is of great importance. It means that once a measurement of one of the properties is carried out, subsequent measurement of that property or of any of the other properties corresponding to mutually commuting operators can be made without altering the system's value of the properties measured earlier. Only subsequent measurement of another property whose operator does not commute with F, G, or J will destroy precise knowledge of the values of the properties measured earlier.

Example $$\PageIndex{6}$$:

Assume that an experiment has been carried out on an atom to measure its total angular momentum $$L^2$$. According to quantum mechanics, only values equal to $$L(L+1) \hbar^2$$ will be observed. Further assume, for the particular experimental sample subjected to observation, that values of $$L^2$$ equal to $$2\hbar^2$$ and $$0\hbar^2$$ were detected in relative amounts of 64% and 36%, respectively. This means that the atom's original wavefunction $$\psi$$ could be represented as:

$\psi = 0.8P + 0.6S,$

where P and S represent the P-state and S-state components of $$\psi$$. The squares of the amplitudes 0.8 and 0.6 give the 64 % and 36 % probabilities mentioned above.

Now assume that a subsequent measurement of the component of angular momentum along the lab-fixed z-axis is to be measured for that sub-population of the original sample found to be in the P-state. For that population, the wavefunction is now a pure P-function:

$\psi ' = P$

However, at this stage we have no information about how much of this $$\psi '$$ is of m = 1, 0, or -1, nor do we know how much of the 2p, 3p, 4p, ... np components this state contains.

Because the property corresponding to the operator $$\textbf{L}_z$$ is about to be measured, we express the above $$\psi '$$ in terms of the eigenfunctions of $$\textbf{L}_z:$$

$\psi ' = P = \sum\limits_{m=1,0,-1}C'_mP_m.$

When the measurement of $$L_z$$ is made, the values $$1\hbar, 0\hbar, and -1\hbar$$ will be observed with probabilities given by $$|C'_1|^2, |C'_0|^2, \text{and} |C'_{-1}|^2,$$ respectively. For that sub-population found to have, for example, $$L_z$$ equal to $$-1\hbar$$, the wavefunction then becomes

$\psi ' = P_{-1}.$

At this stage, we do not know how much of $$2p_{-1}, 3p_{-1}, 4p_{-1}, ... np_{-1}$$ this wavefunction contains. To probe this question another subsequent measurement of the energy (corresponding to the H operator) could be made. Doing so would allow the amplitudes in the expansion of the above $$\psi ' = P_{-1}$$

$\psi ' = P_{-1} = \sum\limits_n C''_n\: nP_{-1}$

to be found.

The kind of experiment outlined above allows one to find the content of each particular component of an initial sample's wavefunction. For example, the original wavefunction has $$0.64 |C''_n|^2 |C'_m|^2$$ fractional content of the various $$nP_m$$ functions. It is analogous to the other examples considered above because all of the operators whose properties are measured commute.

Example $$\PageIndex{7}$$:

Let us consider an experiment in which we begin with a sample (with wavefunction $$\psi$$) that is first subjected to measurement of $$L_z$$, then to measurement of $$L^2$$, and then of the energy. In this order, one would first find specific values (integer multiples of $$\hbar$$) of $$L_z$$ and one would express $$\psi$$ as

$\psi = \sum\limits_m D_m \psi_m.$

At this stage, the nature of each $$\psi_m$$ is unknown (e.g., the $$\psi_1$$ function can contain $$np_1, n'd_1, n''f_1,$$ etc. components); all that is known is that $$\psi_m$$ has $$m\hbar$$ as its $$L_z$$ value. Taking that sub-population ($$|D_m|^2$$ fraction) with a particular $$m\hbar$$ value for $$L_z$$ and subjecting it to subsequent measurement of $$L^2$$ requires the current wavefunction $$\psi_m$$ to be expressed as

$\psi_m = \sum\limits_L D_{L,m}\psi_{L,m}.$

When $$L^2$$ is measured, the value $$L(L+1)\hbar^2$$ will be observed with probability $$|D_{L,m}|^2$$, and the wavefunction for that particular sub-population will become

$\psi '' = \psi_{L,m}.$

At this stage, we know the value of L and of m, but we do not know the energy of the state. For example, we may know that the present sub-population has L=1, m=-1, but we have no knowledge (yet) of how much $$2p_{-1}, 3p_{-1}, ..., np_{-1}$$ the system contains.

To further probe the sample, the above sub-population with L=1 and m=-1 can be subjected to measurement of the energy. In this case, the function $$\psi_{1,-1}$$ must be expressed as

$\psi_{1,-1} = \sum\limits_nD_n'' nP_{-1}.$

When the energy measurement is made, the state $$nP_{-1}$$ will be found $$|D_n''|^2$$ fraction of the time.

The fact that $$\textbf{L}_z , \textbf{L}^2$$, and H all commute with one another (i.e., are mutually commutative) makes the series of measurements described in the above examples more straightforward than if these operators did not commute.

In the first experiment, the fact that they are mutually commutative allowed us to expand the 64% probable $$\textbf{L}^2$$ eigenstate with L=1 in terms of functions that were eigenfunctions of the operator for which measurement was about to be made without destroying our knowledge of the value of $$L^2$$. That is, because $$\textbf{L}^2$$ and $$\textbf{L}_z$$ can have simultaneous eigenfunctions, the L = 1 function can be expanded in terms of functions that are eigenfunctions of both $$\textbf{L}^2$$ and $$\textbf{L}_z$$. This, in turn, allowed us to find experimentally the sub-population that had, for example, $$-1\hbar$$ as its value of $$L_z$$ while retaining knowledge that the state remains an eigenstate of $$\textbf{L}^2$$ (the state at this time had L = 1 and m = -1 and was denoted $$P_{-1}$$). Then, when this $$P_{-1}$$ state was subjected to energy measurement, knowledge of the energy of the sub-population could be gained without giving up knowledge of the $$L^2$$ and $$L_z$$ information; upon carrying out said measurement, the state became $$nP_{-1}$$.

We therefore conclude that the act of carrying out an experimental measurement disturbs the system in that it causes the system's wavefunction to become an eigenfunction of the operator whose property is measured. If two properties whose corresponding operators commute are measured, the measurement of the second property does not destroy knowledge of the first property's value gained in the first measurement.

On the other hand, as detailed further in Appendix C, if the two properties (F and G) do not commute, the second measurement destroys knowledge of the first property's value. After the first measurement, $$\Psi$$ is an eigenfunction of F; after the second measurement, it becomes an eigenfunction of G. If the two non-commuting operators' properties are measured in the opposite order, the wavefunction first is an eigenfunction of G, and subsequently becomes an eigenfunction of F.

It is thus often said that 'measurements for operators that do not commute interfere with one another'. The simultaneous measurement of the position and momentum along the same axis provides an example of two measurements that are incompatible. The fact that x = x and $$p_x = -i\hbar \frac{\partial}{\partial x}$$ do not commute is straightforward to demonstrate:

$\left[x\left(-i\hbar \dfrac{\partial}{\partial x}\right) - \left( -i\hbar \dfrac{\partial}{\partial x}\right)x\right]\chi = i\hbar \chi \neq 0.$
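This commutator can be checked numerically by applying both operator orderings to an arbitrary test function on a grid; the Gaussian below is an illustrative choice, and the momentum operator is represented by central finite differences (ħ = 1 assumed).

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
hbar = 1.0
chi = np.exp(-x**2)                      # arbitrary smooth test function

def p_op(f):
    """p_x f = -i hbar df/dx, approximated by central differences on the grid."""
    return -1j * hbar * np.gradient(f, dx)

# [x, p_x] chi = x p(chi) - p(x chi); this should equal i hbar chi everywhere
comm = x * p_op(chi) - p_op(x * chi)
assert np.allclose(comm[1:-1], 1j * hbar * chi[1:-1], atol=1e-3)
```

The grid endpoints are excluded from the comparison because the one-sided difference stencils used there are less accurate.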

Operators that commute with the Hamiltonian and with one another form a particularly important class because each such operator permits each of the energy eigenstates of the system to be labelled with a corresponding quantum number. These operators are called symmetry operators. As will be seen later, they include angular momenta (e.g., $$L^2, L_z, S^2, S_z,$$ for atoms) and point group symmetries (e.g., planes and rotations about axes). Every operator that qualifies as a symmetry operator provides a quantum number with which the energy levels of the system can be labeled.

## 7: Expectation Values

If a property F is measured for a large number of systems all described by the same $$\Psi$$, the average value $$\langle F\rangle$$ for such a set of measurements can be computed as

$\langle F\rangle = \langle \Psi |\textbf{F}|\Psi\rangle .$

Expanding $$\Psi$$ in terms of the complete set of eigenstates of F allows $$\langle F\rangle$$ to be rewritten as follows:

$\langle F\rangle = \sum\limits_jf_j|\langle \phi_j|\Psi\rangle |^2,$

which clearly expresses $$\langle F\rangle$$ as the sum, over all eigenvalues, of the probability $$P_j$$ of obtaining the particular value $$f_j$$ when the property F is measured multiplied by the value $$f_j$$ of the property in such a measurement. This same result can be expressed in terms of the density matrix $$D_{i,j}$$ of the state $$\Psi$$ defined above as:

$\langle F\rangle = \sum\limits_{i,j} \langle \Psi |\phi_i\rangle \langle \phi_i|\textbf{F}|\phi_j\rangle \langle \phi_j|\Psi\rangle = \sum\limits_{i,j}C_i^{\text{*}}\langle \phi_i|\textbf{F}|\phi_j\rangle C_j = \sum\limits_{i,j}D_{j,i}\langle \phi_i|\textbf{F}|\phi_j\rangle = Tr(DF).$

Here, DF represents the matrix product of the density matrix $$D_{j,i}$$ and the matrix representation $$F_{i,j} = \langle \phi_i|\textbf{F}|\phi_j\rangle$$ of the F operator, both taken in the {$$\phi_j$$} basis, and Tr represents the matrix trace operation.
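The equality of $$\sum_j f_j P_j$$ and Tr(DF) is easy to verify numerically; the eigenvalues and the random state below are arbitrary illustrative choices, with F taken diagonal in its own eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: F is diagonal in its eigenbasis {phi_j} with eigenvalues f_j,
# and Psi is a normalized state with coefficients C_j = <phi_j|Psi>
f = np.array([1.0, 2.0, 4.0])                 # assumed eigenvalues of F
C = rng.normal(size=3) + 1j * rng.normal(size=3)
C /= np.linalg.norm(C)                        # normalize so sum |C_j|^2 = 1

F = np.diag(f)                                # matrix of F in the {phi_j} basis
D = np.outer(C, C.conj())                     # density matrix D_{j,i} = C_i^* C_j

expect_sum = np.sum(f * np.abs(C)**2)         # sum_j f_j |<phi_j|Psi>|^2
expect_trace = np.trace(D @ F).real           # Tr(DF)
assert np.isclose(expect_sum, expect_trace)
```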

As mentioned at the beginning of this Section, this set of rules and their relationships to experimental measurements can be quite perplexing. The structure of quantum mechanics embodied in the above rules was developed in light of new scientific observations (e.g., the photoelectric effect, diffraction of electrons) that could not be interpreted within the conventional pictures of classical mechanics. Throughout its development, these and other experimental observations placed severe constraints on the structure of the equations of the new quantum mechanics as well as on their interpretations. For example, the observation of discrete lines in the emission spectra of atoms gave rise to the idea that the atom's electrons could exist with only certain discrete energies and that light of specific frequencies would be given off as transitions among these quantized energy states took place.

Even with the assurance that quantum mechanics has firm underpinnings in experimental observations, students learning this subject for the first time often encounter difficulty. Therefore, it is useful to again examine some of the model problems for which the Schrödinger equation can be exactly solved and to learn how the above rules apply to such concrete examples.

The examples examined earlier in this Chapter and those given in the Exercises and Problems serve as useful models for chemically important phenomena: electronic motion in polyenes, in solids, and in atoms as well as vibrational and rotational motions. Their study thus far has served two purposes; it allowed the reader to gain some familiarity with applications of quantum mechanics and it introduced models that play central roles in much of chemistry. Their study now is designed to illustrate how the above seven rules of quantum mechanics relate to experimental reality.

## An Example Illustrating Several of the Fundamental Rules

The physical significance of the time independent wavefunctions and energies treated in Section II as well as the meaning of the seven fundamental points given above can be further illustrated by again considering the simple two-dimensional electronic motion model.

If the electron were prepared in the eigenstate corresponding to $$n_x =1, n_y = 2,$$ its total energy would be

$E = \pi^2 \dfrac{\hbar^2}{2m}\left[ \dfrac{1^2}{L_x^2} + \dfrac{2^2}{L_y^2} \right].$
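In assumed units where ħ = m = 1, this energy formula can be sketched as a short function; the box lengths are arbitrary parameters, and unequal side lengths lift the degeneracy between the (1,2) and (2,1) states.

```python
import numpy as np

hbar, m = 1.0, 1.0                      # assumed units

def E(nx, ny, Lx=1.0, Ly=1.0):
    """Energy of the (nx, ny) eigenstate of the 2-D box (assumed unit lengths)."""
    return np.pi**2 * hbar**2 / (2 * m) * (nx**2 / Lx**2 + ny**2 / Ly**2)

# A square box leaves (1,2) and (2,1) degenerate; a rectangular box does not
assert np.isclose(E(1, 2), E(2, 1))
assert not np.isclose(E(1, 2, Lx=1.0, Ly=1.3), E(2, 1, Lx=1.0, Ly=1.3))
```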

If the energy were experimentally measured, this and only this value would be observed, and this same result would hold for all time as long as the electron is undisturbed.

If an experiment were carried out to measure the momentum of the electron along the y-axis, according to the second postulate above, only values equal to the eigenvalues of $$-i\hbar \dfrac{\partial}{\partial y}$$ could be observed. The $$p_y$$ eigenfunctions (i.e., functions that obey $$\textbf{p}_y F = -i\hbar\dfrac{\partial F}{\partial y} = c F$$) are of the form

$\sqrt{\frac{1}{L_y}}e^{ik_y y},$

where the momentum $$\hbar k_y$$ can achieve any value; the $$\sqrt{\frac{1}{L_y}}$$ factor is used to normalize the eigenfunctions over the range $$0 \leq y \leq L_y.$$ It is useful to note that the y-dependence of $$\psi$$ as expressed above $$\left[ e^{\frac{i2\pi y}{L_y}} - e^{\frac{-i2\pi y}{L_y}} \right]$$ is already written in terms of two such eigenstates of $$-i\hbar \frac{\partial}{\partial y}:$$

$-i\hbar\dfrac{\partial}{\partial y}\left( e^{\dfrac{i2\pi y}{L_y}}\right) = \dfrac{2\pi\hbar}{L_y} \left(e^{\dfrac{i2\pi y}{L_y}}\right), \ \text{and}$

$-i\hbar\dfrac{\partial}{\partial y}\left( e^{\dfrac{-i2\pi y}{L_y}}\right) = \dfrac{-2\pi\hbar}{L_y} \left( e^{\dfrac{-i2\pi y}{L_y}} \right).$

Thus, the expansion of $$\psi$$ in terms of eigenstates of the property being measured dictated by the fifth postulate above is already accomplished. The only two terms in this expansion correspond to momenta along the y-axis of $$\frac{2\pi\hbar}{L_y} \ \text{and} \ -\frac{2\pi\hbar}{L_y};$$ the probabilities of observing these two momenta are given by the squares of the expansion coefficients of $$\psi$$ in terms of the normalized eigenfunctions of $$-i\hbar \frac{\partial}{\partial y}$$. The functions $$\sqrt{\frac{1}{L_y}}\left( e^{\frac{i2\pi y}{L_y}}\right) \ \text{and} \ \sqrt{\frac{1}{L_y}}\left( e^{\frac{-i2\pi y}{L_y}}\right)$$ are such normalized eigenfunctions; the expansion coefficients of these functions in $$\psi$$ are $$\frac{1}{\sqrt{2}}$$ and $$-\frac{1}{\sqrt{2}},$$ respectively. Thus the momentum $$\frac{2\pi\hbar}{L_y}$$ will be observed with probability $$\left( \frac{1}{\sqrt{2}}\right)^2 = \frac{1}{2}$$ and $$-\frac{2\pi\hbar}{L_y}$$ will be observed with probability $$\left| -\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2}.$$ If the momentum along the x-axis were experimentally measured, again only two values $$\frac{\pi\hbar}{L_x} \ \text{and} \ -\frac{\pi\hbar}{L_x}$$ would be found, each with a probability of $$\frac{1}{2}$$.

The average value of the momentum along the x-axis can be computed either as the sum of the probabilities multiplied by the momentum values:

$\langle p_x\rangle = \frac{1}{2}\left[ \frac{\pi\hbar}{L_x} \right] + \frac{1}{2}\left[ -\frac{\pi\hbar}{L_x} \right] = 0,$

or as the so-called expectation value integral shown in the seventh postulate:

$\langle p_x\rangle = \iint \psi^{\text{*}} \left(-i\hbar \dfrac{\partial \psi}{\partial x}\right) dx\, dy.$

Inserting the full expression for $$\psi$$(x,y) and integrating over x and y from 0 to L$$_x \ \text{and} \ L_y,$$ respectively, this integral is seen to vanish. This means that the result of a large number of measurements of p$$_x$$ on electrons, each described by the same $$\psi$$, will yield zero net momentum along the x-axis; half of the measurements will yield positive momenta and half will yield negative momenta of the same magnitude.
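This cancellation can be confirmed numerically by evaluating the expectation-value integral on a grid. The box lengths $$L_x = L_y = 1$$ and $$\hbar = 1$$ below are hypothetical values used only for illustration:

```python
import numpy as np

# Numerical check that <p_x> vanishes for
# psi(x,y) = sqrt(2/Lx) sqrt(2/Ly) sin(pi x/Lx) sin(2 pi y/Ly).
# Lx = Ly = 1 and hbar = 1 are hypothetical illustrative choices.
hbar = 1.0
Lx = Ly = 1.0
x = np.linspace(0.0, Lx, 2001)
y = np.linspace(0.0, Ly, 2001)
X, Y = np.meshgrid(x, y, indexing="ij")
psi = (np.sqrt(2 / Lx) * np.sqrt(2 / Ly)
       * np.sin(np.pi * X / Lx) * np.sin(2 * np.pi * Y / Ly))

dpsi_dx = np.gradient(psi, x, axis=0)              # finite-difference d(psi)/dx
integrand = np.conj(psi) * (-1j * hbar) * dpsi_dx  # psi* (p_x psi)
px_avg = integrand.sum() * (x[1] - x[0]) * (y[1] - y[0])
# px_avg is zero to within the grid's numerical error
```

The integrand $$\psi \, \partial\psi/\partial x$$ is antisymmetric about the middle of the box along x, which is why the integral vanishes exactly.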

The time evolution of the full wavefunction given above for the n$$_x$$=1, n$$_y$$=2 state is easy to express because this $$\psi$$ is an energy eigenstate:

$\Psi(x,y,t) = \psi(x,y) e^{\dfrac{-iEt}{\hbar}}.$
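Because the phase factor $$e^{-iEt/\hbar}$$ has unit modulus, the probability density of an energy eigenstate never changes in time. A two-line check, in which $$\hbar = 1$$, $$E = 3.7$$, and the sampled value of $$\psi$$ are arbitrary illustrative numbers:

```python
import numpy as np

# |exp(-iEt/hbar)| = 1, so |Psi|^2 = |psi|^2 at every time t.
# hbar = 1, E = 3.7, and psi0 = 0.83 are arbitrary illustrative values.
hbar, E = 1.0, 3.7
psi0 = 0.83                      # value of psi(x, y) at some fixed point
t = np.linspace(0.0, 20.0, 500)
Psi = psi0 * np.exp(-1j * E * t / hbar)
density = np.abs(Psi) ** 2       # constant, equal to psi0**2 for all t
```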

If, on the other hand, the electron had been prepared in a state $$\psi(x,y)$$ that is not a pure eigenstate (i.e., cannot be expressed as a single energy eigenfunction), then the time evolution is more complicated. For example, if at t=0 $$\psi$$ were of the form

$\psi = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}}\left[ \text{a}\: sin \left( \dfrac{2\pi x}{L_x} \right) sin \left( \dfrac{1\pi y}{L_y} \right) + \text{b} \: sin \left( \dfrac{1\pi x}{L_x} \right) sin \left( \dfrac{2\pi y}{L_y} \right) \right],$

with a and b both real numbers whose squares give the probabilities of finding the system in the respective states, then the time evolution operator $$e^{\dfrac{-i\textbf{H}t}{\hbar}}$$ applied to $$\psi$$ would yield the following time dependent function:

$\Psi = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} \left[ a\: e^{\dfrac{-iE_{2,1}t}{\hbar}} sin\left( \dfrac{2\pi x}{L_x} \right) sin\left( \dfrac{1\pi y}{L_y} \right) + b\: e^{\dfrac{-iE_{1,2}t}{\hbar}} sin \left( \dfrac{1\pi x}{L_x} \right) sin\left( \dfrac{2\pi y}{L_y} \right) \right],$

where

$E_{2,1} = \pi^2\dfrac{\hbar^2}{2m}\left[ \dfrac{2^2}{L_x^2} + \dfrac{1^2}{L_y^2} \right], \text{and}$

$E_{1,2} = \pi^2\dfrac{\hbar^2}{2m}\left[ \dfrac{1^2}{L_x^2} + \dfrac{2^2}{L_y^2} \right].$
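These two formulas are straightforward to evaluate; the sketch below uses $$\hbar = m = 1$$ and hypothetical box lengths $$L_x = 2, L_y = 1$$ (chosen only for illustration), and shows that $$E_{2,1}$$ and $$E_{1,2}$$ coincide only when $$L_x = L_y$$:

```python
import numpy as np

# Evaluate E_{2,1} and E_{1,2} from the formulas above.  hbar = m = 1 and
# the box lengths Lx = 2.0, Ly = 1.0 are hypothetical illustrative values.
hbar = m = 1.0
Lx, Ly = 2.0, 1.0

def energy(nx, ny):
    # E_{nx,ny} = pi^2 hbar^2 / (2 m) [ nx^2/Lx^2 + ny^2/Ly^2 ]
    return np.pi**2 * hbar**2 / (2 * m) * (nx**2 / Lx**2 + ny**2 / Ly**2)

E21, E12 = energy(2, 1), energy(1, 2)
dE = E21 - E12   # nonzero here; the two states are degenerate only if Lx = Ly
```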

The probability of finding $$E_{2,1}$$ if an experiment were carried out to measure energy would be $$|a\: e^{\dfrac{-iE_{2,1}t}{\hbar}}|^2 = |a|^2$$; the probability for finding $$E_{1,2}$$ would be $$|b|^2$$. The spatial probability distribution for finding the electron at points x,y will, in this case, be given by:

$|\Psi|^2 = |a|^2 |\psi_{2,1}|^2 + |b|^2 |\psi_{1,2}|^2 + 2 \:ab\: \psi_{2,1} \psi_{1,2} cos\left( \dfrac{\Delta Et}{\hbar} \right),$

where $$\Delta E$$ is $$E_{2,1} - E_{1,2},$$

$\psi_{2,1} = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} sin\left(\dfrac{2\pi x}{L_x}\right)sin\left(\dfrac{1\pi y}{L_y}\right),$

and

$\psi_{1,2} = \sqrt{\dfrac{2}{L_x}}\sqrt{\dfrac{2}{L_y}} sin\left(\dfrac{1\pi x}{L_x}\right)sin\left(\dfrac{2\pi y}{L_y}\right).$

This spatial distribution is not stationary but evolves in time. So in this case, one has a wavefunction that is not a pure eigenstate of the Hamiltonian (one says that $$\Psi$$ is a superposition state or a non-stationary state) whose average energy remains constant $$(E=E_{2,1} |a|^2 + E_{1,2} |b|^2)$$ but whose spatial distribution changes with time.
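The oscillating cross term can be seen directly by evaluating $$|\Psi|^2$$ at one fixed point over time. The sketch below assumes $$\hbar = m = 1$$, hypothetical box lengths $$L_x = 2, L_y = 1$$ (so that $$\Delta E \neq 0$$), and $$a = b = \frac{1}{\sqrt{2}}$$:

```python
import numpy as np

# |Psi|^2 at a fixed point (x0, y0) oscillates at angular frequency
# |Delta E|/hbar, while the average energy a^2 E21 + b^2 E12 is constant.
hbar = m = 1.0
Lx, Ly = 2.0, 1.0           # hypothetical; Lx != Ly keeps Delta E nonzero
a = b = 1.0 / np.sqrt(2.0)

def psi_nm(nx, ny, x, y):
    return (np.sqrt(2 / Lx) * np.sqrt(2 / Ly)
            * np.sin(nx * np.pi * x / Lx) * np.sin(ny * np.pi * y / Ly))

E21 = np.pi**2 * hbar**2 / (2 * m) * (4 / Lx**2 + 1 / Ly**2)
E12 = np.pi**2 * hbar**2 / (2 * m) * (1 / Lx**2 + 4 / Ly**2)
dE = E21 - E12

x0, y0 = 0.7, 0.3           # arbitrary sample point inside the box
t = np.linspace(0.0, 4 * np.pi / abs(dE), 400)   # two beat periods
density = (a**2 * psi_nm(2, 1, x0, y0)**2
           + b**2 * psi_nm(1, 2, x0, y0)**2
           + 2 * a * b * psi_nm(2, 1, x0, y0) * psi_nm(1, 2, x0, y0)
             * np.cos(dE * t / hbar))
# density stays positive but beats up and down at frequency |dE|/hbar
```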

Although it might seem that most spectroscopic measurements would be designed to prepare the system in an eigenstate (e.g., by exposing the sample to light whose frequency matches that of a particular transition), such need not be the case. For example, if very short laser pulses are employed, the Heisenberg uncertainty broadening $$(\Delta E\Delta t\geq \hbar)$$ causes the light impinging on the sample to be very non-monochromatic (e.g., a pulse time of $$1 \times 10^{-12}$$ sec corresponds to a frequency spread of approximately $$5 \text{ cm}^{-1}$$). This, in turn, removes any possibility of preparing the system in a particular quantum state with a resolution of better than $$30 \text{ cm}^{-1}$$ because the system experiences time-oscillating electromagnetic fields whose frequencies range over at least $$5 \text{ cm}^{-1}$$.
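The quoted spread of roughly $$5 \text{ cm}^{-1}$$ follows from simple arithmetic, $$\Delta E \approx \hbar/\Delta t$$ converted to wavenumbers by dividing by $$hc$$:

```python
# Order-of-magnitude check of the broadening estimate quoted above:
# Delta E ~ hbar/Delta t, converted to wavenumbers by dividing by h*c.
hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
c = 2.99792458e10        # speed of light in cm/s, so the result is in cm^-1
dt = 1.0e-12             # s, the pulse duration used in the text

dE = hbar / dt           # J
d_nu = dE / (h * c)      # ~5.3 cm^-1, matching the ~5 cm^-1 quoted
```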

Essentially all of the model problems that have been introduced in this Chapter to illustrate the application of quantum mechanics constitute widely used, highly successful 'starting-point' models for important chemical phenomena. As such, it is important that students retain working knowledge of the energy levels, wavefunctions, and symmetries that pertain to these models.

Thus far, exactly soluble model problems that represent one or more aspects of an atom or molecule's quantum-state structure have been introduced and solved. For example, electronic motion in polyenes was modeled by a particle-in-a-box. The harmonic oscillator and rigid rotor were introduced to model vibrational and rotational motion of a diatomic molecule.

As chemists, we are used to thinking of electronic, vibrational, rotational, and translational energy levels as being (at least approximately) separable. On the other hand, we are aware that situations exist in which energy can flow from one such degree of freedom to another (e.g., electronic-to-vibrational energy flow occurs in radiationless relaxation and vibration-rotation couplings are important in molecular spectroscopy). It is important to understand how the simplifications that allow us to focus on electronic or vibrational or rotational motion arise, how they can be obtained from a first-principles derivation, and what their limitations and range of accuracy are.