# 1.2: The Schrödinger Equation and Its Components


It has been well established that electrons moving in atoms and molecules do not obey the classical Newton equations of motion. People long ago tried to treat electronic motion classically, and found that features observed clearly in experimental measurements simply were not consistent with such a treatment. Attempts were made to supplement the classical equations with conditions that could be used to rationalize such observations. For example, early workers required that the angular momentum \(\textbf{L} = \textbf{r} \times \textbf{p}\) be allowed to assume only integer multiples of \(h/2\pi\) (which is often abbreviated as \(\hbar\)), which can be shown to be equivalent to the Bohr postulate \(n \lambda = 2\pi r\). However, until scientists realized that a new set of laws, those of quantum mechanics, applied to light, microscopic particles, a wide gulf existed between laboratory observations of molecule-level phenomena and the equations used to describe such behavior.

Quantum mechanics is cast in a language that is not familiar to most students of chemistry who are examining the subject for the first time. Its mathematical content and how it relates to experimental measurements both require a great deal of effort to master. With these thoughts in mind, I have organized this material in a manner that first provides a brief introduction to the two primary constructs of quantum mechanics: operators and wave functions that obey a Schrödinger equation. Next, I demonstrate the application of these constructs to several chemically relevant model problems. By learning the solutions of the Schrödinger equation for a few model systems, the student can better appreciate the treatment of the fundamental postulates of quantum mechanics as well as their relation to experimental measurement, for which the wave functions of the known model problems offer important interpretations.

## Operators

*Each physically measurable quantity has a corresponding operator. The **eigenvalues** of the operator tell the only values of the corresponding physical property that can be observed in an experimental probe of that property. Some operators have a continuum of eigenvalues, but others have only discrete, quantized eigenvalues.*

Any experimentally measurable physical quantity \(F\) (e.g., energy, dipole moment, orbital angular momentum, spin angular momentum, linear momentum, kinetic energy) has a classical mechanical expression in terms of the Cartesian positions \(\{q_j\}\) and momenta \(\{p_j\}\) of the particles that comprise the system of interest. Each such classical expression is assigned a corresponding quantum mechanical operator \(\textbf{F}\) formed by replacing the \(p_j\) in the classical form by the differential operator

\[-i\hbar \dfrac{\partial}{\partial q_j} \tag{1.1}\]

and leaving the coordinates \(q_j\) that appear in \(F\) untouched. If one is working with a classical quantity expressed in terms of curvilinear coordinates, it is important that this quantity first be rewritten in Cartesian coordinates. The replacement of the Cartesian momenta by \(-i\hbar\dfrac{\partial}{\partial q_j}\) can then be made and the resultant expression can be transformed back to the curvilinear coordinates if desired.

**Example 1.2.1**

For example, the classical energy of \(N\) particles (with masses \(m_l\)) moving in a potential field containing both quadratic and linear coordinate-dependence can be written as \[F=\sum_{l=1}^N { \left(\dfrac{p_l^2}{2m_l} + \dfrac{1}{2} k(q_l-q_l^0)^2 + L(q_l-q_l^0)\right)}. \tag{1.2}\] The quantum mechanical operator associated with this \(F\) is \[\textbf{F}=\sum_{l=1}^N \left(- \dfrac{\hbar^2}{2m_l} \dfrac{\partial^2}{\partial{q_l^2}} + \dfrac{1}{2} k(q_l-q_l^0)^2 + L(q_l-q_l^0) \right).\tag{1.3}\] Such an operator would occur when, for example, one describes the sum of the kinetic energies of a collection of particles (the first term in Eq. 1.3), plus the sum of "Hooke's Law" parabolic potentials (the second term in Eq. 1.3), and the interactions of the particles with an externally applied field (the last term in Eq. 1.3) whose potential energy varies linearly as the particles move away from their equilibrium positions \(\{q_l^0\}\).
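As a check on this mapping, one can verify symbolically that, for a one-particle version of Eq. 1.3 with \(L = 0\) and \(q^0 = 0\), a Gaussian trial function is an eigenfunction of \(\textbf{F}\) with eigenvalue \(\tfrac{\hbar}{2}\sqrt{k/m}\). A minimal sympy sketch (the trial function and symbols are illustrative choices, not part of the text):

```python
import sympy as sp

q, hbar, m, k = sp.symbols('q hbar m k', positive=True)
alpha = sp.sqrt(k * m) / hbar
psi = sp.exp(-alpha * q**2 / 2)      # illustrative Gaussian trial function

# one-particle version of Eq. 1.3 with L = 0 and q^0 = 0 applied to psi
F_psi = -hbar**2 / (2 * m) * sp.diff(psi, q, 2) + sp.Rational(1, 2) * k * q**2 * psi

# psi is an eigenfunction: F psi = (hbar/2) sqrt(k/m) psi
assert sp.simplify(F_psi / psi - hbar * sp.sqrt(k / m) / 2) == 0
```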

Let us try more examples. The sum of the \(z\)-components of angular momenta (recall that vector angular momentum \(\textbf{L}\) is defined as \(\textbf{L} = \textbf{r} \times \textbf{p}\)) of a collection of \(N\) particles has the following classical expression

\[F=\sum_{j=1}^N (x_jp_{yj} - y_jp_{xj}),\tag{1.4}\]

and the corresponding operator is

\[\textbf{F}=-i\hbar \sum_{j=1}^N (x_j\dfrac{\partial}{\partial{y_j}} - y_j\dfrac{\partial}{\partial{x_j}}). \tag{1.5}\]

If one transforms these Cartesian coordinates and derivatives into polar coordinates, the above expression reduces to

\[\textbf{F} = -i \hbar \sum_{j=1}^N \dfrac{\partial}{\partial{\phi_j}} \tag{1.6}\]

where \(\phi_j\) is the azimuthal angle of the \(j^{th}\) particle.
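One can confirm this reduction symbolically for a single particle: applying the Cartesian form of \(\textbf{L}_z\) to a function \(\exp(im\phi)\), with \(\phi = \arctan(y/x)\), returns \(m\hbar\) times the function, exactly what the polar form \(-i\hbar\,\partial/\partial\phi\) gives. A brief sympy sketch (illustrative, not from the text):

```python
import sympy as sp

x, y, hbar, m = sp.symbols('x y hbar m', real=True)
phi = sp.atan2(y, x)            # azimuthal angle in terms of x and y
f = sp.exp(sp.I * m * phi)      # trial function exp(i m phi)

# Cartesian form of L_z applied to f
Lz_f = -sp.I * hbar * (x * sp.diff(f, y) - y * sp.diff(f, x))

# result equals m*hbar*f, i.e. what -i*hbar*d/dphi gives on exp(i m phi)
assert sp.simplify(Lz_f / f - m * hbar) == 0
```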

The \(x\)-component of the dipole moment for a collection of \(N\) particles has a classical form of

\[F= \sum_{j=1}^N Z_je \, x_j,\tag{1.7}\]

for which the quantum operator is

\[\textbf{F}= \sum_{j=1}^N Z_je \, x_j, \tag{1.8}\]

where \(Z_je\) is the charge on the \(j^{th}\) particle. Notice that in this case, classical and quantum forms are identical because \(\textbf{F}\) contains no momentum operators.

Remember, the mapping from \(F\) to \(\textbf{F}\) is straightforward only in terms of Cartesian coordinates. To map a classical function \(F\), given in terms of curvilinear coordinates (even if they are orthogonal), into its quantum operator is not at all straightforward. The mapping can always be done in terms of Cartesian coordinates after which a transformation of the resulting coordinates and differential operators to a curvilinear system can be performed.

The relationship of these quantum mechanical operators to experimental measurement lies in the eigenvalues of the quantum operators. Each such operator has a corresponding eigenvalue equation

\[\textbf{F} \chi_j = \alpha_j \chi_j \tag{1.9}\]

in which the \(\chi_j\) are called eigenfunctions and the (scalar numbers) \(\alpha_j\) are called eigenvalues. All such eigenvalue equations are posed in terms of a given operator (\(\textbf{F}\) in this case) and those functions \(\{\chi_j\}\) that \(\textbf{F}\) acts on to produce the function back again but multiplied by a constant (the eigenvalue). Because the operator \(\textbf{F}\) usually contains differential operators (coming from the momentum), these equations are differential equations. Their solutions \(\chi_j\) depend on the coordinates that \(\textbf{F}\) contains as differential operators. An example will help clarify these points. The differential operator \(d/dy\) acts on what functions (of \(y\)) to generate the same function back again but multiplied by a constant? The answer is functions of the form \(\exp(ay)\) since

\[\dfrac{d (\exp(ay))}{dy} = a \exp(ay). \tag{1.10}\]

So, we say that \(\exp(ay)\) is an eigenfunction of \(d/dy\) and \(a\) is the corresponding eigenvalue.
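This eigenfunction relationship is easy to confirm symbolically. As a contrast, the sketch below also checks that \(\cos(ay)\) is not an eigenfunction of \(d/dy\) (a hypothetical counterexample added for illustration):

```python
import sympy as sp

y, a = sp.symbols('y a')

f = sp.exp(a * y)
# d/dy exp(a y) = a exp(a y): eigenfunction with eigenvalue a
assert sp.simplify(sp.diff(f, y) - a * f) == 0

# by contrast, cos(a y) is not an eigenfunction: the ratio
# (d g/dy)/g = -a tan(a y) still depends on y, so it is not a constant
g = sp.cos(a * y)
assert sp.simplify(sp.diff(g, y) / g) != sp.simplify(a)
```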

As I will discuss in more detail shortly, the eigenvalues of the operator \(\textbf{F}\) tell us the only values of the physical property corresponding to the operator \(\textbf{F}\) that can be observed in a laboratory measurement. Some \(\textbf{F}\) operators that we encounter possess eigenvalues that are discrete or quantized. For such properties, laboratory measurement will result in only those discrete values. Other \(\textbf{F}\) operators have eigenvalues that can take on a continuous range of values; for these properties, laboratory measurement can give any value in this continuous range.

An important characteristic of the quantum mechanical operators formed as discussed above for any measurable property is the fact that they are Hermitian. An operator \(\textbf{F}\) that acts on coordinates denoted \(q\) is Hermitian if

\[ \int \phi_I^* \textbf{F} \phi_J dq = \int [\textbf{F} \phi_I]^* \phi_J dq \tag{1.11}\]

or, equivalently,

\[ \int \phi_I^* \textbf{F} \phi_J dq = [\int \phi_J^* \textbf{F} \phi_I dq]^* \tag{1.12}\]

for any functions \(\phi_I(q)\) and \(\phi_J(q)\). It is easy to show that the operator corresponding to any power of the coordinate \(q\) itself obeys this identity, but what about the corresponding momentum operator \(-i\hbar \dfrac{\partial}{\partial q}\)? Let's take the left-hand side of the above identity for

\[\textbf{F} = -i \hbar \dfrac{\partial}{\partial q} \tag{1.13}\]

and rewrite it using integration by parts as follows:

\[\int_{-\infty}^{+\infty}\phi_I^*(q) [-i\hbar \frac{\partial \phi_J(q)}{\partial q}]dq=-i\hbar \int_{-\infty}^{+\infty}\phi_I^*(q) [\frac{\partial \phi_J(q)}{\partial q}]dq\\=-i\hbar \{-\int_{-\infty}^{+\infty}\frac{\partial \phi_I^*(q)}{\partial q}\phi_J(q) dq+\phi_I^*(\infty)\phi_J(\infty)-\phi_I^*(-\infty)\phi_J(-\infty)\} \]

If the functions \(\phi_I(q)\) and \(\phi_J(q)\) are assumed to vanish at \(\pm\infty\), the right-hand side of this equation can be rewritten as

\[i\hbar \int_{-\infty}^{+\infty}\frac{\partial\phi_I^*(q)}{\partial q}\phi_J(q) dq=\int_{-\infty}^{\infty} [-i\hbar \frac{\partial \phi_I(q)}{\partial q}]^*\phi_J(q) dq =[\int_{-\infty}^{\infty}\phi_J^*(q) [-i\hbar \frac{\partial \phi_I(q)}{\partial q}]dq]^* .\]

So, \(-i\hbar \dfrac{\partial}{\partial q}\) is indeed a Hermitian operator. Moreover, using the fact that \(q_j\) and \(p_j\) are Hermitian, one can show that any operator \(\textbf{F}\) formed using the rules described above is also Hermitian.
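A simple way to see the Hermiticity of \(-i\hbar\,\partial/\partial q\) numerically is to discretize the derivative on a grid: the central-difference matrix \(D\) is antisymmetric (for functions that vanish at the grid ends, which plays the role of vanishing at \(\pm\infty\)), so \(-i\hbar D\) equals its own conjugate transpose. A minimal numpy sketch (grid size and spacing are arbitrary choices):

```python
import numpy as np

# discretize q on a grid; a central-difference matrix D approximates d/dq
n, dq = 200, 0.05
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dq)

# D is antisymmetric (D^T = -D), so p = -i*hbar*D satisfies p† = p
hbar = 1.0
p = -1j * hbar * D
assert np.allclose(p, p.conj().T)
```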

One thing you need to be aware of concerning the eigenfunctions of any Hermitian operator is that each pair of eigenfunctions \(\psi_n\) and \(\psi_{n'}\) belonging to different eigenvalues displays a property termed orthonormality. This property means that not only may \(\psi_n\) and \(\psi_{n'}\) each be normalized so their probability densities integrate to unity

\[1= \int |\psi_n|^2 dx = \int |\psi_{n’}|^2 dx,\tag{1.14}\]

but they are also orthogonal to each other

\[0 = \int \psi_n^* \psi_{n’} dx \tag{1.15}\]

where the complex conjugate * of the first function appears only when the \(\psi\) solutions contain imaginary components (e.g., the functions \(\exp(im\phi)\), which are eigenfunctions of the \(z\)-component of angular momentum \(-i \hbar \dfrac{\partial}{\partial \phi}\)). The orthogonality condition can be viewed as similar to the condition of two vectors \(\textbf{v}_1\) and \(\textbf{v}_2\) being perpendicular, in which case their scalar (sometimes called dot) product vanishes \(\textbf{v}_1 \cdot \textbf{v}_2 = 0\). I want you to keep this property in mind because you will soon see that it is a characteristic of all eigenfunctions of any Hermitian operator.

It is common to write the integrals displaying the normalization and orthogonality conditions in the following so-called Dirac notation

\[1 = \langle \psi_n | \psi_n\rangle \tag{1.16}\]

and

\[ 0 = \langle \psi_n | \psi_{n'}\rangle ,\tag{1.17}\]

where the \(| \rangle\) and \(\langle |\) symbols represent \(\psi\) and \(\psi^*\), respectively, and putting the two together in the \(\langle | \rangle\) construct implies integration over the variables that \(\psi\) depends upon. The Hermitian character of an operator \(\textbf{F}\) means that this operator forms a Hermitian matrix when placed between pairs of functions and the coordinates are integrated over. For example, the matrix representation of an operator \(\textbf{F}\) when acting on a set of functions denoted {\(\phi_J\)} is:

\[F_{I,J} = \langle \phi_I | \textbf{F}|\phi_J\rangle = \int \phi_I^* \textbf{F} \phi_J dq.\tag{1.18}\]

For all of the operators formed following the rules stated earlier, one finds that these matrices have the following property:

\[F_{I,J} = F_{J,I}^* \tag{1.19}\]

which makes the matrices what we call Hermitian. If the functions upon which \(\textbf{F}\) acts and \(\textbf{F}\) itself have no imaginary parts (i.e., are real), then the matrices turn out to be symmetric:

\[F_{I,J} = F_{J,I} . \tag{1.20}\]

The importance of the Hermiticity or symmetry of these matrices lies in the fact that it can be shown that such matrices have all real (i.e., not complex) eigenvalues and have eigenvectors that are orthogonal (or, in the case of degenerate eigenvalues, can be chosen to be orthogonal). Let’s see how these conditions follow from the Hermiticity property.

If the operator \(\textbf{F}\) has two eigenfunctions \(\psi_1\) and \(\psi_2\) having eigenvalues \(\lambda_1\) and \(\lambda_2\), respectively, then

\[\textbf{F} \psi_1 = \lambda_1 \psi_1. \tag{1.21}\]

Multiplying this equation on the left by \(\psi_2^*\) and integrating over the coordinates (denoted \(q\)) that \(\textbf{F}\) acts on gives

\[ \int \psi_2^*\textbf{F} \psi_1 dq = \lambda_1 \int \psi_2^*\psi_1 dq. \tag{1.22}\]

The Hermitian nature of \(\textbf{F}\) allows us to also write

\[ \int \psi_2^*\textbf{F} \psi_1 dq = \int ( \textbf{F} \psi_2)^* \psi_1 dq, \tag{1.23}\]

which, because

\[\textbf{F} \psi_2 = \lambda_2 \psi_2 \tag{1.24}\]

gives

\[ \lambda_1 \int \psi_2^*\psi_1 dq = \int \psi_2^*\textbf{F} \psi_1 dq = \int ( \textbf{F} \psi_2)^* \psi_1 dq = \lambda_2 \int \psi_2^*\psi_1 dq. \tag{1.25}\]

If \(\lambda_1\) is not equal to \(\lambda_2\), the only way the left-most and right-most terms in this equality can be equal is if

\[\int \psi_2^*\psi_1 dq = 0, \tag{1.26}\]

which means the two eigenfunctions are orthogonal. If the two eigenfunctions \(\psi_1\) and \(\psi_2\) have equal eigenvalues, the above derivation can still be used to show that \(\psi_1\) and \(\psi_2\) are orthogonal to the other eigenfunctions {\(\psi_3, \psi_4, \)etc.} of \(\textbf{F}\) that have different eigenvalues. For the eigenfunctions \(\psi_1\) and \(\psi_2\) that are degenerate (i.e., have equal eigenvalues), we cannot show that they are orthogonal (because they need not be so). However, because any linear combination of these two functions is also an eigenfunction of \(\textbf{F}\) having the same eigenvalue, we can always choose a combination that makes \(\psi_1\) and \(\psi_2\) orthogonal to one another.

Finally, for any given eigenfunction \(\psi_1\), we have

\[\int \psi_1^*\textbf{F} \psi_1 dq = \lambda_1 \int \psi_1^*\psi_1 dq \tag{1.27}\]

However, the Hermitian character of \(\textbf{F}\) allows us to rewrite the left-hand side of this equation as

\[\int \psi_1^*\textbf{F} \psi_1 dq = \int [\textbf{F}\psi_1]^*\psi_1 dq = [\lambda_1]^* \int \psi_1^*\psi_1 dq. \tag{1.28}\]

These two equations can only remain valid if

\[[\lambda_1]^* = \lambda_1, \tag{1.29}\]

which means that \(\lambda_1\) is a real number (i.e., has no imaginary part).

So, all quantum mechanical operators have real eigenvalues (this is good since these eigenvalues are what can be measured in any experimental observation of that property) and can be assumed to have orthogonal eigenfunctions. It is important to keep these facts in mind because we make use of them many times throughout this text.
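These two facts can be illustrated numerically on a matrix representation of a Hermitian operator: a randomly generated Hermitian matrix has real eigenvalues and orthonormal eigenvectors. A numpy sketch (the matrix itself is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
F = (A + A.conj().T) / 2          # force Hermitian: F = F†

vals, vecs = np.linalg.eigh(F)    # eigh is for Hermitian matrices

# the eigenvalues are real ...
assert np.allclose(vals.imag, 0)
# ... and the eigenvectors are orthonormal: V† V = I
assert np.allclose(vecs.conj().T @ vecs, np.eye(5))
```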

## Wave functions

The eigenfunctions of a quantum mechanical operator depend on the coordinates upon which the operator acts. The particular operator that corresponds to the total energy of the system is called the Hamiltonian operator. The eigenfunctions of this particular operator are called wave functions.

A special case of an operator corresponding to a physically measurable quantity is the Hamiltonian operator \(H\) that relates to the total energy of the system. The energy eigenstates of the system \(\Psi\) are functions of the coordinates \(\{q_j\}\) that \(H\) depends on and of the time \(t\). The function \(|\Psi(q_j,t)|^2 = \Psi^*\Psi\) gives the probability density for observing the coordinates at the values \(q_j\) at time \(t\). For a many-particle system such as the \(H_2O\) molecule, the wave function depends on many coordinates. For \(H_2O\), it depends on the \(x\), \(y\), and \(z\) (or \(r\), \(\theta\), and \(\phi\)) coordinates of the ten electrons and the \(x\), \(y\), and \(z\) (or \(r\), \(\theta\), and \(\phi\)) coordinates of the oxygen nucleus and of the two protons; a total of thirty-nine coordinates appear in \(\Psi\).

If one is interested in what the probability distribution is for finding the corresponding momenta \(p_j\) at time \(t\), the wave function \(\Psi(q_j, t)\) has to first be written as a combination of the eigenfunctions of the momentum operators \(-i\hbar \dfrac{\partial}{\partial q_j}\). Expressing \(\Psi(q_j,t)\) in this manner is possible because the momentum operator is Hermitian and it can be shown that the eigenfunctions of any Hermitian operator form a complete set of functions. The momentum operator's eigenfunctions are

\[\frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar), \tag{1.30}\]

and they obey

\[-i\hbar \dfrac{\partial}{\partial q_j} \frac{1}{\sqrt{2\pi\hbar}} \exp(i p_j q_j/\hbar) = p_j \frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar). \tag{1.31}\]

These eigenfunctions can also be shown to be orthonormal.

Expanding \(\Psi(q_j,t)\) in terms of these normalized momentum eigenfunctions gives

\[\Psi(q_j,t) = \int C(p_j,t) \frac{1}{\sqrt{2\pi\hbar}} \exp(ip_j q_j/\hbar) \, dp_j. \tag{1.32}\]

We can find the expansion coefficients \(C(p_j,t)\) by multiplying the above equation by the complex conjugate of another (labeled \(p_{j'}\)) momentum eigenfunction and integrating over \(q_j\)

\[C(p_{j'},t) = \int \frac{1}{\sqrt{2\pi\hbar}} \exp(-ip_{j'} q_j/\hbar)\, \Psi(q_j,t)\, dq_j. \tag{1.33}\]

The quantities \( |C(p_{j'},t)|^2\) then give the probability of finding momentum \(p_{j'}\) at time \(t\).

In classical mechanics, the coordinates \(q_j\) and their corresponding momenta \(p_j\) are functions of time. The state of the system is then described by specifying \(q_j(t)\) and \(p_j(t)\). In quantum mechanics, the concept that \(q_j\) is known as a function of time is replaced by the concept of the probability density \(|\Psi(q_j,t)|^2\) for finding coordinate \(q_j\) at a particular value at a particular time, or the probability density \(|C(p_{j'},t)|^2\) for finding momentum \(p_{j'}\) at time \(t\).
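This momentum analysis can be carried out numerically: projecting a coordinate-space wave packet onto the momentum eigenfunctions and examining \(|C(p)|^2\) reveals the momentum distribution. In the sketch below, a Gaussian packet with mean momentum \(p_0\) (a hypothetical example, with \(\hbar = 1\)) yields a \(|C(p)|^2\) that peaks at \(p_0\):

```python
import numpy as np

hbar = 1.0
q = np.linspace(-40, 40, 4001)
dq = q[1] - q[0]

# a hypothetical Gaussian wave packet with mean momentum p0
p0, sigma = 2.0, 1.5
psi = np.exp(1j * p0 * q / hbar - q**2 / (2 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)   # normalize to unity

# coefficients C(p) = integral of [exp(ipq/hbar)/sqrt(2 pi hbar)]* psi dq
p_grid = np.linspace(-2.0, 6.0, 801)
C = np.array([np.sum(np.exp(-1j * p * q / hbar) * psi) * dq
              for p in p_grid]) / np.sqrt(2 * np.pi * hbar)

# |C(p)|^2 peaks at the packet's mean momentum p0
assert abs(p_grid[np.argmax(np.abs(C)**2)] - p0) < 0.02
```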

The Hamiltonian eigenstates are especially important in chemistry because many of the tools that chemists use to study molecules probe the energy states of the molecule. For example, most spectroscopic methods are designed to determine which energy state (electronic, vibrational, rotational, nuclear spin, etc.) a molecule is in. However, there are other experimental measurements that measure other properties (e.g., the \(z\)-component of angular momentum or the total angular momentum).

As stated earlier, if the state of some molecular system is characterized by a wave function \(\Psi\) that happens to be an eigenfunction of a quantum mechanical operator \(\textbf{F}\), one can immediately say something about what the outcome will be if the physical property \(F\) corresponding to the operator \(\textbf{F}\) is measured. In particular, since

\[\textbf{F} \chi_j = \lambda_j \chi_j, \tag{1.34}\]

where \(\lambda_j\) is one of the eigenvalues of \(\textbf{F}\), we know that the value \(\lambda_j\) will be observed if the property \(F\) is measured while the molecule is described by the wave function \(\Psi = \chi_j\). In fact, once a measurement of a physical quantity \(F\) has been carried out and a particular eigenvalue \(\lambda_j\) has been observed, the system's wave function \(\Psi\) becomes the eigenfunction \(\chi_j\) that corresponds to that eigenvalue. That is, the act of making the measurement causes the system's wave function to become the eigenfunction of the property that was measured. This is what is meant when one hears that the act of making a measurement can change the state of the system in quantum mechanics.

What happens if some other property \(G\), whose quantum mechanical operator is \(\textbf{G}\), is measured in a case where we have already determined \(\Psi = \chi_j\)? We know from what was said earlier that some eigenvalue \(\mu_k\) of the operator \(\textbf{G}\) will be observed in the measurement. But will the molecule's wave function remain, after \(G\) is measured, the eigenfunction \(\Psi = \chi_j\) of \(\textbf{F}\), or will the measurement of \(G\) cause \(\Psi\) to be altered in a way that makes the molecule's state no longer an eigenfunction of \(\textbf{F}\)? It turns out that if the two operators \(\textbf{F}\) and \(\textbf{G}\) obey the condition

\[\textbf{F} \textbf{G} = \textbf{G} \textbf{F}, \tag{1.35}\]

then, when the property \(G\) is measured, the wave function \(\Psi = \chi_j\) will remain unchanged. This property, that the order of application of the two operators does not matter, is called commutation; that is, we say the two operators commute if they obey this property. Let us see how this property leads to the conclusion about \(\Psi\) remaining unchanged if the two operators commute. In particular, we apply the \(\textbf{G}\) operator to the above eigenvalue equation from which we concluded that \(\Psi = \chi_j\):

\[\textbf{G} \textbf{F} \chi_j = \textbf{G} \lambda_j \chi_j. \tag{1.36}\]

Next, we use the commutation to re-write the left-hand side of this equation, and use the fact that \(\lambda_j\) is a scalar number to thus obtain:

\[\textbf{F} \textbf{G} \chi_j = \lambda_j \textbf{G} \chi_j. \tag{1.37}\]

So, now we see that \(\textbf{G}\chi_j\) itself is an eigenfunction of \(\textbf{F}\) having eigenvalue \(\lambda_j\). So, unless there is more than one eigenfunction of \(\textbf{F}\) corresponding to the eigenvalue \(\lambda_j\) (i.e., unless this eigenvalue is degenerate), \(\textbf{G}\chi_j\) must itself be proportional to \(\chi_j\). We write this proportionality conclusion as

\[\textbf{G} \chi_j = \mu_j \chi_j, \tag{1.38}\]

which means that \(\chi_j\) is also an eigenfunction of \(\textbf{G}\). This, in turn, means that measuring the property \(G\) while the system is described by the wave function \(\Psi = \chi_j\) does not change the wave function; it remains \(\chi_j\).
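In matrix language, the argument just given says that two Hermitian matrices sharing a complete set of eigenvectors commute, and a nondegenerate eigenvector of one is automatically an eigenvector of the other. A small numpy sketch (the eigenvalues chosen are arbitrary and nondegenerate):

```python
import numpy as np

rng = np.random.default_rng(1)
# a random unitary matrix whose columns serve as shared eigenvectors
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# F and G share the eigenvectors (columns of U), hence they commute
F = U @ np.diag([1.0, 2.0, 3.0, 4.0]) @ U.conj().T
G = U @ np.diag([7.0, 5.0, 2.0, 9.0]) @ U.conj().T
assert np.allclose(F @ G, G @ F)

# a nondegenerate eigenfunction chi of F is also an eigenfunction of G
chi = U[:, 0]
assert np.allclose(F @ chi, 1.0 * chi)
assert np.allclose(G @ chi, 7.0 * chi)
```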

If there is more than one function {\(\chi_{j_1}, \chi_{j_2}, \ldots, \chi_{j_M}\)} that is an eigenfunction of \(\textbf{F}\) having the same eigenvalue \(\lambda_j\), then the relation \(\textbf{F} \textbf{G} \chi_j = \lambda_j \textbf{G} \chi_j\) only allows us to conclude that \(\textbf{G} \chi_j\) is some combination of these degenerate functions

\[\textbf{G} \chi_j = \sum_{k=1}^{M} C_k \chi_{j_k}. \tag{1.39}\]

Below, I offer some examples that I hope will clarify what these rules mean and how they relate to laboratory measurements.

In summary, when the operators corresponding to two physical properties commute, once one measures one of the properties (and thus causes the system to be an eigenfunction of that operator), subsequent measurement of the second operator will (if the eigenvalue of the first operator is not degenerate) produce a unique eigenvalue of the second operator and will not change the system wave function. If either of the two properties is subsequently measured (even over and over again), the wave function will remain unchanged and the value observed for the property being measured will remain the same as the original eigenvalue observed.

However, if the two operators do not commute, one simply cannot reach the above conclusions. In such cases, measurement of the property corresponding to the first operator will lead to one of the eigenvalues of that operator and cause the system wave function to become the corresponding eigenfunction. However, subsequent measurement of the second operator will produce an eigenvalue of that operator, but the system wave function will be changed to become an eigenfunction of the second operator and thus no longer the eigenfunction of the first.

I think an example will help clarify this discussion. Let us consider the following orbital angular momentum operators for \(N\) particles

\[\textbf{L} = \sum_{j=1}^N (\textbf{r}_j \times \textbf{p}_j)\tag{1.40}\]

or

\[\textbf{L}_z = -i\hbar \sum_{j=1}^{N} \Big(x_j \frac{\partial}{\partial y_j} - y_j \frac{\partial}{\partial x_j}\Big)\tag{1.41a}\]

\[\textbf{L}_x = -i\hbar \sum_{j=1}^{N} \Big(y_j \frac{\partial}{\partial z_j} - z_j \frac{\partial}{\partial y_j}\Big)\tag{1.41b}\]

\[\textbf{L}_y = -i\hbar \sum_{j=1}^{N} \Big(z_j \frac{\partial}{\partial x_j} - x_j \frac{\partial}{\partial z_j}\Big)\tag{1.41c}\]

and

\[\textbf{L}^2 = \textbf{L}_x^2 + \textbf{L}_y^2 +\textbf{L}_z^2\tag{1.42}\]

It turns out that the operator \(\textbf{L}^2\) can be shown to commute with any one of \(\textbf{L}_z\), \(\textbf{L}_x\), or \(\textbf{L}_y\), but \(\textbf{L}_z\), \(\textbf{L}_x\), or \(\textbf{L}_y\) do not commute with one another (we will discuss these operators in considerably more detail in Chapter 2 section 2.7; for now, please accept these statements).
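These commutation statements can be verified explicitly in a finite basis. The sketch below uses the standard \(3\times 3\) matrices of \(\textbf{L}_x\), \(\textbf{L}_y\), and \(\textbf{L}_z\) in the \(L = 1\) basis \(\{|1,1\rangle, |1,0\rangle, |1,-1\rangle\}\), built from the raising and lowering combinations (a construction taken up in detail in Chapter 2):

```python
import numpy as np

hbar = 1.0
# raising and lowering matrices in the L = 1 basis {|1,1>, |1,0>, |1,-1>}
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lm = Lp.conj().T
Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / (2j)
Lz = hbar * np.diag([1, 0, -1]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(L2, 2 * hbar**2 * np.eye(3))   # L^2 = L(L+1) hbar^2 with L = 1
assert np.allclose(comm(L2, Lz), 0)               # L^2 commutes with L_z
assert np.allclose(comm(L2, Lx), 0)               # ... and with L_x
assert not np.allclose(comm(Lx, Lz), 0)           # but L_x and L_z do not commute
```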

Let us assume a measurement of \(\textbf{L}_z\) is carried out and one obtains the value \(2\hbar\). Thus far, all one knows is that the system can be described by a wave function that is some combination of \(D\), \(F\), \(G\), \(H\), etc. angular momentum functions \(|L, m=2\rangle\) having different \(L\)-values but all having \(m = 2\)

\[\Psi = \sum_{L \ge 2} C_L |L, m=2\rangle ,\tag{1.43}\]

but one does not know the amplitudes \(C_L\) telling how much a given \(L\)-value contributes to \(\Psi\). One can express \(\Psi\) as such a linear combination because the Hermitian quantum mechanical operators formed as described above can be shown to possess complete sets of eigenfunctions; this means that any function (of the appropriate variables) can be written as a linear combination of these eigenfunctions as done above.

If one subsequently carries out a measurement of \(\textbf{L}^2\), the fact that \(\textbf{L}^2\) and \(\textbf{L}_z\) commute means that this second measurement will not alter the fact that \(\Psi\) contains only contributions with \(m = 2\), but it will result in observing only one specific \(L\)-value. The probability of observing any particular \(L\)-value will be given by \(|C_L|^2\). Once this measurement is realized, the wave function will contain only terms having that specific \(L\)-value and \(m = 2\). For example, if \(L = 3\) is found, we know the wave function has \(L = 3\) and \(m = 2\), so we know it is an F-symmetry function with \(m = 2\), but we don't know any more. That is, we don't know if it is an \(n = 4, 5, 6,\) etc. F-function.

What now happens if we make a measurement of \(\textbf{L}_x\) when the system is in the \(L = 3\), \(m=2\) state (recall, this \(m = 2\) is a value of the \(\textbf{L}_z\) component of angular momentum)? Because \(\textbf{L}_x\) and \(\textbf{L}^2\) commute, the measurement of \(\textbf{L}_x\) will not alter the fact that \(\Psi\) contains only \(L = 3\) components. However, because \(\textbf{L}_x\) and \(\textbf{L}_z\) do not commute, we cannot assume that \(\Psi\) is an eigenfunction of \(\textbf{L}_x\); it will be a combination of eigenfunctions of \(\textbf{L}^2\) having \(L = 3\) but having \(m\)-values between -3 and 3, with \(m\) now referring to the eigenvalue of \(\textbf{L}_x\) (no longer to \(\textbf{L}_z\))

\[\Psi = \sum_{m=-3}^3 C_m |L=3, m\rangle .\tag{1.44}\]

When \(\textbf{L}_x\) is measured, the value \(m\hbar\) will be found with probability \(|C_m|^2\), after which the wave function will be the \(|L=3, m\rangle\) eigenfunction of \(\textbf{L}^2\) and \(\textbf{L}_x\) (and no longer an eigenfunction of \(\textbf{L}_z\)).

I understand that these rules of quantum mechanics can be confusing, but I assure you they are based on laboratory observations about how atoms, ions, and molecules behave when subjected to state-specific measurements. So, I urge you to get used to the fact that quantum mechanics has rules and behaviors that may be new to you but need to be mastered by you.

## The Schrödinger Equation

This equation is an eigenvalue equation for the energy or Hamiltonian operator; its eigenvalues provide the only allowed energy levels of the system.

### The Time-Dependent Equation

*If the Hamiltonian operator contains the time variable explicitly, one must solve the time-dependent Schrödinger equation*

Before moving deeper into understanding what quantum mechanics means, it is useful to learn how the wave functions \(\psi\) are found by applying the basic equation of quantum mechanics, the Schrödinger equation, to a few exactly soluble model problems. Knowing the solutions to these 'easy' yet chemically very relevant models will then facilitate learning more of the details about the structure of quantum mechanics.

The Schrödinger equation is a differential equation depending on time and on all of the spatial coordinates necessary to describe the system at hand (thirty-nine for the \(H_2O\) example cited above). It is usually written

\[H \Psi = i \hbar \dfrac{\partial \Psi}{\partial t} \tag{1.14}\]

where \(\Psi(q_j,t)\) is the unknown wavefunction and \(H\) is the operator corresponding to the total energy of the system. This Hermitian operator is called the Hamiltonian and is formed, as stated above, by first writing down the classical mechanical expression for the total energy (kinetic plus potential) in Cartesian coordinates and momenta and then replacing all classical momenta \(p_j\) by their quantum mechanical operators \(p_j = - i\hbar\dfrac{\partial}{\partial q_j}\).

For the \(H_2O\) example used above, the classical mechanical energy of all thirteen particles is

\[E = \sum_{i=1}^{30} \frac{p_i^2}{2m_e} + \frac{1}{2} \sum_{j\ne i=1}^{10} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} + \sum_{a=1}^9 \frac{p_a^2}{2m_a} + \frac{1}{2} \sum_{b\ne a=1}^3 \frac{Z_aZ_be^2}{r_{a,b}}\tag{1.14}\]

where the indices \(i\) and \(j\) are used to label the ten electrons whose thirty Cartesian coordinates and thirty Cartesian momenta are {\(q_i\)} and {\(p_j\)}, and \(a\) and \(b\) label the three nuclei whose charges are denoted \(\{Z_a\}\) and whose nine Cartesian coordinates and nine Cartesian momenta are {\(q_a\)} and {\(p_a\)}. The electron and nuclear masses are denoted \(m_e\) and \(\{m_a\}\), respectively. The corresponding Hamiltonian operator is

\[H = \sum_{i=1}^{30} \Big[- \frac{\hbar^2}{2m_e} \frac{\partial^2}{\partial q_i^2} \Big]+ \frac{1}{2} \sum_{j\ne i=1}^{10} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} + \sum_{a=1}^9 \Big[- \frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial q_a^2} \Big]+ \frac{1}{2} \sum_{b\ne a=1}^3 \frac{Z_aZ_be^2}{r_{a,b}} \tag{1.14}\]

where \(r_{i,j}\), \(r_{i,a}\), and \(r_{a,b}\) denote the distances between electron pairs, electrons and nuclei, and nuclear pairs, respectively.

Notice that \(\textbf{H}\) is a second order differential operator in the space of the thirty-nine Cartesian coordinates that describe the positions of the ten electrons and three nuclei. It is a second order operator because the momenta appear in the kinetic energy as \(p_j^2\) and \(p_a^2\), and the quantum mechanical operator for each momentum \(p = -i\hbar \dfrac{\partial}{\partial q}\) is of first order.
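
A one-dimensional finite-difference sketch (not, of course, the full thirty-nine-coordinate problem) makes the second-order character concrete: discretizing \(-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial q^2}\) produces a matrix built from the three-point second-derivative stencil. The atomic units (\(\hbar = m = 1\)) and grid parameters are illustrative choices.

```python
import numpy as np

# Discretize the kinetic-energy operator -1/2 d^2/dq^2 (atomic units) on a
# uniform grid using the three-point finite-difference second derivative.
n, dq = 200, 0.05
d2 = (np.diag(np.full(n, -2.0)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / dq**2
T = -0.5 * d2              # second-order, as the p^2 terms in H require

# Like the Hamiltonian in the text, this operator is Hermitian (symmetric).
assert np.allclose(T, T.T)
```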

The Schrödinger equation for the \(H_2O\) example at hand then reads

\[\left\{\sum_{i=1}^{30} \Big[- \frac{\hbar^2}{2m_e} \frac{\partial^2}{\partial q_i^2} \Big] + \frac{1}{2} \sum_{j\ne i} \frac{e^2}{r_{i,j}} - \sum_{a=1}^3\sum_{i=1}^{10} \frac{Z_ae^2}{r_{i,a}} \right\} \Psi + \left\{\sum_{a=1}^9 \Big[- \frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial q_a^2} \Big]+ \frac{1}{2} \sum_{b\ne a} \frac{Z_aZ_be^2}{r_{a,b}} \right\} \Psi = i \hbar \frac{\partial \Psi}{\partial t} \tag{1.14}\]

The Hamiltonian in this case contains \(t\) nowhere. An example of a case where \(\textbf{H}\) does contain \(t\) occurs when an oscillating electric field \(E \cos(\omega t)\) along the \(x\)-axis interacts with the electrons and nuclei, in which case a term

\[\sum_{a=1}^{3} Z_ae X_a E \cos(\omega t) - \sum_{j=1}^{10} e x_j E \cos(\omega t)\tag{1.14}\]

is added to the Hamiltonian. Here, \(X_a\) and \(x_j\) denote the \(x\) coordinates of the \(a^{th}\) nucleus and the \(j^{th}\) electron, respectively.

### The Time-Independent Equation

If the Hamiltonian operator does not contain the time variable explicitly, one can solve the time-independent Schrödinger equation

In cases where the classical energy, and hence the quantum Hamiltonian, do not contain terms that are explicitly time dependent (e.g., interactions with time-varying external electric or magnetic fields would add time-dependent terms to the above classical energy expression), the separation of variables technique can be used to reduce the Schrödinger equation to a time-independent equation. In such cases, \(\textbf{H}\) is not explicitly time dependent, so one can assume that \(\Psi(q_j,t)\) is of the form (n.b., this step is an example of the use of the separation of variables method to solve a differential equation)

\[\Psi(q_j,t) = \Psi(q_j) F(t). \tag{1.14}\]

Substituting this 'ansatz' into the time-dependent Schrödinger equation gives

\[\Psi(q_j) i\hbar \frac{\partial F}{\partial t} = F(t) \textbf{H}\Psi(q_j) . \tag{1.14}\]

Dividing by \(\Psi(q_j) F(t)\) then gives

\[F^{-1} \Big(i\hbar \frac{\partial F}{\partial t}\Big) = \Psi^{-1} \Big(\textbf{H}\Psi(q_j) \Big). \tag{1.14}\]

Since \(F(t)\) is only a function of time \(t\), and \(\Psi(q_j)\) is only a function of the spatial coordinates {\(q_j\)}, and because the left hand and right hand sides must be equal for all values of \(t\) and of {\(q_j\)}, both the left and right hand sides must equal a constant. If this constant is called \(E\), the two equations that are embodied in this separated Schrödinger equation read as follows:

\[H \Psi(q_j) = E\Psi(q_j), \tag{1.14}\]

\[i\hbar \frac{dF(t)}{dt} = E F(t).\tag{1.14}\]

The first of these equations is called the time-independent Schrödinger equation; it is an eigenvalue equation in which one is asked to find functions that yield a constant multiple of themselves when acted on by the Hamiltonian operator. Such functions are called eigenfunctions of \(\textbf{H}\) and the corresponding constants are called eigenvalues of \(\textbf{H}\). For example, if \(\textbf{H}\) were of the form \(- \dfrac{\hbar^2}{2I}\dfrac{\partial^2}{\partial \phi^2} = \textbf{H}\), then functions of the form \(\exp(i m\phi)\) would be eigenfunctions because

\[- \frac{\hbar^2}{2I} \frac{\partial^2}{\partial \phi^2} \exp(i m\phi) = \frac{m^2\hbar^2}{2I} \exp(i m\phi).\tag{1.14}\]

In this case, \(\dfrac{m^2\hbar^2}{2I}\) is the eigenvalue. In this example, the Hamiltonian contains the square of an angular momentum operator (recall that we showed earlier that the \(z\)-component of angular momentum for a single particle is \(L_z = - i\hbar \dfrac{d}{d\phi}\)).
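
This eigenvalue relation can be verified symbolically; the following sketch (using sympy, with the moment of inertia written as `I_mom` to avoid clashing with the imaginary unit) simply differentiates \(\exp(im\phi)\) twice and reads off the eigenvalue.

```python
import sympy as sp

phi, hbar, I_mom = sp.symbols('phi hbar I', positive=True)
m = sp.symbols('m', integer=True)

psi = sp.exp(sp.I * m * phi)                        # trial eigenfunction
H_psi = -hbar**2 / (2 * I_mom) * sp.diff(psi, phi, 2)

# H psi / psi should reduce to the constant m^2 hbar^2 / (2 I)
eigenvalue = sp.simplify(H_psi / psi)
assert sp.simplify(eigenvalue - m**2 * hbar**2 / (2 * I_mom)) == 0
```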

When the Schrödinger equation can be separated to generate a time-independent equation describing the spatial coordinate dependence of the wave function, the eigenvalue \(E\) must be returned to the equation determining \(F(t)\) to find the time dependent part of the wave function. By solving

\[i\hbar \frac{dF(t)}{dt} = E F(t)\tag{1.14}\]

once \(E\) is known, one obtains

\[F(t) = \exp( -i Et/ \hbar),\tag{1.14}\]

and the full wave function can be written as

\[\Psi(q_j,t) = \Psi(q_j) \exp (-i Et/\hbar).\tag{1.14}\]
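
As a check on the separation, one can verify symbolically that this product form satisfies the full time-dependent equation. The sketch below does so for the rotor Hamiltonian \(-\dfrac{\hbar^2}{2I}\dfrac{\partial^2}{\partial \phi^2}\) introduced above, with \(E = \dfrac{m^2\hbar^2}{2I}\).

```python
import sympy as sp

phi, t, hbar, I_mom = sp.symbols('phi t hbar I', positive=True)
m = sp.symbols('m', integer=True)

# Separated solution: spatial eigenfunction times exp(-i E t / hbar)
E = m**2 * hbar**2 / (2 * I_mom)
Psi = sp.exp(sp.I * m * phi) * sp.exp(-sp.I * E * t / hbar)

# Both sides of H Psi = i hbar dPsi/dt for the rotor Hamiltonian
H_Psi = -hbar**2 / (2 * I_mom) * sp.diff(Psi, phi, 2)
rhs = sp.I * hbar * sp.diff(Psi, t)

assert sp.simplify(H_Psi - rhs) == 0
```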

For the above example, the time dependence is expressed by

\[F(t) = \exp \Big( -i t { \frac{m^2 \hbar^2}{2I} }\frac{1}{\hbar}\Big) = \exp \Big( - \frac{i m^2 \hbar t}{2I}\Big).\tag{1.14}\]

In such cases, the spatial probability density \(|\Psi(q_j,t)|^2\) does not depend upon time because the product \(\exp (-i Et/\hbar) \exp (i Et/\hbar)\) reduces to unity.
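
Numerically, this time independence is easy to see: multiplying any spatial function by the phase \(\exp(-iEt/\hbar)\) leaves \(|\Psi|^2\) unchanged. The Gaussian and the energy below are purely illustrative (\(\hbar = 1\)).

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                      # illustrative spatial function
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize on the grid
E = 0.5                                      # arbitrary energy, hbar = 1

for t in (0.0, 1.3, 7.7):
    Psi_t = psi * np.exp(-1j * E * t)        # stationary-state evolution
    # |Psi(t)|^2 equals |psi|^2 at every time
    assert np.allclose(np.abs(Psi_t)**2, np.abs(psi)**2)
```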

In summary, whenever the Hamiltonian does not depend on time explicitly, one can solve the time-independent Schrödinger equation first and then obtain the time dependence as \(\exp(-i Et/\hbar)\) once the energy \(E\) is known. In the case of molecular structure theory, it is a quite daunting task even to approximately solve the full Schrödinger equation because it is a partial differential equation depending on all of the coordinates of the electrons and nuclei in the molecule. For this reason, there are various approximations that one usually implements when attempting to study molecular structure using quantum mechanics.

It should be noted that it is possible to prepare in the laboratory, even when the Hamiltonian contains no explicit time dependence, wave functions that are time dependent and that have time-dependent spatial probability densities. For example, one can prepare a state of the Hydrogen atom that is a superposition of the \(2s\) and \(2p_z\) wave functions

\[\Psi(r,t=0) = C_1 \psi_{2s} (r) +C_2 \psi_{2pz} (r)\tag{1.14}\]

where the two eigenstates obey

\[H \psi_{2s} (r) = E_{2s} \psi_{2s} (r)\tag{1.14}\]

and

\[H \psi_{2pz} (r) = E_{2pz}\psi_{2pz} (r).\tag{1.14}\]

When \(\textbf{H}\) does not contain \(t\) explicitly, it is possible to then express \(\Psi(r,t)\) in terms of \(\Psi(r,t=0)\) as follows:

\[\Psi(r,t) = \exp\Big(-\dfrac{iHt}{\hbar}\Big)[ C_1 \psi_{2s} (r) +C_2 \psi_{2pz} (r)] \tag{1.14}\]

\[= \left[ C_1 \psi_{2s} (r) \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2pz} (r) \exp\Big(\frac{-itE_{2pz}}{\hbar}\Big)\right]. \tag{1.14}\]

This function, which is a superposition of \(2s\) and \(2p_z\) functions, does indeed obey the full time-dependent Schrödinger equation \(\textbf{H} \Psi = i\hbar \dfrac{\partial \Psi}{\partial t}\). The probability of observing the system in the \(2s\) state if a measurement capable of making this determination were carried out is

\[\left|C_1 \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)\right|^2 = |C_1|^2 \tag{1.14}\]

and the probability of finding it in the \(2p_z\) state is

\[\left|C_2 \exp\Big(\frac{-itE_{2pz}}{\hbar}\Big)\right|^2,\tag{1.14}\]

both of which are independent of time. This does not mean that \(\Psi\), or the spatial probability density it describes, is time-independent, because the product

\[\left[C_1 \psi_{2s} (r) \exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2pz} (r)\exp\Big(\frac{-itE_{2pz}}{\hbar}\Big)\right]^* \left[C_1 \psi_{2s} (r)\exp\Big(\frac{-itE_{2s}}{\hbar}\Big)+C_2 \psi_{2pz} (r) \exp\Big(\frac{-itE_{2pz}}{\hbar}\Big)\right] \tag{1.14}\]

contains cross terms that depend on time.
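
This cross-term time dependence can be illustrated numerically. The two functions below are stand-ins for \(\psi_{2s}\) and \(\psi_{2p_z}\) (the actual hydrogenic orbitals are not needed for the point); the density oscillates at angular frequency \((E_{2p_z}-E_{2s})/\hbar\) and returns to itself after one period (\(\hbar = 1\), with illustrative energies).

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
psi1 = np.exp(-x**2 / 2)             # stand-in for psi_2s (even)
psi2 = x * np.exp(-x**2 / 2)         # stand-in for psi_2pz (odd)
psi1 /= np.sqrt(np.sum(psi1**2) * dx)
psi2 /= np.sqrt(np.sum(psi2**2) * dx)
E1, E2 = 1.0, 2.0                    # illustrative energies, hbar = 1
C1 = C2 = 1 / np.sqrt(2)

def density(t):
    Psi = C1 * psi1 * np.exp(-1j * E1 * t) + C2 * psi2 * np.exp(-1j * E2 * t)
    return np.abs(Psi)**2

# The cross term 2 C1 C2 psi1 psi2 cos((E2 - E1) t) makes the density move,
# yet it is periodic with period T = 2 pi / (E2 - E1).
T = 2 * np.pi / (E2 - E1)
assert not np.allclose(density(0.0), density(T / 2))
assert np.allclose(density(0.0), density(T))
```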

It is important to note that applying \(\exp(-iHt/\hbar)\) to such a superposition state in the manner shown above, which then produces a superposition of states each of whose amplitudes carries its own time dependence, only works when \(\textbf{H}\) has no time dependence. If \(\textbf{H}\) were time-dependent, \(i\hbar \dfrac{\partial}{\partial t}\) acting on \(\exp(-iHt/\hbar) \Psi(r,t=0)\) would contain an additional factor involving \(\dfrac{\partial\textbf{H}}{\partial t}\) as a result of which one would not have \(\textbf{H} \Psi= i\hbar \dfrac{\partial\Psi}{\partial t}\).

## Contributors and Attributions

Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry

Integrated by Tomoyuki Hayashi (UC Davis)