Chemistry LibreTexts

2.4: The Tools of Quantum Mechanics

Quantum mechanics is a model that can predict many properties of a system. These predictions are made by examining the results of operations on the wavefunctions that describe the system. To develop a quantum mechanical "toolbox," we will use the results of the Particle in a Box model.

    Expectation Values

The fourth postulate of quantum mechanics gives a recipe for calculating the expectation value of a particular measurement. The expectation value is a prediction of the average value that would be obtained from an infinite number of measurements of the property.

    The Expectation value of Energy \(\langle E \rangle\)

One of the most useful properties to know for a system is its energy. For chemists, energy is the most important quantity to understand for atoms and molecules, since all of the thermodynamics of a system is determined by the energies of the atoms and molecules within it.

    For illustrative convenience, consider a system that is prepared such that its wavefunction is given by one of the eigenfunctions of the Hamiltonian.

    \[\psi_{n}=\sqrt{\dfrac{2}{a}} \sin \left(\dfrac{n \pi x}{a}\right)\nonumber\]

    These functions satisfy the important relationship

    \[\hat{H} \psi_{n}=E_{n} \psi_{n}\nonumber\]

    This greatly simplifies the calculation of the expectation value! To get the expectation value of E, we need simply the following expression:

    \[\langle E\rangle=\int \psi_{n}^{*} \hat{H} \psi_{n} d \tau\nonumber\]

    Making the substitution from above yields:


    \[\begin{aligned} \langle E\rangle &=\int \psi_{n}^{*} \hat{H} \psi_{n} d \tau \\ &=\int \psi_{n}^{*} E_{n} \psi_{n} d \tau \\ &=E_{n} \int \psi_{n}^{*} \psi_{n} d \tau \\ &=E_{n} \end{aligned}\nonumber\]

In fact, it is easy to prove that for a system whose wavefunction is an eigenfunction of a given operator, the expectation value of the property corresponding to that operator is simply the associated eigenvalue. The proof is almost trivial!

    Proof: For a system prepared in a state such that its wavefunction is given by \(\psi\), and \(\psi\) satisfies the relationship

    \[\hat{A} \psi=a \psi\nonumber\]

The expectation value for the property associated with the operator \(\hat{A}\) will be the eigenvalue \(a\).

    \[\begin{aligned} \langle a\rangle &=\int \psi^{*} \hat{A} \psi d \tau \\ &=\int \psi^{*} a \psi d \tau \\ &=a \int \psi^{*} \psi d \tau \\ &=a \end{aligned}\nonumber\]

    since the wavefunction \(\psi\) is normalized.
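The proof above can be verified concretely for the Particle in a Box. The following sketch (an illustration, not part of the original text) uses the sympy library to confirm that \(\langle E \rangle = E_n\), taking \(\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\) since \(V=0\) inside the box:

```python
# Symbolic check (a sketch using sympy) that <E> = E_n when the system is
# in the n-th particle-in-a-box eigenstate (V = 0 inside the box).
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True, integer=True)
hbar, m = sp.symbols('hbar m', positive=True)

psi = sp.sqrt(2/a) * sp.sin(n * sp.pi * x / a)    # normalized eigenfunction
H_psi = -hbar**2 / (2*m) * sp.diff(psi, x, 2)     # H psi, kinetic energy only

E_expect = sp.integrate(psi * H_psi, (x, 0, a))   # <E>; psi is real, so psi* = psi
E_n = n**2 * sp.pi**2 * hbar**2 / (2 * m * a**2)  # E_n = n^2 h^2 / (8 m a^2)

print(sp.simplify(E_expect - E_n))                # 0
```

The difference simplifies to zero for every integer \(n\), exactly as the proof requires.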

    The Expectation value of position \(\langle x \rangle\)

To illustrate the concept, let’s calculate \(\langle x\rangle\), the expectation value of position, for a particle in a box in the \(\mathrm{n}^{\text {th }}\) eigenstate

    \[\begin{aligned} \langle x\rangle &=\int_{0}^{a} \psi_{n}(x) \cdot x \cdot \psi_{n}(x) d x \\ &=\dfrac{2}{a} \int_{0}^{a} x \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x \end{aligned}\nonumber\]

    Again, it helps to find the result for the integral in a table of integrals.

    \[\int x \sin ^{2}(\alpha x) d x=\dfrac{x^{2}}{4}-\dfrac{x \sin (2 \alpha x)}{4 \alpha}-\dfrac{\cos (2 \alpha x)}{8 \alpha^{2}}\nonumber\]

    Substitution yields

    \[\begin{aligned} \dfrac{2}{a} \int_{0}^{a} x \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x &=\dfrac{2}{a}\left[\dfrac{x^{2}}{4}-\dfrac{x \sin \left(2 \dfrac{n \pi}{a} x\right)}{4 \dfrac{n \pi}{a}}-\dfrac{\cos \left(2 \dfrac{n \pi}{a} x\right)}{8\left(\dfrac{n \pi}{a}\right)^{2}}\right]_{0}^{a} \\ &=\dfrac{2}{a}\left[\dfrac{a^{2}}{4}-0-\dfrac{1}{8\left(\dfrac{n \pi}{a}\right)^{2}}-0+0+\dfrac{1}{8\left(\dfrac{n \pi}{a}\right)^{2}}\right] \\ &=\dfrac{a}{2} \end{aligned}\nonumber\]

This result is interesting for two reasons. First, \(\frac{a}{2}\) is the middle of the box, so the result implies that we find the particle on the left side of the box half of the time and on the right side the other half; averaging all of the results yields a mean value at the middle of the box. Second, the result is independent of the quantum number \(n\), which means that we get the same answer regardless of the quantum state of the system. This is a remarkable result, really (well, not really, but it is fun to claim it is), since the \(\mathrm{n}=2\) eigenstate has a node at the center of the box, meaning we will never measure the particle to be there, and yet its expectation value of position lies at the center of the box. This should really drive home the idea that an expectation value is an average. We need never measure the particle at the position indicated by the expectation value; rather, the average of the measured positions must lie at that position.
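The integral above can be cross-checked symbolically. This short sketch (using the sympy library, an assumption of convenience) confirms \(\langle x \rangle = a/2\) for a general eigenstate:

```python
# Symbolic check (a sketch using sympy) that <x> = a/2 for every
# particle-in-a-box eigenstate, independent of the quantum number n.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True, integer=True)

psi = sp.sqrt(2/a) * sp.sin(n * sp.pi * x / a)

x_expect = sp.integrate(psi * x * psi, (x, 0, a))   # <x> for general integer n
print(sp.simplify(x_expect))                        # a/2
```

Declaring \(n\) as an integer lets the sine and cosine terms at the endpoints vanish automatically, mirroring the hand evaluation above.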

    The Expectation Value of Momentum \(\langle p\rangle\)

    It is also easy to calculate the expectation value for momentum, \(\langle p \rangle\). In fact, it is almost trivially easy! Based on the fourth postulate, \(\langle p\rangle\) is found from the expression

    \[\begin{aligned} \langle p\rangle &=\int_{0}^{a} \psi \hat{p} \psi d x \\ &=-i \hbar \int_{0}^{a} \psi \dfrac{d}{d x} \psi d x \end{aligned}\nonumber\]

At this point it is convenient to make a substitution. If we let \(u=\psi\), then \(d u=\dfrac{d \psi}{d x} d x\). Now the problem can be restated in terms of \(u\), but since we have changed variables from \(x\) to \(u\), we must change the limits of integration to the values of \(u\) at the endpoints. As it turns out, \(\psi(0)\) and \(\psi(a)\) are both zero!

    \[\begin{aligned} \langle p\rangle &=-i \hbar \int_{0}^{0} u d u \\ &=-i \hbar\left[\dfrac{u^{2}}{2}\right]_{0}^{0} \\ &=0 \end{aligned}\nonumber\]

Wow! The expectation value of momentum is zero! What makes this so remarkable is that the particle is always moving, since it has a non-zero kinetic energy. (How can this be?) Keeping in mind that the expectation value is the average of a theoretically infinite number of measurements, and that momentum is a vector quantity, it is easy to see why the average is zero. Half of the time the momentum is measured in the positive \(\mathrm{x}\) direction, and the other half in the negative \(\mathrm{x}\) direction. These cancel one another, and the average result is zero.
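The same conclusion follows from evaluating the integral directly, without the \(u\)-substitution shortcut; a sympy sketch:

```python
# Direct symbolic evaluation (a sketch using sympy) of <p> for the
# n-th particle-in-a-box eigenstate.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True, integer=True)
hbar = sp.symbols('hbar', positive=True)

psi = sp.sqrt(2/a) * sp.sin(n * sp.pi * x / a)

# <p> = integral of psi * (-i hbar d/dx) psi over the box
p_expect = sp.integrate(psi * (-sp.I * hbar) * sp.diff(psi, x), (x, 0, a))
print(sp.simplify(p_expect))   # 0
```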

    Variance

Quantum mechanics provides enough information to also calculate the variance of a theoretical infinite set of measurements. Based on ordinary statistics, the variance of any measured value can be calculated from

    \[\sigma_{a}^{2}=\left\langle a^{2}\right\rangle-\langle a\rangle^{2}\nonumber\]

    That result does not come from quantum mechanics, by the way. Quantum mechanics just tells us how to calculate the expectation values. The above expression for variance can be applied to any set of measurements of any property on any system.
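Because the identity holds for any set of measurements, it can be illustrated with ordinary sampled data. This sketch (an illustration, not from the text) draws samples from a Gaussian with \(\sigma = 1.5\) and recovers \(\sigma^2 = 2.25\) from \(\langle a^2\rangle - \langle a\rangle^2\):

```python
# Numerical illustration (a sketch) of sigma^2 = <a^2> - <a>^2 applied to
# an ordinary set of "measurements": Gaussian samples with sigma = 1.5.
import random

random.seed(0)                                    # reproducible samples
data = [random.gauss(3.0, 1.5) for _ in range(100_000)]

mean = sum(data) / len(data)                      # <a>
mean_sq = sum(v * v for v in data) / len(data)    # <a^2>

variance = mean_sq - mean**2
print(variance)                                   # close to 2.25 (sampling error aside)
```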

    So, to calculate \(\sigma_{\mathrm{x}}^{2}\) and \(\sigma_{\mathrm{p}}^{2}\) it is simply necessary to know \(\langle\mathrm{x}\rangle,\left\langle\mathrm{x}^{2}\right\rangle,\langle\mathrm{p}\rangle\) and \(\left\langle\mathrm{p}^{2}\right\rangle\). Two of those quantities we already know from the previous sections.

    The variance in \(x\left(\sigma_{x}^{2}\right)\)

    To calculate \(\left\langle\mathrm{x}^{2}\right\rangle\), we set up the usual expression.

    \[\begin{aligned} \left\langle x^{2}\right\rangle &=\int_{0}^{a} \psi x^{2} \psi d x \\ &=\dfrac{2}{a} \int_{0}^{a} x^{2} \sin ^{2}\left(\dfrac{n \pi x}{a}\right) d x \end{aligned}\nonumber\]

    From a table of integrals, it can be found that

    \[\int x^{2} \sin ^{2}(\alpha x) d x=\dfrac{x^{3}}{6}-\left(\dfrac{x^{2}}{4 \alpha}-\dfrac{1}{8 \alpha^{3}}\right) \sin (2 \alpha x)-\dfrac{x \cos (2 \alpha x)}{4 \alpha^{2}}\nonumber\]

Letting \(\alpha=\dfrac{n \pi}{a}\) and noting that \(\cos (2 n \pi)=1\) and \(\sin (2 n \pi)=0\) for any integer value of \(n\), we see that

    \[\begin{align*} \left\langle x^{2}\right\rangle &=\dfrac{2}{a}\left[\dfrac{x^{3}}{6}-\left(\dfrac{a x^{2}}{4 n \pi}-\dfrac{a^{3}}{8 n^{3} \pi^{3}}\right) \sin \left(\dfrac{2 n \pi x}{a}\right)-\dfrac{a^{2} x \cos \left(\dfrac{2 n \pi x}{a}\right)}{4 n^{2} \pi^{2}}\right]_{0}^{a} \\[4pt] &=\dfrac{2}{a}\left(\dfrac{a^{3}}{6}-\right.\left.0-\dfrac{a^{3}}{4 n^{2} \pi^{2}}-0+0+0\right) \\[4pt] &= \dfrac{a^{2}}{3}-\dfrac{a^{2}}{2 n^{2} \pi^{2}} \end{align*}\]

    Notice that this result has units of length squared (due to the \(a^{2}\) dependence) which is to be expected for \(\left\langle x^{2}\right\rangle\).

    Based on these results, it is easy to calculate the variance, and thus the standard deviation of the theoretical infinite set of measurements of position.

\[\begin{aligned} \sigma_{x}^{2} &=\left\langle x^{2}\right\rangle-\langle x\rangle^{2} \\ &=\left(\dfrac{a^{2}}{3}-\dfrac{a^{2}}{2 n^{2} \pi^{2}}\right)-\left(\dfrac{a}{2}\right)^{2} \\ &=\dfrac{\left(8 n^{2} \pi^{2}-12-6 n^{2} \pi^{2}\right) a^{2}}{24 n^{2} \pi^{2}} \\ &=\dfrac{\left(n^{2} \pi^{2}-6\right) a^{2}}{12 n^{2} \pi^{2}} \end{aligned}\nonumber\]
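Both \(\langle x^2\rangle\) and the resulting variance can be checked symbolically; a sympy sketch:

```python
# Symbolic check (a sketch using sympy) of <x^2> and the position
# variance for the particle-in-a-box eigenstates.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True, integer=True)

psi = sp.sqrt(2/a) * sp.sin(n * sp.pi * x / a)

x2_expect = sp.integrate(psi * x**2 * psi, (x, 0, a))   # <x^2>
var_x = sp.simplify(x2_expect - (a/2)**2)               # <x^2> - <x>^2, using <x> = a/2

print(var_x)   # equals a^2 (n^2 pi^2 - 6) / (12 n^2 pi^2)
```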

    The variance in \(p\left(\sigma_{p}^{2}\right)\)

    The relationship between energy and momentum simplifies the calculation of \(\left\langle\mathrm{p}^{2}\right\rangle\) greatly. Recall that

    \[T=\dfrac{p^{2}}{2 m}\nonumber\]

    And since all of the energy in this system is kinetic energy, it follows that

    \[\left\langle p^{2}\right\rangle=2 m\langle H\rangle\nonumber\]

    Further, \(\langle H\rangle\) (or \(\langle\mathrm{E}\rangle\) ) is simply the energy expression since the wavefunctions are eigenfunctions of the Hamiltonian! \(\left(\hat{H} \psi_{n}=E_{n} \psi_{n}\right)\)

    \[\begin{aligned} \langle H\rangle &=\int_{0}^{a} \psi_{n} \hat{H} \psi_{n} d x \\ &=\int_{0}^{a} \psi_{n} E_{n} \psi_{n} d x \\ &=E_{n} \int_{0}^{a} \psi_{n} \psi_{n} d x \\ &=E_{n} \end{aligned}\nonumber\]

    Basically, this means that the expectation value for energy for a system in an eigenstate is always given by the eigenvalue of the Hamiltonian. In a later section we’ll discuss the expectation value of energy when the system is not in an eigenstate.

    Another important aspect of the above relationship is how the integral simply went away. It didn’t, really. It’s just that the wavefunctions are normalized, so the integral is unity. Recall that for orthonormalized wavefunctions

    \[\int \psi_{i}^{*} \psi_{j} d \tau=\delta_{i j}\nonumber\]

    which is a property of which we will make great use throughout our development of quantum theory.

    So from the result for the expectation value for energy, it follows that

    \[\begin{aligned} \left\langle p^{2}\right\rangle &=2 m E \\ &=2 m\left(\dfrac{n^{2} h^{2}}{8 m a^{2}}\right) \\ &=\dfrac{n^{2} h^{2}}{4 a^{2}} \end{aligned}\nonumber\]

Note that the variance of the position measurements increases with increasing \(n\), approaching \(\dfrac{a^{2}}{12}\) in the limit of large \(n\).

    For momentum, the variance is given by

    \[\begin{aligned} \sigma_{p}^{2} &=\left\langle p^{2}\right\rangle-\langle p\rangle^{2} \\ &=\left(\dfrac{n^{2} h^{2}}{4 a^{2}}\right)-(0)^{2} \\ &=\dfrac{n^{2} h^{2}}{4 a^{2}} \end{aligned}\nonumber\]

The variance of momentum measurements also increases with increasing \(n\)!

We shall place these results on hold for now and revisit them when we look at the Heisenberg Uncertainty Principle. But in order to make sense of that rather important consequence of quantum theory, we must first examine commutators and the relationships between pairs of operators, as these have a profound impact on what can be known (or measured) about the associated physical observables.

    The Heisenberg Uncertainty Principle

    One of the more interesting (and controversial!) consequences of the quantum theory can be seen in the Heisenberg Uncertainty Principle. Before examining the Heisenberg Uncertainty principle, it is necessary to examine the relationship that can exist between a pair of quantum mechanical operators. In order to do this, we define an operator for operators, called the commutator.

    The Commutator

    For a pair of operators \(\hat{A}\) and \(\hat{B}\), the commutator \([\hat{A}, \hat{B}]\) is defined as follows

\[[\hat{A}, \hat{B}] f(x)=\hat{A}(\hat{B} f(x))-\hat{B}(\hat{A} f(x))\nonumber\]

If the end result of the commutator operating on \(\mathrm{f}(\mathrm{x})\) is zero, then the two operators are said to commute. This means that for this particular pair of operators, it does not matter in which order they operate on the function; the same result is obtained either way.

    Relationships for Commutators

    There are a number of important mathematical relationships for commutators. First, every operator commutes with itself, and with any power of itself.

\[\begin{aligned} &[\hat{A}, \hat{A}]=0 \\ &\left[\hat{A}, \hat{A}^{n}\right]=0 \end{aligned}\nonumber\]

    Second, given the definition of the commutator relationship, it should be fairly obvious that

\[[\hat{A}, \hat{B}]=-[\hat{B}, \hat{A}]\nonumber\]

    Also, there is a linearity relationship for commutators (of linear operators).

    \[[k \hat{A}, \hat{B}]=k[\hat{A}, \hat{B}]\nonumber\]

    Theorem \(\PageIndex{1}\)

Proof: Show that if two operators have a common set of eigenfunctions, then the operators must commute.

    Solution: Consider operators \(\hat{A}\) and \(\hat{B}\) that have the same set of eigenfunctions \(\phi_{\mathrm{n}}\) such that

    \[\hat{A} \phi_{n}=a_{n} \phi_{n} \quad \text { and } \quad \hat{B} \phi_{n}=b_{n} \phi_{n}\nonumber\]

    For any arbitrary function \(\Phi\) that can be expressed as a linear combination of \(\phi_{\mathrm{n}}\)

    \[\Phi=\sum_{n} c_{n} \phi_{n}\nonumber\]

    the commutator of \(\hat{A}\) and \(\hat{B}\) operating on \(\Phi\) will give the following result.

\[\begin{aligned} {[\hat{A}, \hat{B}] \Phi } &=[\hat{A}, \hat{B}] \sum_{n} c_{n} \phi_{n} \\ &=\hat{A}\left(\hat{B} \sum_{n} c_{n} \phi_{n}\right)-\hat{B}\left(\hat{A} \sum_{n} c_{n} \phi_{n}\right) \end{aligned}\nonumber\]

    And since \(\hat{A}\) and \(\hat{B}\) are linear (as all quantum mechanical operators must be)

    \[\begin{aligned} \hat{A}\left(\hat{B} \sum_{n} c_{n} \phi_{n}\right)-\hat{B}\left(\hat{A} \sum_{n} c_{n} \phi_{n}\right) &=\hat{A}\left(\sum_{n} c_{n} \hat{B} \phi_{n}\right)-\hat{B}\left(\sum_{n} c_{n} \hat{A} \phi_{n}\right) \\ &=\hat{A}\left(\sum_{n} c_{n} b_{n} \phi_{n}\right)-\hat{B}\left(\sum_{n} c_{n} a_{n} \phi_{n}\right) \\ &=\sum_{n} c_{n} b_{n} \hat{A} \phi_{n}-\sum_{n} c_{n} a_{n} \hat{B} \phi_{n} \\ &=\sum_{n} c_{n} b_{n} a_{n} \phi_{n}-\sum_{n} c_{n} a_{n} b_{n} \phi_{n} \\ &=0 \end{aligned}\nonumber\]

    And so it is clear that the operators \(\hat{A}\) and \(\hat{B}\) must commute.
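Although the proof is stated for abstract operators, a finite-dimensional analogue makes it concrete: matrices that share a complete set of eigenvectors commute. A minimal numpy sketch (the eigenvalues below are arbitrary choices, not from the text):

```python
# Finite-dimensional illustration (a sketch): two matrices with the same
# eigenvectors (here, both diagonal in the standard basis) commute.
import numpy as np

A = np.diag([1.0, 2.0, 3.0])     # eigenvalues a_n on the diagonal
B = np.diag([5.0, 7.0, 11.0])    # eigenvalues b_n on the diagonal

commutator = A @ B - B @ A       # [A, B] = AB - BA
print(np.allclose(commutator, np.zeros((3, 3))))   # True
```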

    When Operators do not Commute

    An example of operators that do not commute are \(\hat{x}\) and \(\hat{p}\). The commutator of these two operators is evaluated below, using a well-behaved function \(f\).

    \[\begin{aligned} {[\hat{x}, \hat{p}] f } &=\hat{x}(\hat{p} f)-\hat{p}(\hat{x} f) \\ &=x \cdot\left(-i \hbar \dfrac{d}{d x} f\right)+i \hbar \dfrac{d}{d x}(x \cdot f) \end{aligned}\nonumber\]

    The second term requires the product rule to evaluate. Recall that

    \[d(u v)=v d u+u d v\nonumber\]

    And so the above expression can be simplified by noting that

    \[\dfrac{d}{d x}(x \cdot f)=f \dfrac{d}{d x} x+x \dfrac{d}{d x} f\nonumber\]

    And so

    \[\begin{aligned} {[\hat{x}, \hat{p}] f } &=x \cdot\left(-i \hbar \dfrac{d}{d x} f\right)+i \hbar \dfrac{d}{d x}(x \cdot f) \\ &=\left(-i \hbar \cdot x \cdot \dfrac{d}{d x} f\right)+i \hbar\left(f \dfrac{d}{d x} x+x \dfrac{d}{d x} f\right) \\ &=-i \hbar \cdot x \cdot \dfrac{d}{d x} f+i \hbar f+i \hbar \cdot x \cdot \dfrac{d}{d x} f \\ &=i \hbar f \end{aligned}\nonumber\]

    So the final result of the operation is to multiply the function by \(i \hbar\). Another way to state this is to note

    \[[\hat{x}, \hat{p}]=i \hbar\nonumber\]
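The product-rule calculation above can be reproduced symbolically by applying \(\hat{x}\hat{p}-\hat{p}\hat{x}\) to an arbitrary differentiable function; a sympy sketch:

```python
# Symbolic evaluation (a sketch using sympy) of [x, p] applied to an
# arbitrary function f(x); sympy's diff applies the product rule for us.
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

x_p_f = x * (-sp.I * hbar * sp.diff(f, x))   # x acting on (p f)
p_x_f = -sp.I * hbar * sp.diff(x * f, x)     # p acting on (x f)

commutator_f = sp.expand(x_p_f - p_x_f)
print(commutator_f)                          # i*hbar*f(x), i.e. [x, p] = i*hbar
```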

    The Heisenberg Uncertainty Principle

Among the many contributions that Werner Heisenberg made to the development of quantum theory, one of the most important was the discovery of the uncertainty principle. Heisenberg’s observation was based on the interference of electron beams predicted by de Broglie. The uncertainty principle states that for the observables corresponding to a pair of operators \(\hat{A}\) and \(\hat{B}\), the following result must hold

    \[\sigma_{a}^{2} \sigma_{b}^{2} \geq-\dfrac{1}{4}\left(\int \psi^{*}[\hat{A}, \hat{B}] \psi d \tau\right)^{2}\nonumber\]

    The most popularly taught statement of the uncertainty principle is based on the uncertainty product for position and momentum.

    \[\Delta x \Delta p \geq \dfrac{\hbar}{2}\nonumber\]

    This result is easy to derive from the above expression.

\[\begin{aligned} \sigma_{x}^{2} \sigma_{p}^{2} & \geq-\dfrac{1}{4}\left(\int \psi^{*}[\hat{x}, \hat{p}] \psi d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}\left(\int \psi^{*}(i \hbar \psi) d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}(i \hbar)^{2}\left(\int \psi^{*} \psi d \tau\right)^{2} \\ & \geq-\dfrac{1}{4}(i \hbar)^{2} \\ & \geq \dfrac{\hbar^{2}}{4} \\ \sigma_{x} \sigma_{p} & \geq \dfrac{\hbar}{2} \end{aligned}\nonumber\]

    As we saw in a previous section, we have a means of evaluating \(\sigma_{x}\) and \(\sigma_{\mathrm{p}}\) to verify this relationship for a given state of a particle in a box. (This evaluation is left as an exercise.)
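Combining the earlier closed-form results gives \(\sigma_x \sigma_p = \hbar\sqrt{(n^2\pi^2-6)/12}\); the box length \(a\) cancels. A quick numerical sketch (working in units where \(\hbar = 1\)) confirms that the product exceeds \(\hbar/2\) for every state:

```python
# Numerical check (a sketch) of the uncertainty product for the particle
# in a box: sigma_x * sigma_p = hbar * sqrt((n^2 pi^2 - 6)/12).
import math

hbar = 1.0   # work in units where hbar = 1
products = [hbar * math.sqrt((n**2 * math.pi**2 - 6) / 12) for n in range(1, 6)]

for n, prod in enumerate(products, start=1):
    print(n, prod, prod >= hbar / 2)   # every state satisfies the bound
```

For \(n=1\) the product is about \(0.568\,\hbar\), already above the minimum \(0.5\,\hbar\), and it grows with \(n\).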


    This page titled 2.4: The Tools of Quantum Mechanics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Patrick Fleming.