# 9.1: Measurement


The result of a measurement of the observable $$A$$ must yield one of the eigenvalues of the corresponding operator $$\hat{A}$$. This is why $$\hat{A}$$ is required to be a Hermitian operator: Hermitian operators have real eigenvalues. If we denote the set of eigenvalues of $$\hat{A}$$ by $$\{a_i\}$$, then each eigenvalue $$a_i$$ satisfies an eigenvalue equation

$\hat{A}\vert a_i\rangle = a_i \vert a_i\rangle \nonumber$

where $$\vert a_i\rangle$$ is the corresponding eigenvector. Since the operator $$\hat{A}$$ is Hermitian and $$a_i$$ is therefore real, we also have the left eigenvalue equation

$\langle a_i\vert \hat{A} = a_i \langle a_i\vert \nonumber$

The probability amplitude that a measurement of $$A$$ will yield the eigenvalue $$a_i$$ is obtained by taking the inner product of the corresponding eigenvector $$\vert a_i \rangle$$ with the state vector $$\vert\Psi(t)\rangle$$, giving $$\langle a_i\vert\Psi(t)\rangle$$. Thus, the probability that the value $$a_i$$ is obtained is given by

$P_{a_i} = \vert\langle a_i\vert\Psi(t)\rangle \vert^2 \nonumber$
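As a minimal numerical sketch (not from the text), the rule $$P_{a_i} = \vert\langle a_i\vert\Psi(t)\rangle\vert^2$$ can be checked with NumPy; the Hermitian matrix `A` and state `psi` below are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative 3x3 Hermitian "observable" matrix
A = np.array([[2.0,  1.0j, 0.0],
              [-1.0j, 3.0, 0.0],
              [0.0,   0.0, 1.0]])
assert np.allclose(A, A.conj().T)       # Hermitian check

# eigh returns real eigenvalues a_i and orthonormal eigenvectors |a_i>
eigvals, eigvecs = np.linalg.eigh(A)

# An arbitrary normalized state vector |Psi>
psi = np.array([1.0, 1.0j, 1.0]) / np.sqrt(3.0)

# Amplitudes <a_i|Psi> and outcome probabilities P_{a_i} = |<a_i|Psi>|^2
amps = eigvecs.conj().T @ psi
probs = np.abs(amps) ** 2
```

Since $$\vert\Psi\rangle$$ is normalized and the eigenvectors form an orthonormal basis, the probabilities are non-negative and sum to one.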

Another useful and important property of Hermitian operators is that their eigenvectors form a complete orthonormal basis of the Hilbert space (when the eigenvalue spectrum is non-degenerate). That is, they are linearly independent, span the space, and satisfy the orthonormality condition

$\langle a_i\vert a_j\rangle = \delta_{ij} \nonumber$ and thus any arbitrary vector $$\vert\phi\rangle$$ can be expanded as a linear combination of these vectors:

$\vert\phi\rangle = \sum_i c_i \vert a_i\rangle \nonumber$ By multiplying both sides of this equation by $$\langle a_j\vert$$ and using the orthonormality condition, it can be seen that the expansion coefficients are

$c_i = \langle a_i\vert\phi\rangle \nonumber$

The eigenvectors also satisfy a closure relation:

$I = \sum_i \vert a_i\rangle \langle a_i\vert \nonumber$

where $$I$$ is the identity operator.
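Both the orthonormality condition and the closure relation can be verified numerically for any Hermitian matrix; the 2×2 matrix below is an illustrative choice:

```python
import numpy as np

# An illustrative 2x2 Hermitian matrix
A = np.array([[1.0, 0.5j],
              [-0.5j, 2.0]])
_, V = np.linalg.eigh(A)                 # columns of V are eigenvectors |a_i>

# Orthonormality: <a_i|a_j> = delta_ij
gram = V.conj().T @ V

# Closure: sum_i |a_i><a_i| = I
closure = sum(np.outer(V[:, i], V[:, i].conj()) for i in range(2))
```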

Averaging over many individual measurements of $$A$$ gives rise to an average value or expectation value for the observable $$A$$, which we denote $$\langle A \rangle$$ and which is given by

$\langle A \rangle = \langle \Psi(t)\vert A\vert\Psi(t)\rangle \nonumber$

That this is true can be seen by expanding the state vector $$\vert\Psi(t)\rangle$$ in the eigenvectors of $$A$$:

$\vert\Psi(t)\rangle = \sum_i \alpha_i(t) \vert a_i\rangle \nonumber$

where $$\alpha_i(t)$$ are the amplitudes for obtaining the eigenvalue $$a_i$$ upon measuring $$A$$, i.e., $$\alpha_i(t) = \langle a_i\vert\Psi(t)\rangle$$. Introducing this expansion into the expectation value expression gives

\begin{align*} \langle A \rangle (t) &= \sum_{i,j} \alpha_i^*(t) \alpha_j(t) \langle a_i\vert A\vert a_j \rangle \\[4pt] &=\sum_{i,j} \alpha_i^*(t) \alpha_j(t)\, a_j \delta_{ij} \\[4pt] &= \sum_i a_i \vert\alpha_i(t)\vert^2 \end{align*}

The interpretation of this result is that the expectation value of $$A$$ is the sum over the possible outcomes of a measurement of $$A$$, weighted by the probability that each outcome is obtained. Since $$\vert\alpha_i(t)\vert^2 =\vert\langle a_i\vert\Psi(t)\rangle \vert^2$$ is exactly this probability, the two expressions are equivalent.
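The equivalence of the two forms of the expectation value, $$\langle\Psi\vert A\vert\Psi\rangle$$ and $$\sum_i a_i\vert\alpha_i\vert^2$$, can be checked numerically; the symmetric matrix and state below are illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 0.0]])               # real symmetric, hence Hermitian
psi = np.array([0.6, 0.8])               # an illustrative normalized state

# Direct form: <A> = <Psi|A|Psi>
direct = psi.conj() @ A @ psi

# Spectral form: sum_i a_i |alpha_i|^2 with alpha_i = <a_i|Psi>
a, V = np.linalg.eigh(A)
alpha = V.conj().T @ psi
spectral = np.sum(a * np.abs(alpha) ** 2)
```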

Two observables $$A$$ and $$B$$ are said to be compatible if $$AB = BA$$. If this is true, then the observables can be diagonalized simultaneously, i.e., they share a common set of eigenvectors. To see this, consider the action of $$BA$$ on an eigenvector $$\vert a_i\rangle$$ of $$A$$: $$BA\vert a_i\rangle = a_i B\vert a_i\rangle$$. If this must equal $$AB\vert a_i\rangle$$, then $$B\vert a_i\rangle$$ is itself an eigenvector of $$A$$ with eigenvalue $$a_i$$; for a non-degenerate spectrum, $$B\vert a_i\rangle$$ must therefore be proportional to $$\vert a_i \rangle$$, which means $$\vert a_i\rangle$$ is also an eigenvector of $$B$$. The condition $$AB = BA$$ can be expressed as

$AB - BA = 0 \nonumber$

that is

$\left[ A, B \right]= 0 \nonumber$

where, in the second line, the quantity $$\left [A, B \right ] \equiv AB - BA$$ is known as the commutator of $$A$$ and $$B$$. If $$\left [ A, B \right ] = 0$$, then $$A$$ and $$B$$ are said to commute with each other. That they can be simultaneously diagonalized implies that one can determine the values of both $$A$$ and $$B$$ from the same measurement.
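The simultaneous-diagonalization property can be illustrated numerically. This sketch (assuming a non-degenerate spectrum) builds two commuting Hermitian matrices from a common eigenbasis and verifies that the eigenvectors of one also diagonalize the other:

```python
import numpy as np

# A random orthonormal basis (columns of V), used as the common eigenbasis
V, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))

# Two Hermitian matrices, diagonal in the same basis, hence commuting
A = V @ np.diag([1.0, 2.0, 3.0]) @ V.T
B = V @ np.diag([5.0, -1.0, 0.5]) @ V.T

# The commutator [A, B] = AB - BA vanishes
comm = A @ B - B @ A

# The eigenvectors of A (non-degenerate spectrum) also diagonalize B
_, U = np.linalg.eigh(A)
B_in_A_basis = U.T @ B @ U
```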

As we have seen, classical observables are functions of position $$x$$ and momentum $$p$$ (for a one-particle system). Quantum analogs of classical observables are, therefore, functions of the operators $$X$$ and $$P$$ corresponding to position and momentum. Like other observables, $$X$$ and $$P$$ are linear Hermitian operators. The corresponding eigenvalues $$x$$ and $$p$$ and eigenvectors $$\vert x \rangle$$ and $$\vert p \rangle$$ satisfy the equations

$X\vert x\rangle = x\vert x\rangle \nonumber$

$P\vert p\rangle = p\vert p\rangle \nonumber$

which, in general, could constitute a continuous spectrum of eigenvalues and eigenvectors. The operators $$X$$ and $$P$$ are not compatible. In accordance with the Heisenberg uncertainty principle (to be discussed below), the commutator between $$X$$ and $$P$$ is given by

$\left [ X, P \right ] = i \hbar I \nonumber$

and the inner product between eigenvectors of $$X$$ and $$P$$ is

$\langle x\vert p\rangle = {1 \over \sqrt{2\pi\hbar}}e^{ipx/\hbar} \nonumber$

Since, in general, the eigenvalues and eigenvectors of $$X$$ and $$P$$ form a continuous spectrum, we write the orthonormality and closure relations for the eigenvectors as:

\begin{align*} \langle x\vert x'\rangle & = \delta(x-x') \\[4pt] \langle p\vert p'\rangle &= \delta(p-p') \end{align*}

\begin{align*} \vert \phi \rangle &= \int dx \vert x\rangle \langle x\vert\phi\rangle \\[4pt] \vert\phi\rangle &= \int dp \vert p\rangle \langle p\vert\phi \rangle \end{align*}

\begin{align*} I &= \int dx \vert x\rangle \langle x\vert \\[4pt] I &= \int dp \vert p\rangle \langle p\vert \end{align*}

The probability that a measurement of the operator $$X$$ will yield an eigenvalue $$x$$ in a region $$dx$$ about some point is

$P(x,t)dx = \vert\langle x\vert\Psi(t)\rangle \vert^2 dx \nonumber$
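As a minimal numerical sketch (using an illustrative normalized Gaussian wave function, with $$\hbar = 1$$, not taken from the text), the probability density $$\vert\Psi(x,t)\vert^2$$ integrates to unity over all space:

```python
import numpy as np

# Grid of position eigenvalues
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Illustrative normalized Gaussian wave function, psi(x) = (pi sigma^2)^{-1/4} exp(-x^2 / 2 sigma^2)
sigma = 1.3
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2.0 * sigma**2))

# P(x) dx = |<x|Psi>|^2 dx; total probability should integrate to 1
prob_density = np.abs(psi) ** 2
total = np.sum(prob_density) * dx
```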

The object $$\langle x \vert \Psi (t) \rangle$$ is best represented by a continuous function $$\Psi (x, t)$$, often referred to as the wave function. It is a representation of the inner product between the eigenvectors of $$X$$ and the state vector. To determine the action of the operator $$X$$ on the state vector in the basis set of the operator $$X$$, we compute

$\langle x\vert X\vert\Psi(t)\rangle = x\Psi(x,t) \nonumber$

The action of $$P$$ on the state vector in the basis of the $$X$$ operator is a consequence of the incompatibility of $$X$$ and $$P$$ and is given by

$\langle x\vert P\vert\Psi(t)\rangle = {\hbar \over i}{\partial \over \partial x}\Psi(x,t) \nonumber$

Thus, in general, for any observable $$A (X,P)$$, its action on the state vector represented in the basis of $$X$$ is

$\langle x\vert A(X,P)\vert\Psi(t)\rangle = A\left(x,{\hbar\over i}{\partial \over \partial x}\right)\Psi(x,t) \nonumber$
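The position-space rule $$\langle x\vert P\vert\Psi(t)\rangle = (\hbar/i)\,\partial\Psi/\partial x$$ can be checked on a grid. The Gaussian wave packet below, with mean momentum `p0`, is an illustrative choice (with $$\hbar = 1$$); the expectation value of $$P$$ computed this way should come out close to `p0`:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
p0, sigma = 2.0, 1.0

# Illustrative Gaussian wave packet with mean momentum p0
psi = ((np.pi * sigma**2) ** -0.25
       * np.exp(-x**2 / (2.0 * sigma**2))
       * np.exp(1j * p0 * x / hbar))

# Position representation of P: (hbar/i) d/dx, via central differences
Ppsi = (hbar / 1j) * np.gradient(psi, dx)

# Expectation value <P> = integral of Psi* (P Psi) dx
expP = (np.sum(psi.conj() * Ppsi) * dx).real
```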

This page titled 9.1: Measurement is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mark Tuckerman.