
11.2.1: Expectation values of observables


    Recall the basic formula for the expectation value of an observable \(A\):

    \[\langle A \rangle = {1 \over Q(\beta)}{\rm Tr}(Ae^{-\beta H}) \nonumber \]
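
    As a quick numerical illustration of this formula (not part of the original development), the trace can be evaluated in a truncated energy eigenbasis. The sketch below assumes a one-dimensional harmonic oscillator and the observable \(A = X^2\), and compares the trace formula with the known analytic result \(\langle X^2\rangle = (\hbar/2m\omega)\coth(\beta\hbar\omega/2)\).

```python
import numpy as np

# Illustration only: evaluate <A> = Tr(A e^{-beta H}) / Q for a harmonic
# oscillator in a truncated number basis.  The oscillator, the observable
# A = X^2, and all parameter values are assumptions made for this sketch.
hbar, m, omega, beta = 1.0, 1.0, 1.0, 2.0
N = 60                                    # basis truncation (assumed sufficient)

n = np.arange(N)
E = hbar * omega * (n + 0.5)              # harmonic-oscillator eigenvalues
rho = np.diag(np.exp(-beta * E))          # e^{-beta H} in its own eigenbasis
Q = np.trace(rho)                         # canonical partition function

# X = sqrt(hbar / 2 m omega) (a + a^dagger) in the number basis
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
X = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)
A = X @ X

print("trace formula:", np.trace(A @ rho) / Q)
print("analytic     :", hbar / (2 * m * omega) / np.tanh(beta * hbar * omega / 2))
```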

    Two important cases for the evaluation of this trace in the coordinate basis will be considered below:

    Case 1: Functions only of position

    If \(A = A (X) \), i.e., a function of the operator \(X\) only, then the trace can be easily evaluated in the coordinate basis:

    \[\langle A \rangle = {1 \over Q}\int dx \langle x\vert A(X)e^{-\beta H}\vert x\rangle \nonumber \]

    Since \(A (X)\) acts to the left on one of its eigenstates, we have

    \[\langle A \rangle = {1 \over Q}\int dx A(x) \langle x\vert e^{-\beta H}\vert x\rangle \nonumber \]

    which only involves a diagonal element of the density matrix. This can, therefore, be written as a path integral:

    \[ \langle A \rangle = {1 \over Q}\lim_{P\rightarrow \infty}\left ( {mP \over 2\pi \beta \hbar^2}\right )^{P/2} \int dx_1 \cdots dx_P\, A(x_1) \exp \left [ - \beta \sum_{i=1}^P \left ( {1 \over 2}m\omega_P^2 (x_{i+1}-x_i)^2 + {1 \over P}U(x_i)\right)\right] \nonumber \]

    where, as in the previous lecture, \(\omega_P = \sqrt{P}/\beta\hbar\) and the cyclic condition \(x_{P+1} = x_1\) is understood.

    However, since the points \( x_1, \cdots , x_P \) all enter equivalently (they are all integrated over, and the cyclic condition \(x_{P+1}=x_1\) makes the chain invariant under cyclic relabeling), we can perform the \(P\) cyclic renamings of the coordinates \( x_1 \rightarrow x_2 \), \( x_2 \rightarrow x_3 \), etc., and generate \(P\) equivalent integrals. In each, the function \(A(x_1)\), \(A(x_2)\), etc. appears in turn. Summing these \(P\) equivalent integrals and dividing by \(P\) gives

    \[ \langle A \rangle = {1 \over Q}\lim_{P\rightarrow \infty}\left ( {mP \over 2\pi \beta \hbar^2}\right )^{P/2} \int dx_1 \cdots dx_P \left[{1 \over P}\sum_{i=1}^P A(x_i)\right] \exp \left [ - \beta \sum_{i=1}^P \left ( {1 \over 2}m\omega_P^2 (x_{i+1}-x_i)^2 + {1 \over P}U(x_i)\right)\right] \nonumber \]

    This allows us to define an estimator for the observable \(A\). Recall that an estimator is a function of the \(P\) variables \( {x_1, \cdots , x_P } \) whose average over the ensemble yields the expectation value of \(A\):

    \[ a_P(x_1,...,x_P) = {1 \over P}\sum_{i=1}^P A(x_i) \nonumber \]

    Then

    \[ \langle A \rangle = \lim_{P\rightarrow\infty}\langle a_P \rangle _{x_1,...,x_P} \nonumber \]

    where the average on the right is taken over many configurations of the \(P\) variables \( x_1, \cdots , x_P \) (we will discuss, in the next lecture, a way to generate these configurations).
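
    As a concrete (and deliberately naive) sketch of how this estimator is used in practice, the code below samples the discretized distribution above with single-bead Metropolis moves and averages \(a_P\) over the resulting configurations. The harmonic potential, the observable \(A(x)=x^2\), the simple sampling scheme, and all parameter values are illustrative assumptions; systematic methods for generating the configurations are the subject of the next lecture.

```python
import numpy as np

# Sketch: average the estimator a_P(x_1,...,x_P) = (1/P) sum_i A(x_i) over
# configurations drawn from the discretized distribution
#   exp[ -beta sum_i ( (1/2) m omega_P^2 (x_{i+1}-x_i)^2 + U(x_i)/P ) ],
# with the cyclic condition x_{P+1} = x_1.  The potential U, observable A,
# single-bead Metropolis moves, and parameters are assumptions of this sketch.
rng = np.random.default_rng(0)
hbar, m, beta, P = 1.0, 1.0, 2.0, 16
omega_P2 = P / (beta * hbar) ** 2          # omega_P^2 = P / (beta hbar)^2

U = lambda x: 0.5 * x ** 2                 # harmonic potential (assumed)
A = lambda x: x ** 2                       # observable A(x) (assumed)

def action(x):
    dx = np.roll(x, -1) - x                # differences x_{i+1} - x_i, cyclic
    return beta * np.sum(0.5 * m * omega_P2 * dx ** 2 + U(x) / P)

x, S = np.zeros(P), action(np.zeros(P))
estimates = []
for step in range(1_000_000):
    i = rng.integers(P)
    trial = x.copy()
    trial[i] += rng.normal(scale=0.3)      # displace a single bead
    S_trial = action(trial)
    if rng.random() < np.exp(min(0.0, S - S_trial)):
        x, S = trial, S_trial
    if step > 100_000 and step % 10 == 0:
        estimates.append(np.mean(A(x)))    # a_P for this configuration

# For this harmonic example <x^2> = (hbar/2 m omega) coth(beta hbar omega / 2)
# ~ 0.657, up to the finite-P and statistical errors of this crude sketch.
print("path-integral estimate of <x^2>:", np.mean(estimates))
```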

    The limit \(P \rightarrow \infty \) can be taken in the same way that we did in the previous lecture, yielding a functional integral expression for the expectation value:

    \[ \langle A \rangle = {1 \over Q}\oint {\cal D}x(\tau) \left[ {1 \over \beta \hbar} \int _0^{\beta \hbar } d \tau\, A (x (\tau )) \right ] \exp \left [ -{1 \over \hbar } \int _0^{\beta \hbar } d \tau \left ( {1 \over 2} m \dot {x}^2 + U (x (\tau )) \right )\right] \nonumber \]

    Case 2: Functions only of momentum

    Suppose that \(A = A (P) \), i.e., a function of the momentum operator. Then, the trace can still be evaluated in the coordinate basis:

    \[ \langle A \rangle = {1 \over Q}\int dx \langle x\vert A(P)e^{-\beta H}\vert x\rangle \nonumber \]

    However, \(A (P) \) acting to the left does not act on one of its eigenstates. Let us insert a coordinate-space identity \(I = \int dx' \vert x' \rangle \langle x' \vert \) between \( A(P)\) and \( e^{-\beta H} \):

    \[ \langle A \rangle = {1 \over Q}\int dx\, dx' \langle x\vert A(P)\vert x'\rangle \langle x'\vert e^{-\beta H}\vert x\rangle \nonumber \]

    Now, we see that the expectation value can be obtained by evaluating the coordinate-space matrix elements of both the operator and the density matrix.

    A particularly useful form for the expectation value can be obtained if a momentum space identity is inserted:

    \[ \langle A \rangle = {1 \over Q}\int dx dx' dp \langle x\vert A (P) \vert p \rangle \langle p \vert x' \rangle \langle x' \vert e^{-\beta H}\vert x\rangle \nonumber \]

    Now, we see that \(A (P) \) acts on an eigenstate (at the price of introducing another integral). Thus, we have

    \[ \langle A \rangle = {1 \over Q}\int dp\, A(p) \int dx\, dx'\,\langle x \vert p \rangle \langle p \vert x' \rangle \langle x'\vert e^{-\beta H}\vert x\rangle \nonumber \]

    Using the fact that \( \langle x\vert p\rangle = (2\pi\hbar)^{-1/2}\exp(ipx/\hbar) \), so that \( \langle x\vert p\rangle \langle p\vert x'\rangle = (1/2\pi\hbar)\exp[ip(x-x')/\hbar] \), we find that

    \[ \langle A \rangle = {1 \over 2\pi\hbar Q}\int dp A(p)\int dx dx' e^{ip(x-x')/\hbar} \langle x'\vert e^{-\beta H}\vert x\rangle \nonumber \]

    In the above expression, we introduce the change of variables

    \[ r = {x+x' \over 2}\;\;\;\;\;\;\;\;\;\;s=x-x' \nonumber \]

    Noting that \(x = r + s/2\) and \(x' = r - s/2\), and that the Jacobian of the transformation is unity (\(dx\,dx' = dr\,ds\)), we obtain

    \[ \langle A \rangle = {1 \over 2\pi\hbar Q}\int dp A(p)\int dr ds e^{ips/\hbar}\langle r-{s \over 2}\vert e^{-\beta H}\vert r+{s \over 2}\rangle \nonumber \]

    Define a distribution function

    \[ \rho_{\rm W}(r,p) = {1 \over 2\pi\hbar}\int ds e^{ips/\hbar} \langle r-{s \over 2}\vert e^{-\beta H}\vert r+{s \over 2}\rangle \nonumber \]

    Then, the expectation value can be written as

    \[ \langle A \rangle = {1 \over Q}\int dr dp A(p)\rho_{\rm W}(r,p) \nonumber \]

    which looks just like a classical phase space average using the "phase space" distribution function \(\rho_{\rm W}(r,p) \). The distribution function \(\rho _{\rm W} (r, p) \) is known as the Wigner density matrix, and it has many interesting features. For one thing, its classical limit is

    \[ \rho_{\rm W}(r,p) = \exp\left[-\beta \left({p^2 \over 2m} + U(r)\right)\right] \nonumber \]

    which is the true classical phase space distribution function. There are cases in which the exact Wigner distribution function takes the form of the classical phase space distribution function, in particular for quadratic Hamiltonians. Despite its compelling appearance, the evaluation of expectation values of functions of momentum is considerably more difficult than that of functions of position, because the full (off-diagonal) density matrix is required rather than just its diagonal elements. However, there are a few quantities of interest that are functions of momentum which can be evaluated without resorting to the entire density matrix. These are thermodynamic quantities, which will be discussed in the next section.
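
    As an illustration of the quadratic case (a standard result for the harmonic oscillator, quoted here rather than derived): for a single oscillator of frequency \(\omega\), the Wigner transform of \(e^{-\beta H}\) is, up to a normalization constant,

    \[ \rho_{\rm W}(r,p) \propto \exp\left[-{2 \over \hbar\omega}\tanh\left({\beta\hbar\omega \over 2}\right)\left({p^2 \over 2m} + {1 \over 2}m\omega^2 r^2\right)\right] \nonumber \]

    which has exactly the classical Boltzmann form with an effective inverse temperature \( \beta_{\rm eff} = (2/\hbar\omega)\tanh(\beta\hbar\omega/2) \), reducing to \(\beta\) as \(\beta\hbar\omega \rightarrow 0\).

    For more general potentials, \(\rho_{\rm W}\) must be constructed from the full coordinate-space density matrix. The sketch below (the grid, the finite-difference Hamiltonian, and all parameters are assumptions made for illustration) builds \(\langle x'\vert e^{-\beta H}\vert x\rangle\) numerically for the same oscillator, performs the \(s\)-integral on the grid, and checks that the resulting \(\rho_{\rm W}\) reproduces the thermal value of \(\langle p^2/2m\rangle\).

```python
import numpy as np

# Sketch: numerical Wigner transform of e^{-beta H} on a grid.  The grid,
# the finite-difference Hamiltonian, the harmonic potential, and all
# parameters are assumptions made for illustration.
hbar, m, omega, beta = 1.0, 1.0, 1.0, 2.0
N, L = 256, 16.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

# Finite-difference Hamiltonian and the coordinate-space density matrix
kin = np.diag(np.full(N - 1, 1.0), 1) + np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
H = -(hbar ** 2) / (2 * m * dx ** 2) * kin + np.diag(0.5 * m * omega ** 2 * x ** 2)
E, V = np.linalg.eigh(H)
rho = (V * np.exp(-beta * E)) @ V.T / dx   # <x|e^{-beta H}|x'> on the grid

# rho_W(r, p): s is restricted to even multiples of dx so that r +/- s/2
# remain grid points; the kernel is symmetric in s, so exp(ips/hbar) -> cos(ps/hbar).
p = np.linspace(-6.0, 6.0, 121)
rho_W = np.zeros((N, p.size))
for j in range(N):
    kmax = min(j, N - 1 - j)
    k = np.arange(-kmax, kmax + 1)
    s = 2 * k * dx
    elem = rho[j - k, j + k]               # <r - s/2| e^{-beta H} |r + s/2>
    rho_W[j] = (np.cos(np.outer(p, s) / hbar) @ elem) * (2 * dx) / (2 * np.pi * hbar)

# Check: <p^2/2m> = (1/Q) int dr dp (p^2/2m) rho_W(r,p)
dp = p[1] - p[0]
Q = np.sum(np.diag(rho)) * dx
avg = np.sum((p ** 2 / (2 * m)) * rho_W) * dx * dp / Q
print("from rho_W :", avg)
print("analytic   :", hbar * omega / 4 / np.tanh(beta * hbar * omega / 2))
```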


    This page titled 11.2.1: Expectation values of observables is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mark Tuckerman.