7.4: Time Correlation Functions
One of the most active research areas in statistical mechanics involves the evaluation of so-called equilibrium time correlation functions such as we encountered in Chapter 6. The correlation function \(C(t)\) is defined in terms of two physical operators \(A\) and \(B\), a time dependence that is carried by a Hamiltonian \(H\) via \(\exp(-iHt/ \hbar)\), and an equilibrium average over a Boltzmann population \(\exp(-\beta H)/Q\).
The quantum mechanical expression for \(C(t)\) is
\[C(t) = \sum_j \langle \Phi_j | A\exp(iHt/ \hbar) B\exp(-iHt/ \hbar) |\Phi_j\rangle \dfrac{\exp(-\beta E_j)}{Q}, \label{1}\]
while the classical mechanical expression (here, we allow the \(h^{-M}\) factor that occurs in the partition function shown in Section 7.1.2 to be canceled out in the numerator and denominator for simplicity) is
\[C(t) = \int dq(0) \int dp(0) A(q(0),p(0)) B(q(t),p(t)) \dfrac{\exp(-\beta H(q(0),p(0)))}{Q},\label{2}\]
where \(q(0)\) and \(p(0)\) are the values of all the coordinates and momenta of the system at \(t=0\) and \(q(t)\) and \(p(t)\) are their values, according to Newtonian mechanics, at time \(t\).
An example of a time correlation function that relates to molecular spectroscopy is the dipole-dipole correlation function that we discussed in Chapter 6:
\[C(t) = \sum_j \langle \Phi_j | \textbf{e}\cdot\boldsymbol{\mu} \exp(iHt/ \hbar) \textbf{e} \cdot \boldsymbol{\mu} \exp(-iHt/ \hbar) |\Phi_j\rangle \dfrac{\exp(-\beta E_j)}{Q},\label{3}\]
for which \(A\) and \(B\) are both the electric dipole interaction \(\textbf{e}\cdot\boldsymbol{\mu}\) between the photon's electric field, whose direction is characterized by the vector \(\textbf{e}\), and the molecule's dipole operator \(\boldsymbol{\mu}\). The Fourier transform of this particular \(C(t)\) relates to the absorption intensity for light of frequency \(\omega\):
\[I(\omega) = \int dt C(t) \exp(i\omega t).\]
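In practice, once \(C(t)\) is available on a uniform grid of times, this Fourier transform is evaluated numerically. The following is a minimal sketch of that step; the model \(C(t)\) (a damped oscillation), the grids, and all parameter values are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

# Illustrative model correlation function on a uniform time grid (assumption:
# a single exponentially damped oscillation, mimicking one spectral line).
dt = 0.01
t = np.arange(0.0, 50.0, dt)
omega0, gamma = 3.0, 0.2
C = np.cos(omega0 * t) * np.exp(-gamma * t)

# I(omega) = integral dt C(t) exp(i omega t), approximated as a discrete sum.
omega = np.linspace(0.0, 6.0, 601)
I = np.array([np.sum(C * np.exp(1j * w * t)) * dt for w in omega])

# The real part of I(omega) peaks near omega0, with a width set by gamma.
print("peak located at omega =", omega[np.argmax(I.real)])
```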
It turns out that many physical properties (e.g., absorption line shapes, Raman scattering intensities) and transport coefficients (e.g., diffusion coefficients, viscosity) can be expressed in terms of time-correlation functions. It is beyond the scope of this text to go much further in this direction, so I will limit my discussion to the optical spectroscopy case at hand, which requires that we now discuss how the time-evolution aspect of this problem is dealt with. The text Statistical Mechanics, D. A. McQuarrie, Harper and Row, New York (1977) has a nice treatment of such other correlation functions, so the reader is directed to that text for further details.
The computation of correlation functions involves propagating either wave functions or classical trajectories that produce the \(q(t)\), \(p(t)\) values entering into the expression for \(C(t)\). In the classical case, one carries out a large number of Newtonian trajectories with initial coordinates \(q(0)\) and momenta \(p(0)\) chosen to represent the equilibrium condition of the \(N\)-molecule system. For example, one could use the MC method to select these variables, employing \(\exp(-\beta H(p(0),q(0)))\) as the probability function for accepting or rejecting initial \(q(0)\) and \(p(0)\) values. In this case, the weighting function contains not just the potential energy but also the kinetic energy (and thus the total Hamiltonian \(H\)), because now we also need to select proper initial values for the momenta. So, with many (e.g., \(M\)) selections of the initial \(q\) and \(p\) variables of the \(N\) molecules having been made, one would allow the Newtonian dynamics of each set of initial conditions to proceed. During each such trajectory, one would monitor the initial value of the \(A(q(0), p(0))\) property and the time progress of the \(B(q(t),p(t))\) property. One would then compute the MC average to obtain the correlation function:
\[C(t) = \dfrac{1}{M} \sum_{J=1}^M A(q_J(0),p_J(0)) B(q_J(t),p_J(t)) \exp(-\beta H(q_J(0),p_J(0))).\label{4}\]
where the index \(J\) labels the \(M\) accepted configurations and momenta of the MC sampling.
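To make this classical recipe concrete, here is a minimal sketch for a single harmonic degree of freedom; the potential, the parameters, and the choice \(A = B = q\) (giving the coordinate autocorrelation function) are illustrative assumptions. Because the initial conditions are drawn directly from the Boltzmann distribution, their statistical weight is carried by the sampling itself rather than by an explicit weight in the average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D harmonic model (assumption): H = p^2/(2m) + m w^2 q^2 / 2.
m, w, beta = 1.0, 1.0, 1.0
dt, nsteps, M = 0.01, 1000, 5000       # time step, steps per trajectory, trajectories

# Draw q(0), p(0) from exp(-beta H); for a harmonic H both are Gaussian.
q0 = rng.normal(0.0, 1.0 / (w * np.sqrt(beta * m)), size=M)
p0 = rng.normal(0.0, np.sqrt(m / beta), size=M)

force = lambda q: -m * w**2 * q

# Velocity-Verlet propagation of all M trajectories; accumulate <q(0) q(t)>.
q, p = q0.copy(), p0.copy()
C = np.zeros(nsteps + 1)
C[0] = np.mean(q0 * q)
for n in range(1, nsteps + 1):
    p += 0.5 * dt * force(q)
    q += dt * p / m
    p += 0.5 * dt * force(q)
    C[n] = np.mean(q0 * q)

# For this model the exact classical result is <q(0) q(t)> = cos(w t)/(beta m w^2).
print(C[0], C[100])
```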
In the quantum case, the time propagation is especially challenging and is somewhat beyond the scope of this text. However, I want to give you some idea of the steps that are involved, realizing that this remains an area of very active research development. As noted in Section 1.3.6, it is possible to time-propagate a wave function \(F\) that is known at \(t = 0\) if one is able to expand \(F\) in terms of the eigenfunctions of the Hamiltonian \(H\). However, for systems comprised of many molecules, which are most common in statistical mechanics studies, it is impossible to compute (or realistically approximate) these eigenfunctions. Thus, it is not productive to try to express \(C(t)\) in terms of these eigenfunctions. Therefore, an entirely new set of tools has been introduced to handle time-propagation in the quantum case, and it is these new devices that I now attempt to describe in a manner much like we saw in Section 1.3.6’s discussion of time propagation of wave functions.
To illustrate, consider the time propagation issue contained in the quantum definition of \(C(t)\) shown above. One is faced with
- propagating \(|\Phi_j\rangle\) from \(t=0\) up to time \(t\), using \(\exp(-iHt/ \hbar) |\Phi_j\rangle\), and then acting with the operator \(B\);
- acting with the operator \(A^+\) on \(|\Phi_j\rangle \) and then propagating \(A^+ |\Phi_j\rangle \) from \(t=0\) up to time \(t\), using \(\exp(-iHt/ \hbar)A^+ |\Phi_j\rangle \);
- forming \(C(t)\), which requires that these two time-propagated functions be multiplied together and integrated over the coordinates on which they depend.
The \(\exp(-\beta H)\) operator that also appears in the definition of \(C(t)\) can be combined, for example, with the first time propagation step and actually handled as part of the time propagation as follows:
\[\exp(-iHt/ \hbar) |\Phi_j\rangle \exp(-\beta E_j) = \exp(-iHt/ \hbar) \exp(-\beta H) |\Phi_j\rangle\label{5a} \]
\[=\exp(-i[t+\beta \hbar /i]H/ \hbar) |\Phi_j\rangle.\label{5b}\]
The latter expression can be viewed as involving a propagation in complex time from \(t = 0\) to \(t = t + \beta\hbar/i\). Although having a complex time may seem unusual, as I will soon point out, it turns out that it can have a stabilizing influence on the success of these tools for computing quantum correlation functions.
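This operator identity is easy to verify numerically for a small model Hamiltonian. The sketch below (the matrix size and the values of \(t\) and \(\beta\) are arbitrary assumptions chosen only for illustration) checks that \(\exp(-iHt/\hbar)\exp(-\beta H)\) and \(\exp(-i[t+\beta\hbar/i]H/\hbar)\) agree.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
hbar, beta, t = 1.0, 0.5, 2.0

# Arbitrary small Hermitian matrix standing in for H (assumption, illustration only).
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = 0.5 * (X + X.conj().T)

lhs = expm(-1j * H * t / hbar) @ expm(-beta * H)
rhs = expm(-1j * (t + beta * hbar / 1j) * H / hbar)

print(np.allclose(lhs, rhs))   # True: the two factors combine into one complex-time propagator
```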
Much like we saw earlier in Section 1.3.6, so-called Feynman path integral techniques can be used to carry out the above time propagations. One begins by dividing the time interval into \(P\) discrete steps (this can be the real time interval or the complex interval)
\[\exp\big[-\frac{i Ht}{\hbar}\big] = \Big\{\exp\big[-\frac{i H\delta t}{\hbar}\big]\Big\}^P .\label{6}\]
The number \(P\) will eventually be taken to be large, so each time step \(\delta t = t/P\) has a small magnitude. This fact allows us to use approximations to the exponential operator appearing in the propagator that are valid only for short time steps. For each of these short time steps, one then approximates the propagator in the most commonly used so-called symmetric split form:
\[\exp\big[-\frac{i H\delta t}{\hbar}\big] = \exp\big[-\frac{i V\delta t}{2 \hbar}\big] \exp\big[-\frac{i T\delta t}{\hbar}\big] \exp\big[-\frac{i V\delta t}{2 \hbar}\big].\label{7}\]
Here, \(V\) and \(T\) are the potential and kinetic energy operators that appear in \(H = T + V\). It is possible to show that this symmetric splitting is accurate through terms of order \((\delta t)^2\), with errors entering only at order \((\delta t)^3\). So, for short times (i.e., small \(\delta t\)), this symmetric split-operator approximation to the propagator should be accurate.
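The error order can be checked numerically by comparing the exact and split propagators for two small Hermitian matrices standing in for \(T\) and \(V\); the matrices and step sizes in the sketch below are arbitrary assumptions used only to exhibit the scaling.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def rand_herm(n):
    """Arbitrary small Hermitian matrix (assumption, for illustration only)."""
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return 0.5 * (X + X.conj().T)

T, V = rand_herm(8), rand_herm(8)
H = T + V

for dt in (0.1, 0.05, 0.025):
    exact = expm(-1j * H * dt)
    split = expm(-1j * V * dt / 2) @ expm(-1j * T * dt) @ expm(-1j * V * dt / 2)
    # The norm of the difference shrinks by roughly a factor of 8 each time dt is halved,
    # consistent with a leading error of order dt**3.
    print(dt, np.linalg.norm(exact - split))
```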
The time evolved wave function \(\Phi(t)\) can then be expressed as
\[\Phi(t) = \{ \exp\big[-\frac{i V\delta t}{2 \hbar}\big] \exp\big[-\frac{i T\delta t}{\hbar}\big] \exp\big[-\frac{i V\delta t}{2 \hbar}\big]\}^P \Phi(t=0).\label{8}\]
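When the wave function is represented on a grid of coordinate values, this product form is especially convenient: as discussed in the next paragraph, \(V\) depends only on the coordinates and \(T\) only on the momenta, so each factor is diagonal in either the coordinate or the momentum representation, the latter being reached by a Fourier transform. The sketch below carries out such a propagation for a one-dimensional wave packet; the grid, the harmonic potential, and the initial Gaussian packet are illustrative assumptions.

```python
import numpy as np

# Grid, mass, and time step (illustrative assumptions); units with hbar = 1.
hbar, m, dt = 1.0, 1.0, 0.01
x = np.linspace(-10.0, 10.0, 512, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)      # wave numbers, p = hbar * k

V = 0.5 * x**2                                       # harmonic potential (assumption)
psi = np.exp(-(x - 1.0)**2) * (1.0 + 0.0j)           # initial Gaussian packet (assumption)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

expV = np.exp(-1j * V * dt / (2.0 * hbar))           # exp(-i V dt / 2 hbar), diagonal in x
expT = np.exp(-1j * hbar * k**2 * dt / (2.0 * m))    # exp(-i T dt / hbar),  diagonal in k

for _ in range(1000):                                # apply P = 1000 short-time factors
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print(np.sum(np.abs(psi)**2) * dx)                   # the norm is conserved (prints ~1)
```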
The potential \(V\) is (except when external magnetic fields are present) a function only of the coordinates \(\{q_j \}\) of the system, while the kinetic term \(T\) is a function of the momenta \(\{p_j\}\) (assuming Cartesian coordinates are used). By making use of the completeness relations for eigenstates of the coordinate operator
\[1 = \int dq_j\, | q_j\rangle \langle q_j|\label{9}\]
and inserting this identity \(P\) times (once between each combination of \(\exp[-i V\delta t/2\hbar] \exp[-i T\delta t/\hbar] \exp[-i V\delta t/2\hbar]\) factors), the expression given above for \(\Phi(t)\) can be rewritten as follows:
\[\Phi(q_P ,t)= \int dq_{P-1}\, dq_{P-2} \cdots dq_1\, dq_0 \prod_{j=1}^P \exp\big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))\big] \langle q_j| \exp\big[-\frac{i \delta t T}{\hbar}\big] |q_{j-1}\rangle\, \Phi(q_0,0).\]
Then, by using the analogous completeness identity for the momentum operator
\[1 = \frac{1}{\hbar} \int dp_j| p_j\rangle \langle p_j |\]
one can write
\[\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle = \frac{1}{\hbar} \int dp \langle q_j|p \rangle \exp\big(-\frac{ip^2\delta t}{2m \hbar} \big) \langle p|q_{j-1} \rangle .\]
Finally, by using the fact (recall this from Section 1.3.6) that the momentum eigenfunctions \(|p\rangle\), when expressed as functions of the coordinate \(q_j\), are given by
\[\langle q_j|p \rangle = \frac{1}{\sqrt{2\pi}} \exp\big(\frac{ipq_j}{\hbar}\big),\]
the above integral becomes
\[\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle = \frac{1}{2\pi \hbar} \int dp \exp\big(-\frac{ip^2 \delta t}{2m \hbar}\big) \exp\big[\frac{ip(q_j - q_{j-1})}{\hbar}\big].\]
This integral over \(p\) can be carried out analytically to give
\[\langle q_j| \exp\big[-\frac{i \delta tT}{\hbar}\big] |q_{j-1}\rangle =\left(\frac{m}{2\pi i\hbar \delta t}\right)^{1/2} \exp\big[\frac{im(q_j - q_{j-1})^2}{2 \hbar \delta t}\big].\]
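For the reader who wishes to fill in this step, the integral used is the standard complex Gaussian (Fresnel) integral,
\[\int_{-\infty}^{\infty} dp\, \exp\big(-i\alpha p^2 + i\beta p\big) = \sqrt{\frac{\pi}{i\alpha}}\, \exp\Big(\frac{i\beta^2}{4\alpha}\Big),\]
applied with \(\alpha = \dfrac{\delta t}{2m\hbar}\) and \(\beta = \dfrac{q_j - q_{j-1}}{\hbar}\); inserting these values and the prefactor \(\dfrac{1}{2\pi\hbar}\) reproduces the result shown above.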
When substituted back into the multidimensional integral for \(\Phi(q_P ,t)\), we obtain
\[\Phi(q_P ,t)= \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \int dq_{P-1}\, dq_{P-2} \cdots dq_1\, dq_0 \prod_{j=1}^P\exp\big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))\big] \exp\big[\frac{im(q_j - q_{j-1})^2}{2 \hbar \delta t}\big] \Phi(q_0,0)\]
or
\[\Phi(q_P ,t)= \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \int dq_{P-1}\, dq_{P-2} \cdots dq_1\, dq_0 \exp\Big[\sum_{j=1}^P \big[ -\frac{i\delta t}{2 \hbar}(V(q_j) + V(q_{j-1}))+ \frac{i m(q_j - q_{j-1})^2}{2 \hbar \delta t}\big]\Big] \Phi(q_0,0).\]
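As a concrete illustration of this discretized expression, the sketch below composes the short-time kernels numerically on a coordinate grid. To keep the arithmetic numerically benign, it uses a purely imaginary time \(t = -i\beta\hbar\) (recall the complex-time combination discussed above), for which the kernels decay rather than oscillate; the trace of the composed kernel is then the partition function \(Q = \text{Tr}[\exp(-\beta H)]\), which can be compared with the exact harmonic-oscillator result. The one-dimensional harmonic oscillator and all grid parameters are illustrative assumptions.

```python
import numpy as np

# 1D harmonic oscillator (assumption): V(q) = 0.5 * m * w^2 * q^2; units with hbar = 1.
hbar, m, w, beta, P = 1.0, 1.0, 1.0, 1.0, 32
q = np.linspace(-6.0, 6.0, 600)
dq = q[1] - q[0]
dtau = beta * hbar / P                   # imaginary-time step, i.e. t/P with t = -i*beta*hbar

V = 0.5 * m * w**2 * q**2

# Short-time kernel <q_i| exp(-dtau H / hbar) |q_j> in the symmetric split form.
K = (np.sqrt(m / (2.0 * np.pi * hbar * dtau))
     * np.exp(-m * (q[:, None] - q[None, :])**2 / (2.0 * hbar * dtau)
              - dtau * (V[:, None] + V[None, :]) / (2.0 * hbar)))

# Compose the P kernels, integrating over the P-1 intermediate coordinates.
rho = K.copy()
for _ in range(P - 1):
    rho = rho @ K * dq

Q = np.trace(rho) * dq                   # Tr exp(-beta H)
print(Q, 1.0 / (2.0 * np.sinh(0.5 * beta * hbar * w)))   # numerical vs. exact result
```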
Recall from the discussion earlier that the time correlation function is to be computed by:
- propagating \(|\Phi_j\rangle\) from \(t=0\) up to time \(t\), using \(\exp(-iHt/ \hbar) |\Phi_j\rangle\), and then acting with the operator \(B\);
- acting with the operator \(A^+\) on \(|\Phi_j\rangle\) and then propagating \(A^+ |\Phi_j\rangle\) from \(t=0\) up to time \(t\), using \(\exp(-iHt/ \hbar)A^+ |\Phi_j\rangle\);
- multiplying together these two time-propagated functions and integrating over the coordinates on which they depend.
So, all of the effort described above would have to be expended for \(\Phi(q_0,0)\) taken to be \(|\Phi_j\rangle\), after which the result would be acted on by the operator \(B\), as well as for \(\Phi(q_0,0)\) taken to be \(A^+|\Phi_j\rangle\), to allow the quantum time correlation function \(C(t)\) to be evaluated. These steps can be performed, but they are very difficult to implement, so I will refer the student to Computer Simulation of Liquids, M. P. Allen and D. J. Tildesley, Oxford University Press, New York (1997) for further discussion of this topic.
Why are the multidimensional integrals of the form shown above called path integrals? Because the sequence of positions \(q_1 , ... q_{P-1}\) describes a path connecting \(q_0\) to \(q_P\). By integrating over all of the intermediate positions \(q_1 , q_2 ,... q_{P-1}\) for any given \(q_0\) and \(q_P\) one is integrating over all paths that connect \(q_0\) to \(q_P\). Further insight into the meaning of the above is gained by first realizing that
\[\frac{m}{2\delta t} (q_j - q_{j-1})^2 =\frac{m}{2(\delta t)^2} (q_j - q_{j-1})^2 \delta t = \int T dt\]
is the finite-difference representation, within the \(P\) discrete time steps of length \(\delta t\), of the integral of \(T\,dt\) over the \(j\)th time step, and that
\[\frac{\delta t}{2} (V(q_j) + V(q_{j-1})) = \int V(q)dt\]
is the representation of the integral of \(V\,dt\) over the \(j\)th time step. So, for any particular path (i.e., any specific set of \(q_0, q_1, \cdots, q_{P-1}, q_P\) values), the sum over all such terms
\[\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big]\]
represents the integral over all time from \(t=0\) until \(t = t\) of the so-called Lagrangian \(L = T - V\):
\[\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big] = \int Ldt.\]
This time integral of the Lagrangian is called the action \(S\) in classical mechanics (recall that in Chapter 1, we used quantization of the action in the particle-in-a-box problem). Hence, the multidimensional integral in terms of which \(\Phi(q_P ,t)\) is expressed can be written as
\[\Phi(q_P ,t) = \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \sum_{\text{ all paths}} \exp\big[\frac{i}{\hbar} \int dt L \big] \Phi(q_0 ,t=0).\]
Here, the notation "all paths" is realized in the earlier version of this equation by dividing the time axis from \(t = 0\) to \(t = t\) into \(P\) equal divisions, and denoting the coordinates of the system at the \(j\)th time step by \(q_j\). By then allowing each \(q_j\) to assume all possible values (i.e., integrating over all possible values of \(q_j\) using, for example, the Monte-Carlo method discussed earlier), one visits all possible paths that begin at \(q_0\) at \(t = 0\) and end at \(q_P\) at \(t = t\). By forming the classical action \(S\)
\[S = \int dtL\]
for each path and then summing \(\exp(iS/ \hbar) \Phi(q_0,t=0)\) over all paths and multiplying by \(\left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2}\), one is able to form \(\Phi(q_P ,t)\).
The difficult step in implementing this Feynman path integral method in practice involves how one identifies all paths connecting \(q_0\), \(t = 0\) to \(q_P\), \(t\). Each path contributes an additive term involving the complex exponential of the quantity
\[\sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big].\]
Because the time variable \(\delta t =t/P\) appearing in each action component can be complex (recall that, in one of the time evolutions, \(t\) is really \(t + \beta \hbar /i\)), the exponentials of these action components can have both real and imaginary parts. The real parts, which arise from the \(\exp(-\beta H)\), cause the exponential terms to be damped (i.e., to undergo exponential decay), but the imaginary parts give rise (in \(\exp(iS/ \hbar)\)) to oscillations. The sum of many, many (actually, an infinite number of) oscillatory
\[\exp(iS/ \hbar) = \cos (S/ \hbar) + i \sin(S/ \hbar)\]
terms is extremely difficult to evaluate because of the tendency of contributions from one path to cancel those of another path. The practical evaluation of such sums remains a very active research subject.
The most commonly employed approximation to this sum involves finding the path(s) for which the action
\[S= \sum_{j=1}^{P} \big[\frac{m(q_j - q_{j-1})^2}{2\delta t} - \frac{\delta t(V(q_j) + V(q_{j-1}))}{2}\big]\]
is smallest (more precisely, stationary), because such paths produce the slowest variation of \(S\), and hence the lowest-frequency oscillations in \(\exp(iS/ \hbar)\), and thus are least subject to cancellation by contributions from neighboring paths.
The path(s) that minimize the action \(S\) are, in fact, the classical paths. That is, they are the paths that the system whose quantum wave function is being propagated would follow if the system were undergoing classical Newtonian mechanics subject to the conditions that the system be at \(q_0\) at \(t=0\) and at \(q_P\) at \(t=t\). In this so-called semi-classical approximation to the propagation of the initial wave function using Feynman path integrals, one finds all classical paths that connect \(q_0\) at \(t = 0\) and at \(q_P\) at \(t = t\), and one evaluates the action \(S\) for each such path. One then applies the formula
\[\Phi(q_P ,t) = \left(\frac{m}{2\pi i\hbar \delta t}\right)^{P/2} \sum_{\text{ all paths}} \exp\big[\frac{i}{\hbar} \int dt L \big] \Phi(q_0 ,t=0)\]
but includes in the sum only the contribution from the classical path(s). In this way, one obtains an approximate quantum propagated wave function via a procedure that requires knowledge of only classical propagation paths.
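Finding a classical path that satisfies the two-point boundary conditions (being at \(q_0\) at \(t=0\) and at \(q_P\) at \(t=t\)) is itself a numerical task. A common strategy is a shooting method: guess the initial momentum, integrate Newton's equations forward, and adjust the guess until the trajectory ends at \(q_P\). The sketch below does this for a one-dimensional harmonic potential, for which the answer can be checked analytically; the potential and all parameter values are illustrative assumptions.

```python
import numpy as np

# Harmonic model (assumption): V(q) = 0.5 * m * w^2 * q^2, so force = -m w^2 q.
m, w = 1.0, 1.0
force = lambda q: -m * w**2 * q

def shoot(p0, q0, T, nsteps=2000):
    """Integrate Newton's equations from (q0, p0) for a time T; return q(T)."""
    dt = T / nsteps
    q, p = q0, p0
    for _ in range(nsteps):               # velocity-Verlet steps
        p += 0.5 * dt * force(q)
        q += dt * p / m
        p += 0.5 * dt * force(q)
    return q

# Boundary conditions (assumed values): q(0) = q0 and q(T) = qP.
q0, qP, T = -1.0, 1.0, 2.0

# Secant iteration on the unknown initial momentum p0 until the trajectory hits qP.
pa, pb = 0.0, 1.0
fa, fb = shoot(pa, q0, T) - qP, shoot(pb, q0, T) - qP
for _ in range(50):
    pc = pb - fb * (pb - pa) / (fb - fa)
    pa, fa = pb, fb
    pb, fb = pc, shoot(pc, q0, T) - qP
    if abs(fb) < 1e-10:
        break

# For the harmonic case the exact answer is p0 = m w (qP - q0 cos(wT)) / sin(wT).
print(pb, m * w * (qP - q0 * np.cos(w * T)) / np.sin(w * T))
```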
Clearly, the quantum propagation of wave functions, even within the semi-classical approximation discussed above, is a rather complicated affair. However, keep in mind the alternative that one would face in evaluating, for example, spectroscopic line shapes if one adopted a time-independent approach. One would have to know the energies and wave functions of a system comprised of many interacting molecules. This knowledge is simply not accessible for any but the simplest molecules. For this reason, the time-dependent framework in which one propagates classical trajectories or uses path-integral techniques to propagate initial wave functions offers the most feasible way to evaluate the correlation functions that ultimately produce spectral line shapes and other time correlation functions for complex molecules in condensed media.
Before finishing this Section, it might help if I showed how one obtains the result that classical paths are those that make the action integral \(S = \int L\,dt\) a minimum. This provides the student with an introduction to the subject called calculus of variations or functional analysis, which most students reading this text have probably not studied in a class. First, let’s clarify what a functional is. A function \(f(x)\) depends on one or more variables \(x\) that take on scalar values; that is, given a scalar number \(x\), \(f(x)\) produces the value of the function \(f\) at this value of \(x\). A functional \(F[f]\) is a function of the function \(f\): given the function \(f\), \(F\) acts on it to produce a value. In more general functionals, \(F[f]\) might depend not only on \(f\), but also on various derivatives of \(f\). Let’s consider an example. Suppose one has a functional of the form
\[F[f]=\int_{t_0}^{t_f} F(t,f(t),\frac{df(t)}{dt})dt\]
meaning that the functional involves an integral from \(t_0\) through \(t_f\) of an integrand that may contain (i) the variable \(t\) explicitly, (ii) the function \(f(t)\), and (iii) the derivative of this function with respect to the variable \(t\). This is the kind of integral one encounters when evaluating the action integral
\[S=\int_{t_0}^{t_f}[T-V]dt=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}\Big)^2-V(x(t))\Big]dt\]
where the function \(f(t)\) is the coordinate \(x(t)\) that evolves from \(x(t_0)\) to \(x(t_f)\). The task at hand is to determine that function \(x(t)\) for which this integral is a minimum.
We solve this problem proceeding much as one would do if one had to minimize a function of a variable; we differentiate with respect to the variable and set the derivative to zero. However, in our case, we have a function of a function, not a function of a variable; so how do we carry out the derivative? We assume that the function \(x(t)\) that minimizes \(S\) is known, and we express any function that differs a little bit from the correct \(x(t)\) as
\[x(t)+\varepsilon\eta(t)\]
where \(\varepsilon\) is a scalar quantity used to suggest that \(x(t)\) and \(x(t)+\varepsilon\eta(t)\) differ by only a small amount, and \(\eta(t)\) is a function that obeys
\[ \eta(t)= 0\text{ at }t=t_0\text{ and at }t = t_f;\]
this is how we guarantee that we are only considering paths that connect to the proper \(x_0\) at \(t_0\) and \(x_f\) at \(t_f\). By considering all possible functions \(\eta(t)\) that obey these conditions, we have, in \(x(t)+\varepsilon\eta(t)\), a parameterization of all paths that begin (at \(t_0\)) and end (at \(t_f\)) where the exact path \(x(t)\) does but that differ by a small amount from \(x(t)\). Substituting \(x(t)+\varepsilon\eta(t)\) into
\[S=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}\Big)^2-V(x(t))\Big]dt\]
gives
\[S=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big(\frac{dx(t)}{dt}+\varepsilon\frac{d\eta(t)}{dt}\Big)^2-V(x(t)+\varepsilon\eta(t))\Big]dt.\]
The terms in the integrand are then expanded in powers of the \(\varepsilon\) parameter
\[\Big(\frac{dx(t)}{dt}+\varepsilon\frac{d\eta(t)}{dt}\Big)^2=\Big(\frac{dx(t)}{dt}\Big)^2+2\varepsilon\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}+\varepsilon^2\left(\frac{d\eta}{dt}\right)^2\]
\[-V(x(t)+\varepsilon\eta(t))=-V(x(t))-\varepsilon\frac{\partial V(x(t))}{\partial x(t)}\eta(t)-\frac{1}{2}\varepsilon^2\frac{\partial^2 V(x(t))}{\partial x(t)^2}\eta^2(t)-\cdots\]
and substituted into the integral for \(S\). Collecting terms of each power of \(\varepsilon\) allows this integral to be written as
\[S=\int_{t_0}^{t_f}\Big[\frac{m}{2}\Big\{\Big(\frac{dx(t)}{dt}\Big)^2+2\varepsilon\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}+O(\varepsilon^2)\Big\}-V(x(t))-\varepsilon\frac{\partial V(x(t))}{\partial x(t)}\eta(t)-O(\varepsilon^2) \Big]dt.\]
The condition that \(S(\varepsilon)\) be stationary with respect to variations in \(\varepsilon\) can be expressed as
\[\frac{dS(\varepsilon)}{d\varepsilon}=0=\lim_{\varepsilon\rightarrow 0}\dfrac{S(\varepsilon)-S(0)}{\varepsilon}\]
which is equivalent to requiring that the terms linear in \(\varepsilon\) in the above expansion for \(S(\varepsilon)\) vanish
\[0=\int_{t_0}^{t_f}\Big[m\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}-\frac{\partial V(x(t))}{\partial x(t)}\eta(t)\Big]dt.\]
Next, we use integration by parts to rewrite the first term, which involves \(\dfrac{d\eta(t)}{dt}\), as a term involving \(\eta(t)\) instead:
\[\int_{t_0}^{t_f}m\frac{dx(t)}{dt}\frac{d\eta(t)}{dt}dt=m\left[\frac{dx(t)}{dt}\eta(t)\right]_{t_0}^{t_f}-\int_{t_0}^{t_f} m\frac{d^2x(t)}{dt^2}\eta(t)dt.\]
Because the function \(\eta(t)\) vanishes at \(t_0\) and \(t_f\), the first term vanishes, so this identity can be used to rewrite the condition that the terms in \(S(\varepsilon)\) that are linear in \(\varepsilon\) vanish as
\[0=\int_{t_0}^{t_f}\Big[-m\frac{d^2x(t)}{dt^2}-\frac{\partial V(x(t))}{\partial x(t)}\Big]\eta(t)dt.\]
Because this result is supposed to be valid for any function \(\eta(t)\) that vanishes at \(t_0\) and \(t_f\), the factor multiplying \(\eta(t)\) in the above integral must itself vanish:
\[-m\frac{d^2x(t)}{dt^2}-\frac{\partial V(x(t))}{\partial x(t)}=0.\]
This shows that the path \(x(t)\) that makes \(S\) stationary is the path that obeys Newton’s equations, i.e., the classical path. I urge the student reader to study this example of the use of functional analysis because this mathematical device is an important tool to master.
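A quick numerical check of this conclusion can be set up by discretizing the action, evaluating its derivative with respect to \(\varepsilon\) along \(x(t)+\varepsilon\eta(t)\) at \(\varepsilon = 0\), and comparing a classical path with a non-classical one. The harmonic potential, the endpoints, and the particular \(\eta(t)\) used below are illustrative assumptions.

```python
import numpy as np

# Harmonic model (assumption): V(x) = 0.5 * m * w^2 * x^2.
m, w = 1.0, 1.0
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]

def action(x):
    """Discretized S = integral of [ (m/2)(dx/dt)^2 - V(x) ] dt along the sampled path x."""
    v = np.diff(x) / dt                       # finite-difference velocities
    xm = 0.5 * (x[1:] + x[:-1])               # midpoint coordinates
    return np.sum(0.5 * m * v**2 - 0.5 * m * w**2 * xm**2) * dt

# Classical path from x(0) = 0 to x(T) = 1 for this potential: x(t) = sin(w t)/sin(w T).
x_cl = np.sin(w * t) / np.sin(w * t[-1])

# A deformation eta(t) that vanishes at both endpoints (illustrative choice).
eta = np.sin(np.pi * t / t[-1])

# dS/d(epsilon) at epsilon = 0, estimated by a symmetric finite difference.
eps = 1e-5
dS_classical = (action(x_cl + eps * eta) - action(x_cl - eps * eta)) / (2 * eps)

x_straight = np.linspace(0.0, 1.0, t.size)    # a straight-line (non-classical) path
dS_straight = (action(x_straight + eps * eta) - action(x_straight - eps * eta)) / (2 * eps)

print(dS_classical, dS_straight)              # ~0 for the classical path, clearly nonzero otherwise
```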
Contributors and Attributions
Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah) Telluride Schools on Theoretical Chemistry
Integrated by Tomoyuki Hayashi (UC Davis)