# 6.8: High-End Methods for Treating Electron Correlation



Although their detailed treatment is beyond the scope of this text, it is important to appreciate that new approaches are always under development in all areas of theoretical chemistry. In this Section, I want to introduce you to two tools that are proving to offer high precision in the treatment of electron correlation energies: the so-called Quantum Monte-Carlo and \(r_{1,2}\) approaches to this problem. Both methods currently are used when one wishes to obtain the absolute highest precision in an electronic structure calculation. Because the computational requirements of both methods are very high, at present they can only be applied to species containing fewer than ca. 100 electrons. However, with the power and speed of computers growing as fast as they are, it is likely that these high-end methods will be more and more widely used as time goes by.

## Quantum Monte-Carlo

In this method, one first re-writes the time dependent Schrödinger equation

\[i \hbar \frac{d\Psi}{dt} = - \frac{\hbar^2}{2m_e} \sum_j \nabla_j^2 \Psi + V \Psi\]

for negative imaginary values of the time variable \(t\) (i.e., one simply replaces \(t\) by \(-it\)). This gives

\[\frac{d\Psi}{dt} = \frac{\hbar}{2m_e} \sum_j \nabla_j^2 \Psi - \frac{V}{\hbar} \Psi,\]

which is analogous to the well-known diffusion equation

\[\frac{dC}{dt} = D \nabla^2C + S C.\]

The re-written Schrödinger equation can be viewed as a diffusion equation in the \(3N\) spatial coordinates of the \(N\) electrons with a diffusion coefficient \(D\) that is related to the electrons' mass \(m_e\) by

\[D = \frac{\hbar}{2m_e}.\]

The so-called source and sink term \(S\) in the diffusion equation is related to the electron-nuclear and electron-electron Coulomb potential energies denoted V:

\[S = - \frac{V}{\hbar}.\]
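For completeness, the substitution can be carried out explicitly. Writing the imaginary time as \(t = -i\tau\) (so that \(d/dt = i\, d/d\tau\)), the time-dependent Schrödinger equation becomes

\[i \hbar \frac{d\Psi}{dt} = -\hbar \frac{d\Psi}{d\tau} = - \frac{\hbar^2}{2m_e} \sum_j \nabla_j^2 \Psi + V \Psi,\]

and dividing through by \(-\hbar\) and renaming \(\tau\) as \(t\) recovers the diffusion-like equation written above, with \(D = \hbar/(2m_e)\) multiplying the Laplacian and \(S = -V/\hbar\) multiplying \(\Psi\).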

In regions of space where the potential is highly attractive (i.e., where \(V\) is large and negative), \(S\) is large and positive. This causes the concentration \(C\) of the diffusing material to accumulate in such regions. Likewise, where \(V\) is positive, \(C\) will decrease. By recognizing \(\Psi\) as the concentration variable in this analogy, one understands that \(\Psi\) will accumulate where \(V\) is negative and will decay where \(V\) is positive, as one expects.

So far, we see that the trick of taking \(t\) to be negative and imaginary causes the electronic Schrödinger equation to look like a \(3N\)-dimensional diffusion equation. Why is this useful and why does this trick work? It is useful because, as we see in Chapter 7 of this text, Monte-Carlo methods are highly efficient tools for solving certain equations; it turns out that the diffusion equation is one such case. So, the Quantum Monte-Carlo approach can be used to solve the imaginary-time Schrödinger equation even for systems containing many electrons. But, what does this imaginary time mean?

To understand the imaginary-time trick, let us recall that any wave function \(\Phi\) (e.g., the trial wave function with which one begins to use Monte-Carlo methods to propagate the diffusing \(\Psi\) function) can be written in terms of the exact eigenfunctions {\(\psi_K\)} of the Hamiltonian

\[H = - \frac{\hbar^2}{2m_e} \sum_j \nabla_j^2 + V\]

as follows:

\[\Phi = \sum_K C_K \psi_K.\]

If the Monte-Carlo method can, in fact, be used to propagate such a function forward in time with \(t\) replaced by \(-it\), then it will, in principle, generate the following function at such an imaginary time:

\[\Phi(t) = \sum_K C_K \psi_K \exp(-iE_K(-it)/\hbar) = \sum_K C_K \psi_K \exp(-E_K t/\hbar).\]

As \(t\) increases, the relative amplitudes {\(C_K \exp(-E_Kt/\hbar)\)} of all states but the lowest state (i.e., that with smallest \(E_K\)) will decay compared to the amplitude \(C_0 \exp(-E_0t/\hbar)\) of the lowest state. So, at long enough \(t\), the time-propagated wave function will be dominated by its lowest-energy component. In this way, the quantum Monte-Carlo propagation method can generate a wave function in \(3N\) dimensions that approaches the ground-state wave function.
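This filtering can be seen numerically in a small model. The sketch below (an illustration only, assuming nothing beyond the expansion \(\Phi = \sum_K C_K \psi_K\)) propagates a random starting vector with \(\exp(-Ht)\) for an arbitrary \(4\times 4\) symmetric "Hamiltonian" matrix and watches the overlap with the lowest eigenvector approach unity:

```python
import numpy as np

# Numerical illustration of imaginary-time filtering: propagate an arbitrary
# starting vector with exp(-H t) and watch every component C_K exp(-E_K t)
# decay relative to the lowest-energy one.  The 4x4 symmetric matrix H is an
# arbitrary stand-in for a real Hamiltonian.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H = 0.5 * (A + A.T)

E, U = np.linalg.eigh(H)             # eigenvalues E_K and eigenvectors psi_K
phi = rng.normal(size=4)             # trial function: random mix of the psi_K
phi /= np.linalg.norm(phi)

for t in (0.0, 5.0, 200.0):
    # apply exp(-H t) in the eigenbasis; subtracting E[0] in the exponent
    # only rescales the vector (renormalized below) but avoids overflow
    prop = U @ (np.exp(-(E - E[0]) * t) * (U.T @ phi))
    prop /= np.linalg.norm(prop)
    overlap = abs(prop @ U[:, 0])    # overlap with the ground state psi_0
    print(f"t = {t:6.1f}   |<psi_0|Phi(t)>| = {overlap:.6f}")
```

Because the decay is exponential in the gaps \(E_K - E_0\), even modest propagation times leave essentially pure ground state.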

This approach, which tackles the \(N\)-electron correlation problem head-on, has proven to yield highly accurate energies and wave functions that display the proper cusps near nuclei as well as the proper electron-electron cusps whenever two electrons' coordinates approach one another. Finally, it turns out that by using a starting function \(\Phi\) of a given symmetry and nodal structure, this method can be extended to converge to the lowest-energy state of the chosen symmetry and nodal structure. So, the method can be used on excited states also. In Chapter 7 of this text, you will learn how the Monte-Carlo tools can be used to simulate the behavior of many-body systems (e.g., the \(N\)-electron system we just discussed) in a highly efficient and easily parallelized manner.
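Chapter 7 develops these tools in detail, but the diffusion-plus-branching idea can already be shown in a minimal sketch. The Python toy below (illustrative only, not a production QMC program) applies it to a single particle in a one-dimensional harmonic well \(V(x) = x^2/2\), in units where \(\hbar = m = 1\) so that \(D = 1/2\) and the exact ground-state energy is 0.5; the step sizes and population parameters are arbitrary choices:

```python
import numpy as np

# Toy diffusion Monte-Carlo: walkers diffuse with coefficient D = hbar/(2m)
# and are replicated or killed according to the source/sink term S = -V/hbar,
# so the walker density relaxes toward the ground-state wave function.
rng = np.random.default_rng(0)

def V(x):
    return 0.5 * x**2              # harmonic potential (exact E0 = 0.5)

n_target = 2000                    # desired walker population
dt = 0.01                          # imaginary-time step
D = 0.5                            # diffusion coefficient hbar/(2 m)
x = rng.normal(size=n_target)      # initial walker positions
E_ref = V(x).mean()                # reference energy for population control

energies = []
for step in range(4000):
    # diffusion: Gaussian displacement with variance 2*D*dt
    x = x + rng.normal(scale=np.sqrt(2 * D * dt), size=x.size)
    # branching: integer number of copies drawn from weight exp(-(V - E_ref)*dt)
    w = np.exp(-(V(x) - E_ref) * dt)
    x = np.repeat(x, (w + rng.random(x.size)).astype(int))
    # nudge E_ref to hold the population near n_target
    E_ref = V(x).mean() + 0.1 * (1.0 - x.size / n_target) / dt
    # mixed energy estimator (with psi_T = 1, <V> over the walkers -> E0)
    energies.append(V(x).mean())

E0 = float(np.mean(energies[2000:]))   # average after equilibration
print(f"DMC estimate: {E0:.3f}  (exact 0.5)")
```

The walker density accumulates where \(V\) is low, exactly as the concentration \(C\) does in the diffusion analogy, and the stabilized reference energy tracks the ground-state eigenvalue.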

## \(r_{1,2}\) Method

In this approach to electron correlation, one employs a trial variational wave function that contains components that depend explicitly on the inter-electron distances \(r_{i,j}\). By so doing, one does not rely on the polarized orbital pair approach introduced earlier in this Chapter to represent all of the correlations among the electrons. An example of such an explicitly correlated wave function is:

\[\psi = |\phi_1 \phi_2 \phi_3 …\phi_N| (1 + a \sum_{i<j} r_{i,j})\]

which consists of an antisymmetrized product of \(N\) spin-orbitals multiplied by a factor that is symmetric under interchange of any pair of electrons and contains the electron-electron distances in addition to a single variational parameter \(a\). Such a trial function is said to contain linear-\(r_{i,j}\) correlation factors. Of course, it is possible to write many other forms for such an explicitly correlated trial function. For example, one could use:

\[\psi = |\phi_1 \phi_2 \phi_3 …\phi_N| \exp\Big(-a \sum_{i<j} r_{i,j}\Big)\]

as a trial function. Both the linear and the exponential forms have been used in developing this tool of quantum chemistry. Because the integrals that must be evaluated when one computes the Hamiltonian expectation value \(\langle \psi|H| \psi \rangle\) are more computationally feasible (albeit still very taxing) when the linear form is used, this particular parameterization is currently the most widely used.
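To make the idea concrete, here is a minimal variational Monte-Carlo sketch (an illustration, not the integral-evaluation machinery actually used with these methods) that estimates \(\langle \psi|H|\psi \rangle\) for helium with a linear correlation factor, using the simple guess \(\psi = e^{-2(r_1+r_2)}(1 + a\,r_{12})\) in atomic units. The value \(a = 0.3\), the Metropolis step size, and the sample counts are arbitrary, unoptimized choices, and the Laplacian in the local energy is taken by finite differences rather than analytically:

```python
import numpy as np

# Variational Monte-Carlo for helium with a linear-r12 trial function,
#   psi(r1, r2) = exp(-2(r1 + r2)) * (1 + a*r12)   (hartree atomic units).
# The local energy E_L = (H psi)/psi is averaged over |psi|^2 via Metropolis.
rng = np.random.default_rng(2)
a = 0.3                                    # variational parameter in (1 + a r12)

def psi(R):                                # R = (x1, y1, z1, x2, y2, z2)
    r1, r2 = np.linalg.norm(R[:3]), np.linalg.norm(R[3:])
    r12 = np.linalg.norm(R[:3] - R[3:])
    return np.exp(-2.0 * (r1 + r2)) * (1.0 + a * r12)

def local_energy(R, h=1e-3):
    p0 = psi(R)
    lap = 0.0                              # central-difference Laplacian, all 6 coords
    for j in range(6):
        Rp, Rm = R.copy(), R.copy()
        Rp[j] += h
        Rm[j] -= h
        lap += (psi(Rp) - 2.0 * p0 + psi(Rm)) / h**2
    r1, r2 = np.linalg.norm(R[:3]), np.linalg.norm(R[3:])
    r12 = np.linalg.norm(R[:3] - R[3:])
    V = -2.0 / r1 - 2.0 / r2 + 1.0 / r12   # electron-nucleus + electron-electron Coulomb
    return -0.5 * lap / p0 + V

R = rng.normal(size=6)                     # Metropolis walk sampling |psi|^2
E_sum, n_samp = 0.0, 0
for step in range(20000):
    R_new = R + 0.3 * rng.normal(size=6)
    if rng.random() < (psi(R_new) / psi(R)) ** 2:
        R = R_new
    if step >= 2000 and step % 10 == 0:    # discard equilibration, thin samples
        E_sum += local_energy(R)
        n_samp += 1

E = E_sum / n_samp
print(f"<H> with linear r12 factor: {E:.2f} hartree (exact: -2.90)")
```

Turning on the \(r_{12}\) factor lowers the variational energy relative to the uncorrelated \(a = 0\) product (which gives \(-2.75\) hartree for this orbital exponent), moving it toward the exact \(-2.90\) hartree; production \(r_{1,2}\) methods evaluate the corresponding integrals analytically rather than by sampling.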