# 4.3: Linear Variational Method


A widely used example of variational methods is provided by the so-called linear variational method. Here one expresses the trial wave function as a linear combination of so-called basis functions {$$\chi_j$$} with expansion coefficients $$c_j$$.

$\psi=\sum_j c_j \chi_j.\nonumber$

Substituting this expansion into $$\langle\psi|H|\psi\rangle$$ and then making this quantity stationary with respect to variations in the $$c_i$$ subject to the constraint that $$\psi$$ remains normalized

$1=\langle\psi|\psi\rangle=\sum_i\sum_j c_i^*\langle\chi_i|\chi_j\rangle c_j\nonumber$

gives, with the energy $$E$$ entering as the Lagrange multiplier for this constraint,

$\sum_j \langle\chi_i|H|\chi_j\rangle c_j = E \sum_j \langle\chi_i|\chi_j\rangle c_j.\nonumber$

This is a generalized matrix eigenvalue problem that we can write in matrix notation as

$\textbf{HC}=\textbf{ESC}.\nonumber$

It is called a generalized eigenvalue problem because of the appearance of the overlap matrix $$\textbf{S}$$ on its right-hand side. This set of equations for the $$c_j$$ coefficients can be made into a conventional eigenvalue problem as follows:

1. The eigenvectors $$\textbf{v}_k$$ and eigenvalues $$s_k$$ of the overlap matrix are found by solving $\sum_j S_{i,j}v_{k,j}=s_k v_{k,i}.\nonumber$ All of the eigenvalues $$s_k$$ are positive because $$\textbf{S}$$ is a positive-definite matrix.
2. Next one forms the matrix $$\textbf{S}^{-1/2}$$ whose elements are $S_{i,j}^{-1/2}=\sum_k v_{k,i}\dfrac{1}{\sqrt{s_k}}v_{k,j}\nonumber$ (another matrix $$\textbf{S}^{1/2}$$ can be formed in a similar way by replacing $$\dfrac{1}{\sqrt{s_k}}$$ with $$\sqrt{s_k}$$).
3. One then multiplies the generalized eigenvalue equation on the left by $$\textbf{S}^{-1/2}$$ to obtain $\textbf{S}^{-1/2}\textbf{HC}=\textbf{E} \textbf{S}^{-1/2}\textbf{SC}.\nonumber$
4. This equation is then rewritten, using $$\textbf{S}^{-1/2}\textbf{S}=\textbf{S}^{1/2}$$ and $$\textbf{1}=\textbf{S}^{-1/2}\textbf{S}^{1/2}$$, as $\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2} (\textbf{S}^{1/2}\textbf{C})=\textbf{E} (\textbf{S}^{1/2}\textbf{C}).\nonumber$

This is a conventional eigenvalue problem in which the matrix is $$\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$$ and the eigenvectors are $$(\textbf{S}^{1/2}\textbf{C})$$.

The net result is that one can form $$\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$$ and then find its eigenvalues and eigenvectors. Its eigenvalues will be the same as those of the original generalized eigenvalue problem. Its eigenvectors $$(\textbf{S}^{1/2}\textbf{C})$$ can be used to determine the eigenvectors $$\textbf{C}$$ of the original problem by multiplying by $$\textbf{S}^{-1/2}$$

$\textbf{C}= \textbf{S}^{-1/2} (\textbf{S}^{1/2}\textbf{C}).\nonumber$
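The four-step procedure above can be sketched in a few lines of linear algebra. This is a minimal illustration, not a production implementation: a small random symmetric matrix stands in for $$\textbf{H}$$, and a matrix constructed to be positive definite stands in for the overlap $$\textbf{S}$$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical stand-ins for basis-set integrals: a symmetric H and a
# positive-definite overlap S (B B^T is positive semidefinite; adding
# n*I makes it safely positive definite).
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)

# Step 1: eigenvalues s_k and eigenvectors v_k of the overlap matrix.
s, v = np.linalg.eigh(S)                     # every s_k > 0

# Step 2: form S^{-1/2} from the spectral decomposition.
S_inv_half = v @ np.diag(1.0 / np.sqrt(s)) @ v.T

# Steps 3-4: transform to a conventional symmetric eigenvalue problem
# for S^{-1/2} H S^{-1/2}, whose eigenvectors are S^{1/2} C.
H_tilde = S_inv_half @ H @ S_inv_half
E, U = np.linalg.eigh(H_tilde)

# Recover the coefficient vectors of the original generalized problem.
C = S_inv_half @ U

# Each column of C should now satisfy H c = E S c.
residual = np.max(np.abs(H @ C - S @ C @ np.diag(E)))
```

A useful by-product of this route is that the recovered eigenvectors come out normalized in the overlap metric, $$\textbf{C}^T\textbf{S}\textbf{C}=\textbf{1}$$, which is exactly the normalization constraint imposed in the derivation.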

Although the derivation of the matrix eigenvalue equations resulting from the linear variational method was carried out as a means of minimizing $$\langle\psi|H|\psi\rangle$$, the solutions offer more than just an upper bound to the lowest true energy of the Hamiltonian. It can be shown (a result known as the Hylleraas-Undheim-MacDonald theorem) that the nth eigenvalue of the matrix $$\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$$ is an upper bound to the true energy of the nth state of the Hamiltonian. A consequence is that between any two eigenvalues of the matrix $$\textbf{S}^{-1/2}\textbf{H} \textbf{S}^{-1/2}$$ there lies at least one true energy of the Hamiltonian, an observation often called the bracketing condition. The ability of linear variational methods to provide estimates of both ground- and excited-state energies from a single calculation is one of the main strengths of this approach.
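These upper bounds are easy to see in a small worked example. The sketch below, with hypothetical choices throughout, treats a particle in a box on $$[0,1]$$ (in units with $$\hbar=m=1$$, so the exact energies are $$n^2\pi^2/2$$) using the non-orthogonal polynomial basis $$\chi_j(x)=x^j(1-x)$$; for this basis the overlap and kinetic-energy integrals have simple closed forms.

```python
import numpy as np

# Basis chi_j(x) = x^j (1-x), j = 1..N, on [0, 1]; these satisfy the
# box boundary conditions but are not orthogonal.
N = 6
H = np.empty((N, N))
S = np.empty((N, N))
for a, i in enumerate(range(1, N + 1)):
    for b, j in enumerate(range(1, N + 1)):
        # Overlap: integral of x^(i+j) (1-x)^2 over [0, 1].
        m = i + j
        S[a, b] = 1/(m + 1) - 2/(m + 2) + 1/(m + 3)
        # Kinetic energy (1/2) * integral of chi_i'(x) chi_j'(x) dx,
        # obtained from -1/2 d^2/dx^2 by integration by parts.
        H[a, b] = 0.5 * (i*j/(i + j - 1)
                         - (i*(j + 1) + j*(i + 1))/(i + j)
                         + (i + 1)*(j + 1)/(i + j + 1))

# Solve H C = E S C via the S^{-1/2} transformation.
s, v = np.linalg.eigh(S)
X = v @ np.diag(s**-0.5) @ v.T
E = np.linalg.eigvalsh(X @ H @ X)

# Each variational eigenvalue should sit at or above the
# corresponding exact energy n^2 pi^2 / 2.
exact = np.array([k**2 * np.pi**2 / 2 for k in range(1, N + 1)])
```

With only six basis functions the lowest eigenvalue already lands very close to $$\pi^2/2\approx 4.9348$$, while every variational eigenvalue stays above its exact counterpart, as the theorem requires.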