# 20.1: Calculations of Properties Other Than the Energy


There are, of course, properties other than the energy that are of interest to the practicing chemist. Dipole moments, polarizabilities, transition probabilities among states, and vibrational frequencies all come to mind. Other properties of importance involve operators whose quantum numbers or symmetry indices label the state of interest. Angular momentum and point-group symmetries are examples of the latter; such properties are precisely specified once the quantum number or symmetry label is given (e.g., for a \(^3P\) state, the average value of \(L^2\) is \( \langle ^3P \big| L^2 \big| ^3P \rangle = \hbar^2 1(1+1) = 2\hbar^2 \)).

Although it may be straightforward to specify what property is to be evaluated, computational difficulties often arise in carrying out the calculation. For some ab initio methods, these difficulties are less severe than for others. For example, to compute the electric dipole transition matrix element \( \langle \Psi_2 \big| \textbf{r} \big| \Psi_1 \rangle \) between two states \(\Psi_1\) and \(\Psi_2\), one must evaluate the integral involving the one-electron dipole operator \( \textbf{r} = \sum\limits_j e\textbf{r}_j - \sum\limits_a eZ_a\textbf{R}_a \); here the first sum runs over the N electrons and the second sum runs over the nuclei, whose charges are denoted \(Z_a\). Evaluating such transition matrix elements with the Slater-Condon rules is relatively straightforward as long as \(\Psi_1\) and \(\Psi_2\) are expressed in terms of Slater determinants involving a single set of orthonormal spin-orbitals. If \(\Psi_1\) and \(\Psi_2\) have been obtained, for example, by carrying out separate MCSCF calculations on the two states in question, the energy-optimized spin-orbitals for one state will not be the same as the optimal spin-orbitals for the second state. As a result, the determinants in \(\Psi_1\) and those in \(\Psi_2\) will involve spin-orbitals that are not orthonormal to one another, and the SC rules cannot immediately be applied. Instead, a transformation of the spin-orbitals of \(\Psi_1\) and \(\Psi_2\) to a single orthonormal set must first be carried out. This re-expresses \(\Psi_1\) and \(\Psi_2\) in terms of new Slater determinants over this orthonormal set, after which the SC rules can be exploited.

In contrast, if \(\Psi_1 \text{ and } \Psi_2\) are obtained by carrying out a CI calculation using a single set of orthonormal spin-orbitals (e.g., with \(\Psi_1 \text{ and } \Psi_2\) formed from two different eigenvectors of the resulting secular matrix), the SC rules can immediately be used to evaluate the transition dipole integral.
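As a concrete illustration of the orthonormalization step, the sketch below uses symmetric (Löwdin) orthogonalization, multiplying a set of non-orthonormal orbitals by \(S^{-1/2}\) of their overlap matrix. The numerical "orbitals" are hypothetical vectors, not chemistry data, and Löwdin orthogonalization is only one possible choice of transformation:

```python
import numpy as np

# Toy example: two non-orthonormal "orbital" vectors expressed in a common
# 3-function basis (the columns of C).  These stand in for the spin-orbital
# sets of two separately optimized states; the numbers are hypothetical.
C = np.array([[1.0, 0.3],
              [0.2, 1.0],
              [0.1, 0.4]])

S = C.T @ C                       # overlap matrix of the orbitals
evals, evecs = np.linalg.eigh(S)  # S is symmetric positive definite
S_inv_half = evecs @ np.diag(evals**-0.5) @ evecs.T

C_orth = C @ S_inv_half           # Lowdin-orthogonalized orbitals

# The transformed orbitals are orthonormal to machine precision:
print(np.allclose(C_orth.T @ C_orth, np.eye(2)))  # True
```

Because \(S^{-1/2} S S^{-1/2} = 1\), the new orbitals are orthonormal by construction; determinants re-expanded over such a common set can then be handled with the SC rules.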

## Formulation of Property Calculations as Responses

Essentially all experimentally measured properties can be thought of as arising through the **response** of the system to some externally applied perturbation or disturbance. In turn, the calculation of such properties can be formulated in terms of the response of the energy E or wavefunction \(\Psi\) to a perturbation. For example, molecular dipole moments \(\mu\) are measured, via electric-field deflection, in terms of the change in energy

\[ \Delta \text{E} = \mu\cdot{\textbf{E}} + \dfrac{1}{2}\textbf{E}\cdot{\alpha}\cdot{\textbf{E}} + \dfrac{1}{6}\beta\vdots\textbf{E}\textbf{E}\textbf{E} + ... \]

caused by the application of an external electric field **E** which is spatially inhomogeneous, and thus exerts a force

\[ \textbf{F} = -\nabla \Delta E \]

on the molecule proportional to the dipole moment (good treatments of response properties for a wide variety of wavefunction types (i.e., SCF, MCSCF, MPPT/MBPT, etc.) are given in **Second Quantization Based Methods in Quantum Chemistry**, P. Jørgensen and J. Simons, Academic Press, New York (1981), and in **Geometrical Derivatives of Energy Surfaces and Molecular Properties**, P. Jørgensen and J. Simons, Eds., NATO ASI Series, Vol. 166, D. Reidel, Dordrecht (1985)).
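The field expansion above also suggests a simple numerical route to \(\mu\) and \(\alpha\): compute E at a few field strengths and take finite differences. The sketch below follows the sign convention of the expansion above, and uses a mock energy function in place of a real electronic-structure calculation; the values `mu0` and `alpha0` it encodes are hypothetical:

```python
import numpy as np

# Finite-field extraction of the dipole moment and polarizability from
# energies E(F) evaluated at a few field strengths F.  energy(F) is a mock
# stand-in for an electronic-structure code; mu0 and alpha0 are the
# (hypothetical) exact values it encodes.
mu0, alpha0 = 0.7, 9.0

def energy(F):
    # Mock expansion: E(F) = E0 + mu0*F + (1/2)*alpha0*F**2 + O(F^4)
    return -76.0 + mu0 * F + 0.5 * alpha0 * F**2 + 0.01 * F**4

h = 1.0e-3  # field step, small enough that higher-order terms are negligible
mu = (energy(h) - energy(-h)) / (2 * h)                    # central 1st difference
alpha = (energy(h) - 2 * energy(0.0) + energy(-h)) / h**2  # central 2nd difference

print(round(mu, 6), round(alpha, 6))  # 0.7 9.0
```

The central differences cancel the even (for \(\mu\)) and odd (for \(\alpha\)) terms of the expansion, so the slope and curvature are recovered essentially exactly for small fields.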

To obtain expressions that permit properties other than the energy to be evaluated in terms of the state wavefunction \(\Psi\), the following strategy is used:

- The perturbation V = H - H\(^0\) appropriate to the particular property is identified. For dipole moments (\(\mu\)), polarizabilities (\(\alpha\)), and hyperpolarizabilities (\(\beta\)), V is the interaction of the nuclei and electrons with the external electric field \[ V = \sum\limits_a Z_ae\textbf{R}_a\cdot{\textbf{E}} - \sum\limits_j e\textbf{r}_j\cdot{\textbf{E}}. \] For vibrational frequencies, one needs the derivatives of the energy E with respect to deformation of the bond lengths and angles of the molecule, so V is the sum of all changes in the electronic Hamiltonian that arise from displacements \(\delta\textbf{R}_a\) of the atomic centers: \[ V = \sum\limits_a (\nabla_{\textbf{R}_a} H)\cdot{\delta \textbf{R}_a}. \]
- A power series expansion of the state energy E, computed in a manner consistent with how \(\Psi\) is determined (i.e., as an expectation value for SCF, MCSCF, and CI wavefunctions, as \( \langle \Phi \big| \text{ H } \big| \Psi \rangle \) for MPPT/MBPT, or as \( \langle \Phi \big| e^{-T}\text{ H }e^{T} \big| \Phi \rangle \) for CC wavefunctions), is carried out in powers of the perturbation V: \[ \text{E = E}^0 + \text{E}^{(1)} + \text{E}^{(2)} + \text{E}^{(3)} + ... \] In evaluating the terms in this expansion, the dependence on V of both H = H\(^0\) + V **and** \(\Psi\) (which is expressed as a solution of the SCF, MCSCF, ..., or CC equations for H, **not** for H\(^0\)) must be included.
- The desired physical property is then extracted from the power series expansion of \(\Delta\)E in powers of V.

## The MCSCF Response Case

### The Dipole Moment

To illustrate how the above developments are carried out and to demonstrate how the results express the desired quantities in terms of the original wavefunction, let us consider, for an MCSCF wavefunction, the response to an external electric field. In this case, the Hamiltonian is given as the conventional one- and two-electron operators H\(^0\) to which the above one-electron electric dipole perturbation V is added. The MCSCF wavefunction \(\Psi\) and energy E are assumed to have been obtained via the MCSCF procedure with H = H\(^0\) + \(\lambda\)V, where \(\lambda\) can be thought of as a measure of the strength of the applied electric field. The terms in the expansion of E(\(\lambda\)) in powers of \(\lambda\):

\[ \text{E} = \text{E(}\lambda = 0) + \lambda\left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 + \dfrac{1}{2}\lambda^2 \left( \dfrac{\text{d}^2\text{E}}{\text{d}\lambda^2} \right)_0 + ... \]

are obtained by writing the total derivatives of the MCSCF energy functional with respect to \(\lambda\) and evaluating these derivatives at \(\lambda = 0\) (which is indicated by the subscript (..)0 on the above derivatives):

\[ \text{E}(\lambda = 0) = \langle \Psi (\lambda = 0)\big| \text{ H}^0\big|\Psi (\lambda = 0) \rangle = \text{E}^0 , \]

\[ \left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 = \langle \Psi (\lambda = 0)\big| V\big| \Psi (\lambda = 0)\rangle + 2\sum\limits_J \left( \dfrac{\partial \text{C}_J}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0 \big| \Psi (\lambda = 0)\rangle + 2 \sum\limits_{i,a}\left( \dfrac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big| \Psi (\lambda = 0) \rangle + ... \]

\[ ... + 2 \sum\limits_{\nu} \left( \dfrac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}} \big| \text{ H}^0 \big| \Psi (\lambda = 0) \rangle , \]

and so on for higher order terms. The factors of 2 in the last three terms come through using the hermiticity of H\(^0\) to combine terms in which derivatives of \(\Psi\) occur.

The first-order response can be thought of as arising from the response of the wavefunction (as contained in its LCAO-MO coefficients C\(_{a,i}\), its CI amplitudes C\(_J\), and its basis functions \(\chi_{\nu}\)) plus the response of the Hamiltonian to the external field. Because the MCSCF energy functional has been made stationary with respect to variations in the C\(_J\) and C\(_{a,i}\) amplitudes, the second and third terms above vanish:

\[ \dfrac{\partial \text{E}}{\partial \text{C}_J} = 2 \langle \dfrac{\partial \Psi}{\partial \text{C}_J}\big|\text{ H}^0\big|\Psi (\lambda = 0) \rangle = 0, \]

\[ \dfrac{\partial \text{E}}{\partial \text{C}_{a,i}} = 2 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}}\big|\text{ H}^0\big|\Psi (\lambda = 0) \rangle = 0. \]

If, as is common, the atomic orbital bases used to carry out the MCSCF energy optimization do not depend explicitly on the external field, the final (basis-function) term also vanishes because \( \left( \frac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 = 0 \). Thus for the MCSCF case, the first-order response is given as the average value of the perturbation over the wavefunction with \(\lambda = 0\):

\[ \left( \dfrac{\text{dE}}{\text{d} \lambda} \right)_0 = \langle \Psi (\lambda = 0)\big| V \big| \Psi (\lambda = 0) \rangle . \]

For the external electric field case at hand, this result says that the field-dependence of the state energy will have a linear term equal to

\[ \langle \Psi (\lambda = 0) \big| V \big| \Psi (\lambda = 0) \rangle = \langle \Psi \big| \sum\limits_a Z_ae\textbf{R}_a\cdot{\textbf{e}} - \sum\limits_je\textbf{r}_j\cdot{\textbf{e}}\big| \Psi \rangle , \]

where **e** is a unit vector in the direction of the applied electric field (the magnitude of the field \(\lambda\) having already been removed in the power series expansion). Since the dipole moment is determined experimentally as the energy's slope with respect to field strength, this means that the dipole moment is given as:

\[ \mu = \langle \Psi \big| \sum\limits_a Z_a e \textbf{R}_a - \sum\limits_j e\textbf{r}_j \big| \Psi \rangle . \]
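The first-order result above can be checked numerically on a small matrix model: if \(\Psi(\lambda = 0)\) is an eigenvector of a model H\(^0\), the slope of the lowest eigenvalue of H\(^0\) + \(\lambda\)V at \(\lambda = 0\) should equal \( \langle \Psi \big| V \big| \Psi \rangle \). The matrices below are random symmetric stand-ins for the Hamiltonian and perturbation, not chemistry data:

```python
import numpy as np

# Numerical check of the first-order response: for a variationally
# determined (here, exact) eigenstate, dE/dlambda at lambda = 0 equals the
# expectation value of the perturbation V over the unperturbed state.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); H0 = (A + A.T) / 2  # model Hamiltonian
B = rng.standard_normal((6, 6)); V = (B + B.T) / 2   # model perturbation

def ground_energy(lam):
    # Lowest eigenvalue of H0 + lambda*V
    return np.linalg.eigvalsh(H0 + lam * V)[0]

# <Psi(0)| V |Psi(0)> from the lambda = 0 ground eigenvector
w, U = np.linalg.eigh(H0)
psi0 = U[:, 0]
expectation = psi0 @ V @ psi0

# Central-difference slope of E(lambda) at lambda = 0
h = 1.0e-5
slope = (ground_energy(h) - ground_energy(-h)) / (2 * h)

print(abs(slope - expectation) < 1e-6)  # True
```

This is the matrix analog of reading \(\mu\) off as the field-derivative of the energy: the variational (here exact-eigenvector) condition removes all wavefunction-response terms from the first derivative.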

### The Geometrical Force

These same techniques can be used to determine the response of the energy to displacements \(\delta \textbf{R}_a\) of the atomic centers. In such a case, the perturbation is

\[ V = \sum\limits_a \delta \textbf{R}_a\cdot{\nabla_{\textbf{R}_a}}\left( -\sum\limits_i \dfrac{Z_a e^2}{\big| \textbf{r}_i - \textbf{R}_a \big|} \right) = -\sum\limits_a Z_a e^2\delta \textbf{R}_a\cdot{\sum\limits_i} \dfrac{( \textbf{r}_i - \textbf{R}_a )}{\big| \textbf{r}_i - \textbf{R}_a \big|^3}. \]

Here, the one-electron operator \( Z_a e^2 \sum\limits_i \dfrac{( \textbf{r}_i - \textbf{R}_a )}{\big| \textbf{r}_i - \textbf{R}_a \big|^3} \) is referred to as the 'Hellmann-Feynman' force operator; it is the derivative of the Hamiltonian with respect to displacement of center-a in the x, y, or z direction. The expressions given above for E(\(\lambda = 0\)) and \( \left( \frac{\text{dE}}{\text{d}\lambda} \right)_0 \) can once again be used, but with the Hellmann-Feynman form for V. Once again, for the MCSCF wavefunction, the variational optimization of the energy gives

\[ \langle \dfrac{\partial \Psi}{\partial \text{C}_J}\big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle = \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}}\big| \text{ H}^0\big| \Psi (\lambda = 0) \rangle = 0. \]

However, because the atomic basis orbitals are attached to the centers, and because these centers are displaced in forming V, it is no longer true that \( \left( \frac{\partial \chi_{\nu}}{\partial \lambda} \right)_0 = 0 \); the variation in the wavefunction caused by movement of the basis functions now contributes to the first-order energy response. As a result, one obtains

\[ \left( \dfrac{\text{dE}}{\text{d}\lambda} \right)_0 = -\sum\limits_a Z_a e^2 \delta \textbf{R}_a\cdot{\langle} \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|\textbf{r}_i - \textbf{R}_a|^3} \big| \Psi \rangle + 2 \sum\limits_a \delta \textbf{R}_a\cdot{\sum\limits_{\nu}} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle . \]

The first contribution to the **force**

\[ \textbf{F}_a = - Z_a e^2 \langle \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|\textbf{r}_i - \textbf{R}_a|^3} \big| \Psi \rangle + 2\sum\limits_{\nu} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}} \big| \text{ H}^0\big| \Psi (\lambda = 0)\rangle \]

along the x, y, and z directions for center-a involves the expectation value, with respect to the MCSCF wavefunction with \(\lambda\) = 0, of the Hellmann-Feynman force operator. The second contribution gives the forces due to infinitesimal displacements of the basis functions on center-a. The evaluation of the latter contributions can be carried out by first realizing that

\[ \Psi = \sum\limits_J \text{C}_J \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \phi_{Jn} ... \phi_{JN} \big| \]

with

\[ \phi_j = \sum\limits_{\mu}\text{C}_{\mu ,j}\chi_{\mu} \]

involves the basis orbitals through the LCAO-MO expansion of the \(\phi_j\)s. So the derivatives of the basis orbitals contribute as follows:

\[ \sum\limits_{\nu} (\nabla_{\textbf{R}_a} \chi_{\nu})\langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| = \sum\limits_J \sum\limits_{j,\nu}\text{C}_J\text{C}_{\nu ,j} \langle \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \nabla_{\textbf{R}_a}\chi_{\nu} ... \phi_{JN} \big|. \]

Each of these factors can be viewed as combinations of CSFs with the same \(\text{C}_J\) and \(\text{C}_{\nu ,j}\) coefficients as in \(\Psi\) but with the \(j^{th}\) spin-orbital involving basis functions that have been differentiated with respect to displacement of center-a. It turns out that such derivatives of Gaussian basis orbitals can be carried out analytically (giving rise to new Gaussians with one higher and one lower l-quantum number).

When substituted into \( \sum\limits_{\nu} (\nabla_{\textbf{R}_a} \chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big| \text{ H}^0\big| \Psi (\lambda =0) \rangle \), these basis derivative terms yield

\[ \sum\limits_{\nu} (\nabla_{\textbf{R}_a}\chi_{\nu})_0 \langle \dfrac{\partial \Psi}{\partial \chi_{\nu}}\big|\text{ H}^0\big| \Psi (\lambda = 0)\rangle = \sum\limits_J \sum\limits_{j,\nu} \text{C}_J\text{C}_{\nu ,j} \langle \big| \phi_{J1} \phi_{J2} \phi_{J3} ... \nabla_{\textbf{R}_a}\chi_{\nu} ... \phi_{JN}\big| \text{ H}^0\big| \Psi \rangle , \]

whose evaluation via the Slater-Condon rules is straightforward.
It is simply the expectation value of H\(^0\) with respect to \(\Psi\) (with the same density matrix elements that arise in the evaluation of \(\Psi\)'s energy) **but** with the one- and two-electron integrals over the atomic basis orbitals involving one of these differentiated functions:

\[ \langle \chi_{\mu}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle \Longrightarrow \nabla_{\textbf{R}_a}\langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma}\chi_{\delta} \rangle = \langle \nabla_{\textbf{R}_a}\chi_{\mu}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \nabla_{\textbf{R}_a}\chi_{\nu}\big|\text{ g }\big|\chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \chi_{\nu}\big|\text{ g }\big| \nabla_{\textbf{R}_a} \chi_{\gamma}\chi_{\delta} \rangle + \langle \chi_{\mu} \chi_{\nu}\big|\text{ g }\big| \chi_{\gamma} \nabla_{\textbf{R}_a} \chi_{\delta} \rangle . \]

In summary, the force \(\textbf{F}_a\) felt by the nuclear framework due to a displacement of center-a along the x, y, or z axis is given as

\[ \textbf{F}_a = -Z_a e^2 \langle \Psi \big| \sum\limits_i \dfrac{(\textbf{r}_i - \textbf{R}_a)}{|\textbf{r}_i - \textbf{R}_a|^3}\big| \Psi \rangle + (\nabla_{\textbf{R}_a}\langle \Psi \big| \text{ H}^0 \big| \Psi \rangle ), \]

where the second term is the energy of \(\Psi\) but with all atomic integrals replaced by integral derivatives: \( \langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma} \chi_{\delta} \rangle \Longrightarrow \nabla_{\textbf{R}_a} \langle \chi_{\mu}\chi_{\nu}\big| \text{ g } \big| \chi_{\gamma} \chi_{\delta} \rangle . \)
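The role of the basis-derivative term can be seen by removing it: on a fixed real-space grid, the "basis" does not move with the nuclei, so the Hellmann-Feynman expectation value alone gives the exact force. The toy model below (one particle in a movable 1D well plus a fixed linear potential, with \(\hbar\) = m = 1 and hypothetical parameters) is an illustrative sketch, not a molecular calculation:

```python
import numpy as np

# Hellmann-Feynman check on a fixed grid: grid points do not move with the
# center R, so there is no basis-derivative (Pulay) contribution and
# dE/dR = <psi| dH/dR |psi> holds.  Toy model, hypothetical parameters.
n = 400
x = np.linspace(-6.0, 6.0, n)
dx = x[1] - x[0]

# Kinetic energy via a 3-point finite-difference Laplacian
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)

def ground(R):
    # Harmonic well centered at R plus a fixed linear potential 0.2*x,
    # which makes the force on R nonzero.
    H = T + np.diag(0.5 * (x - R)**2 + 0.2 * x)
    w, U = np.linalg.eigh(H)
    return w[0], U[:, 0]

R0 = 0.3
E0, psi = ground(R0)

# dH/dR = -(x - R): the Hellmann-Feynman force operator of this model
hf_term = psi @ ((-(x - R0)) * psi)

# Compare with the finite-difference slope of the ground-state energy
h = 1.0e-4
dEdR = (ground(R0 + h)[0] - ground(R0 - h)[0]) / (2.0 * h)

print(abs(dEdR - hf_term) < 1e-6)  # True
```

In an atom-centered Gaussian basis, by contrast, the \(\nabla_{\textbf{R}_a}\chi_{\nu}\) terms above must be kept, or the computed forces will not match the true slope of the energy surface.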

## Responses for Other Types of Wavefunctions

It should be stressed that the MCSCF wavefunction yields especially compact expressions for responses of E with respect to an external perturbation because of the variational conditions

\[ \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0 \big|\Psi (\lambda = 0)\rangle = \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0 \]

that apply. The SCF case, which can be viewed as a special case of the MCSCF situation, also admits these simplifications. However, the CI, CC, and MPPT/MBPT cases involve additional factors that arise because the above variational conditions do not apply (in the CI case, \( \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0 \) still applies, but the orbital condition \( \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle = 0 \) does not because the orbitals are not varied to make the CI energy functional stationary).

Within the CC, CI, and MPPT/MBPT methods, one must evaluate the so-called responses of the C\(_J\) and C\(_{a,i}\) coefficients, \( \left( \frac{\partial \text{C}_J}{\partial \lambda} \right)_0 \) and \( \left( \frac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0 \), that appear in the full energy response as (see above) \( 2 \sum\limits_J \left( \frac{\partial \text{C}_J}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_J} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle + 2 \sum\limits_{i,a} \left( \frac{\partial \text{C}_{a,i}}{\partial \lambda} \right)_0 \langle \dfrac{\partial \Psi}{\partial \text{C}_{a,i}} \big| \text{ H}^0\big|\Psi (\lambda = 0)\rangle \). To do so requires solving a set of response equations that are obtained by differentiating whatever equations govern the \(C_J\) and \(C_{a,i}\) coefficients in the particular method (e.g., CI, CC, or MPPT/MBPT) with respect to the external perturbation. In the geometrical derivative case, this amounts to differentiating with respect to x, y, and z displacements of the atomic centers. These response equations are discussed in **Geometrical Derivatives of Energy Surfaces and Molecular Properties**, P. Jørgensen and J. Simons, Eds., NATO ASI Series, Vol. 166, D. Reidel, Dordrecht (1985). Their treatment is somewhat beyond the scope of this text, so they will not be dealt with further here.

## The Use of Geometrical Energy Derivatives

### Gradients as Newtonian Forces

The first energy derivative is called the gradient **g** and is the negative of the force **F** (with components along the \(a^{th}\) center denoted \(\textbf{F}_a\)) experienced by the atomic centers: **F** = -**g**. These forces, as discussed in Chapter 16, can be used to carry out classical trajectory simulations of molecular collisions or other motions of large organic and biological molecules for which a quantum treatment of the nuclear motion is prohibitive.

The second energy derivatives with respect to the x, y, and z directions of centers a and b (for example, the x, y component for centers a and b is \( \text{H}_{ax,by} = \left( \dfrac{\partial ^2\text{E}}{\partial x_a \partial y_b} \right) \)) form the Hessian matrix **H**. The elements of **H** give the local curvatures of the energy surface along the 3N cartesian directions.

The gradient and Hessian can be used to systematically locate local minima (i.e., stable geometries) and transition states that connect one local minimum to another. At each of these stationary points, all forces and thus all elements of the gradient **g** vanish. At a local minimum, the **H** matrix has 5 or 6 zero eigenvalues corresponding to translational and rotational displacements of the molecule (5 for linear molecules; 6 for non-linear species) and 3N-5 or 3N-6 positive eigenvalues. At a transition state, **H** has one negative eigenvalue, 5 or 6 zero eigenvalues, and 3N-6 or 3N-7 positive eigenvalues.

### Transition State Rate Coefficients

The transition state theory of Eyring or its extensions due to Truhlar and coworkers (see, for example, D. G. Truhlar and B. C. Garrett, Ann. Rev. Phys. Chem. **35**, 159 (1984)) allow knowledge of the Hessian matrix at a transition state to be used to compute a rate coefficient k\(_{\text{rate}}\) appropriate to the chemical reaction for which the transition state applies.
More specifically, the geometry of the molecule at the transition state is used to compute a rotational partition function Q\(^{\dagger}_{\text{rot}}\) in which the principal moments of inertia \(I_a\), \(I_b\), and \(I_c\) (see Chapter 13) are those of the transition state (the \(\dagger\) symbol is, by convention, used to label the transition state):

\[ Q_{\text{rot}}^{\dagger} = \prod\limits_{\text{n = a,b,c}}\sqrt{\dfrac{8\pi^2 I_nkT}{h^2}}, \]

where k is the Boltzmann constant and T is the temperature in K.

The eigenvalues {\(\omega_{\alpha}\)} of the mass-weighted Hessian matrix (see below) are used to compute, for each of the 3N-7 vibrations with real and positive \(\omega_{\alpha}\) values, a vibrational partition function; these are combined to produce a transition-state vibrational partition function:

\[ \text{Q}^{\dagger}_{\text{vib}} = \prod\limits_{\alpha = 1,3 N-7} \dfrac{e^{-\dfrac{\hbar \omega_{\alpha}}{2kT}}}{1-e^{-\dfrac{\hbar \omega_{\alpha}}{kT}}} . \]

The electronic partition function of the transition state is expressed in terms of the activation energy (the energy of the transition state relative to the electronic energy of the reactants) E\(^{\dagger}\) as:

\[ \text{Q}^{\dagger}_{\text{electronic}} = \omega^{\dagger} e^{ -\dfrac{ \text{E}^{\dagger} }{ kT } } , \]

where \(\omega^{\dagger}\) is the degeneracy of the electronic state at the transition state geometry.

In the original Eyring version of transition state theory (TST), the rate coefficient k\(_{\text{rate}}\) is then given by:

\[ k_{\text{rate}} = \dfrac{kT}{h}\omega^{\dagger}e^{ -\dfrac{\text{E}^{\dagger}}{kT} } \dfrac{ \text{Q}_{\text{rot}}^{\dagger} \text{Q}_{\text{vib}}^{\dagger} }{\text{Q}_{\text{reactants}}}, \]

where \(\text{Q}_{\text{reactants}}\) is the conventional partition function for the reactant materials.
For example, in a bimolecular reaction such as

\[ \text{F + H}_2 \rightarrow \text{ FH + H}, \]

the reactant partition function

\[ \text{Q}_{\text{reactants}} = \text{Q}_F \text{Q}_{H_2} \]

is written in terms of the translational and electronic (the degeneracy of the \(^2P\) state produces the overall electronic degeneracy factor) partition functions of the F atom

\[ \text{Q}_F = 2\sqrt{\dfrac{2\pi m_F kT}{h^2}}^3 \]

and the translational, electronic, rotational, and vibrational partition functions of the H\(_2\) molecule

\[ \text{Q}_{\text{H}_2} = \sqrt{\dfrac{2\pi m_{H_2}kT}{h^2}}^3 \dfrac{8\pi^2 I_{H_2} kT}{2h^2} \dfrac{e^{-\dfrac{\hbar \omega_{H_2}}{2kT}}}{1-e^{-\dfrac{\hbar \omega_{H_2}}{kT}}}. \]

The factor of 2 in the denominator of the H\(_2\) molecule's rotational partition function is the "symmetry number" that must be inserted because of the identity of the two H nuclei.

The overall rate coefficient k\(_{\text{rate}}\) (with units sec\(^{-1}\) because this is a rate per collision pair) can thus be expressed entirely in terms of energetic, geometrical, and vibrational information about the reactants and the transition state. Even within the extensions to Eyring's original model, such is the case. The primary difference in the more modern theories is that the transition state is identified not simply as the point on the potential energy surface at which the gradient vanishes and there is one negative Hessian eigenvalue; instead, a so-called variational transition state (see the above reference by Truhlar and Garrett) is identified. The geometry, energy, and local vibrational frequencies of this transition state are then used to compute, much as outlined above, k\(_{\text{rate}}\).
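The assembly of the rate coefficient from these pieces can be sketched numerically. All transition-state data below (moments of inertia, frequencies, barrier, electronic degeneracy) are made-up illustrative numbers, not those of any real reaction; the rotational and vibrational factors follow the expressions given above, the standard Eyring prefactor \(k_BT/h\) is used, and Q\(_{\text{reactants}}\) is set to 1 so only the numerator is evaluated:

```python
import numpy as np

# Eyring TST sketch with hypothetical transition-state data; SI units.
kB = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J s
hbar = h / (2.0 * np.pi)
c = 2.998e10           # speed of light, cm/s
T = 300.0

I_abc = np.array([1.0e-47, 5.0e-47, 6.0e-47])  # principal moments, kg m^2 (made up)
omega = 2.0 * np.pi * c * np.array([500.0, 1200.0, 3000.0])  # cm^-1 -> rad/s
E_dag = 30.0e3 / 6.02214076e23                 # 30 kJ/mol barrier, J/molecule
w_dag = 2.0                                    # electronic degeneracy (made up)

# Partition functions, following the expressions in the text
Q_rot = np.prod(np.sqrt(8.0 * np.pi**2 * I_abc * kB * T / h**2))
Q_vib = np.prod(np.exp(-hbar * omega / (2.0 * kB * T))
                / (1.0 - np.exp(-hbar * omega / (kB * T))))

# Eyring rate relative to Q_reactants = 1
k_rate = (kB * T / h) * w_dag * np.exp(-E_dag / (kB * T)) * Q_rot * Q_vib
print(f"{k_rate:.3e}")
```

In a real application one would divide by the translational, rotational, vibrational, and electronic partition functions of the reactants, exactly as in the F + H\(_2\) example above.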
### Harmonic Vibrational Frequencies

It is possible (see, for example, J. Nichols, H. L. Taylor, P. Schmidt, and J. Simons, J. Chem. Phys. **92**, 340 (1990) and references therein) to remove from **H** the zero eigenvalues that correspond to rotation and translation and to thereby produce a Hessian matrix whose eigenvalues correspond only to internal motions of the system. After doing so, the number of negative eigenvalues of **H** can be used to characterize the nature of the stationary point (local minimum or transition state), and **H** can be used to evaluate the local harmonic vibrational frequencies of the system.

The relationship between **H** and vibrational frequencies can be made clear by recalling the classical equations of motion in the Lagrangian formulation:

\[ \dfrac{d}{dt}\left( \dfrac{\partial L}{\partial \dot{q}_j} \right) - \left( \dfrac{\partial L}{\partial q_j} \right) = 0, \]

where \(q_j\) denotes, in our case, the 3N cartesian coordinates of the N atoms, and \(\dot{q}_j\) is the velocity of the corresponding coordinate. Expressing the Lagrangian L as kinetic energy minus potential energy and writing the potential energy as a local quadratic expansion about a point where **g** vanishes gives

\[ L = \dfrac{1}{2} \sum\limits_j m_j \dot{q}_j^2 - E(0) -\dfrac{1}{2}\sum\limits_{j,k} q_j H_{j,k}q_k . \]

Here, E(0) is the energy at the stationary point, \(m_j\) is the mass of the atom to which \(q_j\) applies, and the \(H_{j,k}\) are the elements of **H** along the x, y, and z directions of the various atomic centers. Applying the Lagrangian equations to this form for L gives the equations of motion of the \(q_j\) coordinates:

\[ m_j\ddot{q}_j = -\sum\limits_k H_{j,k}q_k . \]

To find solutions that correspond to local harmonic motion, one assumes that the coordinates \(q_j\) oscillate in time according to

\[ q_j(t) = q_j \cos(\omega t) . \]

Substituting this form for \(q_j(t)\) into the equations of motion gives

\[ m_j \omega^2 q_j = \sum\limits_k H_{j,k} q_k . \]

Defining

\[ q_j' = q_j\sqrt{m_j} \]

and introducing this into the above equation of motion yields

\[ \omega^2 q_j' = \sum\limits_k H_{j,k}'q_k' , \]

where

\[ H_{j,k}' = H_{j,k}\dfrac{1}{\sqrt{m_j m_k}} \]

is the so-called **mass-weighted Hessian** matrix. The squares of the desired harmonic vibrational frequencies \(\omega^2_{\alpha}\) are thus given as eigenvalues of the mass-weighted Hessian **H'**:

\[ \textbf{H'q'}_{\alpha} = \omega^2_{\alpha}\textbf{q'}_{\alpha} . \]

The corresponding eigenvector {q'\(_{\alpha ,j}\)} gives, when multiplied by \(\frac{1}{\sqrt{m_j}}\), the atomic displacements that accompany that particular harmonic vibration. At a transition state, one of the \(\omega^2_{\alpha}\) will be negative and 3N-6 or 3N-7 will be positive.

### Reaction Path Following

The Hessian and gradient can also be used to trace out 'streambeds' connecting local minima to transition states. In doing so, one utilizes a local harmonic description of the potential energy surface

\[ E(\textbf{x}) = E(0) + \textbf{x}\cdot{\textbf{g}} + \dfrac{1}{2}\textbf{x}\cdot{\textbf{H}}\cdot{\textbf{x}} + ..., \]

where
**x** represents the (small) step away from the point **x = 0** at which the gradient **g** and Hessian **H** have been evaluated. By expressing **x** and **g** in terms of the eigenvectors \(\textbf{v}_{\alpha}\) of **H**

\[ \textbf{Hv}_{\alpha} = \lambda_{\alpha} \textbf{v}_{\alpha} , \]

\[ \textbf{x} = \sum\limits_{\alpha} \langle \textbf{v}_{\alpha} \big| \textbf{x} \rangle \textbf{v}_{\alpha} = \sum\limits_{\alpha} x_{\alpha} \textbf{v}_{\alpha} , \]

\[ \textbf{g} = \sum\limits_{\alpha} \langle \textbf{v}_{\alpha} \big| \textbf{g} \rangle \textbf{v}_{\alpha} = \sum\limits_{\alpha} g_{\alpha} \textbf{v}_{\alpha} , \]

the energy change E(**x**) - E(0) can be expressed in terms of a sum of independent changes along the eigendirections:

\[ \text{E(}\textbf{x}) - \text{E(0)} = \sum\limits_{\alpha} \left[ x_{\alpha} g_{\alpha} + \dfrac{1}{2}x^2_{\alpha} \lambda_{\alpha} \right] + ... \]

Depending on the signs of g\(_{\alpha}\) and of \(\lambda_{\alpha}\), various choices for the displacements x\(_{\alpha}\) will produce increases or decreases in energy:

- If \(\lambda_{\alpha}\) is positive, then a step x\(_{\alpha}\) 'along' g\(_{\alpha}\) (i.e., one with x\(_{\alpha}\)g\(_{\alpha}\) positive) will generate an energy increase. A step 'opposed to' g\(_{\alpha}\) will generate an energy decrease if it is short enough that x\(_{\alpha}\)g\(_{\alpha}\) is larger in magnitude than \( \frac{1}{2}x^2_{\alpha} \lambda_{\alpha} \); otherwise the energy will increase.
- If \(\lambda_{\alpha}\) is negative, a step opposed to g\(_{\alpha}\) will generate an energy decrease. A step along g\(_{\alpha}\) will give an energy increase if it is short enough for x\(_{\alpha}\)g\(_{\alpha}\) to be larger in magnitude than \( \frac{1}{2}x^2_{\alpha}\lambda_{\alpha} \); otherwise the energy will decrease.

Thus, to proceed downhill in all directions (as one wants to do when searching for local minima), one chooses each x\(_{\alpha}\) in opposition to g\(_{\alpha}\) and of small enough length to guarantee that the magnitude of x\(_{\alpha}\)g\(_{\alpha}\) exceeds that of \( \frac{1}{2}x^2_{\alpha} \lambda_{\alpha} \) for those modes with \(\lambda_{\alpha} > 0\). To proceed uphill along a mode \(\alpha '\) with \(\lambda_{\alpha '} < 0\) and downhill along all other modes with \(\lambda_{\alpha} > 0\), one chooses x\(_{\alpha '}\) along g\(_{\alpha '}\) with x\(_{\alpha '}\) short enough to guarantee that x\(_{\alpha '}\)g\(_{\alpha '}\) is larger in magnitude than \( \frac{1}{2}x^2_{\alpha '} \lambda_{\alpha '} \), and one chooses the other x\(_{\alpha}\) opposed to g\(_{\alpha}\) and short enough that x\(_{\alpha}\)g\(_{\alpha}\) is larger in magnitude than \( \frac{1}{2}x^2_{\alpha} \lambda_{\alpha} \).

Such considerations have allowed the development of highly efficient potential energy surface 'walking' algorithms (see, for example, J. Nichols, H. L. Taylor, P. Schmidt, and J. Simons, J. Chem. Phys. **92**, 340 (1990) and references therein) designed to trace out streambeds and to locate and characterize, via the local harmonic frequencies, minima and transition states. These algorithms form essential components of most modern ab initio, semi-empirical, and empirical computational chemistry software packages.
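The mass-weighted Hessian prescription above can be verified on the simplest possible case: a 1D "diatomic" of two masses joined by a spring, for which the eigenvalues of **H'** should be 0 (overall translation) and k/\(\mu\), with \(\mu\) the reduced mass. The masses and force constant below are arbitrary toy values:

```python
import numpy as np

# Harmonic frequencies from the mass-weighted Hessian for a 1D "diatomic":
# two masses m1, m2 joined by a spring of force constant k (toy units).
m1, m2, k = 1.0, 2.0, 4.0
masses = np.array([m1, m2])

# Cartesian Hessian d2E/dq_j dq_k for E = (1/2) k (q1 - q2)^2
H = k * np.array([[1.0, -1.0],
                  [-1.0, 1.0]])

# Mass-weighting: H'_{jk} = H_{jk} / sqrt(m_j m_k)
Hp = H / np.sqrt(np.outer(masses, masses))

omega2, Q = np.linalg.eigh(Hp)  # eigenvalues are the squared frequencies

# Expect one zero eigenvalue (overall translation) and one vibration with
# omega^2 = k/mu, where mu = m1*m2/(m1+m2) is the reduced mass.
mu = m1 * m2 / (m1 + m2)
print(np.allclose(omega2, [0.0, k / mu]))  # True
```

Dividing each eigenvector component by \(\sqrt{m_j}\) recovers the cartesian displacement pattern of the mode; here the vibrational mode moves the two masses in opposition, weighted inversely by their masses.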