# 12.4: Symmetry Operations as Matrices


Matrices can be used to map one set of coordinates or functions onto another set; matrices used for this purpose are called transformation matrices. In group theory, we can use transformation matrices to carry out the various symmetry operations discussed previously. As a simple example, we will investigate the matrices used to carry out some of these symmetry operations on vectors in two- and three-dimensional space.

The transformation matrix for an operation in a group is in general distinct from the matrices of the other members of the same group; however, the character (the trace) of the transformation matrix for a given operation is the same as that for any other operation in the same class.

Each symmetry operation below will operate on an arbitrary vector:

$\bf{u} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$

## The Identity Operation, $$E$$

The identity operator, $$E$$, leaves the vector unchanged, and as you may already suspect, the appropriate matrix is the identity matrix:

$E\bf{u} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \label{9.1}$

## The Reflection Operation, $$\sigma$$

The reflection operation reflects the vector $$\bf{u}$$ through a plane, which can be the $$xy$$, $$xz$$, or $$yz$$ plane. The matrix is similar to the identity matrix, except for a sign change in the appropriate element. The reflection matrix for the $$xy$$ plane is:

$\sigma \bf{u} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ -z \end{pmatrix} \label{9.2}$

Notice that the element for the coordinate being reflected is the one that is negative. In the case above, since $$z$$ is reflected through the $$xy$$ plane, the $$z$$ element in the matrix is negative. If we were to reflect through the $$xz$$ plane instead, the $$y$$ element would be the negative one:

$\sigma \bf{u} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ -y \\ z \end{pmatrix}$
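These reflection matrices are easy to verify numerically. The following is a minimal sketch in plain Python (no external libraries; the helper name `apply` is our own, not part of any group-theory package):

```python
def apply(m, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

sigma_xy = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # reflection through the xy plane
sigma_xz = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # reflection through the xz plane

u = [2, 3, 5]
print(apply(sigma_xy, u))  # [2, 3, -5]
print(apply(sigma_xz, u))  # [2, -3, 5]
```

As expected, each reflection negates exactly the coordinate perpendicular to the mirror plane and leaves the other two unchanged.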

We can also describe reflections in two dimensions. The simplest examples correspond to reflecting the vector $$\begin{pmatrix} x, y \end{pmatrix}$$ in either the $$x$$ or the $$y$$ axis. Reflection in the $$x$$ axis maps $$y$$ to $$-y$$, while reflection in the $$y$$ axis maps $$x$$ to $$-x$$. The appropriate matrix is again similar to the identity matrix, with a sign change in the appropriate element. Reflection in the $$x$$ axis transforms the vector $$\begin{pmatrix} x, y \end{pmatrix}$$ to $$\begin{pmatrix} x, -y \end{pmatrix}$$, and the appropriate matrix is

$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ -y \end{pmatrix}$ Figure 12.4.1 : Reflection across the x-axis in 2D space.

Reflection in the y axis transforms the vector $$\begin{pmatrix} x, y \end{pmatrix}$$ to $$\begin{pmatrix} -x, y \end{pmatrix}$$, and the appropriate matrix is:

$\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -x \\ y \end{pmatrix} \label{9.3}$ Figure 12.4.2 : Reflection across the y-axis in 2D space.

More generally, matrices can be used to represent reflection in any plane (or line in 2D). For example, reflection in the line $$y = -x$$ maps $$\begin{pmatrix} x, y \end{pmatrix}$$ onto $$\begin{pmatrix} -y, -x \end{pmatrix}$$.

$\begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -y \\ -x \end{pmatrix} \label{9.4}$ Figure 12.4.3 : Reflection across the line $$y = -x$$ in 2D space.
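A useful numerical check: the product of two reflection matrices is always a rotation matrix. A minimal plain-Python sketch (the helper name `matmul` is our own):

```python
def matmul(a, b):
    """Product of two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

refl_x = [[1, 0], [0, -1]]    # reflection in the x axis
refl_d = [[0, -1], [-1, 0]]   # reflection in the line y = -x

print(matmul(refl_d, refl_x))  # [[0, 1], [-1, 0]]
```

The result is a rotation by 90°, twice the 45° angle between the two mirror lines (the sense of rotation depends on the order of multiplication).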

## The $$n$$-fold Rotation Operation, $$C_n$$

The matrix for counterclockwise rotation by $$2\pi/n$$ about the $$z$$ axis is:

$C_n\bf{u} = \begin{pmatrix} \cos{\frac{2\pi}{n}} & -\sin{\frac{2\pi}{n}} & 0 \\ \sin{\frac{2\pi}{n}} & \cos{\frac{2\pi}{n}} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x' \\ y' \\ z \end{pmatrix}$

For a clockwise rotation, the signs of the sine terms are reversed. This matrix simplifies dramatically for the $$C_2$$ rotation, since $$\cos{\pi} = -1$$ and $$\sin{\pi} = 0$$:

$C_2\bf{u} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -x \\ -y \\ z \end{pmatrix}$
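The general $$C_n$$ matrix and its $$C_2$$ special case can be sketched in plain Python (the function name `c_n` is our own):

```python
import math

def c_n(n):
    """Counterclockwise rotation by 2*pi/n about the z axis."""
    t = 2 * math.pi / n
    return [[math.cos(t), -math.sin(t), 0.0],
            [math.sin(t),  math.cos(t), 0.0],
            [0.0,          0.0,         1.0]]

c2 = c_n(2)
# cos(pi) = -1 and sin(pi) = 0 (to floating-point precision),
# so C2 reduces to diag(-1, -1, 1) as shown above.
```

For other $$n$$, e.g. `c_n(4)`, the off-diagonal sine terms survive and $$x$$ and $$y$$ mix under the rotation.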

More generally, counterclockwise rotations by an angle $$\theta$$ about the $$x$$, $$y$$, and $$z$$ axes acting on a vector $$\begin{pmatrix} x, y, z \end{pmatrix}$$ are represented by the following matrices:

$R_{x}(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \label{9.6a}$

$R_{y}(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \label{9.6b}$

$R_{z}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \label{9.6c}$

The rotation operation can also be performed in two dimensions. In two dimensions, the appropriate matrix to represent rotation by an angle $$\theta$$ about the origin is

$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \label{9.5}$
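A defining property of these rotation matrices is that successive rotations add their angles, $$R(\alpha)R(\beta) = R(\alpha + \beta)$$. A quick plain-Python check of this in 2D (helper names are our own):

```python
import math

def rot2(theta):
    """2D counterclockwise rotation by theta about the origin."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def matmul(a, b):
    """Product of two 2x2 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.3, 1.1
lhs = matmul(rot2(a), rot2(b))   # rotate by b, then by a
rhs = rot2(a + b)                # single rotation by a + b
# lhs and rhs agree to floating-point precision.
```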

## The Inversion Operation, $$I$$

The inversion operation inverts every point:

$I\bf{u} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -x \\ -y \\ -z \end{pmatrix}$

Inversion transforms the 2D vector $$\begin{pmatrix} x, y \end{pmatrix}$$ to $$\begin{pmatrix} -x, -y \end{pmatrix}$$, and the appropriate matrix is:

$\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -x \\ -y \end{pmatrix}$
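In 3D, inversion is equivalent to a $$C_2$$ rotation followed by reflection through the perpendicular plane (the improper rotation $$S_2$$). This is easy to confirm with the matrices defined above (plain-Python sketch; the helper name `matmul` is our own):

```python
def matmul(a, b):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

c2_z     = [[-1, 0, 0], [0, -1, 0], [0,  0, 1]]   # C2 rotation about z
sigma_xy = [[ 1, 0, 0], [0,  1, 0], [0, 0, -1]]   # reflection through xy plane

# C2 about z followed by reflection through xy gives the inversion matrix:
print(matmul(sigma_xy, c2_z))  # [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
```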

## $$C_{2v}$$ Point Group

Now that we have seen the matrix form of our operators, we can see how multiplying any two operators yields another operation in the group. The $$C_{2v}$$ multiplication table is:

| $$C_{2v}$$ | $$E$$ | $$C_2$$ | $$\sigma_v$$ | $$\sigma_v'$$ |
|---|---|---|---|---|
| $$E$$ | $$E$$ | $$C_2$$ | $$\sigma_v$$ | $$\sigma_v'$$ |
| $$C_2$$ | $$C_2$$ | $$E$$ | $$\sigma_v'$$ | $$\sigma_v$$ |
| $$\sigma_v$$ | $$\sigma_v$$ | $$\sigma_v'$$ | $$E$$ | $$C_2$$ |
| $$\sigma_v'$$ | $$\sigma_v'$$ | $$\sigma_v$$ | $$C_2$$ | $$E$$ |

Let's look at $$C_2 \sigma_v$$ multiplication, where $$\sigma_v$$ is a reflection across the $$xz$$ plane:

$C_2 \sigma_v = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \sigma_v'$

where $$\sigma_v'$$ is the reflection across the $$yz$$ plane.
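This matrix product can be checked numerically with a few lines of plain Python (the helper name `matmul` is our own):

```python
def matmul(a, b):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

c2       = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]   # C2 rotation about z
sigma_v  = [[ 1, 0, 0], [0, -1, 0], [0, 0, 1]]   # reflection through xz plane
sigma_vp = [[-1, 0, 0], [0,  1, 0], [0, 0, 1]]   # reflection through yz plane

# C2 * sigma_v = sigma_v', as in the multiplication table:
assert matmul(c2, sigma_v) == sigma_vp
```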

## Matrix Representation

The symmetry operations in a group may be represented by a set of transformation matrices $$\Gamma(g)$$, one for each symmetry operation $$g$$. Each individual matrix is called a representative of the corresponding symmetry operation, and the complete set of matrices is called a matrix representation of the group. The matrix representatives act on some chosen basis set of functions, and the actual matrices making up a given representation will depend on the basis that has been chosen. The representation is then said to span the chosen basis.

In the examples above we were looking at the effect of some simple transformation matrices on an arbitrary vector $$\begin{pmatrix} x, y \end{pmatrix}$$. The basis is therefore a pair of unit vectors pointing in the $$x$$ and $$y$$ directions. In 3D space, the basis is the set of unit vectors pointing in the $$x$$, $$y$$, and $$z$$ directions. In most of the examples we will be considering in this course, we will use sets of atomic orbitals as basis functions for matrix representations. Before proceeding any further, we must check that a matrix representation of a group obeys all of the rules set out in the formal mathematical definition of a group.

1. Identity. The first rule is that the group must include the identity operation $$E$$ (the ‘do nothing’ operation). We showed above that the matrix representative of the identity operation is simply the identity matrix. As a consequence, every matrix representation includes the appropriate identity matrix.
2. Closure. The second rule is that the combination of any pair of elements must also be an element of the group (the group property). If we multiply together any two matrix representatives, we should get a new matrix which is a representative of another symmetry operation of the group. In fact, matrix representatives multiply together to give new representatives in exactly the same way as symmetry operations combine according to the group multiplication table. For example, in the $$C_{2v}$$ point group, we showed that the combined symmetry operation $$C_2\sigma_v$$ is equivalent to $$\sigma_v'$$. In a matrix representation of the group, if the matrix representatives of $$C_2$$ and $$\sigma_v$$ are multiplied together, the result will be the representative of $$\sigma_v'$$.
3. Associativity. The third rule states that the rule of combination of symmetry elements in a group must be associative. This is automatically satisfied by the rules of matrix multiplication.
4. Reciprocality. The final rule states that every operation must have an inverse, which is also a member of the group. The combined effect of carrying out an operation and its inverse is the same as the identity operation. It is fairly easy to show that matrix representatives satisfy this criterion. For example, the inverse of a reflection is another reflection, identical to the first. In matrix terms we would therefore expect that a reflection matrix was its own inverse, and that two identical reflection matrices multiplied together would give the identity matrix. This turns out to be true, and can be verified using any of the reflection matrices in the examples above. The inverse of a rotation matrix is another rotation matrix corresponding to a rotation of the opposite sense to the first.
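These four rules can be verified by brute force for the $$C_{2v}$$ matrices in the $$(x, y, z)$$ basis. A plain-Python sketch (helper names are our own):

```python
def matmul(a, b):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

E   = [[ 1, 0, 0], [0,  1, 0], [0, 0, 1]]   # identity
C2  = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]   # C2 about z
Sv  = [[ 1, 0, 0], [0, -1, 0], [0, 0, 1]]   # sigma_v (xz plane)
Svp = [[-1, 0, 0], [0,  1, 0], [0, 0, 1]]   # sigma_v' (yz plane)
group = [E, C2, Sv, Svp]

# Closure: every pairwise product is again a member of the set.
assert all(matmul(g, h) in group for g in group for h in group)

# Reciprocality: in C2v every element is its own inverse (g*g = E).
assert all(matmul(g, g) == E for g in group)
```

Associativity comes for free from matrix multiplication, and the identity matrix `E` is explicitly in the set.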
##### Example 12.4.1 : Matrix Representation of the $$C_{2v}$$ Point Group (the allyl radical)

In this example, we’ll take as our basis a $$p$$ orbital on each carbon atom $$\begin{pmatrix} p_1, p_2, p_3 \end{pmatrix}$$. Note that the $$p$$ orbitals are perpendicular to the plane of the carbon atoms (this may seem obvious, but if you’re visualizing the basis incorrectly it will shortly cause you a not inconsiderable amount of confusion). The symmetry operations in the $$C_{2v}$$ point group, and their effect on the three $$p$$ orbitals, are as follows:

$\begin{array}{ll} E & \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \rightarrow \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \\ C_2 & \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \rightarrow \begin{pmatrix} -p_3 \\ -p_2 \\ -p_1 \end{pmatrix} \\ \sigma_v & \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \rightarrow \begin{pmatrix} -p_1 \\ -p_2 \\ -p_3 \end{pmatrix} \\ \sigma_v' & \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \rightarrow \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \end{array} \nonumber$

The matrices that carry out the transformation are

$\begin{array}{ll} \Gamma(E) & \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \\ \Gamma(C_2) & \begin{pmatrix} 0 & 0 & -1 \\ 0 & -1 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} -p_3 \\ -p_2 \\ -p_1 \end{pmatrix} \\ \Gamma(\sigma_v) & \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} -p_1 \\ -p_2 \\ -p_3 \end{pmatrix} \\ \Gamma(\sigma_v') & \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \end{array} \nonumber$

We have written our basis functions as a column vector, with the matrix representatives acting from the left. For this particular basis each representative happens to be a symmetric matrix, so writing the basis as a row vector acted on from the right would give the same matrices; in general, however, the two conventions give matrices that are transposes of one another, and since $$(AB)^T = B^TA^T$$ reverses the order of multiplication, mixing the conventions will fail to reproduce the group multiplication table (try it as an exercise if you need to convince yourself).
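As a final check, the four representatives in the $$p$$-orbital basis reproduce the $$C_{2v}$$ multiplication table exactly. A plain-Python sketch (helper and variable names are our own):

```python
def matmul(a, b):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

G_E   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]       # Gamma(E)
G_C2  = [[0, 0, -1], [0, -1, 0], [-1, 0, 0]]    # Gamma(C2)
G_sv  = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]    # Gamma(sigma_v)
G_svp = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]       # Gamma(sigma_v')

# A few entries of the multiplication table, e.g. C2 * sigma_v = sigma_v':
assert matmul(G_C2, G_sv) == G_svp
assert matmul(G_sv, G_svp) == G_C2
assert matmul(G_C2, G_C2) == G_E
```

Extending the loop over all sixteen products confirms that this set of matrices is a faithful representation of $$C_{2v}$$ in the allyl $$p$$-orbital basis.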

12.4: Symmetry Operations as Matrices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Jerry LaRue & Claire Vallance.