5: Electron diffraction, postulates of quantum mechanics, the Bohr model, and the Beer-Lambert law
Electron diffraction
Continuing with our analysis of experiments that lead to the new quantum theory, we now look at the phenomenon of electron diffraction. It is well-known that light has the ability to diffract around objects in its path, leading to an interference pattern that is particular to the object. This is, in fact, how holography works (the interference pattern is created by allowing the diffracted light to interfere with the original beam so that the hologram can be viewed by shining the original beam on the image). A simple illustration of diffraction is the Young double-slit experiment pictured below:
Figure: From www.lightandmatter.com
Here, we use water waves (pictured as waves in a plane parallel to the double slit apparatus) and observe what happens when they impinge on the slits. Each slit then becomes a point source for spherical waves that subsequently interfere with each other, giving rise to the light and dark fringes on the screen at the bottom. The intensity of the fringes is depicted in the sketch below:
If laser light is used, the interference pattern appears as shown below:
Figure 3: Laser interference through a double slit
Amazingly, if electrons are used instead of light in the double-slit experiment, and a fluorescent screen is used, one finds the same kind of interference pattern! This is shown in the electron double-slit diffraction pattern below:
Figure 4: From zms.desy.de
Obviously, classical mechanics is not able to predict such a result. If the electrons are treated as classical particles, one would predict an intensity pattern corresponding to particles that can pass through one slit or the other, landing on the screen directly opposite the slit (i.e., no intensity maximum at the center of the screen):
Figure 5: Intensity pattern for ``classical'' electrons
The width of each peak is a direct measure of the width of the slits. Since, in classical mechanics, the electrons follow definite, deterministic, predictable paths, there can be no deviation from this pattern. For this reason, the classical explanation cannot be the correct one.
We will consider two different rationalizations of the electron double-slit experiment.
Particle-wave picture
The first experiment, carried out by Clinton Davisson and Lester Germer in 1927, was based on a hypothesis put forth earlier by Louis de Broglie in 1924. De Broglie suggested that if waves (photons) could behave as particles, as demonstrated by the photoelectric effect, then the converse, namely that particles could behave as waves, should be true. He associated a wavelength \(\lambda\) to a particle with momentum \(p\), using Planck's constant as the constant of proportionality:
\[\lambda =\dfrac{h}{p} \label{1}\]
which is called the de Broglie wavelength. The Davisson–Germer experiment, which produced an electron diffraction pattern from electrons scattered off a nickel crystal, confirmed the de Broglie hypothesis. However, it was not until 1961 that an experiment in which electrons impinged on a slit apparatus was performed by Claus Jönsson. This was a five-slit setup. The double-slit experiment was finally performed in the 1970s by Pier Giorgio Merli, Giulio Pozzi, and GianFranco Missiroli. The point is, however, that through such experiments, the idea that electrons can behave as waves, creating interference patterns normally associated with light, is now well-established. The fact that particles can behave as waves but also as particles, depending on which experiment you perform on them, is known as the particle-wave duality.
In the following, we give a brief discussion of where the de Broglie hypothesis comes from. From the photoelectric effect, we have the first part of the particle-wave duality, namely, that electromagnetic waves can behave like particles. These particles are known as photons, and they move at the speed of light. Any particle that moves at or near the speed of light has kinetic energy given by Einstein's special theory of relativity. In general, a particle of mass \(m\) and momentum \(p\) has an energy
\[E=\sqrt{p^2 c^2+m^2 c^4} \label{2}\]
Note that if \(p=0\), this reduces to the famous restenergy expression \(E=mc^2\). However, photons are massless particles that always have a finite momentum \(p\). In this case, Einstein's formula becomes \(E=pc\). From Planck's hypothesis, one quantum of electromagnetic radiation has energy \(E=h\nu\). Thus, equating these two expressions for the kinetic energy of a photon, we have
\[h\nu =\dfrac{hc}{\lambda}=pc \label{3}\]
Solving for the wavelength \(\lambda\) gives
\[\lambda=\dfrac{h}{p} \label{4}\]
Now, this relation pertains to photons, not massive particles. However, de Broglie argued that if particles can behave as waves, then a relationship like this, which pertains particularly to waves, should also apply to particles. Hence, we associate a wavelength \(\lambda\) to a particle that has momentum \(p\), which says that as the momentum becomes larger and larger, the wavelength becomes shorter and shorter. In both cases, this means the energy becomes larger. That is, short wavelength and high momentum correspond to high energy.
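As a quick numerical illustration of \(\lambda = h/p\), we can compute the de Broglie wavelength of an electron. The 100 eV kinetic energy below is an arbitrary illustrative choice (roughly the regime of the Davisson–Germer experiment), and the nonrelativistic relation \(p=\sqrt{2m_e E}\) is assumed:

```python
# Sketch: de Broglie wavelength lambda = h / p for an electron.
# The 100 eV kinetic energy is an illustrative choice, and the
# nonrelativistic momentum p = sqrt(2 m E) is assumed.
import math

h = 6.62607015e-34       # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J per electron volt

E_kin = 100.0 * eV                # kinetic energy
p = math.sqrt(2.0 * m_e * E_kin)  # nonrelativistic momentum
lam = h / p                       # de Broglie wavelength, m

print(f"lambda = {lam * 1e9:.3f} nm")   # ~0.12 nm, comparable to atomic spacings
```

The resulting wavelength is comparable to the spacing between atoms in a crystal, which is why a crystal lattice can diffract electrons.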
Probability Waves
If particles can behave as waves, then we need to develop a theory for this type of particle-wave. We will do this in detail when we study the Schrödinger wave equation. For now, suffice it to say that the theory of particle-waves has some aspects that are similar to the classical theory of waves, but by no means can a classical wave theory, like that used to describe waves on a string or on the surface of a liquid, be used to formulate the theory of particle-waves. To begin with, what is the very nature of a particle-wave? Here, we will give only a brief conceptual answer.
A particle-wave is still described by some kind of amplitude function \(A(x,t)\), but this amplitude must be consistent with the fact that we could, in principle, design an experiment capable of measuring the particle's spatial location or position. Thus, we seem to have arrived at a paradoxical situation: The electron diffraction experiment tells us that particle-waves can interfere with each other, yet it must also be possible to measure a particle-like property, the position, via some kind of actual experiment. The resolution of this dilemma is that the particle exhibits wavelike behavior until a measurement is performed on it that is capable of localizing it at a particular point in space. Making this leap, however, has a profound implication, namely, that the outcome of the position measurement will not be the same in each realization even if it is performed in the same way. The reason is that if it did yield the same result, then we could say that the particle was evolving in a particular way that would put it there when the measurement was made, and this negates the possibility of its ever having a wavelike character, since this is exactly how classical particles behave. Thus, if the result of a position measurement can yield different outcomes, then the only thing we can predict is the probability that a given measurement of the position yields a particular value. The quantum world is not deterministic but rather intrinsically probabilistic.
If we are only able to predict the probability that a measurement of position will yield a particular value, then how do we obtain this prediction? Recall that the particle-wave is described by an amplitude function \(A(x,t)\). Let us suppose that the particle-wave reaches the screen at time \(T\), when the amplitude is \(A(x,T)\), which we denote as \(A(x)\) (when \(T\) is fixed, the amplitude is a function of \(x\) alone). Here, \(x\) denotes the position along the screen. The screen, itself, acts as an apparatus for measuring the particle's position. The amplitude \(A(x)\) can be either positive or negative, and we could even choose it such that it is a complex number. Thus, \(A(x)\) is not a probability because it is not positive-definite, and it is not real. However, we can turn \(A(x)\) into a probability by taking the square magnitude of \(A(x)\). We define a new function
\[P(x)=|A(x)|^2 =A^{*}(x)A(x)\]
where \(A^{*}(x)\) denotes the complex conjugate of \(A(x)\). The function \(P(x)\) is positive-definite, and if \(A(x)\) is appropriately chosen, \(P(x)\) will satisfy the normalization condition
\[\int_{a}^{b}P(x)dx=1\]
where \(a\) and \(b\) denote the endpoints of the screen. \(P(x)\) is an example of a probability density or probability distribution function. Because \(x\) is continuous, we cannot define a discrete probability since the probability to be at any particular value of \(x\) is zero (there are an infinite number of values for \(x\)). What \(P(x)\) tells us is the probability to measure the particle's position on the screen within a small interval \(dx\). In particular, the probability that a measurement of position yields a value in an interval \(dx\) centered on the point \(x_0\) is \(P(x_0)dx\). The amplitude \(A(x)\) is called a probability amplitude.
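The steps above can be sketched numerically: starting from a complex amplitude \(A(x)\), form \(P(x)=|A(x)|^2\) and rescale it so that it integrates to one over the screen. The Gaussian-times-phase form of \(A(x)\) below is an arbitrary illustrative choice, not anything from the text:

```python
# Sketch: turning a complex amplitude A(x) into a normalized probability
# density P(x) = |A(x)|^2.  The Gaussian amplitude is an arbitrary choice.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)          # "screen" coordinates a..b
dx = x[1] - x[0]
A = np.exp(-x**2) * np.exp(1j * 3.0 * x)  # complex amplitude (unnormalized)

P = np.abs(A)**2          # P(x) = A*(x) A(x), positive-definite and real
P = P / (P.sum() * dx)    # rescale so the integral of P over [a, b] is 1

print(P.sum() * dx)       # ~1.0: normalization check
print(P[1000] * dx)       # probability to land within dx of x0 = 0
```

Note that the individual value \(P(x_0)\) is a density; only \(P(x_0)\,dx\) is a probability.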
So why does an interference pattern arise? Because we do not observe the particle's position until it reaches the screen, we have to consider two possibilities for the particle-wave: passage through the upper slit and passage through the lower slit. We assign to one possibility a probability amplitude \(A_1 (x)\) and to the other an amplitude \(A_2 (x)\). Just as with ordinary waves, we must add the amplitudes to obtain the total wave amplitude, so the total amplitude is \(A_{tot}(x)=A_1 (x)+A_2 (x)\). The probability that we observe something at a given point \(x_0\) on the screen in an interval \(dx\) is
\[\begin{align*}P(x_0)dx&=|A_{tot}(x_0)|^2 dx\\ &=|A_1 (x_0)+A_2 (x_0)|^2 dx\\&=(A_{1}^{*}(x_0)+A_{2}^{*}(x_0))(A_1 (x_0)+A_2 (x_0))dx\\&=\left [ |A_1 (x_0)|^2 +|A_2 (x_0)|^2 +A_{1}^{*}(x_0)A_2(x_0)+A_1 (x_0)A_{2}^{*}(x_0)\right ] dx\end{align*}\]
If the two possibilities were completely independent, then their corresponding probabilities would simply add, and we would have just the first two terms in the last expression. However, there is a cross term that is generally nonzero, and it is this cross term that gives rise to the interference between the two possibilities, which leads to the observed interference pattern. Let us look at this term in greater detail. Recall that a complex number \(A=a+ib\) can be expressed as
\[A=|A|e^{i\theta}\]
where \(|A|\) is the magnitude of the number
\[|A|=\sqrt{a^2 +b^2}\]
and \(\theta\) is the phase of the number
\[\theta=\tan^{-1}\dfrac{b}{a}\]
Letting
\[A_1 =|A_1|e^{i\theta_1}, \ A_2 = |A_2|e^{i\theta_2}\]
so that
\[A_{1}^{*}=|A_1|e^{-i\theta_1}, \ A_{2}^{*}=|A_2|e^{-i\theta_2}\]
and substituting into the probability distribution expression, we obtain
\[P(x_0)=|A_1|^2+|A_2|^2+(|A_1|e^{-i\theta_1}|A_2|e^{i\theta_2}+|A_1|e^{i\theta_1}|A_2|e^{-i\theta_2})=|A_1|^2+|A_2|^2+|A_1||A_2|(e^{i(\theta_2 -\theta_1)}+e^{-i(\theta_2 -\theta_1)})\]
Recognizing that
\[e^{ix}+e^{-ix}=2\cos(x)\]
the probability finally becomes
\[P(x_0)=|A_1(x_0)|^2 +|A_2 (x_0)|^2 +2|A_1 (x_0)||A_2(x_0)|\cos(\theta_2 (x_0)-\theta_1 (x_0))\]
The oscillatory nature of the cosine is suggestive of the oscillating pattern of bright and dark fringes in the interference pattern.
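The algebra above is easy to check numerically: for any two complex amplitudes, \(|A_1+A_2|^2\) must equal \(|A_1|^2+|A_2|^2+2|A_1||A_2|\cos(\theta_2-\theta_1)\). The random magnitudes and phases below are arbitrary test values:

```python
# Sketch: verify that the interference formula matches |A1 + A2|^2 directly,
# for several randomly chosen magnitudes and phases.
import cmath
import math
import random

random.seed(0)
for _ in range(5):
    a1, a2 = random.uniform(0.0, 2.0), random.uniform(0.0, 2.0)      # magnitudes
    t1, t2 = random.uniform(0.0, 2*math.pi), random.uniform(0.0, 2*math.pi)
    A1 = a1 * cmath.exp(1j * t1)
    A2 = a2 * cmath.exp(1j * t2)

    direct  = abs(A1 + A2)**2                                # |A1 + A2|^2
    formula = a1**2 + a2**2 + 2*a1*a2*math.cos(t2 - t1)      # interference formula
    assert math.isclose(direct, formula, rel_tol=1e-12)

print("interference formula verified")
```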
Sum-over-paths picture
Let us now consider an alternative explanation of the double-slit experiment, due to Richard Feynman. This explanation was published in his 1965 book, Quantum Mechanics and Path Integrals. Feynman's explanation is closer in spirit to a classical-like picture, yet it still represents a radical departure from classical mechanics. Feynman postulated that electrons could still behave as particles in a double-slit experiment. The twist is that the particles do not follow definite paths, as they would if they were classical particles. Rather, they can trace out a myriad of possible paths that might differ considerably from the path that would be predicted by classical mechanics. In fact, electrons initialized in the same manner will follow different paths if they are allowed to wend their way through the double-slit apparatus! This is illustrated in the figure below:
Since the electron can follow any path and will follow different paths in different realizations of the experiment, physical quantities must be obtained by summing over all possible paths that the electron can follow. In order to carry out this sum, Feynman assigned a weight or importance to each path in the sum over paths. Since each path takes the electron from the source to a point on the screen (call this point \(x\), where \(x\) varies over the length of the screen) in a time \(t\), the probability \(P\) associated with each path can only depend on \(x\) and \(t\). If \(t\) is fixed, then \(P\) can only depend on \(x\). Feynman proposed that each path could be assigned a quantity called the action \(S(x)\), where \(S(x)\) has the following properties:
 \(S(x)\) varies only minimally for paths near the path that classical mechanics would predict the electron should follow.
 \(S(x)\) varies increasingly dramatically as the paths differ increasingly from the classical path.
He then assigned an amplitude \(A(x)\) to each path according to the formula
\[A(x)=e^{iS(x)/\hbar}\]
where \(i=\sqrt{-1}\) and \(\hbar=h/2\pi\). The exponential of a complex argument has a simple expression in terms of common trigonometric functions:
\[e^{ix}=\cos(x)+i\sin(x)\]
Thus, we can also write \(A(x)\) as
\[A(x)=\cos(S(x)/\hbar)+i\sin(S(x)/\hbar)\]
It is important to note that \(A(x)\) is a complex number. An amplitude is related to an actual probability \(P\) by taking the absolute value squared of the amplitude. Thus, if there were only one path with amplitude \(A(x)\), the probability that the electron would follow that path is
\[\begin{align*}P(x)&=A^{*}(x)A(x)=|A(x)|^2\\ &=e^{-iS(x)/\hbar}e^{iS(x)/\hbar}\\ &=1\end{align*}\]
as expected. This is also the probability that the electron will end up at a point \(x\) on the screen, since the single path takes the particle to a single definite point \(x\). However, in Feynman's picture, the electron can follow any path. Thus, in order to compute the probability \(P(x)\) that the particle ends up at a point \(x\) on the screen, we must sum over all possible paths:
\[P(x)=\left | \sum_{paths}A_{path}(x)\right |^{2}\]
where \(A_{path}(x)\) is the amplitude for a particular path. Since we are summing over many oscillating sines and cosines, there will be an interference pattern, meaning that the paths effectively interfere with each other. Indeed, the intensity \(I(x)\) will be proportional to the probability: \(I(x)\propto P(x)\). In fact, if we were to carry out this sum over paths (no simple feat, by the way), we would obtain an interference pattern that agrees with experiment.
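A drastically simplified version of this path sum can be sketched numerically. Here only two straight-line paths are kept (one through each slit), the phase \(S/\hbar\) of each path is approximated by \(k\) times the path length, and the geometry (wavelength, slit separation, screen distance) is invented purely for illustration; the full sum over all wiggly paths is, as noted, far harder:

```python
# Toy sketch of a two-path "sum over paths" for a double slit.  The phase
# S/hbar of each path is modeled as k * (path length); all geometry values
# below are invented for illustration.
import numpy as np

k = 2.0 * np.pi / 1.0e-2           # wavenumber for an assumed 1 cm wavelength
d = 5.0e-2                         # slit separation, m
L = 1.0                            # slit-to-screen distance, m
x = np.linspace(-0.5, 0.5, 1001)   # positions on the screen, m

r1 = np.hypot(L, x - d/2)          # length of the path through slit 1
r2 = np.hypot(L, x + d/2)          # length of the path through slit 2
A = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # sum of path amplitudes
P = np.abs(A)**2                   # P(x) = |sum of amplitudes|^2

print(P.max(), P.min())            # bright fringes near 4, dark fringes near 0
```

Even with just two paths, the oscillating phases produce alternating bright and dark fringes across the screen.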
Generally, a probability amplitude is a generalization of the square root of a probability that allows the amplitude to be a complex number. If \(P\) is a probability, and \(A\) is the associated probability amplitude, then if \(A\) were restricted to be real, there would be only two possible values of \(A\), i.e., \(A=\sqrt{P}\) and \(A=-\sqrt{P}\). If we let \(A\) be complex, the relation between \(A\) and \(P\) is
\[|A|^2 =A^{*}A=P\]
and there is an infinite number of square roots of \(P\). To see this, consider writing \(A\) as
\[A=\sqrt{P}e^{i\theta}\]
where \(\theta\) is any number in the interval \([0, 2\pi]\). If we can show that \(A^{*}A=P\), then it will follow that any value of \(\theta \in [0, 2\pi]\) is allowable, which means that the number of possible amplitudes is infinite. The complex conjugate of \(A\) is
\[A^{*}=\sqrt{P}e^{-i\theta}\]
and we find
\[\begin{align*}A^{*}A &= (\sqrt{P}e^{-i\theta})(\sqrt{P}e^{i\theta})\\ &=e^{-i\theta +i\theta}(\sqrt{P})^2 \\ &=P\end{align*}\]
Now, suppose we have two interfering paths with amplitudes \(A_1\) and \(A_2\) (they should depend on \(x\), but for notational simplicity, we will suppress the \(x\) dependence). The total amplitude is \(A=A_1 +A_2\), and the corresponding probability is \(P=|A|^2\), which gives
\[P=A^{*}A=(A_{1}^{*} +A_{2}^{*})(A_1 +A_2)\]
Let \(A_1 =a_1 e^{i\theta_1}\) and \(A_2 =a_2 e^{i\theta_2}\), where \(a_1\) and \(a_2\) are real numbers. Then
\[\begin{align*}P&=(a_1 e^{-i\theta_1}+a_2 e^{-i\theta_2})(a_1 e^{i\theta_1} +a_2 e^{i\theta_2})\\&=a_{1}^{2}+a_{2}^{2}+a_1a_2(e^{i(\theta_1-\theta_2)}+e^{-i(\theta_1-\theta_2)})\\&=a_{1}^{2}+a_{2}^{2}+2a_1a_2\cos(\theta_1-\theta_2)\end{align*}\]
The last term is known as the interference term. The presence of the cosine in that term, which oscillates, suggests the oscillation in the interference pattern observed on the screen.
At this point, several comments are in order. It is tempting to try to impose either the wavelike picture or the many-paths picture on the experiment. Indeed, both of these pictures provide a useful physical picture that helps us understand the outcome of the experiment. In the wavelike picture, we can think of each electron that leaves the source as feeling the presence of both slits simultaneously, and therefore interfering with itself (rather than with other electrons). In the many-paths picture, each electron follows not one path in the path sum but all possible paths at once, and these paths interfere with each other. However, the infuriating thing about quantum mechanics is that we have no way of knowing what is taking place between the source and the detector. All we have is the observation that there is an interference pattern. Feynman's picture makes this rather manifest. The implications of his picture can be summarized as follows:
 Even within a particle-like interpretation of the experiment, particles do not have predictable positions and momenta along the paths. The reason for this is that the paths, themselves, are not predictable by any rule as they are in classical mechanics!
 If we could devise an experiment for measuring the position of the electron on the screen, we would find that different repetitions of the experiment on one electron initialized the same way would have different outcomes.
Thus, the best we can do from theory is to predict the probability of a given outcome of an experiment but not the actual outcome, itself.
The rationalizations of the three experiments we have examined, blackbody radiation, the photoelectric effect, and electron diffraction, lead us to conclude that classical mechanics, with its deterministic, predictable view of the universe, must be overthrown in favor of a much more radical theory, now known as quantum mechanics. It is interesting to note that the idea of probabilistic outcomes of experiments, and the fact that we can ONLY predict the probabilities, led Albert Einstein ultimately to reject quantum mechanics, saying: ``Gott spielt nicht Würfel'' (``God does not play dice'').
Simple statement of the postulates of quantum mechanics
Below, we summarize the basic postulates of quantum mechanics and contrast them with roughly equivalent postulates of classical mechanics.
Classical:
 Particles are point-like objects that follow predictable, deterministic paths with well-defined positions and momenta obtained by solving Newton's laws of motion.
 The energy of a system can take on any value.
 If the initial conditions of an experiment repeated many times are the same in each repetition, the outcome of the experiment will be the same for each repetition, and that outcome is predictable.
Quantum:
 Particles can exhibit wavelike or particle-like behavior, depending on the experiment. Even within the particle-like interpretation, particles do not follow well-defined, predictable paths and hence do not have well-defined positions and momenta.
 The energy can take on only certain discrete values.
 Even if a system is prepared in the same way for different repetitions of an experiment, the outcome need not be the same in each repetition. All that we can predict is the probability that a given outcome will be obtained.
Heisenberg's uncertainty principle
If particles cannot be assigned well-defined positions and momenta, then how are these two quantities related for a quantum particle? The fact that particles do not follow well-defined paths means that there must be a limit on how accurately we can determine the position and momentum of a particle, and this limit is a fundamental characteristic of the particle, itself, rather than a limit on our ability to perform an accurate enough measurement.
This idea of a fundamental limit to what is knowable about a quantum particle (or collection of quantum particles) was put forth by the physicist Werner Heisenberg in 1927. His principle is now one of the fundamental postulates of quantum mechanics and is known as the uncertainty principle or indeterminacy principle. Heisenberg's principle states that there are specific pairs of physical observables that cannot be simultaneously measured to arbitrary accuracy, i.e., there will be a fundamental limit to what we can know about two such observables simultaneously. Two such observables are said to be incompatible with each other. Dimensionally, such pairs are related so that their product has units of energy \(\times\) time. Thus, if \(A\) and \(B\) constitute such a pair, and if \(\Delta A\) and \(\Delta B\) are the uncertainties associated with these observables, then Heisenberg's uncertainty principle states
\[\Delta A\Delta B \geq \dfrac{1}{2}\hbar\]
where \(\hbar=h/2\pi\). Here, the uncertainties can be computed from the statistical uncertainty
\[\Delta A=\sqrt{\langle A^2\rangle -\langle A\rangle ^2}\]
where \(\langle A^2\rangle\) is the average of \(A^2\) over many realizations of an experiment, and \(\langle A\rangle\) is the average of \(A\) over many such realizations.
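The statistical uncertainty \(\Delta A=\sqrt{\langle A^2\rangle -\langle A\rangle ^2}\) is easy to evaluate from a list of repeated measurement outcomes. The sample values below are made up purely for illustration:

```python
# Sketch: the uncertainty Delta A as a statistical spread over many
# realizations, Delta A = sqrt(<A^2> - <A>^2).  The sample outcomes
# are invented for illustration.
import math

samples = [1.0, 2.0, 2.0, 3.0, 2.0, 1.0, 3.0, 2.0]   # outcomes of repeated runs

mean_A  = sum(samples) / len(samples)                 # <A>
mean_A2 = sum(a * a for a in samples) / len(samples)  # <A^2>
delta_A = math.sqrt(mean_A2 - mean_A**2)              # spread about the mean

print(delta_A)    # sqrt(0.5) ~ 0.707 for these samples
```

A narrow spread of outcomes gives a small \(\Delta A\); identical outcomes in every run would give \(\Delta A = 0\).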
Position and momentum constitute such a pair of observables. If \(\Delta x\) and \(\Delta p\) are the corresponding uncertainties, then according to Heisenberg's principle, the best we can do in measuring \(x\) and \(p\) simultaneously is to have uncertainties in our measurements related by
\[\Delta x\Delta p\geq \dfrac{1}{2}\hbar\]
If we wish to determine the position of an electron, then we need to probe it with a photon, i.e., scatter a photon off of it and observe where the scattering occurred. The accuracy of the measurement will be related to the wavelength \(\lambda\) of the photon. That is, if we wish to determine the location of the scattering event to within \(10^{-12}\) m (1% of the size of an atom), then we need \(\lambda =10^{-12}\) m, which, according to the energy formula \(E=hc/\lambda\), is a very energetic photon.
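Just how energetic such a photon is can be seen by evaluating \(E=hc/\lambda\) for \(\lambda = 10^{-12}\) m:

```python
# Sketch: energy of a photon with wavelength 10^-12 m, from E = h c / lambda.
h = 6.62607015e-34   # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s
lam = 1.0e-12        # wavelength, m

E = h * c / lam      # photon energy, J
print(E)             # ~2e-13 J (about 1.2 MeV), a very energetic photon
```

For comparison, typical chemical bond energies are on the order of \(10^{-19}\)–\(10^{-18}\) J, so this probe photon is millions of times more energetic.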
When the photon strikes the electron, it causes the electron to change its direction, as illustrated in the figure below:
If we accept that we can only predict the probability that a measurement of position will yield a particular outcome, then, of course, the same must hold for the particle's momentum. Thus, by using an energetic photon to localize the particle, we potentially also transfer a large amount of kinetic energy to the particle, which increases the range of possible momentum values that could be obtained if a subsequent measurement of momentum were to be carried out. This is why the position and momentum uncertainties must be inversely proportional:
\[\Delta p \geq \dfrac{\hbar}{2\Delta x}\]
Feynman's many-path picture of quantum mechanics can also help us understand this fundamental indeterminacy. According to Feynman, the picture of a single, definite scattered path is misleading, since the electron does not follow a definite path. Rather, the electron can follow complicated paths as illustrated below:
The more energetic the photon, the greater the spread in the possible scattered paths that the electron can follow, which translates into a greater spread in the distribution of scattered momenta. Since we have to sum over all possible paths, the uncertainty in the momentum will be quite large for such an energetic photon. So, even though we have a relatively accurate determination of the electron's location, we have a large uncertainty in the momentum, and this is why there is an inverse proportionality between \(\Delta x\) and \(\Delta p\):
\[\Delta p \geq \dfrac{\hbar}{2\Delta x}\]
We stress again, that the uncertainty or indeterminacy principle is a statement about an inherent limit to what we can know about a quantum system. This limit cannot be reduced simply by defining a better experiment.
Coulomb's Law
Given two charged particles with charges \(Q_1\) and \(Q_2\), there is an interaction between them that behaves as follows: Since charge can be positive or negative, opposite charges attract each other and like charges repel each other. The strength of the force between any pair of charges varies as the inverse square of the distance between them. These observations constitute Coulomb's Law. Coulomb's law is expressed as a force law of the form
\[F(r)=\dfrac{kQ_1 Q_2}{r^2} \label{1}\]
where \(k\) is a constant whose value is \(8.98755 \times 10^9 \ J\cdot m\cdot C^{-2}\). The direction of the force is always along the line joining the two charges. Thus, if the position of charge \(Q_1\) is \(\vec{r}_1\) and the position of charge \(Q_2\) is \(\vec{r}_2\), then let \(\vec{r}=\vec{r}_2 -\vec{r}_1\). Here, \(r=|\vec{r}|\), and the force can be written as a proper vector quantity
\[\vec{F}(\vec{r})=\dfrac{kQ_1 Q_2}{r^3}\vec{r} \label{2}\]
The force law comes from the following potential energy:
\[V(r)=\dfrac{kQ_1 Q_2}{r} \label{3}\]
and it can be easily verified that
\[F(r)=-\dfrac{dV}{dr} \label{4}\]
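The relation \(F(r)=-dV/dr\) can indeed be verified, even numerically, with a finite difference. The two elementary charges and the 1 Å separation below are illustrative choices, not values from the text:

```python
# Sketch: numerically check F(r) = -dV/dr for the Coulomb potential
# V(r) = k Q1 Q2 / r, using a central finite difference.  The charges
# and separation are illustrative choices.
k  = 8.98755e9        # J*m/C^2, value quoted in the text
Q1 = 1.602176634e-19  # C (one elementary charge)
Q2 = 1.602176634e-19  # C

def V(r):
    return k * Q1 * Q2 / r        # Coulomb potential energy

def F(r):
    return k * Q1 * Q2 / r**2     # Coulomb force law

r, h = 1.0e-10, 1.0e-16           # evaluation point (1 Angstrom) and step
F_numeric = -(V(r + h) - V(r - h)) / (2.0 * h)

print(F_numeric, F(r))            # both ~2.3e-8 N
```

The numerical derivative agrees with the force law to many digits, as it must.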
As \(r\) is varied, the energy will change, so that we have an example of a potential energy curve \(V(r)\), as discussed in lecture 3.
If \(Q_1\) and \(Q_2\) are the same sign, then the curve appears roughly as follows:
Figure 9:
which is a purely repulsive potential, i.e., the energy increases monotonically as the charges are brought together and decreases monotonically as they are separated. From this, it is easy to see that like charges (charges of the same sign) repel each other. If the charges are of opposite sign, then the curve appears roughly as:
Figure 10:
Thus, the energy decreases as the charges are brought together, implying that opposite charges attract.
The Bohr model: An early attempt to predict the energy levels of an atom
It is observed that hydrogen will absorb and emit light at only discrete wavelengths as the figure below shows:
Figure 11: From http://www.solarobserving.com
This observation is connected to the discrete nature of the allowed energies of a quantum mechanical system. Quantum mechanics postulates that, in contrast to classical mechanics, the energy of a system can only take on certain discrete values. This leaves us with the question: How do we determine what these allowed discrete energy values are? After all, it seems that Planck's formula for the allowed energies of a harmonic oscillator came out of nowhere. The model we will describe here, due to Niels Bohr in 1913, is an early attempt to predict the allowed energies for single-electron atoms such as \(H\), \(He^{+}\), \(Li^{2+}\), \(Be^{3+}\),... Although Bohr's reasoning relies on classical concepts and hence, is not a correct explanation, the reasoning is interesting, and so we examine this model for its historical significance.
Consider a nucleus with charge \(+Ze\) and one electron orbiting the nucleus. In this analysis, we will use another representation of the constant \(k\) in Coulomb's law. This constant is more commonly represented in the form
\[k=\dfrac{1}{4\pi \epsilon_0}\]
where \(\epsilon_0\) is known as the permittivity of free space, which has the numerical value \(\epsilon_0 = 8.8541878\times 10^{-12} \ C^2 J^{-1} m^{-1}\). The energy of the electron (the nucleus is assumed to be fixed in space at the origin) is
\[E=\dfrac{p^2}{2m_e}-\dfrac{Ze^2}{4\pi \epsilon_0 r}\]
The force on the electron is
\[\vec{F}=-\dfrac{Ze^2}{4\pi \epsilon_0 r^3}\vec{r}\]
and its magnitude is
\[F=|\vec{F}|=\dfrac{Ze^2}{4\pi \epsilon_0 r^3}r=\dfrac{Ze^2}{4\pi \epsilon_0 r^2}\]
Since \(\vec{F}=m_e \vec{a}\), it follows that the magnitudes satisfy \(F=m_e a\). If we assume that the orbit is circular (which is an approximation because the orbit is really elliptical), then the acceleration is purely centripetal, so
\[a=\dfrac{v^2}{r}\]
where \(v\) is the velocity of the electron. Equating force \(F\) to \(m_e a\), we obtain
\[\dfrac{Ze^2}{4\pi \epsilon_0 r^2}=m_e\dfrac{v^2}{r}\]
or
\[\dfrac{Ze^2}{4\pi \epsilon_0}=m_e v^2 r\]
or
\[\dfrac{Ze^2 m_e r}{4\pi \epsilon_0}=(m_e vr)^2\]
The reason for writing the equation this way is that the quantity \(m_e vr\) is the classical orbital angular momentum of the electron. Bohr was familiar with Maxwell's theory of classical electromagnetism and knew that in a classical theory, the orbiting electron should radiate energy away and eventually collapse into the nucleus. He circumvented this problem by following Planck's idea and positing that the orbital angular momentum \(m_e vr\) could only take on specific values
\[m_e vr=n\hbar \ ; \ n=1,2,3,...\]
Note that the electron must be in motion, so \(n=0\) is not allowed.
Substituting this into the Newton's law expression above, we find
\[\dfrac{Ze^2 m_e r}{4\pi \epsilon_0}=n^2 \hbar^2\]
This expression implies that the orbits Bohr was imagining could only have certain allowed radii given by
\[\begin{align*}r_n &= \dfrac{4\pi \epsilon_0 \hbar^2}{Ze^2 m_e}n^2 \ ; \ n=1,2,3,...\\ &=\dfrac{a_0}{Z}n^2\end{align*}\]
where the collection of constants has been defined to be \(a_0\)
\[a_0=\dfrac{4\pi \epsilon_0 \hbar^2}{e^2 m_e}\]
a quantity that is known as the Bohr radius.
We can also calculate the allowed momenta since \(m_e vr=n\hbar\), and \(p=m_e v\). Thus,
\[\begin{align*}p_n r_n &=n\hbar\\p_n&=\dfrac{n\hbar}{r_n}\\p_n&=\dfrac{\hbar Z}{a_0 n}=\dfrac{Ze^2 m_e}{4\pi \epsilon_0 \hbar n}\end{align*}\]
From \(p_n\) and \(r_n\), we can calculate the allowed energies from
\[E_n=\dfrac{p^2_n}{2m_e}-\dfrac{Ze^2}{4\pi \epsilon_0 r_n}\]
Substituting in the expressions for \(p_n\) and \(r_n\) and simplifying gives
\[E_n=-\dfrac{Z^2 e^4 m_e}{32\pi^2 \epsilon_{0}^{2}\hbar^2}\dfrac{1}{n^2}=-\dfrac{Z^2 e^4 m_e}{8 \epsilon_{0}^{2}h^2}\dfrac{1}{n^2}\]
The constant
\[\dfrac{e^4 m_e}{8\epsilon_{0}^{2} h^2}\]
is an energy having the value \(2.18\times 10^{-18} \ J\). Since this is a small unit, we define a new energy scale by defining the Rydberg as \(1 \ Ry=2.18\times 10^{-18} \ J\). Thus, the allowed energies predicted by the Bohr model are
\[E_n=-(2.18\times 10^{-18})\dfrac{Z^2}{n^2} \ J=-\dfrac{Z^2}{n^2} \ Ry\]
These turn out to be the correct energy levels, apart from small corrections that cannot be accounted for in this pseudoclassical treatment. Despite the fact that the energies are essentially correct, the Bohr model masks the true quantum nature of the electron, which only emerges from a fully quantum mechanical analysis of this problem. The energies predicted by the Bohr model are plotted in the figure below. The diagram on the right is called an energy level diagram.
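The energy formula can be evaluated directly from the constants given earlier in these notes; for hydrogen (\(Z=1\)), the ground state should come out to about \(-2.18\times 10^{-18}\) J, i.e., \(-1\) Ry:

```python
# Sketch: evaluate the Bohr-model energies
# E_n = -(Z^2 e^4 m_e)/(8 eps0^2 h^2) * 1/n^2 for hydrogen (Z = 1).
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878e-12     # permittivity of free space, C^2 J^-1 m^-1
h    = 6.62607015e-34    # Planck's constant, J*s

def E_n(n, Z=1):
    return -(Z**2 * e**4 * m_e) / (8.0 * eps0**2 * h**2) / n**2

print(E_n(1))    # ~ -2.18e-18 J, i.e. -1 Ry (hydrogen ground state)
print(E_n(2))    # ~ -5.4e-19 J, i.e. -1/4 Ry
```

Note how the levels scale as \(1/n^2\): the spacing between successive levels shrinks rapidly as \(n\) grows, which matches the converging line series in the hydrogen spectrum.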

Given a prediction of the allowed energies of a system, how could we go about verifying them? The general experimental technique known as spectroscopy permits us to probe the various differences between the allowed energies. Thus, if the prediction of the actual energies, themselves, is correct, we should also be able to predict these differences.
In the final section of these notes, we discuss the Beer-Lambert law, and we will have more to say about spectroscopy later in the course. For now, let us assume that we are able to place the electron in Bohr's hydrogen atom into an energy state \(E_n\) for \(n>1\), i.e., one of its so-called excited states. The electron will rapidly return to its lowest energy state, known as the ground state, and, in doing so, emit light. The energy carried away by the light is determined by the condition that the total energy is conserved. Thus, if \(n_i\) is the integer that characterizes the initial (excited) state of the electron, and \(n_f\) is the final state (here we imagine that \(n_f =1\), but as long as \(n_f <n_i\), this analysis remains valid), then by energy conservation, we must have that the frequency \(\nu\) of the emitted light satisfies
\[E_{n_f}=E_{n_i}-h\nu\]
or
\[\nu=\dfrac{E_{n_i}-E_{n_f}}{h}=\dfrac{Z^2 e^4 m_e}{8\epsilon_{0}^{2} h^3}\left ( \dfrac{1}{n_{f}^{2}}-\dfrac{1}{n_{i}^{2}}\right )\]
Figure: A simple illustration of Bohr's model of the atom, with an electron making quantum leaps. Figure used with permission from Wikipedia
Thus, by observing the emitted light, we can determine the energy difference between the initial and final energy levels. This is known as emission spectroscopy. Different values of \(n_f\) determine which emission spectrum is observed, and the examples shown in the figure are named after the individuals who first observed them. The figure below shows some of the transitions possible for different \(n_f\) and \(n_i\) values.
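The emission frequencies are straightforward to evaluate numerically. The sketch below uses the Rydberg energy from the text together with standard values of Planck's constant and the speed of light (these constants are not quoted in the text):

```python
# Emission wavelength predicted by the Bohr model:
#   nu = (E_ni - E_nf)/h,  lambda = c/nu.
H = 6.626e-34    # Planck constant, J s (standard value)
C = 2.998e8      # speed of light, m/s (standard value)
RY_J = 2.18e-18  # Rydberg energy from the text, J

def emission_wavelength(n_i, n_f, Z=1):
    """Wavelength (m) of the photon emitted in the n_i -> n_f transition."""
    assert n_i > n_f, "emission requires n_i > n_f"
    nu = RY_J * Z**2 * (1/n_f**2 - 1/n_i**2) / H
    return C / nu

# First line of the Balmer series (n=3 -> n=2), in the visible region:
print(f"{emission_wavelength(3, 2)*1e9:.0f} nm")  # ~656 nm (red)
```

Transitions ending on \(n_f=1\) (the Lyman series) give shorter, ultraviolet wavelengths, while \(n_f=2\) (Balmer) produces the familiar visible lines of hydrogen.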
If, on the other hand, the atom absorbs light, it ends up in an excited state as a result of the absorption. Absorption is only possible for light of certain frequencies, and again, conservation of energy determines what these frequencies are. If light is absorbed, then the final energy \(E_{n_f}\) will be related to the initial energy \(E_{n_i}\) with \(n_f >n_i\) by
\[E_{n_f}=E_{n_i}+h\nu\]
or
\[\nu=\dfrac{E_{n_f}-E_{n_i}}{h}=\dfrac{Z^2 e^4 m_e}{8\epsilon_{0}^{2}h^3}\left ( \dfrac{1}{n_{i}^{2}}-\dfrac{1}{n_{f}^{2}}\right )\]
Measuring spectra: The Beer-Lambert law
The figure below shows the experimental setup for taking a spectrum:
Radiation from a source is passed through a monochromator, which filters out all frequency components except one, generating monochromatic radiation of a single frequency \(\nu\). The beam is then split into two by a beamsplitter. One of the two rays is allowed to pass through a cell containing the sample whose spectrum is sought. The other is allowed to pass through a reference cell, which is just the cell without the sample; we will come back to the role of the reference cell later. Let \(I_0\) be the intensity of radiation entering the sample, and \(I_S\) the intensity of radiation emerging from it. The output beam with intensity \(I_S\) is then sent to a photodetector that measures the actual intensity; the photosignal is then amplified and sent to a recording device, where a final readout of the intensity at frequency \(\nu\) is noted.
Let us first analyze what happens as the beam is sent through the sample. Let the direction of propagation of the radiation through the sample be the \(x\) direction, and let the sample extend from \(x=0\) to \(x=l\) on the \(x\) axis. Thus, \(I_0\) is the intensity at \(x=0\). We wish to determine the intensity \(I(x)\) as the beam passes through the sample. As it does, there will be an attenuation of intensity due to absorption events. Remember that the beam contains a very large number of photons, and the sample contains a very large number of molecules, so we are interested in a measure of the number of photons absorbed vs. the number of photons that pass through without being absorbed. This gives us a measure of the attenuation in the intensity, since intensity is directly proportional to the number of photons passing through a given area of the sample at any instant in time.
We expect the following: if \(I(x)\) is the intensity at point \(x\) in the sample, then the fractional loss of intensity \(-dI/I(x)\) when the beam passes through a length \(dx\) of the sample will be proportional to the concentration \(C\) of the sample as well as to \(dx\):
\[-\dfrac{dI}{I(x)}\propto C\,dx\]
The constant of proportionality is denoted \(\varepsilon'\) and is called the molar extinction coefficient:
\[\dfrac{dI}{I(x)}=-C\varepsilon'\,dx\]
(The reason for the prime will be clarified below.) The units of the molar extinction coefficient are \(L\cdot mol^{-1}\cdot m^{-1}\). Rearranging this as a simple first-order differential equation gives
\[\dfrac{dI}{dx}=-C\varepsilon' I(x)\]
The solution gives us the intensity \(I(x)\) as a function of \(x\) through the sample:
\[I(x)=I(x=0)e^{-C\varepsilon' x}=I_0 e^{-C\varepsilon' x}\]
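This exponential decay can be verified numerically by integrating the differential equation in small steps and comparing with the closed-form solution. The concentration, extinction coefficient, and path length below are hypothetical illustrative values, not taken from the text:

```python
import math

# Euler integration of dI/dx = -C * eps' * I(x), checked against
# the analytic solution I(x) = I0 * exp(-C * eps' * x).
C_CONC = 0.01   # concentration, mol/L (hypothetical)
EPS_P = 50.0    # molar extinction coefficient eps', L mol^-1 m^-1 (hypothetical)
I0 = 1.0        # incident intensity, arbitrary units
L = 0.01        # path length, m

steps = 100_000
dx = L / steps
I = I0
for _ in range(steps):
    I += -C_CONC * EPS_P * I * dx   # dI = -C eps' I dx

exact = I0 * math.exp(-C_CONC * EPS_P * L)
print(I, exact)  # the two values agree to many decimal places
```

With these numbers, \(C\varepsilon' l = 0.005\), so only about 0.5% of the beam is absorbed over the full path.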
Taking the natural logarithm of both sides gives
\[\ln\left ( \dfrac{I(x)}{I_0}\right )=-C\varepsilon' x\]
When \(x=l\), \(I(x)=I_S\), so we can write this as
\[\ln\left ( \dfrac{I_S}{I_0}\right ) =-C\varepsilon' l\]
or
\[\ln\left ( \dfrac{I_0}{I_S}\right ) =C\varepsilon' l\]
The common form of the Beer-Lambert law uses base-10 logarithms rather than natural logarithms and redefines the extinction coefficient as \(\varepsilon =\varepsilon' /2.303\), which gives
\[\log_{10} \left ( \dfrac{I_0}{I_S}\right ) =C\varepsilon l\]
The attenuation of the beam through the sample will be due only in part to the material whose spectrum is sought. There is another contribution from the cell itself and whatever material it is composed of. This is where the reference beam comes in. By observing the attenuation of the beam through the reference cell, we can determine how much of the attenuation is due to the cell alone. Thus, we are interested not so much in the ratio \(I_S /I_0\) as in the ratio \(I_S /I_R\), where \(I_R\) is the intensity of the beam that emerges, partly attenuated, from the reference cell. For the ratio \(I_S /I_R\), we assign the molar extinction coefficient \(\varepsilon\) and write
\[\log_{10}\left ( \dfrac{I_R}{I_S}\right ) =C\varepsilon l\]
The quantity \(\log_{10} (I_R /I_S)\) is called the absorbance \(A\), while the ratio \(I_S /I_R\) itself is called the transmittance \(T\); note that \(A=-\log_{10} T\). Thus, we have finally
\[A=C\varepsilon l\]
This result is known as the Beer-Lambert law, and it is one of the fundamental principles of molecular spectroscopy. The extinction coefficient \(\varepsilon\) measures the extent to which the sample is able to absorb radiation at a given frequency \(\nu\) or wavelength \(\lambda\) (remember we can use either, since \(\nu =c/\lambda\)). Hence, it is an intrinsic property of the material. We will revisit the Beer-Lambert law when we cover spectroscopy in greater detail toward the end of the semester.
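In practice, the Beer-Lambert law is most often used to determine an unknown concentration from a measured absorbance. A minimal sketch, with entirely hypothetical values for the extinction coefficient, path length, and absorbance:

```python
# Solving A = eps * C * l for the concentration C.
# All numerical values here are hypothetical, for illustration only.
EPS = 1.2e4   # molar extinction coefficient, L mol^-1 cm^-1 (hypothetical)
L_CM = 1.0    # path length of the cell, cm
A = 0.30      # measured absorbance (dimensionless)

conc = A / (EPS * L_CM)         # concentration in mol/L
print(f"C = {conc:.2e} mol/L")  # 2.50e-05 mol/L
```

Because \(A\) is linear in \(C\), a calibration curve of absorbance vs. known concentrations is a straight line through the origin with slope \(\varepsilon l\), which is how \(\varepsilon\) is usually determined experimentally.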