3.6: Time-Dependent Perturbation Theory
Time-dependent perturbation theory refers to calculating the time evolution of a system by truncating the expansion of the interaction picture time-evolution operator after a certain term. In practice, truncating the full time-propagator \(U\) is not effective, and only works for times that are short compared to the inverse of the energy splitting between coupled states. The interaction picture applies to Hamiltonians that can be cast as \(H(t)\; = \;H_{0} + V(t)\), which allows us to treat the time evolution under \(H_0\) exactly and truncate only the influence of \(V(t)\). This works well for weak perturbations. Let’s look more closely at this.
We know the eigenstates for \(H_0\) from solving the TISE, \(H_{0}\left| n \right\rangle \; = \;E_{n}\left| n \right\rangle\), and we can calculate the evolution of the wavefunction that results from \(V(t)\):
\[\left| \psi _{I}(t) \right\rangle \; = \;\sum\limits_{n} b_{n}(t)\left| n \right\rangle \label{eq3.6.1}\]
For a given state \(k\), we calculate \(b_{k}\) as:
\[b_{k}(t)\; = \;\left\langle k\left| U_{I}(t,t_{0}) \right|\psi (t_{0}) \right\rangle \label{eq3.6.2}\]
where\[U_{I}\left( t,t_{0} \right)\; = \;\exp _{ + }\left[ \frac{ - i}{\hbar }\;\int_{t_{0}}^{t} V_{I}\left( \tau \right)\,d\tau \right]\label{eq3.6.3}\]
We can truncate the expansion of \(U_I\) after a few terms. This works well for small changes in the probability amplitude of the states involved, \(b_k(t) \approx b_k\left(0\right)\), and therefore small coupling relative to their energy splittings, \(|V|\ll\left|E_k-E_n\right|\). As we will see, the results we obtain from perturbation theory are widely used for spectroscopy, condensed phase dynamics, and relaxation.
Let’s take the specific case where we have a system prepared in state \(\left| \ell \right\rangle\), and we want to know the probability density \(P_k(t)=\left|b_k(t)\right|^2\), the probability of observing the system in \(\left| k \right\rangle\) at time \(t\) due to the external potential. Expanding eq. (\ref{eq3.6.2}) we find
\[\begin{array}{rl} b_{k}(t) \kern-.75em & \displaystyle = \left\langle {k} | {\ell } \right\rangle - \frac{i}{\hbar }\int_{t_{0}}^{t} d\tau \,\left\langle k\left| V_{I}\left( \tau \right) \right|\ell \right\rangle \\
& \displaystyle + \left( \frac{ - i}{\hbar } \right)^{2}\;\int_{t_{0}}^{t} d\tau _{2}\;\int_{t_{0}}^{\tau _{2}} d\tau _{1} \left\langle k\left| V_{I}\left( \tau _{2} \right)V_{I}\left( \tau _{1} \right) \right|\ell \right\rangle \; + \; \ldots \end{array}\label{eq3.6.4}\]
Now, we can express the matrix elements in the interaction picture as
\[\left\langle k\left| V_{I}(t) \right|\ell \right\rangle \; = \;\left\langle k\left| U_0^{\dagger}\;V(t)\;U_{0} \right|\ell \right\rangle \; = \;e^{ - i\omega _{\ell k}t}\;V_{k\ell }(t)\label{eq3.6.5}\]
and from this we obtain:
\[\begin{array} {rll}b_{k}(t)\; = \;\delta _{k\ell }\; - \;\frac{i}{\hbar }\int_{t_{0}}^{t} d\tau _{1}\;e^{ - i\omega _{\ell k}\tau _{1}}V_{k\ell }\left( \tau _{1} \right) & \text{“first-order”}
\end{array} \label{eq3.6.6} \]
\[\begin{array}[t]{rll} & + \sum\limits_{m} \left( \frac{ - i}{\hbar } \right)^{2} \int_{t_{0}}^{t} d\tau _{2}\;\int_{t_{0}}^{\tau _{2}} d\tau _{1}\;e^{ - i\omega _{mk}\tau _{2}}\;V_{km}\left( \tau _{2} \right)\;e^{ - i\omega _{\ell m}\tau _{1}}\; V_{m\ell }\left( \tau _{1} \right)\quad & \text{“second-order”} \\[6pt] &+ \; \ldots & \end{array}\label{eq3.6.7}\]
The first-order term allows only direct transitions between \(\left| \ell \right\rangle\) and \(\left| k \right\rangle\), as allowed by the matrix element in \(V\), whereas the second-order term accounts for transitions occurring through an intermediate state \(\left| m \right\rangle\). For perturbation theory, the time-ordered integral is truncated at the appropriate order. Including only eq. (\ref{eq3.6.6}) is known as first-order perturbation theory, whereas including all terms up to eq. (\ref{eq3.6.7}) is second-order perturbation theory. The order of perturbation theory used in a particular calculation should be chosen to account for the types of intermediate pathways that need to be allowed between \(\left| \ell \right\rangle\) and \(\left| k \right\rangle\), keeping in mind the relative magnitude of the interactions between states.
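To make the structure of these truncated, time-ordered integrals concrete, here is a minimal Python sketch (my own illustration, not part of the original notes) that evaluates the first- and second-order contributions to \(b_k(t)\) by direct numerical quadrature. The three level energies, the coupling pattern, and the Gaussian pulse envelope are arbitrary assumptions chosen only for illustration, with units such that \(\hbar = 1\).

```python
# Sketch: numerically evaluate the first- and second-order time-ordered
# integrals of eqs. (3.6.6)-(3.6.7) for an assumed Gaussian-pulse coupling
# among three levels. All parameters below are illustrative assumptions.
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.0, 2.1])                 # assumed eigenvalues of H0 (three levels)
w = (E[:, None] - E[None, :]) / hbar          # w[a, b] = omega_ab = (E_a - E_b)/hbar

t = np.linspace(-20.0, 20.0, 4001)
dt = t[1] - t[0]
pulse = 0.05 * np.exp(-t**2 / (2 * 2.0**2))   # assumed Gaussian envelope of V(t)
coup = np.array([[0.0, 1.0, 0.3],             # assumed (Hermitian) coupling pattern
                 [1.0, 0.0, 1.0],
                 [0.3, 1.0, 0.0]])
Vt = pulse[:, None, None] * coup              # V(t) in the H0 basis, shape (Nt, 3, 3)

l, k = 0, 2                                   # initial state |l>, target state |k>

# first order, eq. (3.6.6): b1 = -(i/hbar) * integral of e^{+i w_kl tau} V_kl(tau)
b1 = -1j / hbar * np.sum(np.exp(1j * w[k, l] * t) * Vt[:, k, l]) * dt

# second order, eq. (3.6.7): nested time-ordered integral over intermediate states m
b2 = 0.0 + 0.0j
for m in range(len(E)):
    inner = np.cumsum(np.exp(1j * w[m, l] * t) * Vt[:, m, l]) * dt   # integral up to tau_2
    b2 += (-1j / hbar) ** 2 * np.sum(np.exp(1j * w[k, m] * t) * Vt[:, k, m] * inner) * dt

print("|b_k|^2 to first order   :", abs(b1) ** 2)
print("|b_k|^2 through 2nd order:", abs(b1 + b2) ** 2)
```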
For first-order perturbation theory, the expression in eq. (\ref{eq3.6.6}) is the solution to the differential equation that we get for direct coupling between \(\left| \ell \right\rangle\) and \(\left| k \right\rangle\):
\[\frac{\partial }{\partial t}b_{k}\; = \;\frac{ - i}{\hbar }\;\;e^{ - i\omega _{\ell k}t}\;V_{k\ell }\left( t \right)\;b_{\ell }\left( 0 \right)\label{eq3.6.8}\]
The solution does not allow for feedback between \(\left| \ell \right\rangle\) and \(\left| k \right\rangle\), and as a result it neglects the changing amplitude in these states. It is therefore only valid in describing the initial rate of transfer out of state \(|\ell \rangle\), such that \(\left| b_{k}\left( t \right) \right|^{2} - \left| b_{k}\left( 0 \right) \right|^{2} \ll 1\). If the initial state of the system \(\left| \psi _{0} \right\rangle\) is not an eigenstate of \(H_0\), we can express it as a superposition of eigenstates, \(\left| \psi_0 \right\rangle = \sum_n b_n(0) \left| n \right\rangle\), with
\[b_{k}\left( t \right) = \;\sum\limits_{n} b_{n}\left( 0 \right)\left\langle k \right|U_{I}\left| n \right\rangle \label{eq3.6.9}\]
Another observation applies to first-order perturbation theory. If the system is initially prepared in a state \(\left| \ell \right\rangle\), and a time-dependent perturbation is turned on and then turned off over the time interval \(t = - \infty\) to \(+ \infty\), then the complex amplitude \(b_k\left( t \right)\) in the target state \(\left| k \right\rangle\) can be obtained by taking the Fourier transform of \(V_{k\ell }\left( t \right)\) and evaluating it at the transition frequency \(\omega_{k\ell}\).
\[b_{k}\left( t \right)\; = - \;\frac{i}{\hbar }\int_{ - \infty }^{ + \infty } d\tau \;e^{ - i\omega _{\ell k}\tau }\,V_{k\ell }\left( \tau \right) \label{eq3.6.10}\]
That is, if the Fourier transform pair is defined in the following manner:
\[\tilde V\left( \omega \right) \equiv \tilde{\mathscr{F}}\left[ V\left( t \right) \right] = \int_{ - \infty }^{ + \infty } dt\;V \left( t \right)\exp \left( i\omega t \right)\label{eq3.6.11}\]\[V\left( t \right) \equiv \tilde{\mathscr{F}}^{ - 1}\left[ \tilde V\left( \omega \right) \right] = \frac{1}{2\pi }\int_{ - \infty }^{ + \infty } d\omega \;\tilde V\left( \omega \right) \exp \left( - i\omega t \right)\label{eq3.6.12}\]
Then, comparing eq. (\ref{eq3.6.10}) with the definition in eq. (\ref{eq3.6.11}) and squaring the result, we can write the probability of transfer to state \(k\) as
\[P_{k\ell } = \frac{\left| {\tilde V}_{k\ell }\left( \omega _{k\ell } \right) \right|^{2}}{\hbar ^{2}}\label{eq3.6.13}\]
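The following short sketch (an illustration, not from the original notes) checks this Fourier-transform relationship numerically for an assumed Gaussian pulse \(V_{k\ell}(t)\): the direct evaluation of eq. (\ref{eq3.6.10}), the transform of eq. (\ref{eq3.6.11}) evaluated at \(\omega_{k\ell}\), and the analytic Gaussian transform all give the same transfer probability. The frequency, width, and strength are arbitrary assumptions, with \(\hbar = 1\).

```python
# Sketch: verify that eq. (3.6.10) equals the Fourier transform of V_kl(t)
# (eq. (3.6.11)) evaluated at the transition frequency, as used in eq. (3.6.13).
import numpy as np

hbar = 1.0
w_kl = 2.0                                   # assumed transition frequency omega_kl
sigma, V0 = 0.4, 0.1                         # assumed pulse width and coupling strength

t = np.linspace(-40.0, 40.0, 20001)
dt = t[1] - t[0]
V_kl = V0 * np.exp(-t**2 / (2 * sigma**2))   # assumed Gaussian V_kl(t)

# eq. (3.6.10); note that e^{-i w_lk t} = e^{+i w_kl t}
b_k = -1j / hbar * np.sum(np.exp(1j * w_kl * t) * V_kl) * dt

# eq. (3.6.11): Fourier transform of V_kl(t), evaluated at omega = omega_kl
V_tilde = np.sum(V_kl * np.exp(1j * w_kl * t)) * dt

print("|b_k|^2 from eq. (3.6.10)     :", abs(b_k) ** 2)
print("|V~_kl(w_kl)|^2 / hbar^2      :", abs(V_tilde) ** 2 / hbar ** 2)
print("analytic Gaussian FT, squared :",
      (V0 * np.sqrt(2 * np.pi) * sigma * np.exp(-w_kl**2 * sigma**2 / 2)) ** 2 / hbar ** 2)
```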
Example: First-order Perturbation Theory
Let’s consider a simple model for vibrational excitation induced by the rapid compression of a harmonic oscillator illustrated in Figure 3.6.1. We will subject a harmonic oscillator initially in its ground state, \(\left|\ell\right\rangle=\left|0\right\rangle\), to a Gaussian compression pulse which increases its force constant \(\kappa\) from \(\kappa_0\) to \(\kappa_0+{\delta\kappa}_0\), and determine the probability of population transfer to the other oscillator levels, \(|n\rangle\).

The complete time-dependent Hamiltonian is:
\[H\left( t \right)\;\; = \;\;T + V\left( t \right) = \;\;\frac{p^{2}}{2m}\,\; + \;\,\frac{1}{2}\kappa \left( t \right)x^{2}\label{eq3.6.14}\]
where
\[\kappa \left( t \right) = \kappa _{0} + \delta \kappa \left( t \right)\label{eq3.6.15}\]
\[\delta \kappa \left( t \right) = \delta \kappa _{0}\exp \left( - \frac{\left( t - t_{0} \right)^{2}}{2\sigma ^{2}} \right)\;\;\label{eq3.6.16}\]
Here \(\sigma\) is the width in time of the Gaussian perturbation as a standard deviation. Now, we use eq. (\ref{eq3.6.14}) to partition the Hamiltonian as \(H\; = \;H_{0}\; + \;V\left( t \right)\) in such a manner that we can write \(H_0\) as a time-independent harmonic oscillator Hamiltonian.
\[H\; = \;\underbrace {\frac{p^{2}}{2m} + \frac{1}{2}\kappa _{0}x^{2} \vphantom{\left(\frac{\left( t_{0} \right)^{2}}{\sigma ^{2}} \right)}}_{\Large H_{0}}\; + \underbrace {\;\frac{1}{2}\delta \kappa _{0}x^{2}\exp \left( - \frac{\left( t - t_{0} \right)^{2}}{2\sigma ^{2}} \right)}_{\Large V(t)}\label{eq3.6.17}\]
So, we know the eigenstates and eigenvalues of \(H_0\): \(H_{0}\left| n \right\rangle = E_{n}\left| n \right\rangle\)
\[H_{0} = \hbar \Omega \left( a^{\dagger}a + \frac{1}{2} \right)\label{eq3.6.18}\]\[E_{n} = \hbar \Omega \left( n + \frac{1}{2} \right)\label{eq3.6.19}\]
Now we ask, if the system is in \(\left| 0 \right\rangle\) before applying the perturbation, what is the probability of finding it in state \(\left| n \right\rangle\) after the perturbation? For \(n \ne 0:\)
\[b_{n}\left( t \right)\; = \;\frac{ - i}{\hbar }\int_{t_{0}}^{t} d\tau \;\;V_{n0}\left( \tau \right)\; \,e^{i\omega _{n0}\tau }\label{eq3.6.20}\]
Using \(\omega _{n0} = { \left( E_{n} - E_{0} \right) / \hbar } = n\Omega\), and recognizing that we can set the limits to \(t_{0} = - \infty\) and \(t = +\infty\),
\[b_{n}\; = \frac{ - i}{2\hbar }\delta \kappa _{0}\left\langle n\,\left| x^{2} \right|\,0 \right\rangle \;\int_{ - \infty }^{ + \infty } d\tau \;\;e^{in\Omega \tau }e^{ - { \tau ^{2} / 2\sigma ^{2} }}\label{eq3.6.21}\]
This leads to \[b_{n}\; = \;\frac{ - i}{2\hbar }\;\delta \kappa _{0}\sqrt {2\pi } \sigma \left\langle n\,\left| x^{2} \right|\,0 \right\rangle \;e^{ - n^{2}\sigma ^{2}\Omega ^{2}/2}\label{eq3.6.22}\]
Here we made use of an important identity for Gaussian integrals:
\[\int_{ - \infty }^{ + \infty } \;\exp \left( ax^{2} + bx + c \right)dx \; = \;\sqrt {\frac{ - \pi }{a}\;} \exp \left( c - \frac{1}{4}\frac{b^{2}}{a} \right)\label{eq3.6.23}\]
\[\int_{ - \infty }^{ + \infty } \;\exp \left( - ax^{2} + ibx \right)dx \; = \sqrt {\frac{\pi }{a}} \exp \left( - \frac{b^{2}}{4a} \right)\label{eq3.6.24}\]
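As a quick sanity check (not part of the original notes), the identity in eq. (\ref{eq3.6.24}) can be verified numerically; the values of \(a\) and \(b\) below are arbitrary illustrative choices.

```python
# Sketch: numerically verify the Gaussian integral identity of eq. (3.6.24).
import numpy as np
from scipy.integrate import quad

a, b = 1.3, 2.7                                   # arbitrary illustrative values (a > 0)
re, _ = quad(lambda x: np.exp(-a * x**2) * np.cos(b * x), -np.inf, np.inf)
im, _ = quad(lambda x: np.exp(-a * x**2) * np.sin(b * x), -np.inf, np.inf)
print("numerical integral :", re + 1j * im)       # imaginary part vanishes by symmetry
print("closed form        :", np.sqrt(np.pi / a) * np.exp(-b**2 / (4 * a)))
```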
Note that eq. (\ref{eq3.6.21}) takes the form of a Fourier transform. Taking the modulus squared of both sides therefore provides an example of eq. (\ref{eq3.6.13}).
Now let’s evaluate the matrix element, which we can expand in raising and lowering operators:
\[x^{2} = \frac{\hbar }{2m\Omega }\left( a + a^{\dagger} \right)^{2} = \frac{\hbar }{2m\Omega }\left( aa + a^{\dagger}a + aa^{\dagger} + a^{\dagger}a^{\dagger} \right)\label{eq3.6.25}\]
From this we see that first-order perturbation theory will not allow transitions from \(n=0\) to \(n=1\), only to \(n=2\): acting on \(\left|0\right\rangle\), only the \(a^{\dagger}a^{\dagger}\) term transfers amplitude to another state, and it changes the quantum number by two. Generally, this would not be realistic, because we would certainly expect excitation to \(n=1\) to dominate over excitation to \(n=2\). A real system would also be anharmonic, in which case the term in the expansion of the perturbation that is linear in \(x\) would not vanish as it does for a harmonic oscillator, and this would lead to matrix elements that raise and lower the excitation by one quantum.
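A short numerical illustration of this selection rule (my own sketch, with \(\hbar = m = \Omega = 1\) assumed): building truncated ladder operators shows that \(\left\langle n \left| x^2 \right| 0 \right\rangle\) is nonzero only for \(n = 0\) and \(n = 2\).

```python
# Sketch: <n|x^2|0> in a truncated harmonic oscillator basis (hbar = m = Omega = 1).
import numpy as np

N = 12                                          # basis truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator: a|n> = sqrt(n)|n-1>
x = np.sqrt(0.5) * (a + a.T)                    # x = sqrt(hbar/2mOmega)(a + a^dagger)
x2 = x @ x

for n in range(4):
    print(f"<{n}|x^2|0> = {x2[n, 0]:.4f}")
# only <0|x^2|0> (diagonal) and <2|x^2|0> are nonzero;
# <2|x^2|0> = sqrt(2)/2 in these units, i.e. sqrt(2) * hbar/(2 m Omega)
```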
However, for the present case, the only matrix element for transfer of amplitude out of \(n=0\) is
\[\left\langle 2\,\left| x^{2} \right|\,0 \right\rangle = \sqrt {2} \frac{\hbar }{2m\Omega }\label{eq3.6.26}\]
So,\[b_{2} = \frac{ - i\,\sqrt {\pi \,} \delta \kappa _{0}\,\sigma }{2m\Omega }\;e^{ - 2\sigma ^{2}\Omega ^{2}}\label{eq3.6.27}\]
Recognizing that \(\kappa _{0} = m\,\Omega ^{2}\) allows us to write the probability of occupying the \(n = 2\) state after the pulse, i.e. the population of state \(|2\rangle\), as
\[P_{2} = \left| b_{2} \right|^{2} = \frac{\;\pi }{4}\left( \frac{\delta \kappa _{0}}{\kappa _{0}} \right)^{2}\Omega ^{2}\sigma ^{2}e^{ - 4\sigma ^{2}\Omega ^{2}}\label{eq3.6.28}\]
Looking at the exponential argument, we observe that significant transfer of amplitude occurs when the compression pulse width is small compared to the vibrational period.
\[\sigma \ll \frac{1}{\Omega }\label{eq3.6.29}\]
In this regime, the potential is changing faster than the atoms can respond to the perturbation. In practice, when considering a solid-state problem, with frequencies characteristic of acoustic phonons and length scales set by unit cell dimensions, we need perturbations that move faster than the speed of sound, i.e., a shock wave. The opposite limit, \(\sigma \Omega \gg 1\), is the adiabatic limit. In this case, the perturbation is so slow that the system always remains entirely in \(n = 0\), even while it is compressed.
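Evaluating the expression for \(P_2\) in eq. (\ref{eq3.6.28}) over a range of \(\sigma\Omega\) illustrates these two limits numerically; the ratio \(\delta\kappa_0/\kappa_0 = 0.1\) below is an arbitrary assumption.

```python
# Sketch: sudden vs. adiabatic limits of eq. (3.6.28); transfer is exponentially
# suppressed once sigma*Omega >> 1. The ratio delta_kappa_0/kappa_0 is assumed.
import numpy as np

ratio = 0.1                                   # assumed delta_kappa_0 / kappa_0
for sO in [0.25, 0.5, 1.0, 2.0, 4.0]:         # values of sigma * Omega
    P2 = (np.pi / 4) * ratio**2 * sO**2 * np.exp(-4 * sO**2)
    print(f"sigma*Omega = {sO:4.2f}  ->  P2 = {P2:.2e}")
```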
Now, let’s consider the range of validity of this first-order treatment. Perturbation theory does not allow for \(b_n\) to change much from its initial value, \(P_{2} \ll 1\), indicating that
\[\left( \sigma \Omega \right)\left( \frac{\delta \kappa _{0}}{\kappa _{0}} \right) \ll 1\label{eq3.6.30}\]
Generally, the first order result will hold when the magnitude of the perturbation \(\delta\kappa_0\) is small compared to \(\kappa_0\), even if \(\sigma \approx \Omega^{-1}\).
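As a consistency check on this first-order treatment (my own sketch, not part of the original notes), the expression for \(P_2\) in eq. (\ref{eq3.6.28}) can be compared against numerically exact propagation of the driven oscillator in a truncated eigenbasis. The parameter choices below (\(\hbar = m = \Omega = 1\), \(\delta\kappa_0 = 0.05\,\kappa_0\), \(\sigma\Omega = 0.5\)) are assumptions; for such a weak, fast pulse the two results should agree closely.

```python
# Sketch: compare first-order P2 (eq. 3.6.28) with numerically exact propagation
# of H(t) = H0 + (1/2) dkappa(t) x^2 in a truncated oscillator basis.
import numpy as np
from scipy.linalg import expm

hbar = m = 1.0
Omega = 1.0
kappa0 = m * Omega**2
dkappa0 = 0.05 * kappa0                       # assumed weak compression
sigma = 0.5 / Omega                           # assumed pulse width

N = 20                                        # basis truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = np.sqrt(hbar / (2 * m * Omega)) * (a + a.T)
H0 = hbar * Omega * np.diag(np.arange(N) + 0.5)

# "exact" propagation: short time steps, midpoint Hamiltonian, matrix exponential
t, dt = -8 * sigma, 1e-3
c = np.zeros(N, dtype=complex)
c[0] = 1.0                                    # start in |0>
while t < 8 * sigma:
    Ht = H0 + 0.5 * dkappa0 * np.exp(-(t + dt / 2)**2 / (2 * sigma**2)) * (x @ x)
    c = expm(-1j * Ht * dt / hbar) @ c
    t += dt

P2_exact = abs(c[2])**2
P2_first_order = (np.pi / 4) * (dkappa0 / kappa0)**2 * (Omega * sigma)**2 \
                 * np.exp(-4 * sigma**2 * Omega**2)
print("P2, exact propagation :", P2_exact)
print("P2, first-order theory:", P2_first_order)
```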
One step further
The preceding example was simple and a bit unphysical, but it tracks the general approach to setting up problems that we treat with time-dependent perturbation theory. The approach relies on casting the Hamiltonian into a part \(H_0\) that we can treat exactly, plus time-dependent perturbations that shift amplitude between its eigenstates. For this scheme to work well, we need the magnitude of the perturbation to be small, which immediately suggests working with a Taylor series expansion of the potential. For instance, take a one-dimensional potential for a bound particle, \(V(x)\), which also depends on an external variable \(y\). We can expand the potential in \(x\) about its minimum \(x = 0\) as
\[\begin{array}{rl} V\left( x \right) \kern-.8em & \displaystyle = \frac{1}{2!}\left. \frac{\partial ^{2}V}{\partial x^{2}} \right|_{x = 0}x^{2} + \frac{1}{2!}\left. \frac{\partial ^{2}V}{\partial x\partial y} \right|_{x = 0}xy + \frac{1}{3!}\sum\limits_{n = 1}^{3} \left. \frac{\partial ^{3}V}{\partial x^{n}\partial y^{3 - n}} \right|_{x = 0} x^{n}y^{3 - n} + \cdots \\
& = \dfrac{1}{2}kx^{2} + V^{(2)}\,xy + \left( V_3^{(3)}x^{3} + V_2^{(3)}x^{2}y + V_1^{(3)}xy^{2} \right) + \cdots \end{array}\label{eq3.6.31}\]
The first expansion coefficient is the harmonic force constant for \(x\), and the second term describes a bilinear coupling whose magnitude \(V^{(2)}\) indicates how much a change in the variable \(y\) influences the variable \(x\). The remaining \(V^{(3)}\) terms are cubic expansion terms: \(V_3^{(3)}\) is the cubic anharmonicity of \(V(x)\), and the remaining two terms are cubic couplings that describe the dependence of \(x\) on \(y\). Introducing a time-dependent potential is equivalent to introducing a time-dependence to the operator \(y\), where the form and strength of the interaction are subsumed into the amplitude \(V\). In the case of the previous example, our formulation of the problem was equivalent to selecting only the \(V_2^{(3)}\) term, so that \({ \delta \kappa _{0} / 2 = }V_2^{(3)}\), and giving the value of \(y\) a time-dependence described by the Gaussian waveform. If we consider matrix elements of the other cubic terms, we recognize that a term such as \(V_1^{(3)}\,xy^2\), which is linear in \(x\), will give rise to single-quantum excitations from \(|0\rangle\) to \(|1\rangle\) that were not present in our earlier solution.
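A brief numerical illustration of these selection rules (my own sketch, with \(\hbar = m = \Omega = 1\) assumed and \(y\) treated as a time-dependent parameter): operators linear, quadratic, and cubic in \(x\) connect \(|0\rangle\) to different sets of oscillator states.

```python
# Sketch: which states |n> are reached from |0> by operators linear, quadratic,
# and cubic in x, in a truncated harmonic oscillator basis (hbar = m = Omega = 1).
import numpy as np

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator in the H0 basis
x = np.sqrt(0.5) * (a + a.T)
ops = {"x": x, "x^2": x @ x, "x^3": x @ x @ x}

for name, op in ops.items():
    nonzero = [n for n in range(6) if abs(op[n, 0]) > 1e-12]
    print(f"{name:>3} couples |0> to |n> for n =", nonzero)
# x   -> n = 1        (single-quantum transitions, e.g. a V1^(3) x y^2 coupling)
# x^2 -> n = 0, 2     (the V2^(3) x^2 y coupling, as in the example above)
# x^3 -> n = 1, 3     (cubic anharmonicity V3^(3))
```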
Readings
1. Cohen-Tannoudji, C.; Diu, B.; Laloë, F. Quantum Mechanics; Wiley-Interscience: Paris, 1977; p. 1285.
2. Nitzan, A. Chemical Dynamics in Condensed Phases; Oxford University Press: New York, 2006; Ch. 4.
3. Sakurai, J. J. Modern Quantum Mechanics, Revised Edition; Addison-Wesley: Reading, MA, 1994; Ch. 2.