
MODULE 14

Time-dependent Perturbation Theory


In order to understand the properties of molecules it is necessary to see how systems respond to
newly imposed perturbations on their way to settling into different stationary states. This is
particularly true for molecules exposed to electromagnetic radiation, where the perturbation is a
rapidly oscillating electromagnetic field. Time-dependent perturbation theory allows us to
calculate transition probabilities and rates, very important properties in the Photosciences.
The approach we shall use mirrors that of the time-independent case, viz., examine a two-state
system in detail and then outline the general case of arbitrary complexity.
The two-state system
We can write the total hamiltonian for the system as the sum of a stationary-state term and a
time-dependent perturbation

$$\hat{H} = \hat{H}^{(0)} + \hat{H}^{(1)}(t) \tag{14.1}$$

and suppose that the perturbation is one that oscillates at an angular frequency $\omega$; then

$$\hat{H}^{(1)}(t) = 2\hat{H}^{(1)}\cos\omega t \tag{14.2}$$

where $\hat{H}^{(1)}$ is the hamiltonian for a time-independent perturbation and the factor 2 is for later
convenience. Our time-dependent Schrödinger equation is

$$\hat{H}\Psi = i\hbar\,\frac{\partial\Psi}{\partial t} \tag{14.3}$$
As before, we consider a pair of eigenstates $|1\rangle$ and $|2\rangle$ representing the time-independent
wavefunctions $\psi_1$ and $\psi_2$, with energy eigenvalues $E_1$ and $E_2$. These wavefunctions are the
solutions of the equation

$$\hat{H}^{(0)}\psi_n = E_n\psi_n \tag{14.4}$$

and are related to the time-dependent functions according to

$$\Psi_n(t) = \psi_n\,e^{-iE_n t/\hbar} \tag{14.5}$$
When the perturbation is acting, the state of the system can be expressed as a linear combination
of the basis functions defined by the set in equation (14.5). We confine ourselves to just two of
them, and the linear combination becomes

$$\Psi(t) = a_1(t)\,\Psi_1(t) + a_2(t)\,\Psi_2(t) \tag{14.6}$$

Notice that we are allowing the coefficients to be time-dependent because the composition of
the state may evolve with time. The overall time dependence of the wavefunction is partly due
to the changes in the basis functions and partly due to the way the coefficients change with time.
At any time $t$, the probability that the system is in state $n$ is given by $|a_n(t)|^2$. Substitution of
equation (14.6) into the Schrödinger equation (14.3) leads to the following
!" #" !" #"
# # # # & & & &
# # & &
# # & &
# # & &
$ $ $ $ $
" "
"
H a H a H t a H a H t
i a a
t
a a
i a i i a i
t t t t



+ + +
+

+ + +
h
h h h h
Now each of the basis functions satisfies the time-dependent Schrödinger equation, viz.

$$\hat{H}^{(0)}\Psi_n = i\hbar\,\frac{\partial\Psi_n}{\partial t} \tag{14.8}$$

so the last equation in (14.7) simplifies down to

$$a_1\hat{H}^{(1)}(t)\Psi_1 + a_2\hat{H}^{(1)}(t)\Psi_2 = i\hbar\,\dot{a}_1\Psi_1 + i\hbar\,\dot{a}_2\Psi_2 \tag{14.9}$$
where the dots over the coefficients signify their time derivatives. Now we expand the last
equation by explicitly writing the time dependence of the wavefunctions:

$$a_1\hat{H}^{(1)}(t)\,\psi_1 e^{-iE_1 t/\hbar} + a_2\hat{H}^{(1)}(t)\,\psi_2 e^{-iE_2 t/\hbar} = i\hbar\,\dot{a}_1\psi_1 e^{-iE_1 t/\hbar} + i\hbar\,\dot{a}_2\psi_2 e^{-iE_2 t/\hbar} \tag{14.10}$$
Now, multiplying from the left by the bra $\langle 1|$ and using the orthonormality relationship, we find

$$a_1 H_{11}^{(1)}(t)\,e^{-iE_1 t/\hbar} + a_2 H_{12}^{(1)}(t)\,e^{-iE_2 t/\hbar} = i\hbar\,\dot{a}_1\,e^{-iE_1 t/\hbar} \tag{14.11}$$
where the matrix elements are defined in the usual way, $H_{nm}^{(1)}(t) = \langle n|\hat{H}^{(1)}(t)|m\rangle$. Now we write
$\hbar\omega_{21} = E_2 - E_1$, whereupon (14.11) becomes

$$a_1 H_{11}^{(1)}(t) + a_2 H_{12}^{(1)}(t)\,e^{-i\omega_{21}t} = i\hbar\,\dot{a}_1 \tag{14.12}$$
It is not unusual to find that the perturbation has no diagonal elements, so we put

$$H_{11}^{(1)}(t) = H_{22}^{(1)}(t) = 0 \tag{14.13}$$
And then equation (14.12) reduces to

$$a_2 H_{12}^{(1)}(t)\,e^{-i\omega_{21}t} = i\hbar\,\dot{a}_1 \tag{14.14}$$
or, rearranging,

$$\dot{a}_1 = \frac{1}{i\hbar}\,a_2 H_{12}^{(1)}(t)\,e^{-i\omega_{21}t} \tag{14.15}$$
This provides us with a first-order differential equation for one of the coefficients, but it contains
the other one. We proceed exactly as above to obtain an equation for the other coefficient:

$$\dot{a}_2 = \frac{1}{i\hbar}\,a_1 H_{21}^{(1)}(t)\,e^{+i\omega_{21}t} \tag{14.16}$$
In the absence of any perturbation the two matrix elements are both equal to zero, and the time
derivatives of the two coefficients also become zero. In that case the coefficients retain their
initial values, and even though the wavefunction $\Psi$ oscillates with time, the system remains frozen
in its initially prepared state.
Now suppose that a constant perturbation, $V$, is applied at some starting time and continued for
a time $t$. To allow this let us write

$$H_{12}^{(1)}(t) = \hbar V \quad\text{and}\quad H_{21}^{(1)}(t) = \hbar V^{*}$$

where we have invoked the Hermiticity of the hamiltonian. Then

$$\dot{a}_1 = -iV a_2\,e^{-i\omega_{21}t}, \qquad \dot{a}_2 = -iV^{*}a_1\,e^{+i\omega_{21}t} \tag{14.17}$$
This is a pair of coupled differential equations, which can be solved by one of several methods.
One solution is

$$a_2(t) = \bigl(Ae^{i\Omega t} + Be^{-i\Omega t}\bigr)\,e^{i\omega_{21}t/2} \tag{14.18}$$

where $A$ and $B$ are constants determined by the initial conditions, and $\Omega$ is given by

$$\Omega = \tfrac{1}{2}\bigl(\omega_{21}^2 + 4|V|^2\bigr)^{1/2}$$

A similar expression holds for $a_1$.
If, prior to the perturbation being switched on, the system is definitely in state 1, then $a_1(0) = 1$
and $a_2(0) = 0$. These initial conditions allow us to find $A$ and $B$, and eventually we arrive at
two particular solutions, one for $a_1(t)$ and one for $a_2(t)$. These are

$$a_1(t) = \left(\cos\Omega t + \frac{i\omega_{21}}{2\Omega}\sin\Omega t\right)e^{-i\omega_{21}t/2} \tag{14.19}$$

and

$$a_2(t) = -\frac{iV^{*}}{\Omega}\,\sin(\Omega t)\,e^{i\omega_{21}t/2} \tag{14.20}$$
The probability of finding the system in state 2 (initially equal to zero) after turning on the
perturbation is given by

$$P_2(t) = |a_2(t)|^2 \tag{14.21}$$

and by substitution of equation (14.20) we are led to the Rabi formula

$$P_2(t) = \frac{4|V|^2}{\omega_{21}^2 + 4|V|^2}\,\sin^2\!\left[\tfrac{1}{2}\bigl(\omega_{21}^2 + 4|V|^2\bigr)^{1/2}t\right] \tag{14.22}$$
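As a quick consistency check, the minimal Python sketch below integrates the coupled equations (14.17) numerically and compares $|a_2(t)|^2$ with the Rabi formula; the values $V = 1$ and $\omega_{21} = 3$ (with $\hbar = 1$ and $V$ real) are illustrative assumptions, not values from the text.

```python
# Numerical check of the Rabi formula (a sketch; hbar = 1, V real, so V* = V).
# Integrate  da1/dt = -i V a2 exp(-i w21 t),  da2/dt = -i V a1 exp(+i w21 t)
# from a1(0) = 1, a2(0) = 0 and compare |a2|^2 with (V/Omega)^2 sin^2(Omega t).
import numpy as np
from scipy.integrate import solve_ivp

V, w21 = 1.0, 3.0                               # illustrative, assumed values
Omega = 0.5 * np.sqrt(w21**2 + 4 * V**2)

def rhs(t, y):
    a1, a2 = y[0] + 1j * y[1], y[2] + 1j * y[3] # re/im packing for solve_ivp
    da1 = -1j * V * a2 * np.exp(-1j * w21 * t)
    da2 = -1j * V * a1 * np.exp(+1j * w21 * t)
    return [da1.real, da1.imag, da2.real, da2.imag]

t = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0],
                t_eval=t, rtol=1e-10, atol=1e-10)
p2_numeric = sol.y[2]**2 + sol.y[3]**2
p2_rabi = (V / Omega)**2 * np.sin(Omega * t)**2  # 4V^2/(w21^2+4V^2) = (V/Omega)^2
print(np.max(np.abs(p2_numeric - p2_rabi)))      # tiny: the two curves agree
```

With these numbers the prefactor $4|V|^2/(\omega_{21}^2 + 4|V|^2)$ is $4/13$, so the population of state 2 never exceeds about 0.31.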
Now suppose the pair of states is degenerate; then $\omega_{21} = 0$ and

$$P_2(t) = \sin^2\bigl(|V|t\bigr) \tag{14.23}$$
This function has the form shown in Figure 14.1.

Figure 14.1: Three plots of the function in equation (14.23), with $|V| = 1$ (g), $2$ (f), and $3$ (k).
[Figure: $P_2(t)$ plotted against $t$ from 0 to 10; vertical axis from 0 to 1.]
The system oscillates between the two states, spending as much time in state 1 as in state 2. We also see
that the frequency of the oscillation depends on the strength of the perturbation, $|V|$, so that strong
perturbations drive the system between the two states more rapidly than weak perturbations do.
Such systems are described as "loose", because even weak perturbations can drive them
completely from one extreme to the other.
The opposite situation, in which the states are far apart in energy in comparison with the
magnitude of $\hbar|V|$, is different. Then $\omega_{21}^2 \gg 4|V|^2$ and

$$P_2(t) \approx \left(\frac{2|V|}{\omega_{21}}\right)^{\!2}\sin^2\!\left(\tfrac{1}{2}\omega_{21}t\right) \tag{14.24}$$
Again we find oscillation, but now the probability of finding the system in state 2 is never higher than
$4|V|^2/\omega_{21}^2$, which is much less than unity. Here the probability of the perturbation driving the
system into state 2 is very small. Furthermore, notice that the oscillation frequency is governed
by the energy separation and is independent of the perturbation strength. The only role of the
perturbation is to govern the fraction of the system found in state 2, which is larger for stronger
perturbations.
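A short numerical illustration of this cap (a sketch, assuming $\hbar = 1$ and an illustrative $V = 1$; the $\omega_{21}$ values are also assumptions) shows how quickly the maximum transfer probability collapses as the levels separate:

```python
# Maximum of the Rabi formula, P2_max = 4V^2 / (w21^2 + 4V^2), for a fixed
# perturbation strength and increasing level separation (hbar = 1).
V = 1.0
for w21 in (0.0, 2.0, 10.0):                   # degenerate -> far apart
    p_max = 4 * V**2 / (w21**2 + 4 * V**2)
    print(f"w21 = {w21:4.1f}   P2_max = {p_max:.3f}")
# prints 1.000, 0.500, 0.038: only the degenerate pair transfers completely
```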
Many-level systems
For the many-level system we need to expand the wavefunction as a linear combination of all the
$n$ states contained in equation (14.4). This leads to very complicated equations when combined
with perturbations that are more complex than the time-independent one used above. To
approach this we use the approximation technique called "variation of constants", invented
by Dirac. The problem we are faced with is to find out how the linear combination varies with
time under the influence of realistic perturbations. We base the approximation on the condition
that the perturbation is weak and applied for such a short time that all the coefficients remain
close to their initial values.
Eventually it can be shown that

$$a_f(t) = \frac{1}{i\hbar}\int_0^t H_{fi}^{(1)}(t')\,e^{i\omega_{fi}t'}\,dt' \tag{14.25}$$
where $a_f$ is the coefficient of the final state, which was initially unoccupied. This approximation
ignores the possibility that the route between the initial and final states passes through other states,
i.e. it accounts only for the direct route. It is a first-order theory because the perturbation is applied
once and only once.
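To make equation (14.25) concrete, the sketch below evaluates it by direct numerical quadrature for the simplest case, a constant perturbation $H_{fi}^{(1)} = \hbar V$ switched on at $t' = 0$; the values $V = 0.05$ and $\omega_{fi} = 3$ (with $\hbar = 1$) are assumptions chosen to keep the perturbation weak. Doing the integral analytically gives $|a_f(t)|^2 = (4V^2/\omega_{fi}^2)\sin^2(\tfrac{1}{2}\omega_{fi}t)$, the weak-perturbation form met above.

```python
# First-order coefficient from equation (14.25) for a constant perturbation,
# H_fi(t')/hbar = V for t' > 0, evaluated by trapezoidal quadrature (hbar = 1).
import numpy as np

V, w_fi, t = 0.05, 3.0, 2.0                    # illustrative, assumed values
tp = np.linspace(0.0, t, 20001)                # grid for the t' integration
integrand = V * np.exp(1j * w_fi * tp)         # (H_fi(t')/hbar) e^{i w_fi t'}
dtp = tp[1] - tp[0]
a_f = (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2) * dtp / 1j
print(abs(a_f)**2)                             # numerical quadrature
print(4 * V**2 / w_fi**2 * np.sin(w_fi * t / 2)**2)   # closed form; matches
```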
Now we can use the expression in (14.25) to see how a system will behave when exposed to an
oscillatory perturbation, such as light. First we consider transitions between a pair of discrete
states $|i\rangle$ and $|f\rangle$; then we shall embed the final state in a continuum of states.
Suppose our perturbation oscillates with a frequency $\omega = 2\pi\nu$ and is turned on at $t = 0$. Then
its hamiltonian has the form

$$\hat{H}^{(1)}(t) = 2\hat{H}^{(1)}\cos\omega t = \hat{H}^{(1)}\bigl(e^{i\omega t} + e^{-i\omega t}\bigr) \tag{14.26}$$

(Now you see the reason for the factor of 2 in the expansion.) Putting this expression into
equation (14.25) yields
#"
#
!
#" " "
" "
# #
" "
fi
fi fi
t
i t fi i t i t
i t i t
fi
fi fi
H
a t e e e "t
i
H
e e
i i i

+
+


+
' ;
+

h
h
where
fi
# E
f
$ E
i
. This is an obscure result but simplification is very straightforward. In
electronic spectroscopy and photoexcitation the fre%uencies and fi involved are very high, of
the order of #!
#-
s
-#
. Thus the first term in the braces is exceedingly small, whereas the second
term, in which the denominator can approach zero, can be very large. It is therefore possible to
ignore the first term. Then the probability of finding the system in state f after a time t, when it
was initially completely in state i, becomes
$$P_f(t) = |a_f(t)|^2 = \frac{4\,|H_{fi}^{(1)}|^2}{\hbar^2}\;\frac{\sin^2\!\left[\tfrac{1}{2}(\omega_{fi}-\omega)t\right]}{(\omega_{fi}-\omega)^2} \tag{14.28}$$
Using the relationship employed earlier, that $H_{fi}^{(1)} = \hbar V_{fi}$, we rewrite equation (14.28) as
$$P_f(t) = \frac{4|V_{fi}|^2}{(\omega_{fi}-\omega)^2}\,\sin^2\!\left[\tfrac{1}{2}(\omega_{fi}-\omega)t\right] \tag{14.29}$$
This expression looks very much like equation (14.24), which was developed for a static
perturbation on the two-level system, the difference being that $\omega_{fi}$ is here replaced by $(\omega_{fi} - \omega)$,
which is termed the frequency offset. Equation (14.29) tells us that not only the amplitude
of the transition probability but also its time dependence depends on the frequency offset. Both
of these increase as $\omega$ approaches $\omega_{fi}$, and when the difference becomes zero the transition
probability is at a maximum; at this point the radiation field and the system are in resonance.
Evaluating the limit of equation (14.29) as the two frequencies approach each other (using
$\sin x \to x$ as $x \to 0$) we find that

$$\lim_{\omega\to\omega_{fi}} P_f(t) = |V_{fi}|^2\,t^2 \tag{14.30}$$

and the probability of the transition occurring as a result of the perturbation increases
quadratically with time. The shape of equation (14.29) and the time dependence of its amplitude
for four times are shown in Figure 14.2.

Figure 14.2: Plots of equation (14.29) at four different times (1, 2, 3, and 4) showing the
quadratic dependence of the amplitude on time. The vertical scale is arbitrary.
[Figure: the probability plotted against the frequency offset from $-20$ to $20$.]
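The curves in Figure 14.2 can be regenerated from equation (14.29); the sketch below is an assumed reconstruction, not the original plotting script. It uses `np.sinc` to keep the lineshape finite at zero offset, where it equals $t^2/4$:

```python
# Lineshape of equation (14.29) as a function of the frequency offset
# x = w_fi - w, plotted at four times; the peak height grows as t^2 / 4.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-20.0, 20.0, 2001)             # frequency offset (arb. units)
for t in (1, 2, 3, 4):
    f = (t / 2)**2 * np.sinc(x * t / (2 * np.pi))**2   # = sin^2(xt/2) / x^2
    plt.plot(x, f, label=f"t = {t}")
plt.xlabel("frequency offset")
plt.ylabel("sin$^2$[(offset)t/2] / (offset)$^2$")
plt.legend()
plt.show()
```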
Transitions to states in a continuum
In the previous section the transition was between a pair of discrete states; now we consider the
case where the final state is part of a continuum of states, close to each other in energy. We can
still use equation (14.29) to estimate the probability of promotion to one member of the
continuum, but now we need to integrate over all the transition probabilities that the perturbation
can induce in the system. We need to define a density of states, $\rho(E)$, which relates to the
number of states accessible as a result of the perturbation: $\rho(E)\,dE$ is the number of final
states in the range $E$ to $E + dE$ that are able to be reached. Then the total transition probability is
given by

$$P(t) = \int_{\text{range}} P_f(t)\,\rho(E)\,dE \tag{14.31}$$
Now we use equation (14.29) and the relation $\omega_{fi} = E/\hbar$, where $E$ is the transition energy, to
arrive at the following

$$P(t) = \int_{\text{range}} 4|V_{fi}|^2\,\frac{\sin^2\!\left[\tfrac{1}{2}(E/\hbar-\omega)t\right]}{(E/\hbar-\omega)^2}\,\rho(E)\,dE \tag{14.32}$$
Now we set about simplifying this expression. First, we recognize that the quotient in the
integrand, which is of the form $\sin^2 x/x^2$, is very sharply peaked when $E/\hbar \approx \omega$, the radiation
frequency. This means that we can sensibly restrict ourselves to considering only those states that
have significant transition probabilities, at $\omega \approx \omega_{fi}$. Because of this we can evaluate the density
of states in the neighborhood of $E_{fi} = \hbar\omega_{fi}$ and treat it as a constant. Similarly, although the
matrix elements $V_{fi}$ depend on $E$, only a narrow range of energies contributes to the integral, so
we can assume that they too are constant. Under these considerations equation (14.32) simplifies to

$$P(t) = 4|V_{fi}|^2\,\rho(E_{fi})\int_{\text{range}} \frac{\sin^2\!\left[\tfrac{1}{2}(E/\hbar-\omega)t\right]}{(E/\hbar-\omega)^2}\,dE \tag{14.33}$$
To add one more approximation, we convert the integral into a standard form by extending the
limits to infinity. Because of the shape of the function, the integrand has virtually no
area outside the actual range, so this extension introduces very little error. Now we simplify
again by setting $x = \tfrac{1}{2}(E/\hbar - \omega)t$, which implies that $dE = (2\hbar/t)\,dx$, and the probability
expression becomes
$$P(t) = 2\hbar t\,|V_{fi}|^2\,\rho(E_{fi})\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx \tag{14.34}$$
Using the standard form

$$\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx = \pi$$

the required probability becomes

$$P(t) = 2\pi\hbar t\,|V_{fi}|^2\,\rho(E_{fi}) \tag{14.35}$$
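The standard integral is easily confirmed numerically; the minimal numpy sketch below (trapezoidal rule over a symmetric window, with arbitrary window sizes) shows the truncated integral approaching $\pi$ as the limits grow:

```python
# Numerical check that the integral of sin^2(x)/x^2 over (-L, L) tends to pi.
import numpy as np

for L in (10.0, 100.0, 1000.0):
    x = np.linspace(-L, L, 2_000_001)
    f = np.sinc(x / np.pi)**2                  # sin^2(x)/x^2, finite at x = 0
    dx = x[1] - x[0]
    val = (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dx
    print(f"L = {L:6.0f}   integral = {val:.4f}")   # -> ~3.04, ~3.13, ~3.14
```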
If we now define the transition rate, $W_{fi}$, as the rate of change of the probability that the system
arrives at an initially empty state, we find that

$$W_{fi} = \frac{dP(t)}{dt} = 2\pi\hbar\,|V_{fi}|^2\,\rho(E_{fi}) \tag{14.36}$$
This expression is known as the Fermi Golden Rule, and it tells us that we can calculate the rate
of a transition if we know the square modulus of the transition matrix element between the two
states and the density of final states at the frequency of the transition. It is a very useful
expression, and we shall find many examples of its use in the coming weeks.
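As a closing illustration, the golden rule is straightforward to evaluate once the matrix element and the density of final states are known. The numbers in the sketch below are placeholder assumptions, not data for any real molecule:

```python
# Golden-rule rate, W_fi = 2*pi*hbar*|V_fi|^2*rho(E_fi), written in terms of
# the raw matrix element H_fi = hbar*V_fi so that SI units can be used.
import math

hbar = 1.054571817e-34          # J s
H_fi = 1.0e-22                  # J     (assumed coupling matrix element)
rho = 1.0e20                    # 1/J   (assumed density of final states)

W_fi = 2 * math.pi / hbar * H_fi**2 * rho   # = 2*pi*hbar*(H_fi/hbar)^2*rho
print(f"W_fi ~ {W_fi:.1e} s^-1")            # ~6e10 transitions per second
```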