
Diagrammatics

The main goal here is to explain how, using the interaction representation, a systematic perturbation theory in the interaction strength can be developed for the Green's function. There are whole books devoted to this subject (e.g., Mattuck; Mahan) so we will just be giving an overview; see the attached sheet for key diagrams and information on "Feynman rules". We discussed connections of the one- and two-particle Green's functions to experimental quantities in the previous notes.

First we state the result, derived below: we can express the desired Heisenberg representation Green's function in terms of the interaction representation operators: for $t_1 > t_2$,
$$\begin{eqnarray} G_{\alpha \beta}(t_1,r_1,t_2,r_2) &=& -i \langle \Psi_\alpha(t_1,r_1) \Psi_\beta^\dagger(t_2,r_2) \rangle \cr &=& -i \langle S^{-1}(t_1,-\infty) \Psi_{0 \alpha}(t_1,r_1) S(t_1,-\infty) S^{-1}(t_2,-\infty) \Psi_{0\beta}^\dagger(t_2,r_2) S(t_2,-\infty) \rangle \label{gdef} \end{eqnarray}$$
Here the unitary $S$ operator is defined as (as before $\hbar=1$)
$$\begin{equation} S(t_1,t_2) = T \exp(-i \int_{t_2}^{t_1} V_0(t)\,dt), \end{equation}$$
where $V_0(t)$ is the interaction representation of the interaction part $V$ of the Hamiltonian $H = H_0+V$. The purpose of $S$ is to connect the interaction and Heisenberg representations: for a general operator $\Psi$ in Heisenberg representation,
$$\begin{equation} \Psi = S^{-1}(t,-\infty) \Psi_0 S(t,-\infty). \end{equation}$$

Let us quickly review where the above expressions come from. The fundamental definition of the interaction representation is that the operators evolve according to the unperturbed Hamiltonian: using the quantum field operator as an example,
$$\begin{equation} \Psi_0(t,r) = e^{i H_0 t} \psi(r) e^{-i H_0 t} \end{equation}$$
How do wavefunctions then transform? Using $\phi$ to denote the Schrodinger wavefunction, we know that $\phi$ evolves according to
$$\begin{equation} i {\partial \phi \over \partial t} = (H_0 + V)\phi. \end{equation}$$
Since in the interaction representation, the $H_0$ part of the above time dependence was transferred to the operators, we might expect that in the interaction representation the wave function $\Phi_0$ will evolve according only to $V$. Explicitly,
$$\begin{equation} i {\partial \Phi_0 \over \partial t} = V_0(t) \Phi_0. \end{equation}$$
Here $V_0$ is the interaction representation of $V$: $V_0(t) = e^{i H_0 t} V e^{-i H_0 t}$. To see that this is correct, use the interaction representation expression for the wavefunction, $\Phi_0(t) = e^{i H_0 t} \phi(t)$, as required for expectation values computed with $\Phi_0$ and the interaction-representation operators to agree with those computed in the Schrodinger representation. Then
$$\begin{eqnarray} i {\partial \Phi_0 \over \partial t} &=& i (i H_0 \Phi_0) + i e^{i H_0 t} {\partial \phi \over \partial t} = i (i H_0 \Phi_0) + i e^{i H_0 t} (-i (H_0+V) \phi)\cr &&= - H_0 \Phi_0 + e^{i H_0 t} (H_0 + V) \phi = e^{i H_0 t} V \phi = V_0(t) e^{i H_0 t} \phi = V_0(t) \Phi_0(t). \end{eqnarray}$$

The above can be written as
$$\begin{equation} \Phi_0(t+dt) = (1 - i\,dt\,V_0(t)) \Phi_0(t) \approx e^{-i V_0(t)\,dt}\,\Phi_0(t). \end{equation}$$
So, applying this relation many times over a finite interval $t_2 - t_1$ divided into many small $dt$ intervals, we obtain
$$\begin{equation} \Phi_0(t_2) = S(t_2,t_1) \Phi_0(t_1) \end{equation}$$
with
$$\begin{equation} S(t_2,t_1) = \prod_{t=t_1}^{t=t_2} e^{-i V_0(t)\,dt}. \end{equation}$$
Here the proper definition, as seen before in the Feynman path integral, is that we divide the interval from $t_1$ to $t_2$ into $N$ subintervals and then take the limit $N\rightarrow \infty$; in the product, factors at later times stand to the left.
We must be somewhat careful about combining these factors because the operators $V_0(t)$ at different times need not commute ($H_0$ and $V$ do not commute in general). The solution is to introduce the time-ordered exponential,
$$\begin{equation} S(t_2,t_1) = T \exp\left(-i \int_{t_1}^{t_2} V_0(t)\,dt\right), \end{equation}$$
where the definition of the time-ordering operator $T$ is, as before, that when the exponential is expanded, the operators appear with earliest times to the right. Note that $V_0$ is bosonic (it contains an even number of fermion operators), so no fermionic exchanges, with associated minus signs, are needed in making this rearrangement.
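For orientation, the first few terms of the time-ordered exponential are
$$\begin{equation} S(t_2,t_1) = 1 - i \int_{t_1}^{t_2} dt\, V_0(t) + \frac{(-i)^2}{2!} \int_{t_1}^{t_2} dt \int_{t_1}^{t_2} dt'\, T\{ V_0(t) V_0(t') \} + \ldots, \end{equation}$$
where the $1/2!$ in the second-order term is exactly compensated by restricting the integration to the time-ordered region:
$$\begin{equation} \frac{1}{2!} \int_{t_1}^{t_2} dt \int_{t_1}^{t_2} dt'\, T\{ V_0(t) V_0(t') \} = \int_{t_1}^{t_2} dt \int_{t_1}^{t} dt'\, V_0(t) V_0(t'). \end{equation}$$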

Properties of $S$: clearly it is unitary so $S^{-1} = S^\dagger$, and has the property
$$\begin{equation} S(t_3,t_1) = S(t_3,t_2) S(t_2,t_1). \end{equation}$$
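In particular, setting $t_3 = t_1$ and using $S(t_1,t_1)=1$ shows that the inverse is the "backwards" evolution, which can be written as an anti-time-ordered exponential:
$$\begin{equation} S^{-1}(t_2,t_1) = S^\dagger(t_2,t_1) = S(t_1,t_2) = \tilde{T} \exp\left(+i \int_{t_1}^{t_2} V_0(t)\,dt\right), \end{equation}$$
where $\tilde{T}$ places the earliest times to the left (the reverse of $T$).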
Now equation (\ref{gdef}) follows if we assume that at $t=-\infty$ the Schrodinger and interaction representations coincide, which is justified if we imagine that the perturbation Hamiltonian is adiabatically "turned on" at some time after $-\infty$. So, again for $t_1>t_2$,
$$\begin{equation} G_{\alpha \beta}(t_1,r_1,t_2,r_2) = -i \langle S^{-1}(t_1,-\infty) \Psi_{0 \alpha}(t_1,r_1) S(t_1,-\infty) S^{-1}(t_2,-\infty) \Psi_{0\beta}^\dagger(t_2,r_2) S(t_2,-\infty) \rangle \end{equation}$$
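To spell this step out: using $\Phi_0(t) = e^{i H_0 t} \phi(t)$ and $\Phi_0(t) = S(t,-\infty)\Phi_0(-\infty)$ from above, the time-dependent expectation value of the field operator can be rewritten as
$$\begin{equation} \langle \phi(t) | \psi(r) | \phi(t) \rangle = \langle \Phi_0(t) | \Psi_0(t,r) | \Phi_0(t) \rangle = \langle \Phi_0(-\infty) | S^{-1}(t,-\infty)\, \Psi_0(t,r)\, S(t,-\infty) | \Phi_0(-\infty) \rangle. \end{equation}$$
Identifying the fixed state $\Phi_0(-\infty)$ with the Heisenberg state gives the operator relation $\Psi(t,r) = S^{-1}(t,-\infty) \Psi_0(t,r) S(t,-\infty)$, and inserting it for each of the two operators in the Green's function gives the expression above.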
We can combine the two interior $S$ factors, using $S(t_1,-\infty) S^{-1}(t_2,-\infty) = S(t_1,t_2)$, and rewrite the first factor as $S^{-1}(t_1,-\infty) = S^{-1}(\infty,-\infty) S(\infty,t_1)$, to get
$$\begin{equation} G_{\alpha \beta}(t_1,r_1,t_2,r_2) = -i \langle S^{-1}(\infty,-\infty) S(\infty,t_1) \Psi_{0 \alpha}(t_1,r_1) S(t_1,t_2) \Psi_{0\beta}^\dagger(t_2,r_2) S(t_2,-\infty) \rangle \end{equation}$$
The advantage of this expression is that now all the factors except the leading $S^{-1}(\infty,-\infty)$ appear in proper chronological order. Writing $S$ for $S(\infty,-\infty)$, we have
$$\begin{equation} G_{\alpha \beta}(t_1,r_1,t_2,r_2) = -i \langle S^{-1} T \{ \Psi_{0 \alpha}(t_1,r_1) \Psi_{0\beta}^\dagger(t_2,r_2) S\} \rangle. \end{equation}$$

The above expression also holds for $t_1 < t_2$, if we recall the sign convention in $T$.
Under certain assumptions the factor $S^{-1}$ will just contribute an overall phase: if the ground state is nondegenerate, then adiabatic switching on and off of the perturbation Hamiltonian will leave the system in its ground state, and the factor $S^{-1}$ just becomes the exponential of the phase shift resulting from the energy change of the ground state. (Many of the minor complications that occur when the Green's function formalism we are developing is generalized to either finite temperature or nonequilibrium involve this factor.) With the above assumptions,
$$\begin{equation} G_{\alpha \beta}(t_1,r_1,t_2,r_2) = -i {\langle T \{ \Psi_{0 \alpha}(t_1,r_1) \Psi_{0\beta}^\dagger(t_2,r_2) S\} \rangle \over \langle S \rangle}. \end{equation}$$
(Here we used the fact that changing the sign of a phase $\phi$ corresponds to taking the reciprocal of $e^{i \phi}$.)
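Explicitly, if the nondegenerate ground state $|0\rangle$ satisfies $S|0\rangle = e^{i\phi}|0\rangle$, then $e^{i\phi} = \langle 0| S |0\rangle = \langle S \rangle$ and
$$\begin{equation} \langle 0 | S^{-1}\, T\{ \Psi_{0\alpha}(t_1,r_1) \Psi^\dagger_{0\beta}(t_2,r_2) S \} | 0 \rangle = e^{-i\phi}\, \langle 0 | T\{ \Psi_{0\alpha}(t_1,r_1) \Psi^\dagger_{0\beta}(t_2,r_2) S \} | 0 \rangle = { \langle T\{ \Psi_{0\alpha} \Psi^\dagger_{0\beta} S \} \rangle \over \langle S \rangle }, \end{equation}$$
which is the expression above.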

The idea of our perturbation theory is to expand $S$ in the numerator in powers of $V$ and then calculate the resulting correlation functions. The interaction representation $V_0$ itself can be written in terms of the $\Psi$ operators (a simple exercise):
$$\begin{equation} V_0(t) = \frac{1}{2} \int \Psi_{0\gamma}^\dagger(t,r_1) \Psi_{0\delta}^\dagger(t,r_2) \Psi_{0\delta}(t,r_2) \Psi_{0\gamma}(t,r_1) U(r_1-r_2) \,d^3r_1\,d^3r_2. \end{equation}$$
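A sketch of that exercise (repeated spin indices $\gamma,\delta$ are summed): in the Schrodinger representation
$$\begin{equation} V = \frac{1}{2} \int \psi_{\gamma}^\dagger(r_1) \psi_{\delta}^\dagger(r_2) \psi_{\delta}(r_2) \psi_{\gamma}(r_1) U(r_1-r_2) \,d^3r_1\,d^3r_2, \end{equation}$$
and since $V_0(t) = e^{i H_0 t} V e^{-i H_0 t}$, inserting factors of $e^{-i H_0 t} e^{i H_0 t} = 1$ between adjacent field operators turns each $\psi(r)$ into $\Psi_0(t,r)$ and each $\psi^\dagger(r)$ into $\Psi_0^\dagger(t,r)$, giving the expression above.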
Hence we will have the desired perturbation expansion if we can calculate
averages like
$$\begin{equation} \langle \Psi_1 \Psi^\dagger_2 \Psi^\dagger_3 \Psi^\dagger_4 \Psi_4 \Psi_3 \rangle \end{equation}$$
where now we are writing $\Psi_1 = \Psi(t_1,r_1)$ and so forth.
Formally, we have
$$\begin{equation} \langle T \Psi_{0\alpha}(t_1,x_1) \Psi^\dagger_{0\beta}(t_2,x_2) S \rangle = \sum_{n=0}^\infty {(-i)^n \over n!} \int_{-\infty}^\infty ds_1\,\ldots \int_{-\infty}^\infty ds_n \langle T \Psi_{0\alpha}(t_1,x_1) \Psi^\dagger_{0\beta}(t_2,x_2) V_0(s_1) \ldots V_0(s_n)\rangle. \end{equation}$$
In words, the idea of Wick's theorem is that an average such as the above (note that the average is with respect to the noninteracting Hamiltonian, which is quadratic in the Fermi operators $\Psi_0$) is obtained by writing all possible pairings of operators; in each pair the operators appear in the same order as in the original quantity to be averaged.
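The simplest nontrivial illustration (with generic labels $a,b,c,d$, and including the fermionic sign factor discussed below) is the four-operator average
$$\begin{equation} \langle T\, \Psi_a \Psi_b \Psi^\dagger_c \Psi^\dagger_d \rangle = \langle T\, \Psi_a \Psi^\dagger_d \rangle \langle T\, \Psi_b \Psi^\dagger_c \rangle - \langle T\, \Psi_a \Psi^\dagger_c \rangle \langle T\, \Psi_b \Psi^\dagger_d \rangle. \end{equation}$$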

Note that in the above expression for $G_{\alpha \beta}$, the terms in which $\Psi_1$ and $\Psi^\dagger_2$ are contracted with each other combine to give $\langle T \Psi_1 \Psi^\dagger_2 \rangle \langle S \rangle$; the $\langle S \rangle$ is cancelled by the denominator, so these terms contribute just the unperturbed Green's function $G^0_{\alpha \beta}$. Hence the interesting terms come when the first two operators ($\Psi_1$ and $\Psi^\dagger_2$) are contracted with operators coming from the factors of $V_0$.

An explicit example is the Green's function to leading order in $V$ (for more information and a proof of Wick's theorem, look in any good textbook; the Feynman rules for $G(k,\omega)$ are shown in the attached page from the book by Mattuck). By the above cancellation, we can ignore all the terms where the first two operators $\Psi_1 \Psi^\dagger_2$ are contracted together. Ignoring these, we have
$$\begin{eqnarray} \langle T \Psi_1 \Psi^\dagger_2 \Psi^\dagger_3 \Psi^\dagger_4 \Psi_4 \Psi_3 \rangle &=& \langle T \Psi_1 \Psi^\dagger_3 \rangle \langle T \Psi^\dagger_2 \Psi_4 \rangle \langle T \Psi^\dagger_4 \Psi_3 \rangle \cr &&+ \langle T \Psi_1 \Psi^\dagger_4 \rangle \langle T \Psi^\dagger_2 \Psi_3 \rangle \langle T \Psi^\dagger_3 \Psi_4 \rangle \cr &&-\langle T \Psi_1 \Psi^\dagger_3 \rangle \langle T \Psi^\dagger_2 \Psi_3 \rangle \langle T \Psi^\dagger_4 \Psi_4 \rangle \cr &&-\langle T \Psi_1 \Psi^\dagger_4 \rangle \langle T \Psi^\dagger_2 \Psi_4 \rangle \langle T \Psi^\dagger_3 \Psi_3 \rangle. \end{eqnarray}$$
(For fermions, to use Wick's theorem we also need to add a sign factor $(-1)^P$ where $P$ is the number of exchanges required to move the operators into the pairs. This has been done in the above.)

Some of these contractions are trivial: operators evaluated at the same point just give the mean density, $\langle \Psi^\dagger \Psi \rangle = n = (2 m \mu)^{3/2} / (3 \pi^2)$. The others can be expressed in terms of the noninteracting Green's function. For example,
$$\begin{equation} \langle T \Psi_1 \Psi^\dagger_3 \rangle = i G^0_{13}, \qquad \langle T \Psi^\dagger_2 \Psi_4 \rangle = - i G^0_{42}. \end{equation}$$

Finally we are in a position to write out the terms to first order in the interaction potential $U$. We have
$$\begin{equation} i G^1_{12} = \frac{1}{2} \int d^3 r_3\,dt_3\,d^3r_4\,dt_4\,U(r_3-r_4)\,\delta(t_3-t_4) \left[ - G_{13}^0 G_{34}^0 G_{42}^0 - G_{14}^0 G_{43}^0 G_{32}^0 + i n G_{13}^0 G_{32}^0 + i n G_{14}^0 G_{42}^0 \right]. \end{equation}$$
These four terms can be simplified into two pairs by interchanging the names of variables 3 and 4 in the integration. So we are left with
$$\begin{equation} i G^1_{12} = \int d^3 r_3\,dt_3\,d^3r_4\,dt_4\,U(r_3-r_4)\,\delta(t_3-t_4) \left[i n G^0_{14} G^0_{42} - G_{13}^0 G_{34}^0 G_{42}^0 \right]. \end{equation}$$

These two terms correspond to the "Hartree" and "exchange" terms (exercise in problem set 3). They are traditionally represented diagrammatically (see left side of attached page) using a dotted line for $U$ and solid lines for the $G^0$. These lines come together at "vertices": in the above perturbation theory, each vertex joins a dotted line and two solid lines, one incoming and one outgoing.

Normally one works in momentum space for actual calculations. Also, as mentioned before the above formalism can be extended with a bit of work to calculate averages in other states than the ground state $|0\rangle$. Both finite-temperature calculations and even general nonequilibrium calculations (beyond linear response) are possible, but often the technical complexity in fully nonequilibrium problems is overwhelming. The number of diagrams increases rapidly with the desired order of accuracy: there are 10 diagrams at second order in the above perturbation theory. Many diagrammatic approximations are based on selecting out a particular subset of diagrams and finding some resummation trick.

In addition to doing perturbation theory in interaction strength, diagrammatic techniques are also very important for noninteracting or interacting particles in a random potential. We will use other techniques when we discuss such random problems later, but you should be aware that diagrammatic perturbation theory has probably been the most important method for such problems. In particular, there is a famous "supersymmetry" technique for such problems developed by Efetov and others in the 1980s (cf. textbook of Efetov).

Previously we gave a nearly complete derivation of the Feynman rules for the Coulomb interaction in a Fermi system in coordinate space. The steps in this process were first introducing the interaction representation to write the Green's function as
$$\begin{equation} G_{\alpha \beta}(t_1,r_1,t_2,r_2) = -i {\langle T \{ \Psi_{0 \alpha}(t_1,r_1) \Psi_{0\beta}^\dagger(t_2,r_2) S\} \rangle \over \langle S \rangle}. \end{equation}$$
where $S=S(\infty,-\infty)$ is the unitary time translation operator containing $V_0$,
and then expanding in powers of $V_0$ and using Wick's theorem to evaluate the resulting averages.

Note that essentially the same procedure would go through for two-particle Green's functions, etc. To obtain the Feynman rules in momentum space is simply a matter of taking Fourier transforms, so we will just quote the results (the diagrams themselves look quite similar). First, the diagram for the unperturbed propagator of momentum $p$ and frequency $\omega$ is just a directed solid line from right to left. We will say that this line has no "vertices".

The perturbations at order $n$ (that is, with $n$ powers of $V$) have $2n$ vertices: a vertex consists of two directed solid lines, one incoming and one outgoing, plus a dotted line that represents the interaction potential $U$. The interaction potential is frequency-independent (if it is instantaneous in time), so it can carry any value of $\omega$.
Sometimes one will want to consider retarded or other time-dependent interactions, in which case it is important to retain $\omega$ on the dotted lines. Both total frequency and momentum ("total 4-momentum" although the situations we consider do not have relativistic invariance) are conserved at each vertex.

To write the integral corresponding to a given diagram for $i G_{\alpha \beta}$, every solid line represents a factor $i G_{\alpha \beta}^0(\omega,p)$, and every dotted line a factor $-i U(\omega,p)$. A closed loop formed by a single solid line can immediately be replaced by the number density $n^0(\mu)$, as we saw before in real-space perturbation theory, since it represents a contraction $\langle \Psi^\dagger \Psi \rangle$ with the same argument for both operators.

The second rule is that 4-momentum is conserved at each vertex, and that internal momenta which are not fixed by momentum conservation are integrated $d^4 P / (2\pi)^4$. Similarly, unfixed spins are summed over.

The final rule is that, since we are dealing with fermions, every closed loop with more than one vertex (i.e., not the sort of simple closed loop that gives a factor of $n$) contributes an overall factor of $(-1)$. Thus the diagram has an overall factor of $(-1)^L$, where $L$ is the number of such loops.
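As a quick consistency check of these rules against the coordinate-space result found earlier, the two first-order diagrams (Hartree and exchange; the labeling of the internal 4-momentum $K$ is just one convenient choice) give
$$\begin{eqnarray} i G^{1}_{\rm Hartree}(P) &=& [i G^0(P)]\, [-i U(0)]\, n\, [i G^0(P)] = i\, n\, U(0)\, [G^0(P)]^2, \cr i G^{1}_{\rm exchange}(P) &=& \int {d^4K \over (2\pi)^4}\, [i G^0(P)]\, [-i U(P-K)]\, [i G^0(K)]\, [i G^0(P)] = -[G^0(P)]^2 \int {d^4K \over (2\pi)^4}\, G^0(K)\, U(P-K), \end{eqnarray}$$
which is the Fourier transform of the two terms obtained before (for an instantaneous interaction, $U(P-K)$ depends only on the momentum part of $P-K$).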

As a simple example, let us set up the diagrammatic calculation of screening by bubble diagrams. (Historically I believe that this is called RPA, for "random phase approximation.") The actual integration will be left as an exercise for problem set 3. Starting from the first-order diagram with no loop, insert 1, 2, etc. "bubbles" into the interaction line. Each bubble adds a factor (ignoring spin)
$$\begin{equation} B(Q) = \left[-i U(Q)\right] (-1) \int {d^4P \over (2\pi)^4}\, (i G^0(P))\, (i G^0(P+Q)) = -i\, U(Q) \int {d^4P \over (2\pi)^4}\, G^0(P)\, G^0(P+Q), \end{equation}$$
where $P$ is the 4-momentum running around the loop.
So we obtain that the effect of all the bubble diagrams is to replace $U(Q)$ by
$$\begin{equation} {\tilde U}(Q) = U(Q) (1 + B(Q) + B(Q)^2 + B(Q)^3+\ldots) = {U(Q) \over 1 - B(Q)}. \end{equation}$$
Your mission will be to show that at zero frequency (the static limit) and with some other approximations this is enough to screen the Coulomb interaction from $4 \pi e^2 / q^2$ to $4 \pi e^2 / (q^2 + k_0^2)$ (Thomas-Fermi screening).
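A brief sketch of where this comes from (the details are left for the exercise): in the static, long-wavelength limit the loop integral $-i \int {d^4P \over (2\pi)^4}\, G^0(P)\, G^0(P+Q)$ tends to $-\nu(\mu)$, the density of states at the Fermi level (with the spin sum restored, $\nu(\mu) = m k_F/\pi^2$), so that $B(q,\omega=0) \rightarrow -U(q)\, \nu(\mu)$ and
$$\begin{equation} {\tilde U}(q,0) = {U(q) \over 1 - B(q,0)} \rightarrow {4\pi e^2/q^2 \over 1 + 4\pi e^2 \nu(\mu)/q^2} = {4 \pi e^2 \over q^2 + k_0^2}, \qquad k_0^2 = 4\pi e^2\, \nu(\mu). \end{equation}$$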