Math 262A: Lecture 10

In this lecture we will actually start down the path to Feynman diagrams; how far down this path we will progress remains to be seen.

Feynman diagrams are combinatorial objects closely associated with an area of theoretical physics called quantum field theory, and so we will begin with a necessarily brief and sketchy discussion of what QFT is. The truth is that nobody knows what quantum field theory “is” in the sense of a mathematical definition, but I can at least try to give you a sense of the mathematical structures that are involved, and the sorts of computations that physicists perform.

Let X be a manifold, representing a “universe” which we want to understand. We consider functions on X as measurements on this universe, or “observables.” We must say what the outputs of these observations are: they may be numbers, or vectors, or operators, or something else. In other words, we want to specify a “target” Y for observations f \colon X \to Y, and typically Y is some other manifold. So the space of observables is the function space \mathrm{Fun}(X,Y). For example, one could take X=\mathbb{R}^4, a manifold which is supposed to model our physical experience of “spacetime,” and Y=\mathbb{R}, meaning that our observables are numerical measurements on spacetime, which sounds reasonable and not at all problematic.

The next step is to define a “field,” which is a measure on \mathrm{Fun}(X,Y), the space of all functions f \colon X \to Y. Physicists would like to prescribe this measure by giving its density against the background flat Lebesgue measure, and this density is supposed to be of the form

e^{-\frac{1}{\hbar} S(f)},

where \hbar > 0 is called the “semiclassical parameter” and S \colon \mathrm{Fun}(X,Y) \to \mathbb{R} is a functional called the “action.” The total flatness and uniformity of the Lebesgue measure represents entropy, while the density above represents energy in the sense that it leads to measure concentrating near the minimizers of the action, much as we saw when we initially discussed the Laplace method, and as is familiar from the setting of statistical mechanics.

Once we have prescribed the spaces X and Y and the action S \colon \mathrm{Fun}(X,Y) \to \mathbb{R}, we are looking at something which is more or less what physicists call a QFT, and we start asking questions about it. One of the main things that physicists would like to do is to evaluate the integral

\mathcal{Z} = \int\limits_{\mathrm{Fun}(X,Y)} e^{-\frac{1}{\hbar}S(f)}\mathrm{d}f,

which is called the “partition function” or “path integral.” It is not expected that this can be done exactly in most cases; when S=S_0 is such that an exact evaluation of the corresponding \mathcal{Z}=\mathcal{Z}_0 is possible, we are looking at what is called a “free theory.” The goal then is to select a new action S which is a “perturbation” of S_0, i.e. S is close to S_0 in some sense. One then tries to estimate the new path integral \mathcal{Z} corresponding to S in the limit \hbar \to 0. What is supposed to happen is that in this so-called “semiclassical limit,” there emerges an approximation of the form

\mathcal{Z} \sim \mathcal{Z}_0(1+a_1\hbar + a_2\hbar^2 + a_3\hbar^3+\dots).

The terms a_k\hbar^k are called “quantum terms,” and the sum of all these terms is typically a divergent series, i.e. it has radius of convergence zero. So what is meant by the above is that \mathcal{Z} admits a complete asymptotic expansion: we have

\mathcal{Z} = \mathcal{Z}_0(1 + a_1\hbar + \dots +a_k\hbar^k + o(\hbar^k))

as \hbar \to 0 for any nonnegative integer k. Now the question physicists want to address is how to compute the quantum coefficients a_k. The idea is that these coefficients should all be computable in terms of the “correlation functions” of the free theory, meaning integrals of the form

\int\limits_{\mathrm{Fun}(X,Y)} A(f) e^{-\frac{1}{\hbar}S_0(f)} \mathrm{d}f.

The computation of these correlation functions is typically very intricate, and Feynman diagrams are a graphical bookkeeping device introduced by Feynman in order to help organize such computations. The theory is very well developed from a physical perspective, and is in regular, constant use in contemporary theoretical physics.
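To see the flavor of this in the simplest imaginable case, suppose \mathrm{Fun}(X,Y) is just \mathbb{R} with the Gaussian action S_0(x)=x^2/2, and take A(x)=x^{2n}. The normalized correlation function is then the Gaussian moment \langle x^{2n}\rangle = (2n-1)!!\,\hbar^n, where the double factorial (2n-1)!! counts the pairings of 2n points — the prototype of a Feynman-diagram count. Here is a quick numerical sanity check (my own illustration, not part of the construction above; Python standard library only):

```python
import math

def free_correlation(n, hbar, half_width=10.0, steps=200000):
    """Numerically compute <x^(2n)> = (1/Z_0) * integral of x^(2n) e^{-x^2/(2 hbar)} dx."""
    dx = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        x = -half_width + i * dx
        total += x ** (2 * n) * math.exp(-x * x / (2.0 * hbar))
    return total * dx / math.sqrt(2.0 * math.pi * hbar)

def pairings(n):
    """Number of perfect matchings of 2n points: (2n-1)!! = 1*3*5*...*(2n-1)."""
    return math.prod(range(1, 2 * n, 2))

hbar = 0.5
for n in range(1, 5):
    numeric = free_correlation(n, hbar)
    exact = pairings(n) * hbar ** n  # Wick's formula: <x^(2n)> = (2n-1)!! * hbar^n
    assert abs(numeric - exact) < 1e-6 * max(1.0, exact)
```

The truncation at half_width and the Riemann-sum discretization are crude but harmless here, since the integrand decays like a Gaussian.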

The problem is that none of the above makes sense mathematically: even when one makes very reasonable, realistic choices like X=\mathbb{R}^4 and Y=\mathbb{R}, the corresponding space of functions \mathrm{Fun}(X,Y) is infinite-dimensional, and hence does not admit a Lebesgue measure. This means that the definition of the field as a measure which is absolutely continuous with respect to Lebesgue measure makes no sense, and the path integral

\mathcal{Z} = \int\limits_{\mathrm{Fun}(X,Y)} e^{-\frac{1}{\hbar}S(f)}\mathrm{d}f

is an ill-defined functional integral.

This leaves us with two options. One is to try to make the above general construction rigorous by developing some kind of theory of functional integration that makes mathematical sense. This is an extremely interesting undertaking, but not one that we will discuss. The second option is to choose X,Y in such a way that \mathrm{Fun}(X,Y) is finite-dimensional. The only way this can happen is if one of X,Y is the zero-dimensional manifold \{\bullet\} consisting of a single point. Choosing Y=\{\bullet\} is not advisable, since then \mathrm{Fun}(X,Y) is also zero-dimensional, which leads to a trivial theory. However, if we choose X=\{\bullet\}, then \mathrm{Fun}(X,Y) \simeq Y, and we have something non-trivial, namely a toy model aptly named zero-dimensional quantum field theory.

What we are going to do now is develop the zero-dimensional quantum field theory corresponding to the data

X=\{\bullet\}, Y=\mathbb{R}, S=\text{ a smooth function on }\mathbb{R}.

The path integral corresponding to this quantum field theory is

\mathcal{Z} = \int\limits_{\mathbb{R}} e^{-\frac{1}{\hbar} S(x)} \mathrm{d}x,

which is non-problematic provided we choose S such that the integral converges. We have even previously identified an action which gives us a free theory: if we choose

S_0(x) = \frac{x^2}{2},

the corresponding path integral evaluation is Lord Kelvin’s favorite formula:

\mathcal{Z}_0 = \sqrt{2\pi \hbar}.
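This evaluation is easy to check numerically. Here is a minimal sanity check (my illustration; the function name and discretization are my own choices, and only the Python standard library is used):

```python
import math

def Z0_numeric(hbar, half_width=10.0, steps=100000):
    """Riemann-sum approximation of Z_0 = integral of e^{-x^2/(2 hbar)} dx over R."""
    dx = 2.0 * half_width / steps
    return sum(math.exp(-(-half_width + i * dx) ** 2 / (2.0 * hbar))
               for i in range(steps)) * dx

hbar = 0.5
# compare against Lord Kelvin's formula sqrt(2 pi hbar)
assert abs(Z0_numeric(hbar) - math.sqrt(2.0 * math.pi * hbar)) < 1e-6
```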

And the next part of the construction, the semiclassical limit of the path integral corresponding to a more general action S, is just the classical Laplace method, which we now state formally.

Theorem 1: Let [a,b] \subseteq \mathbb{R} be a (possibly infinite) interval, and let S \colon [a,b] \to \mathbb{R} be a smooth function that attains its global minimum at a unique point c \in (a,b), with S''(c) > 0. Then

\int\limits_{[a,b]} e^{-\frac{1}{\hbar}S(x)} \mathrm{d}x = e^{-\frac{S(c)}{\hbar}} \sqrt{\frac{2\pi\hbar}{S''(c)}}A(\hbar),

where A is a smooth function on [0,\infty) such that A(0)=1.

Theorem 1 tells us that, for any nonnegative integer k, we have

\int\limits_{[a,b]} e^{-\frac{1}{\hbar}S(x)} \mathrm{d}x = e^{-\frac{S(c)}{\hbar}} \sqrt{\frac{2\pi\hbar}{S''(c)}}(1+a_1\hbar + \dots +a_k\hbar^k + o(\hbar^k)),

where the sum in the brackets is just the kth Maclaurin polynomial of A, and the error term follows from Taylor’s theorem (it is important to note that A is smooth, but not necessarily analytic, i.e. its Maclaurin series may be divergent, and even if it converges it need not sum to A(\hbar)). So we are in a position where we have the perfectly well-defined problem of computing the “quantum corrections” a_k. This will be our goal in the next several lectures, and we will see that the Feynman diagrams which accompany this problem are classical combinatorial objects, namely maps on surfaces.
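As a preview, one can already extract the first quantum correction numerically in a concrete example. Take the quartic action S(x) = x^2/2 + x^4 (my choice of example, not one from the lecture). Substituting x = \sqrt{\hbar}\,y gives \mathcal{Z}/\mathcal{Z}_0 = \langle e^{-\hbar y^4}\rangle = 1 - 3\hbar + O(\hbar^2), using the Gaussian moment \langle y^4\rangle = 3, so we expect a_1 = -3. A standard-library sketch:

```python
import math

def Z(hbar, half_width=10.0, steps=200000):
    """Riemann sum for Z = integral of e^{-(x^2/2 + x^4)/hbar} dx, after the
    substitution x = sqrt(hbar)*y, which keeps the peak at unit width."""
    dy = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        y = -half_width + i * dy
        total += math.exp(-0.5 * y * y - hbar * y ** 4)
    return math.sqrt(hbar) * total * dy

hbar = 1e-3
Z0 = math.sqrt(2.0 * math.pi * hbar)
a1_estimate = (Z(hbar) / Z0 - 1.0) / hbar
# first quantum correction for this action should be a_1 = -3
assert abs(a1_estimate + 3.0) < 0.1
```

The estimate differs from -3 by roughly a_2\hbar, which is exactly the behavior the asymptotic expansion predicts.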
