Math 262A: Lecture 18

The main point of this topics course (if there was one) is that there is a surprising and useful relationship between the asymptotics of integrals (a la Laplace) and the combinatorics of topological maps (Feynman diagrams). There are a few levels of sophistication to this relationship, which may be thought of as corresponding to zero-dimensional scalar, vector, and matrix-valued quantum field theories. We will try to summarize this relationship now from a high-level perspective, meaning that the details might be not quite right, but the basic idea is more or less sound.

Formally, for the scalar theory the connection is something like

\int\limits_\mathbb{R} e^{-N(\frac{x^2}{2} + \sum_{d=3}^\infty p_d\frac{x^d}{d})}\mathrm{d}x \sim \sum\limits_\Gamma \frac{1}{|\mathrm{Aut} \Gamma|} N^{v(\Gamma)-e(\Gamma)} p_\Gamma, \quad N \to \infty

where the sum is over maps in which every vertex has degree at least 3, the numbers v(\Gamma) and e(\Gamma) are the number of vertices and edges of \Gamma, respectively, and the “amplitude” p_\Gamma is the product of the coefficients p_d from the perturbation of the Gaussian density on the integral side according to the vertex degree sequence of \Gamma,

p_\Gamma = \prod\limits_{V \in \Gamma} p_{\mathrm{deg}V}.
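To make the bookkeeping concrete, here is a toy sketch (mine, not from the lecture) that records the amplitude p_\Gamma of a map as a monomial in the coefficients p_d, given its vertex degree sequence:

```python
from collections import Counter

def amplitude(degrees):
    # p_Gamma = prod_{V in Gamma} p_{deg V}, returned as a monomial
    # encoded as {d: multiplicity of p_d}
    return dict(Counter(degrees))

# a map with two degree-3 vertices and one degree-4 vertex has p_Gamma = p_3^2 p_4
print(amplitude([3, 3, 4]))  # {3: 2, 4: 1}
```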

Now, we have that

v(\Gamma)-e(\Gamma) = c(\Gamma)-r(\Gamma),

where r(\Gamma) is the circuit rank of \Gamma and c(\Gamma) is its number of connected components. This means that we can organize the asymptotic expansion of the integral as a generating function for maps sorted by their circuit rank:
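For a finite graph given by vertex and edge lists, the circuit rank r = e - v + c is easy to compute; here is a minimal pure-Python sketch (the function name and interface are mine), using union-find to count connected components:

```python
# Circuit rank r(G) = e(G) - v(G) + c(G), with c(G) the number of
# connected components, computed via union-find. Self-loops are allowed
# (each loop contributes one independent cycle).

def circuit_rank(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    components = len(vertices)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1

    return len(edges) - len(vertices) + components

# A triangle has one independent cycle; a tree has none.
print(circuit_rank([1, 2, 3], [(1, 2), (2, 3), (3, 1)]))     # 1
print(circuit_rank([1, 2, 3, 4], [(1, 2), (2, 3), (2, 4)]))  # 0
```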

\int\limits_\mathbb{R} e^{-N(\frac{x^2}{2} + \sum\limits_{d=3}^\infty p_d\frac{x^d}{d})} \mathrm{d}x \sim \sum\limits_{k=0}^\infty \frac{1}{N^k}\sum\limits_{\substack{\Gamma \\ r(\Gamma)=k+1}} \frac{1}{|\mathrm{Aut}\Gamma|} p_\Gamma.

This is nice, and it allows us to compute graphically the coefficients in the asymptotic expansions of various integrals, for example the integral which figures in Stirling’s formula. Physicists call it the “loop expansion.” Importantly, it is actually true in the analytic (as opposed to just formal) sense, meaning that if you truncate the (divergent) expansion at order N^{-k} then the error is o(N^{-k}), because of the Laplace method, which shows that the integral in question really is concentrated in a small neighborhood of the minimizer of the action in the exponent. If you take logs, meaning that you are only interested in the order of magnitude of the integral you started with, then the loop expansion only involves connected graphs, which reduces the complexity quite a bit. An unfortunate aspect of the loop expansion is that it misses out on the fact that the relevant diagrams are not just combinatorial objects — graphs — but actually graphs embedded in surfaces. This expansion does not see the surface, meaning that it does not encode its topology.
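Here is a numerical illustration of the analytic validity of the loop expansion (my own sketch, not from the lecture), for the quartic potential S(x) = x^2/2 + g x^4/4 with g = p_4 > 0. The only map contributing at order 1/N has a single degree-4 vertex, and the standard one-loop computation (Wick’s theorem with E[Y^4] = 3 for a standard Gaussian Y) gives I(N) ≈ \sqrt{2\pi/N}\,(1 - 3g/(4N)):

```python
# One-loop check for S(x) = x**2/2 + g*x**4/4:
#     I(N) = integral of exp(-N*S(x)) dx
#          ~ sqrt(2*pi/N) * (1 - 3*g/(4*N) + O(1/N^2)).
import math

def I(N, g, a=1.0, n=200_000):
    # trapezoidal rule on [-a, a]; for N = 100 the integrand is
    # concentrated near 0 and is negligible at |x| = 1
    h = 2 * a / n
    total = math.exp(-N * (a * a / 2 + g * a**4 / 4))  # 0.5*(f(-a)+f(a)), f even
    for i in range(1, n):
        x = -a + i * h
        total += math.exp(-N * (x * x / 2 + g * x**4 / 4))
    return total * h

N, g = 100.0, 0.1
one_loop = math.sqrt(2 * math.pi / N) * (1 - 3 * g / (4 * N))
print(I(N, g) / one_loop)  # very close to 1: the discrepancy is O(1/N^2)
```

The next term in the expansion is of order 1/N^2, so the printed ratio differs from 1 by roughly 10^{-6} at N = 100.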

We saw in Lecture 17 that it is indeed possible to get topological information by changing the space over which we integrate from \mathbb{R} to its noncommutative counterpart \mathrm{H}(N), the Euclidean space of N \times N Hermitian matrices. Formally, the expansion from the scalar case is modified to

\int\limits_{\mathrm{H}(N)} e^{-\mathrm{Tr} N(\frac{x^2}{2} + \sum_{d=3}^\infty p_d\frac{x^d}{d})}\mathrm{d}x \sim \sum\limits_\Gamma \frac{1}{|\mathrm{Aut} \Gamma|} N^{v(\Gamma)-e(\Gamma)+f(\Gamma)} p_\Gamma, \quad N \to \infty,

where f(\Gamma) is the number of faces of the topological map \Gamma. Using the Euler formula,

v(\Gamma)-e(\Gamma)+f(\Gamma) = 2-2g(\Gamma),

we can reorganize the formal N \to \infty expansion of the above matrix integral into not a loop expansion, but a genus expansion,

\int\limits_{\mathrm{H}(N)} e^{-N\mathrm{Tr}(\frac{x^2}{2} + \sum_{d=3}^\infty p_d\frac{x^d}{d})}\mathrm{d}x \sim \sum\limits_{k=0}^\infty N^{2-2k}\sum\limits_{\substack{\Gamma \\ g(\Gamma)=k}} \frac{1}{|\mathrm{Aut} \Gamma|}  p_\Gamma, \quad N \to \infty,

where now the inner sum is over maps with vertices of degree at least three of a fixed topological genus. This is even nicer than the loop expansion, because it sees the topology of maps (genus) as opposed to just their combinatorics (circuit rank), but it is much harder to prove that it is correct analytically, meaning that stopping the (divergent) expansion after k terms leaves an o(N^{-2k}) error. The basic problem is that the Laplace principle doesn’t work: the principle relies on the fact that the contribution to an integral over \mathbb{R} with integrand of the form e^{-NS(x)} is dominated by a small neighborhood of the minimum of S(x), but if we swap out \mathbb{R} for \mathrm{H}(N), then a box in this N^2-dimensional Euclidean space has volume e^{O(N^2)}, so that contributions to the integral from Lebesgue measure are on the same scale as contributions from the integrand itself.
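As a quick sanity check on the Euler formula v - e + f = 2 - 2g underlying this reorganization, here is a trivial sketch (mine) that recovers the genus from the vertex, edge, and face counts of a connected map:

```python
# Genus of a connected map on a closed orientable surface, recovered from
#     v - e + f = 2 - 2g  (Euler's formula).
def genus(v, e, f):
    chi = v - e + f  # Euler characteristic
    return (2 - chi) // 2

print(genus(8, 12, 6))  # the cube drawn on the sphere: genus 0
print(genus(1, 2, 1))   # one vertex, two loops on the torus: genus 1
```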

This is not at all an easy problem to deal with, but I can at least give you an idea of how analysts have gotten past this obstruction. The starting point is a classical result due to Hermann Weyl: the subset of \mathrm{H}(N) consisting of Hermitian matrices with a given list of eigenvalues,

\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N,

has Euclidean volume

C_N \prod\limits_{1 \leq i < j \leq N} (\lambda_i-\lambda_j)^2,

where C_N is a constant depending only on N. Now, each such isospectral set is, by the Spectral Theorem, an orbit of the unitary group \mathrm{U}(N) acting on \mathrm{H}(N) by conjugation, i.e. a set of the form

\mathcal{O}_\lambda = \{U\mathrm{diag}(\lambda_1,\dots,\lambda_N)U^{-1} \colon U \in \mathrm{U}(N)\}.

Incidentally, the orbits \mathcal{O}_\lambda are symplectic manifolds, but we don’t need to pay attention to this here. The point that is relevant for us is the Weyl integration formula, which is just the change of variables formula resulting from Weyl’s volume computation: if f(X) is a function on \mathrm{H}(N) which is invariant under the conjugation action of \mathrm{U}(N), then

\int\limits_{\mathrm{H}(N)} f(X) \mathrm{d}X = c_N\int\limits_{\mathbb{W}^N} f(\lambda_1,\dots,\lambda_N) \prod_{i<j} (\lambda_i-\lambda_j)^2 \mathrm{d}\lambda,

where the integration is with respect to Lebesgue measure on the Weyl chamber \mathbb{W}^N \subset \mathbb{R}^N, i.e. the convex set

\mathbb{W}^N = \{(\lambda_1,\dots,\lambda_N) \in \mathbb{R}^N \colon \lambda_1 > \dots > \lambda_N\},

and f(\lambda_1,\dots,\lambda_N) is, by abuse of notation, the value of the original function f on any Hermitian matrix with eigenvalues \lambda_1,\dots,\lambda_N. In particular, a matrix integral of the Laplace form
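The N = 2 case is small enough to check numerically. The sketch below (my own; it uses the convention \mathrm{d}X = \mathrm{d}a\,\mathrm{d}b\,\mathrm{d}x\,\mathrm{d}y for X = \begin{pmatrix} a & x+iy \\ x-iy & b \end{pmatrix}, and exploits the fact that for the invariant test functions f_t(X) = e^{-t\,\mathrm{Tr}X^2/2} everything factorizes into 1D Gaussian integrals) verifies that the ratio of the two sides of the Weyl integration formula is the same constant c_2 for different t:

```python
# Check the Weyl integration formula for N = 2: the ratio
#     (integral of f over H(2)) / (integral of f(lambda)*(l1-l2)^2 over l1 > l2)
# should be a constant c_2 independent of the invariant test function.
# We use f_t(X) = exp(-t*Tr(X^2)/2); with X = [[a, x+iy], [x-iy, b]],
# Tr X^2 = a^2 + b^2 + 2x^2 + 2y^2, so both sides reduce to 1D integrals.
import math

def trap(f, a=10.0, n=20_000):
    # trapezoidal rule for the integral of f over [-a, a] (tails negligible)
    h = 2 * a / n
    return h * (sum(f(-a + i * h) for i in range(1, n))
                + 0.5 * (f(-a) + f(a)))

def matrix_side(t):
    g1 = trap(lambda u: math.exp(-t * u * u / 2))  # a and b directions
    g2 = trap(lambda u: math.exp(-t * u * u))      # x and y directions
    return g1 * g1 * g2 * g2

def eigen_side(t):
    # expand (l1 - l2)^2 = l1^2 + l2^2 - 2*l1*l2; the cross term vanishes
    # (odd moment), and the chamber integral is half the full-plane one
    m0 = trap(lambda u: math.exp(-t * u * u / 2))
    m2 = trap(lambda u: u * u * math.exp(-t * u * u / 2))
    return m2 * m0

for t in (1.0, 2.0):
    print(matrix_side(t) / eigen_side(t))  # same constant for each t (= pi here)
```

With this normalization of Lebesgue measure the constant comes out to \pi; a different convention for \mathrm{d}X only changes c_2, not the structure of the formula.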

\int\limits_{\mathrm{H}(N)} e^{-N\mathrm{Tr}S(X)} \mathrm{d}X,

can be rewritten via the Weyl integration formula as

c_N\int\limits_{\mathbb{W}^N} e^{-N\sum_{i=1}^N S(\lambda_i)} \prod_{i<j} (\lambda_i-\lambda_j)^2\mathrm{d}\lambda,

and the remaining integral over eigenvalues can yet again be rewritten in the more compelling form

c_N\int\limits_{\mathbb{W}^N} e^{-N^2(\frac{1}{N} \sum_{i=1}^N S(\lambda_i)-\frac{2}{N^2} \sum_{i<j} \log(\lambda_i-\lambda_j))} \mathrm{d}\lambda.

There are a few things to say here. First and foremost, the point of all of this is that we have reduced from integrating over the huge space \mathrm{H}(N) of dimension N^2 to integrating over the N-dimensional set \mathbb{W}^N, which is a massive reduction in degrees of freedom, and in particular contributions from the Lebesgue measure are now order e^N. Second, the integrand is of order e^{-N^2}, so it seems like the Laplace principle may yet be correct in this context: the main contributions to the integrand come from a small neighborhood of the minimizer of the function

\mathcal{S}(\lambda_1,\dots,\lambda_N) = \frac{1}{N} \sum_{i=1}^N S(\lambda_i)-\frac{2}{N^2} \sum_{i<j} \log(\lambda_i-\lambda_j).

Third, we have actually met this action before, back when we looked at the asymptotic enumeration of permutations with restricted decreasing subsequence length: it represents the potential energy of a system of N identical point charges on a wire with logarithmic repulsion (which is indeed the electrostatic repulsion in two dimensions), but confined by the potential well S(\lambda), which was S(\lambda)=\frac{1}{2}\lambda^2 last time we met it, but now can be anything. As N\to \infty, the distribution of these charges will crystallize around the configuration which minimizes the action \mathcal{S}, which in particular means that the empirical distribution \mu_N of these N point charges — i.e. the probability measure on \mathbb{R} which places mass 1/N at each particle — will converge as N \to \infty to a continuous probability measure \mu_\mathcal{S} on \mathbb{R} called the “equilibrium measure” corresponding to S. More precisely, in the language of calculus of variations, we have

\lim_{N\to \infty} \frac{1}{N^2} \log \int\limits_{\mathrm{H}(N)} e^{-N\mathrm{Tr}S(X)} \mathrm{d}X = - \inf\limits_{\mu \in \mathcal{P}(\mathbb{R})} \mathcal{S}(\mu),

where

\mathcal{S}(\mu) = \int\limits_\mathbb{R} S(x)\mu(\mathrm{d}x) - \int\limits_\mathbb{R}\int\limits_\mathbb{R} \log |x-y|\mu(\mathrm{d}x)\mu(\mathrm{d}y).

It is a theorem that for nice enough S, the equilibrium measure always exists and is unique. This is the Laplace principle for matrix integrals. Although it is technically very involved, a fine analysis of the convergence to equilibrium measure shows that under reasonable hypotheses the formal genus expansion above is indeed correct, analytically, as an asymptotic expansion. This means that the Feynman diagrams for matrix integrals really are maps on surfaces. It is interesting that this connection is in fact more typically used to say something about maps, not about integrals: there are independent methods for computing the coefficients in the asymptotic expansion of Hermitian matrix integrals based on orthogonal polynomials, and once these coefficients are found in some independent fashion one has also determined the generating function for maps of a given genus. For example, taking the potential S(x) = \frac{x^2}{2} + p_4\frac{x^4}{4} and calculating the corresponding equilibrium measure, one can obtain an explicit formula for the number of ways to glue a sphere from a given number of squares. A good entry point into all of this is here.
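For the Gaussian potential S(x) = \frac{x^2}{2}, the equilibrium measure is Wigner’s semicircle law, with density \frac{1}{2\pi}\sqrt{4-x^2} on [-2,2], and this is easy to see numerically (a Monte Carlo sketch of mine, using NumPy): the rescaled eigenvalues of a large random Hermitian matrix should have second and fourth moments approaching 1 and 2, the Catalan-number moments of the semicircle.

```python
# Empirical check that the eigenvalues of a large GUE matrix, rescaled by
# 1/sqrt(N), crystallize around the semicircle distribution on [-2, 2]:
# its even moments are the Catalan numbers, so int x^2 dmu = 1, int x^4 dmu = 2.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (A + A.conj().T) / 2                   # GUE matrix, E|X_ij|^2 = 1 off-diagonal
lam = np.linalg.eigvalsh(X) / np.sqrt(N)   # rescaled eigenvalues

print(np.mean(lam**2))  # close to 1 (second moment of the semicircle)
print(np.mean(lam**4))  # close to 2 (fourth moment, the Catalan number C_2)
```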
