Math 262A: Lecture 9

This week, we will discuss a remarkable law of nature discovered by George Polya. Consider a particle situated at a given site of the lattice \mathbb{Z}^d. At each tick of the clock, the particle makes a random jump to a neighboring lattice site, with equal probability of jumping in any direction. In other words, the particle is executing simple random walk on \mathbb{Z}^d. A random walk is said to be recurrent if it returns to its initial position with probability one; otherwise, it is said to be transient.

Theorem (Polya): Simple random walk on \mathbb{Z}^d is recurrent for d=1,2 and transient for d \geq 3.
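Before proving the theorem, it is amusing to watch it happen numerically. Here is a quick Monte Carlo sketch in Python (the function name and parameters are mine, and since we only run for a finite number of steps, we are estimating a lower bound on the return probability):

```python
import random

def return_within(d, steps, trials, seed=0):
    """Fraction of simple random walks on Z^d that revisit the origin
    within `steps` steps (a lower bound on the return probability)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(steps):
            axis = rng.randrange(d)          # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1)) # step +1 or -1 along it
            if not any(pos):                 # back at the origin
                hits += 1
                break
    return hits / trials

low_d = return_within(1, 200, 2000)   # recurrent: estimate close to 1
high_d = return_within(3, 200, 2000)  # transient: visibly below 1
```

With a few thousand trials the d=1 estimate sits near 1, while the d=3 estimate is bounded well away from 1 (the true return probability in d=3 is roughly 0.34).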

Many proofs of this theorem are known, and the extent to which any two such proofs are different is a perennial subject of debate. In this lecture and the next, we will present the proof given here, which combines a number of the techniques we have so far encountered in this course (exponential generating functions, Laplace method), and introduces new ones that we will need going forward (ordinary generating functions, Borel transform). We will use all of these new methods when we begin our discussion of Feynman diagrams next week.

In case you are frustrated with the admittedly slow pace of the course so far and want to know what is planned for its remainder, here is a rough outline. First I want to discuss Feynman diagrams for zero-dimensional scalar-valued QFTs, which are combinatorial maps. Then I would like to discuss Feynman diagrams for zero-dimensional matrix-valued QFTs, which are again maps on surfaces but with more of their topological structure visible. After this I want to look at a framework which subsumes both of these, namely Feynman diagrams for zero-dimensional QFTs with values in a finite-dimensional von Neumann algebra. Finally, I would like to discuss the combinatorics of zero-dimensional QFTs with values in a matrix group, a situation which generalizes the method of stationary phase and for which the appropriate Feynman diagrams seem to be branched covers of compact Riemann surfaces.

Back to Polya’s theorem. For n \in \mathbb{N}, let E_n denote the event that simple random walk on \mathbb{Z}^d returns to its initial position for the first time after n steps. It is convenient to set E_0=\emptyset, corresponding to the fact that the particle is tautologically situated at its initial position at time zero and this should not count as a return. Write p_n for the probability of E_n. The events E_n are mutually exclusive for different values of n, and

E=\bigsqcup\limits_{n=0}^\infty E_n

is the event that the particle returns to initial position after finitely many steps. We would like to calculate the probability

p= \sum\limits_{n=0}^\infty p_n

that E occurs.

Let us clarify that we are viewing \mathbb{Z}^d as a graph, whose vertices are the points of \mathbb{Z}^d with two vertices being adjacent if and only if they are unit Euclidean distance apart. A loop on \mathbb{Z}^d is then just a closed walk in the usual sense of graph theory. We consider the set of loops based at an arbitrary but fixed vertex of \mathbb{Z}^d, and denote by \ell_n the number of length n loops based at this point. It is convenient to consider the base point itself as a loop of length zero, the trivial loop, so that \ell_0=1.

A loop is said to be indecomposable if it is not the concatenation of two loops. Let r_n denote the number of length n indecomposable loops based at the given point, with the convention that the trivial loop is not indecomposable so that r_0=0 (this is like excluding 1 from the primes).

Proposition 1: For any n \geq 1, we have

\ell_n = \sum\limits_{k=0}^n r_k \ell_{n-k}.

Proof: Every loop of length n \geq 1 decomposes uniquely as an indecomposable loop of some length k \geq 1 (its initial segment up to the first return to the base point), followed by a (possibly trivial) loop of length n-k.
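If you like, you can confirm Proposition 1 by brute force before believing the one-line proof. Here is a small Python check (written for d=2; the function name is mine) that counts loops and indecomposable loops directly by enumerating all short walks:

```python
from itertools import product

# Steps of the simple walk on Z^2.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def loop_counts(n_max):
    """Brute-force counts (ell_n, r_n) of loops / indecomposable loops
    on Z^2 based at the origin, for 0 <= n <= n_max."""
    ell = [1] + [0] * n_max   # ell_0 = 1: the trivial loop
    r = [0] * (n_max + 1)     # r_0 = 0 by convention
    for n in range(1, n_max + 1):
        for walk in product(STEPS, repeat=n):
            x = y = 0
            returns = []
            for i, (dx, dy) in enumerate(walk, start=1):
                x += dx; y += dy
                if x == 0 and y == 0:
                    returns.append(i)
            if returns and returns[-1] == n:   # it is a loop
                ell[n] += 1
                if returns == [n]:             # first return only at the end
                    r[n] += 1
    return ell, r

ell, r = loop_counts(6)
# Check ell_n = sum_k r_k ell_{n-k} for each n >= 1.
for n in range(1, 7):
    assert ell[n] == sum(r[k] * ell[n - k] for k in range(n + 1))
```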


If we divide the formula in Proposition 1 by (2d)^n, which is the total number of n-step walks on \mathbb{Z}^d beginning at the given base point, we get the recursion

q_n = \sum\limits_{k=0}^n p_k q_{n-k}, \quad n \geq 1,

where p_n is the probability of a first return at time n as above, and q_n is the probability that the particle is situated at its initial position at time n.

Exercise 1: Which is bigger, p_n or q_n? (Hint: this is a trick question whose purpose is to make sure you are clear on the notation).
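As a further check on the notation, here is a short Python computation (variable names mine) that solves the recursion for the first-return probabilities p_n in dimension d=1, where q_{2k} = {2k \choose k}/4^k, and compares with the classical closed form p_{2k} = {2k \choose k}/((2k-1)4^k):

```python
from math import comb

N = 12
# q_n for d=1: the walk is at the origin at time n with probability
# C(n, n/2)/2^n for even n, and probability 0 for odd n.
q = [comb(n, n // 2) / 2**n if n % 2 == 0 else 0.0 for n in range(N + 1)]

p = [0.0] * (N + 1)   # p_0 = 0 by convention
for n in range(1, N + 1):
    # Solve q_n = sum_{k=1}^n p_k q_{n-k} for p_n, using q_0 = 1.
    p[n] = q[n] - sum(p[k] * q[n - k] for k in range(1, n))

# Known first-return probabilities for d=1: p_{2k} = C(2k,k)/((2k-1) 4^k).
for k in range(1, N // 2 + 1):
    exact = comb(2 * k, k) / ((2 * k - 1) * 4**k)
    assert abs(p[2 * k] - exact) < 1e-12
```

In particular p_2 = 1/2, p_4 = 1/8, p_6 = 1/16, which already suggests that in d=1 the p_n sum to 1.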

We now consider the ordinary generating functions of the sequences (p_n)_{n=0}^\infty and (q_n)_{n=0}^\infty, which by definition are the formal power series

P(z) = \sum\limits_{n=0}^\infty p_nz^n \quad\text{ and }\quad Q(z) = \sum\limits_{n=0}^\infty q_nz^n.

The product of these generating functions is

P(z)Q(z) =  \sum\limits_{n=0}^\infty \left(\sum\limits_{k=0}^n p_kq_{n-k}\right)z^n = \sum\limits_{n=1}^\infty q_n z^n = Q(z)-1,

where we used the fact that p_0=0 together with the recursion from Proposition 1. Now, each of the series P(z) and Q(z) has radius of convergence at least 1, because each has coefficients of absolute value at most 1, and therefore we may consider the equality

P(z)Q(z) = Q(z)-1

as an identity in the algebra of analytic functions on the open unit disc in \mathbb{C}. Since the coefficients of Q(z) are nonnegative and q_0=1, the function Q(z) is bounded below by 1 on the interval [0,1), and we thus have

P(z) = 1- \frac{1}{Q(z)}, \quad z \in [0,1).
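In dimension d=1 this identity can be checked numerically, since there Q(z) has the closed form 1/\sqrt{1-z^2} (sum the central binomial series). The following Python sketch (variable names mine) verifies the identity at a sample point using truncated series:

```python
from math import comb, sqrt

N, z = 40, 0.5
# d=1 return probabilities: q_n = C(n, n/2)/2^n for even n, else 0.
q = [comb(n, n // 2) / 2**n if n % 2 == 0 else 0.0 for n in range(N + 1)]
p = [0.0] * (N + 1)
for n in range(1, N + 1):
    # first-return probabilities from the recursion q_n = sum p_k q_{n-k}
    p[n] = q[n] - sum(p[k] * q[n - k] for k in range(1, n))

P = sum(p[n] * z**n for n in range(N + 1))   # truncated P(z)
Q = sum(q[n] * z**n for n in range(N + 1))   # truncated Q(z)

# For d=1 the series Q(z) sums to 1/sqrt(1-z^2),
# and P(z) should match 1 - 1/Q(z).
assert abs(Q - 1 / sqrt(1 - z**2)) < 1e-9
assert abs(P - (1 - 1 / Q)) < 1e-9
```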

Now, since

P(1) = \sum_{n=0}^\infty p_n = p,

we can apply Abel’s power series theorem to obtain the following formula for the probability p that we are trying to calculate:

p = \lim\limits_{\substack{z \to 1 \\ z \in [0,1)}} P(z) = 1 - \frac{1}{\lim\limits_{\substack{z \to 1 \\ z \in [0,1)}}Q(z)}.

So, everything hinges on the limiting behavior of Q(z) as z approaches 1 through positive reals: we have either

\lim\limits_{\substack{z \to 1 \\ z \in [0,1)}}Q(z) = \infty,

in which case p=1 (recurrence), or

\lim\limits_{\substack{z \to 1 \\ z \in [0,1)}}Q(z) < \infty,

in which case p<1 (transience).

In order to determine which of the two limiting behaviors above Q(z) actually exhibits, we need some kind of tractable expression for this function. Combinatorially, this amounts to asking what can be said about the loop generating function

L(z) = \sum\limits_{n=0}^\infty \ell_n z^n

since we have Q(z) = L(\frac{z}{2d}) (Gian-Carlo Rota is said to have proclaimed that “probability is just combinatorics divided by n,” and that is exactly what is going on here). While there is not much that we can say about the ordinary generating function L(z) for lattice loops, it turns out that the corresponding exponential generating function,

E(z) = \sum\limits_{n=0}^\infty \ell_n \frac{z^n}{n!},

is analyzable. This is because of the basic features of exponential generating functions, which we discussed back in Lecture 3.

Let’s recall how this works. In this paragraph it is important to keep the dimension d of \mathbb{Z}^d in mind, so we write E_d(z) for the exponential generating function of the sequence (\ell_n^{(d)})_{n=0}^\infty of counts of length n loops on \mathbb{Z}^d. To be concrete, let us consider what is going on for d=2. A walk on \mathbb{Z}^2 is a sequence of steps, each of which is either horizontal or vertical, and so it is in effect a shuffle of two walks on \mathbb{Z}^1, one corresponding to horizontal steps and the other to vertical steps. Thus, a loop on \mathbb{Z}^2 is a shuffle of two loops on \mathbb{Z}^1. More quantitatively, we have

\ell_n^{(2)} = \sum\limits_{k=0}^n {n \choose k}\ell_k^{(1)}\ell_{n-k}^{(1)},

since to build a length n loop on \mathbb{Z}^2 we must choose a number k corresponding to the number of horizontal steps in the loop, then choose a cardinality k subset of \{1,\dots,n\} corresponding to the times at which horizontal steps occur, and finally choose one of the \ell_k^{(1)} one-dimensional k-step loops to build on the chosen subset. The algebraic translation of this combinatorial reasoning is the generating function identity

E_2(z) = E_1(z)E_1(z).

Exactly the same reasoning applies in any dimension d: we have

E_d(z) = E_1(z)^d.
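The product identity can be tested directly against brute-force loop counts on \mathbb{Z}^2. Here is a minimal Python check (function names mine), using the fact, derived just below, that \ell_{2k}^{(1)} = {2k \choose k}:

```python
from itertools import product
from math import comb

def ell1(n):
    """Number of n-step loops on Z^1: choose when the +1 steps happen."""
    return comb(n, n // 2) if n % 2 == 0 else 0

def ell2_bruteforce(n):
    """Count n-step loops on Z^2 by enumerating all 4^n walks."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    count = 0
    for walk in product(steps, repeat=n):
        if sum(dx for dx, _ in walk) == 0 and sum(dy for _, dy in walk) == 0:
            count += 1
    return count

# ell_n^{(2)} should equal the binomial convolution of ell^{(1)} with itself,
# which is exactly the coefficient identity behind E_2(z) = E_1(z)^2.
for n in range(0, 7):
    shuffle = sum(comb(n, k) * ell1(k) * ell1(n - k) for k in range(n + 1))
    assert ell2_bruteforce(n) == shuffle
```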

The above makes it seem at least plausible that we will be able to achieve some fairly explicit understanding of the exponential loop generating function E_d(z), since we have reduced to the case d=1 which ought to be straightforward. Indeed, we clearly have

E_1(z) = \sum\limits_{n=0}^\infty \ell_n^{(1)} \frac{z^n}{n!} = \sum\limits_{k=0}^\infty \ell_{2k}^{(1)} \frac{z^{2k}}{(2k)!},

since any loop on \mathbb{Z}^1 must consist of an even number of steps, half of which are positive and half of which are negative. Moreover, the number of loops is just the number of ways to choose the times at which the positive steps occur, so that

E_1(z) = \sum\limits_{k=0}^\infty {2k \choose k} \frac{z^{2k}}{(2k)!} = \sum\limits_{k=0}^\infty \frac{z^{2k}}{k!k!},

which is a nice, explicit power series. The question now is: can we sum this series? What this really means is: can we find the function whose Maclaurin series is E_1(z), and if so do we know anything useful about this function? Luckily, the answer turns out to be an emphatic yes: E_1(z) is a Bessel function.

The Bessel functions form a large class of special functions which crop up everywhere in mathematics, physics, and engineering, usually as solutions to physically meaningful ordinary differential equations. The particular class of Bessel functions relevant for us are called the modified Bessel functions of the first kind, and typically denoted I_\alpha(z), with \alpha being a parameter. The function I_\alpha(z) is one of two linearly independent solutions to the second order ODE

z^2 F''(z) + zF'(z) - (z^2+\alpha^2)F(z) = 0,

which is known as the modified Bessel equation (the other solution is of course the modified Bessel function of the second kind). Nineteenth century mathematicians found two ways to describe I_\alpha(z): as a power series,

I_\alpha(2z) = \sum\limits_{k=0}^\infty \frac{z^{2k+\alpha}}{k!(k+\alpha)!},

and as an integral,

I_\alpha(2z) = \frac{z^\alpha}{\sqrt{\pi}(\alpha-\frac{1}{2})!}\int_0^\pi e^{2z\cos\theta} (\sin \theta)^{2\alpha}\mathrm{d}\theta.
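The two representations can be checked against one another numerically. Here is a small Python sketch for \alpha = 0, where the integral representation reads I_0(2z) = \frac{1}{\pi}\int_0^\pi e^{2z\cos\theta}\mathrm{d}\theta (function names mine):

```python
from math import cos, exp, factorial, pi

def I0_series(x, terms=60):
    """I_0(x) via its power series sum_k (x/2)^{2k} / (k!)^2."""
    return sum((x / 2) ** (2 * k) / factorial(k) ** 2 for k in range(terms))

def I0_integral(x, m=2000):
    """I_0(x) = (1/pi) int_0^pi e^{x cos(theta)} dtheta, trapezoid rule."""
    h = pi / m
    vals = [exp(x * cos(j * h)) for j in range(m + 1)]
    return (h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)) / pi

# The two classical descriptions of I_0 agree.
for x in (0.5, 1.0, 3.0):
    assert abs(I0_series(x) - I0_integral(x)) < 1e-8
```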

From the power series representation, we see that we have

E_1(z) = I_0(2z),

so that we now know

E_d(z) = I_0(2z)^d.

The above connection with Bessel functions leaves us in good shape to analyze the exponential loop generating function. However, this isn’t what we actually want — remember that we want to analyze the ordinary loop generating function in order to solve our problem. So we need a way to convert exponential generating functions into ordinary generating functions. This can be done using an integral transform called the Borel transform, which is in fact just an application of Euler’s integral representation of the factorial,

n! = \int_0^\infty t^n e^{-t} \mathrm{d}t,

which we discussed way back in Lecture 1. Indeed, if

E(z) = \sum\limits_{n=0}^\infty a_n \frac{z^n}{n!}

is the exponential generating function of the sequence (a_n)_{n=0}^\infty, then its Borel transform

\mathcal{B}E(z) = \int_0^\infty E(tz) e^{-t} \mathrm{d}t = \sum\limits_{n=0}^\infty a_n \frac{z^n}{n!} \int_0^\infty t^ne^{-t} \mathrm{d}t = \sum_{n=0}^\infty a_nz^n

is the ordinary generating function of this same sequence.
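To see the Borel transform in action on the simplest possible example, take a_n = 1, so that E(z) = e^z and \mathcal{B}E(z) should come out to \frac{1}{1-z}. A quick numerical check in Python (function name mine; the integral is truncated at a finite T, which is harmless for z < 1 since the integrand decays like e^{-(1-z)t}):

```python
from math import exp

def borel_of_exp(z, T=80.0, m=40000):
    """Numerically compute int_0^inf e^{tz} e^{-t} dt by truncating at T
    and applying the trapezoid rule (valid for z < 1)."""
    h = T / m
    f = [exp(j * h * (z - 1)) for j in range(m + 1)]
    return h * (f[0] / 2 + sum(f[1:-1]) + f[-1] / 2)

# The EGF of the sequence a_n = 1 is e^z; its Borel transform should be
# the ordinary generating function 1/(1-z).
for z in (0.0, 0.3, 0.6):
    assert abs(borel_of_exp(z) - 1 / (1 - z)) < 1e-6
```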

Now it is clear what must be done: we have to Borel transform the exponential loop generating function, which is a Bessel function, in order to get to the ordinary loop generating function. That is, we have

\mathcal{B}E_d(z) = \int\limits_0^\infty I_0(2tz)^d e^{-t} \mathrm{d}t,

so that, recalling from above that Q(z) = L(\frac{z}{2d}), we finally have the representation

Q(z) = \int\limits_0^\infty I_0(\frac{tz}{d})^d e^{-t} \mathrm{d}t

for the function whose limit behavior we want to analyze.
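This representation can itself be tested numerically in dimension d=1, where Q(z) = 1/\sqrt{1-z^2} in closed form. A Python sketch (function names mine; the Bessel function is summed from its series with an iteratively updated term, and the integral is truncated):

```python
from math import exp, sqrt

def I0(x, terms=200):
    """I_0(x) by summing its series with a running term, which avoids
    overflow in (x/2)^{2k} for moderate x."""
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / 2) ** 2 / k**2
        total += term
        if term < 1e-17 * total:
            break
    return total

def Q_integral(z, d, T=120.0, m=6000):
    """Q(z) = int_0^inf I_0(tz/d)^d e^{-t} dt, trapezoid rule on [0, T]."""
    h = T / m
    f = [I0(j * h * z / d) ** d * exp(-j * h) for j in range(m + 1)]
    return h * (f[0] / 2 + sum(f[1:-1]) + f[-1] / 2)

# For d = 1 we know Q(z) = 1/sqrt(1 - z^2) in closed form.
for z in (0.2, 0.5):
    assert abs(Q_integral(z, 1) - 1 / sqrt(1 - z**2)) < 1e-3
```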

Recall that what we want to do is analyze the z\to 1 behavior of

Q(z) = \int\limits_0^\infty I_0(\frac{tz}{d})^d e^{-t} \mathrm{d}t,

and determine whether the limit is finite or infinite, i.e. whether the above integral converges or diverges at z=1. Since all we care about is convergence or divergence, it suffices to consider the tail of the integral,

\int\limits_N^\infty I_0(\frac{tz}{d})^d e^{-t} \mathrm{d}t,

with N large. Now because the integrand here is a Bessel function, it is itself an integral:

I_0(\frac{2tz}{d})^d = \int_0^\pi e^{tf(\theta)} \mathrm{d}\theta,

where f(\theta) = \frac{z}{d}\cos \theta and t \geq N is large. We are thus perfectly positioned for an application of the Laplace approximation (end point case),

\int_0^\pi e^{tf(\theta)} \mathrm{d}\theta \sim e^{tf(0)} \sqrt{\frac{\pi}{2t|f''(0)|}},\ t \to \infty.
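The endpoint Laplace approximation is easy to test numerically: taking f(\theta) = \cos\theta, the left hand side equals \pi I_0(t), and the approximation predicts e^t\sqrt{\pi/(2t)}, with relative error of order 1/t. A quick Python check (function names mine):

```python
from math import cos, exp, pi, sqrt

def lhs(t, m=20000):
    """Trapezoid approximation of int_0^pi e^{t cos(theta)} dtheta."""
    h = pi / m
    vals = [exp(t * cos(j * h)) for j in range(m + 1)]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)

def laplace(t):
    """Endpoint Laplace approximation with f = cos: e^t sqrt(pi/(2t))."""
    return exp(t) * sqrt(pi / (2 * t))

# The ratio should approach 1 as t grows (the error is O(1/t)).
r50, r200 = lhs(50) / laplace(50), lhs(200) / laplace(200)
assert abs(r50 - 1) < 0.01
assert abs(r200 - 1) < 0.005
assert abs(r200 - 1) < abs(r50 - 1)
```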

Wading back through the formulas, what we have found is that

I_0(\frac{tz}{d})^d e^{-t} \sim \text{const.}\, e^{t(z-1)}(tz)^{-\frac{d}{2}},\ t \to \infty.

Now, by the monotone convergence theorem we have

\lim\limits_{\substack{z \to 1 \\ z \in [0,1)}} \int_N^\infty e^{t(z-1)}(tz)^{-\frac{d}{2}} \mathrm{d}t = \int_N^\infty \lim\limits_{\substack{z \to 1 \\ z \in [0,1)}} e^{t(z-1)}(tz)^{-\frac{d}{2}} \mathrm{d}t = \int_N^\infty t^{-\frac{d}{2}}\mathrm{d}t,

an integral which diverges for d=1,2 and converges for d \geq 3. Polya’s theorem is proved. If you’re willing to retrace our steps a bit, you will find that you have a reasonable idea of the mechanism responsible for recurrence of simple random walk in dimensions d=1,2 and transience in dimensions d\geq 3. Indeed, if you want to get your hands dirty, you can even try to estimate p by staring hard at the above argument, in order to get a feeling for how likely it is that simple random walk will escape to infinity.

I cannot resist mentioning that Bessel functions are intimately related to the enumeration of permutations with bounded decreasing subsequence length, which was our topic last week. The relation is via the following beautiful identity due to Ira Gessel.

Theorem (Gessel): For any d \in \mathbb{N}, we have

\sum\limits_{n=0}^\infty (n!)_d \frac{z^{2n}}{n!n!} = \det [I_{i-j}(2z)]_{i,j=1}^d,

where (n!)_d is the number of permutations in \mathrm{S}(n) with no decreasing subsequence of length d+1.

If you would like to see a proof of this theorem together with various pointers to the literature, I can refer you to this paper. In fact, since there will be no lecture post on Thursday, you can substitute this reading assignment for the missing lecture.

