This week, we will discuss a remarkable law of nature discovered by George Polya. Consider a particle situated at a given site of the lattice $\mathbb{Z}^d$. At each tick of the clock, the particle makes a random jump to a neighboring lattice site, with equal probability of jumping in any direction. In other words, the particle is executing simple random walk on $\mathbb{Z}^d$. A random walk is said to be *recurrent* if it returns to its initial position with probability one; otherwise, it is said to be *transient*.

**Theorem (Polya):** Simple random walk on $\mathbb{Z}^d$ is recurrent for $d \leq 2$ and transient for $d \geq 3$.

Many proofs of this theorem are known, and the extent to which any two such proofs are different is a perennial subject of debate. In this lecture and the next, we will present the proof given here, which combines a number of the techniques we have so far encountered in this course (exponential generating functions, Laplace method), and introduces new ones that we will need going forward (ordinary generating functions, Borel transform). We will use all of these new methods when we begin our discussion of Feynman diagrams next week.

In case you are frustrated with the admittedly slow pace of the course so far and want to know what is planned for its remainder, here is a rough outline. First I want to discuss Feynman diagrams for zero-dimensional scalar-valued QFTs, which are combinatorial maps. Then I would like to discuss Feynman diagrams for zero-dimensional matrix-valued QFTs, which are again maps on surfaces but with more of their topological structure visible. After this I want to look at a framework which subsumes both of these, namely Feynman diagrams for zero-dimensional QFTs with values in a finite-dimensional von Neumann algebra. Finally, I would like to discuss the combinatorics of zero-dimensional QFTs with values in a matrix group, a situation which generalizes the method of stationary phase and for which the appropriate Feynman diagrams seem to be branched covers of compact Riemann surfaces.

Back to Polya’s theorem. For $n \geq 1$, let $R_n$ denote the event that simple random walk on $\mathbb{Z}^d$ returns to its initial position for the first time after $n$ steps. It is convenient to set $R_0 = \emptyset$, corresponding to the fact that the particle is tautologically situated at its initial position at time zero and this should not count as a return. Write $p_n$ for the probability of $R_n$. The events $R_n$ are mutually exclusive for different values of $n$, and

$$R = \bigsqcup_{n=0}^{\infty} R_n$$

is the event that the particle returns to its initial position after finitely many steps. We would like to calculate the probability


$$P(R) = \sum_{n=0}^{\infty} p_n$$

that $R$ occurs.
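Before doing any mathematics, it is easy to get numerical evidence for Polya's theorem by direct simulation. The following Monte Carlo sketch (illustration only, not part of the argument; all names are mine) estimates the probability of a return within a fixed number of steps, which is a lower bound for $P(R)$:

```python
import random

def estimate_return_probability(d, max_steps=500, trials=1000, seed=1):
    """Estimate P(return to start within max_steps) for SRW on Z^d."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(max_steps):
            axis = rng.randrange(d)          # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1)) # step +1 or -1 along it
            if all(c == 0 for c in pos):
                returns += 1
                break
    return returns / trials

estimates = {d: estimate_return_probability(d) for d in (1, 2, 3)}
# d=1 is near 1; d=2 approaches 1 only very slowly with max_steps;
# d=3 hovers near the known value P(R) ~ 0.34
print(estimates)
```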

Let us clarify that we are viewing $\mathbb{Z}^d$ as a graph, whose vertices are the points of $\mathbb{Z}^d$, with two vertices being adjacent if and only if they are unit Euclidean distance apart. A *loop* on $\mathbb{Z}^d$ is then just a closed walk in the usual sense of graph theory. We consider the set of loops based at an arbitrary but fixed vertex of $\mathbb{Z}^d$, and denote by $\ell_n$ the number of length $n$ loops based at this point. It is convenient to consider the base point itself as a loop of length zero, the *trivial loop*, so that $\ell_0 = 1$.

A loop is said to be *indecomposable* if it is not the concatenation of two nontrivial loops. Let $i_n$ denote the number of length $n$ indecomposable loops based at the given point, with the convention that the trivial loop is not indecomposable, so that $i_0 = 0$ (this is like excluding $1$ from the primes).

**Proposition 1:** For any $n \geq 1$, we have

$$\ell_n = \sum_{k=1}^{n} i_k \ell_{n-k}.$$

*Proof:* Every loop of length $n \geq 1$ consists of an indecomposable loop of some length $1 \leq k \leq n$, followed by a (possibly trivial) loop of length $n - k$.

—Q.E.D.
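Proposition 1 is easy to sanity-check by brute force. The sketch below (illustration only; the function names are mine) counts loops and indecomposable loops on $\mathbb{Z}^2$ by dynamic programming, where an indecomposable loop is one that avoids the base point at all intermediate times, and verifies the convolution identity for small $n$:

```python
from collections import defaultdict

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def loop_counts(n_max):
    """l[n] = number of n-step loops on Z^2 based at the origin."""
    counts = defaultdict(int)
    counts[(0, 0)] = 1
    l = [1]
    for _ in range(n_max):
        nxt = defaultdict(int)
        for (x, y), c in counts.items():
            for dx, dy in STEPS:
                nxt[(x + dx, y + dy)] += c
        counts = nxt
        l.append(counts[(0, 0)])
    return l

def indecomposable_counts(n_max):
    """i[n] = number of n-step loops avoiding the origin at times 1..n-1."""
    i = [0]
    for n in range(1, n_max + 1):
        counts = defaultdict(int)
        counts[(0, 0)] = 1
        for step in range(n):
            nxt = defaultdict(int)
            for (x, y), c in counts.items():
                for dx, dy in STEPS:
                    nx, ny = x + dx, y + dy
                    # forbid intermediate visits to the origin
                    if (nx, ny) == (0, 0) and step < n - 1:
                        continue
                    nxt[(nx, ny)] += c
            counts = nxt
        i.append(counts[(0, 0)])
    return i

N = 8
l = loop_counts(N)
i = indecomposable_counts(N)
for n in range(1, N + 1):
    assert l[n] == sum(i[k] * l[n - k] for k in range(1, n + 1))
print(l)  # [1, 0, 4, 0, 36, 0, 400, 0, 4900] (central binomial coefficients squared)
```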

If we divide the formula in Proposition 1 by $(2d)^n$, which is the total number of $n$-step walks on $\mathbb{Z}^d$ beginning at the given base point, we get the recursion

$$q_n = \sum_{k=1}^{n} p_k q_{n-k},$$

where $p_k = \frac{i_k}{(2d)^k}$ is the probability of a first return at time $k$ as above, and $q_n = \frac{\ell_n}{(2d)^n}$ is the probability that the particle is situated at its initial position at time $n$.
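For $d = 1$ everything is explicit enough to check the recursion with exact arithmetic. The sketch below (illustration only) takes the classical one-dimensional formulas $q_{2n} = \binom{2n}{n}/4^n$ and $p_{2n} = \binom{2n}{n}/((2n-1)4^n)$ as known facts, solves the recursion for the $p$'s, and compares:

```python
from fractions import Fraction
from math import comb

N = 10  # check up to 2N steps

# q_m = P(at origin at time m) for SRW on Z: C(2n,n)/4^n at even times, else 0
q = {0: Fraction(1)}
for m in range(1, 2 * N + 1):
    q[m] = Fraction(comb(m, m // 2), 4 ** (m // 2)) if m % 2 == 0 else Fraction(0)

# solve the recursion q_m = sum_{k=1}^m p_k q_{m-k} for the p's
p = {0: Fraction(0)}
for m in range(1, 2 * N + 1):
    p[m] = q[m] - sum(p[k] * q[m - k] for k in range(1, m))

# compare with the classical closed form p_{2n} = C(2n,n) / ((2n-1) 4^n)
for n in range(1, N + 1):
    assert p[2 * n] == Fraction(comb(2 * n, n), (2 * n - 1) * 4 ** n)
    assert p[2 * n - 1] == 0
print(p[2], p[4], p[6])  # 1/2 1/8 1/16
```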

**Exercise 1:** Which is bigger, $p_n$ or $q_n$? (Hint: this is a trick question whose purpose is to make sure you are clear on the notation.)

We now consider the ordinary generating functions of the sequences $(p_n)_{n \geq 0}$ and $(q_n)_{n \geq 0}$, which by definition are the formal power series

$$P(z) = \sum_{n=0}^{\infty} p_n z^n, \qquad Q(z) = \sum_{n=0}^{\infty} q_n z^n.$$

The product of these generating functions is

$$P(z) Q(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} p_k q_{n-k} \right) z^n = \sum_{n=1}^{\infty} q_n z^n = Q(z) - 1,$$

where we used the fact that $p_0 = 0$ and $q_0 = 1$ together with Proposition 1. Now, each of the series $P(z)$ and $Q(z)$ has radius of convergence at least $1$, because each has coefficients of absolute value at most $1$, and therefore we may consider the equality

$$P(z) Q(z) = Q(z) - 1$$

as an identity in the algebra of analytic functions on the open unit disc in $\mathbb{C}$. Since the coefficients of $Q(z)$ are nonnegative numbers and $q_0 = 1$, the function $Q(z)$ is non-vanishing on the interval $[0, 1)$, and we thus have

$$P(z) = 1 - \frac{1}{Q(z)}, \qquad 0 \leq z < 1.$$

Now, since

$$\sum_{n=0}^{\infty} p_n \leq 1 < \infty,$$

we can apply Abel’s power series theorem to obtain the following formula for the probability that we are trying to calculate:

$$P(R) = \sum_{n=0}^{\infty} p_n = \lim_{z \to 1^-} P(z) = 1 - \lim_{z \to 1^-} \frac{1}{Q(z)}.$$

So, everything hinges on the limiting behavior of $Q(z)$ as $z$ approaches $1$ through positive reals: we have either

$$\lim_{z \to 1^-} Q(z) = \infty,$$

in which case $P(R) = 1$ (recurrence), or

$$\lim_{z \to 1^-} Q(z) = C < \infty,$$

in which case $P(R) = 1 - \frac{1}{C} < 1$ (transience).
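The dichotomy can already be seen numerically: partial sums of $Q(1) = \sum_n q_n$ keep growing in dimension $2$ but flatten out in dimension $3$. The sketch below (illustration only; function names are mine) counts loops by elementary multinomial formulas, choosing how many canceling step-pairs go in each coordinate direction; convergence in $d = 3$ is slow, so only the trend is meaningful:

```python
from math import comb, factorial

def loops_z2(n):
    """n-step loops on Z^2 based at the origin: C(n, n/2)^2 for n even."""
    return comb(n, n // 2) ** 2 if n % 2 == 0 else 0

def loops_z3(n):
    """n-step loops on Z^3: j +/-x pairs, k +/-y pairs, rest +/-z pairs."""
    if n % 2:
        return 0
    m = n // 2
    return sum(factorial(n) // (factorial(j) ** 2 * factorial(k) ** 2
                                * factorial(m - j - k) ** 2)
               for j in range(m + 1) for k in range(m - j + 1))

q2 = [loops_z2(n) / 4 ** n for n in range(201)]  # q_n for d=2, (2d)^n = 4^n
q3 = [loops_z3(n) / 6 ** n for n in range(201)]  # q_n for d=3, (2d)^n = 6^n

inc2 = sum(q2[101:])  # growth of the partial sums of Q(1) between n=100 and n=200
inc3 = sum(q3[101:])
print(round(inc2, 4), round(inc3, 4))  # d=2 still climbing; d=3 nearly converged
```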

In order to determine which of the two limiting behaviors above $Q(z)$ actually exhibits, we need some kind of tractable expression for this function. Combinatorially, this amounts to asking what can be said about the loop generating function

$$L(z) = \sum_{n=0}^{\infty} \ell_n z^n,$$

since we have $Q(z) = L\!\left(\frac{z}{2d}\right)$ (Gian-Carlo Rota is said to have proclaimed that “probability is just combinatorics divided by $n$,” and that is exactly what is going on here). While there is not much that we can say about the ordinary generating function for lattice loops, it turns out that the corresponding exponential generating function,

$$E(z) = \sum_{n=0}^{\infty} \ell_n \frac{z^n}{n!},$$

is analyzable. This is because of the basic structural features of exponential generating functions, as discussed back in Lecture 3.

Let’s recall how this works. In this paragraph it is important to keep the dimension of $\mathbb{Z}^d$ in mind, so we write $E_d(z)$ for the exponential generating function of the sequence $(\ell_n^{(d)})_{n \geq 0}$ counting loops on $\mathbb{Z}^d$ by length. To be concrete, let us consider what is going on for $d = 2$. A walk on $\mathbb{Z}^2$ is a sequence of steps, each of which is either horizontal or vertical, and so it is in effect a shuffle of two walks on $\mathbb{Z}^1$, one corresponding to horizontal steps and the other to vertical steps. Thus, a loop on $\mathbb{Z}^2$ is a shuffle of two loops on $\mathbb{Z}^1$. More quantitatively, we have

$$\ell_n^{(2)} = \sum_{k=0}^{n} \binom{n}{k} \ell_k^{(1)} \ell_{n-k}^{(1)},$$

since to build a length $n$ loop on $\mathbb{Z}^2$ we must choose a number $0 \leq k \leq n$ corresponding to the number of horizontal steps in the loop, then choose a cardinality $k$ subset of $\{1, \dots, n\}$ corresponding to the times at which horizontal steps occur, and finally choose one of the $\ell_k^{(1)}$ one-dimensional $k$-step loops to build on the chosen subset, together with one of the $\ell_{n-k}^{(1)}$ loops of length $n-k$ to build on the complementary subset of vertical times. The algebraic translation of this combinatorial reasoning is the generating function identity

$$E_2(z) = E_1(z)^2.$$

Exactly the same reasoning applies in any dimension $d$: we have

$$E_d(z) = E_1(z)^d.$$
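The binomial-convolution translation of $E_d(z) = E_1(z)^d$ can be checked directly in code. The sketch below (illustration only; function names are mine) uses the one-dimensional counts $\ell^{(1)}_{2n} = \binom{2n}{n}$ derived in the next paragraph:

```python
from math import comb

def l1(n):
    """Number of n-step loops on Z^1: C(n, n/2) for even n, else 0."""
    return comb(n, n // 2) if n % 2 == 0 else 0

def l_dim(d, n):
    """n-step loops on Z^d via repeated binomial convolution (E_d = E_1^d)."""
    if d == 1:
        return l1(n)
    return sum(comb(n, k) * l1(k) * l_dim(d - 1, n - k) for k in range(n + 1))

# check against the direct count on Z^2: l^{(2)}_{2n} = C(2n, n)^2
for n in range(6):
    assert l_dim(2, 2 * n) == comb(2 * n, n) ** 2
print([l_dim(3, n) for n in range(7)])  # [1, 0, 6, 0, 90, 0, 1860]
```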

The above makes it seem at least plausible that we will be able to achieve some fairly explicit understanding of the exponential loop generating function $E_d(z)$, since we have reduced to the case $d = 1$, which ought to be straightforward. Indeed, we clearly have

$$\ell_{2n+1}^{(1)} = 0, \qquad n \geq 0,$$

since any loop on $\mathbb{Z}^1$ must consist of an even number of steps, half of which are positive and half of which are negative. Moreover, the number of $2n$-step loops is just the number of ways to choose the times at which the $n$ positive steps occur, $\ell_{2n}^{(1)} = \binom{2n}{n}$, so that

$$E_1(z) = \sum_{n=0}^{\infty} \binom{2n}{n} \frac{z^{2n}}{(2n)!} = \sum_{n=0}^{\infty} \frac{z^{2n}}{n! \, n!},$$

which is a nice, explicit power series. The question now is: can we sum this series? What this really means is: can we find the function whose Maclaurin series is $E_1(z)$, and if so do we know anything useful about this function? Luckily, the answer turns out to be an emphatic yes: $E_1(z)$ is a Bessel function.

The Bessel functions form a large class of special functions which crop up everywhere in mathematics, physics, and engineering, usually as solutions to physically meaningful ordinary differential equations. The particular class of Bessel functions relevant for us are called the modified Bessel functions of the first kind, and typically denoted $I_\alpha(z)$, with $\alpha$ being a parameter. The function $I_\alpha(z)$ is one of two linearly independent solutions to the second order ODE

$$z^2 \frac{d^2 w}{dz^2} + z \frac{dw}{dz} - (z^2 + \alpha^2) w = 0,$$

which is known as the modified Bessel equation (the other solution is of course the modified Bessel function of the second kind, $K_\alpha(z)$). Nineteenth century mathematicians found two ways to describe $I_\alpha(z)$: as a power series,

$$I_\alpha(z) = \sum_{n=0}^{\infty} \frac{1}{n! \, \Gamma(n + \alpha + 1)} \left( \frac{z}{2} \right)^{2n + \alpha},$$

and as an integral,

$$I_\alpha(z) = \frac{1}{\pi} \int_0^{\pi} e^{z \cos \theta} \cos(\alpha \theta) \, d\theta,$$

the integral representation being valid for integer $\alpha$, which is the only case we will need.

From the power series representation, we see that we have

$$E_1(z) = I_0(2z),$$

so that we now know

$$E_d(z) = E_1(z)^d = I_0(2z)^d.$$
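The identification $E_1(z) = I_0(2z)$ can be sanity-checked numerically by comparing the series with the integral representation $\frac{1}{\pi}\int_0^\pi e^{2z\cos\theta}\,d\theta$. A sketch using a plain trapezoid rule (illustration only; names are mine):

```python
from math import cos, exp, factorial, pi

def E1_series(z, terms=60):
    """Partial sum of E_1(z) = sum_{n>=0} z^(2n) / (n!)^2."""
    return sum(z ** (2 * n) / factorial(n) ** 2 for n in range(terms))

def I0(x, panels=10000):
    """I_0(x) = (1/pi) * integral_0^pi e^{x cos(theta)} d(theta), trapezoid rule."""
    h = pi / panels
    total = 0.5 * (exp(x) + exp(-x))  # endpoint terms at theta = 0 and pi
    total += sum(exp(x * cos(j * h)) for j in range(1, panels))
    return total * h / pi

for z in (0.3, 1.0, 2.5):
    assert abs(E1_series(z) - I0(2 * z)) < 1e-9
print(round(I0(2.0), 6))  # 2.279585, matching E_1(1) = sum 1/(n!)^2
```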

The above connection with Bessel functions leaves us in good shape to analyze the exponential loop generating function. However, this isn’t what we actually want — remember that we want to analyze the ordinary loop generating function in order to solve our problem. So we need a way to convert exponential generating functions into ordinary generating functions. This can be done using an integral transform called the Borel transform, which is in fact just an application of Euler’s integral representation of the factorial,

$$n! = \int_0^{\infty} t^n e^{-t} \, dt,$$

which we discussed way back in Lecture 1. Indeed, if

$$E(z) = \sum_{n=0}^{\infty} a_n \frac{z^n}{n!}$$

is the exponential generating function of the sequence $(a_n)_{n \geq 0}$, then its Borel transform

$$\int_0^{\infty} e^{-t} E(tz) \, dt$$

is the ordinary generating function of this same sequence: integrating term by term, Euler’s formula gives $\int_0^{\infty} e^{-t} \frac{(tz)^n}{n!} \, dt = z^n$.
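A minimal numerical illustration of the Borel transform: taking $a_n = 1$ for all $n$, the EGF is $E(z) = e^z$, and the transform should produce the OGF $\sum_n z^n = 1/(1-z)$. The sketch below truncates the integral and uses a trapezoid rule (illustration only; names are mine):

```python
from math import exp

def borel_transform(E, z, T=40.0, panels=100000):
    """Approximate integral_0^infty e^{-t} E(t z) dt, truncated at T."""
    h = T / panels
    f = lambda t: exp(-t) * E(t * z)
    total = 0.5 * (f(0.0) + f(T))
    total += sum(f(j * h) for j in range(1, panels))
    return total * h

z = 0.5
ogf = borel_transform(exp, z)  # e^z is the EGF of the sequence 1, 1, 1, ...
assert abs(ogf - 1.0 / (1.0 - z)) < 1e-6
print(round(ogf, 6))  # 2.0
```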

Now it is clear what must be done: we have to Borel transform the exponential loop generating function, which is a Bessel function, in order to get to the ordinary loop generating function. That is, we have

$$L(z) = \int_0^{\infty} e^{-t} E_d(tz) \, dt = \int_0^{\infty} e^{-t} I_0(2tz)^{d} \, dt,$$

so that we finally have the representation

$$Q(z) = L\!\left(\frac{z}{2d}\right) = \int_0^{\infty} e^{-t} I_0\!\left(\frac{tz}{d}\right)^{d} dt$$

for the function $Q(z)$ whose limit behavior we want to analyze.

Recall that what we want to do is analyze the behavior of

$$\lim_{z \to 1^-} Q(z) = \lim_{z \to 1^-} \int_0^{\infty} e^{-t} I_0\!\left(\frac{tz}{d}\right)^{d} dt$$

and determine whether the limit is finite or infinite, that is, whether the integral above is convergent or divergent at $z = 1$. Since all we care about is convergence or divergence, it suffices to consider the tail of the integral,

$$\int_T^{\infty} e^{-t} I_0\!\left(\frac{t}{d}\right)^{d} dt,$$

with $T$ large. Now, because the integrand here is built from a Bessel function, it is itself an integral:

$$e^{-t} I_0\!\left(\frac{t}{d}\right)^{d} = e^{-t} \left( \frac{1}{\pi} \int_0^{\pi} e^{\frac{t}{d} \cos \theta} \, d\theta \right)^{d},$$

where $t \geq T$ and $T$ is large. We are thus perfectly positioned for an application of the Laplace approximation (end point case),

$$\frac{1}{\pi} \int_0^{\pi} e^{s \cos \theta} \, d\theta \sim \frac{e^{s}}{\sqrt{2 \pi s}}, \qquad s \to \infty.$$

Wading back through the formulas, what we have found is that

$$e^{-t} I_0\!\left(\frac{t}{d}\right)^{d} \sim e^{-t} \left( \frac{e^{t/d}}{\sqrt{2 \pi t / d}} \right)^{d} = \left( \frac{d}{2 \pi t} \right)^{d/2}, \qquad t \to \infty.$$
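The endpoint Laplace approximation is easy to test numerically: for large $s$ the ratio of the integral to $e^s/\sqrt{2\pi s}$ should approach $1$. A trapezoid-rule sketch (illustration only; names are mine):

```python
from math import cos, exp, pi, sqrt

def bessel_integral(s, panels=50000):
    """(1/pi) * integral_0^pi e^{s cos(theta)} d(theta), trapezoid rule."""
    h = pi / panels
    total = 0.5 * (exp(s) + exp(-s))  # endpoint terms at theta = 0 and pi
    total += sum(exp(s * cos(j * h)) for j in range(1, panels))
    return total * h / pi

for s in (10.0, 50.0, 200.0):
    ratio = bessel_integral(s) / (exp(s) / sqrt(2 * pi * s))
    print(s, round(ratio, 4))  # ratio tends to 1, roughly like 1 + 1/(8s)
```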

Now, by the monotone convergence theorem we have

$$\lim_{z \to 1^-} Q(z) = \int_0^{\infty} e^{-t} I_0\!\left(\frac{t}{d}\right)^{d} dt,$$

an integral which, by the asymptotics just derived, diverges for $d \leq 2$ and converges for $d \geq 3$. Polya’s theorem is proved, and if you’re willing to retrace our steps a bit you will find that you have a reasonable idea of the mechanism responsible for recurrence of simple random walk in dimensions $d \leq 2$ and transience in dimensions $d \geq 3$. Indeed, if you want to get your hands dirty, you can even try to estimate $P(R)$ by really staring at the above argument in order to get a feeling for how likely it is that simple random walk will escape to infinity.

I cannot resist mentioning that Bessel functions are intimately related to the enumeration of permutations with bounded decreasing subsequence length, which was our topic last week. The relation is via the following beautiful identity due to Ira Gessel.

**Theorem (Gessel):** For any $k \geq 1$, we have

$$\sum_{n=0}^{\infty} u_k(n) \frac{z^{2n}}{(n!)^2} = \det \left[ I_{i-j}(2z) \right]_{i,j=1}^{k},$$

where $u_k(n)$ is the number of permutations in $S_n$ with no decreasing subsequence of length $k + 1$.
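Gessel's identity can be verified directly for small cases. For $k = 2$ the determinant is $I_0(2z)^2 - I_1(2z)^2$ (using $I_{-1} = I_1$), and the counts $u_2(n)$ should be the Catalan numbers. The sketch below does exact coefficient arithmetic and brute-force permutation counting (illustration only; names are mine):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

N = 6          # check coefficients of z^0 ... z^(2N)
deg = 2 * N

def bessel_coeffs(alpha, deg):
    """Coefficients of I_alpha(2z) = sum_n z^(2n+alpha) / (n! (n+alpha)!)."""
    c = [Fraction(0)] * (deg + 1)
    n = 0
    while 2 * n + alpha <= deg:
        c[2 * n + alpha] = Fraction(1, factorial(n) * factorial(n + alpha))
        n += 1
    return c

def poly_mul(a, b, deg):
    c = [Fraction(0)] * (deg + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= deg:
                c[i + j] += ai * bj
    return c

i0 = bessel_coeffs(0, deg)
i1 = bessel_coeffs(1, deg)
# 2x2 determinant det[I_{i-j}(2z)] = I_0^2 - I_1 I_{-1} = I_0^2 - I_1^2
det = [x - y for x, y in zip(poly_mul(i0, i0, deg), poly_mul(i1, i1, deg))]

def no_decreasing_triple(perm):
    return not any(perm[i] > perm[j] > perm[k]
                   for i in range(len(perm))
                   for j in range(i + 1, len(perm))
                   for k in range(j + 1, len(perm)))

for n in range(N + 1):
    u = sum(1 for p in permutations(range(n)) if no_decreasing_triple(p))
    assert det[2 * n] == Fraction(u, factorial(n) ** 2)
print([sum(1 for p in permutations(range(n)) if no_decreasing_triple(p))
       for n in range(7)])  # [1, 1, 2, 5, 14, 42, 132] (Catalan numbers)
```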

If you would like to see a proof of this theorem together with various pointers to the literature, I can refer you to this paper. In fact, since there will be no lecture post on Thursday, you can substitute this reading assignment for the missing lecture.

“If we divide the formula in Proposition 1 by …” I may have the wrong end of the stick but should the next display have $q_n$ as the LHS, not $p_n$?

You had the right end of the stick, thanks for catching that.