In this Lecture and the next, we will discuss random walks on graphs. This is a randomized version of the discussion we had last week: we ask not for the number $W^n(i,j)$ of $n$-step walks between two specified vertices $i,j$ of a given graph $G$ — possibly with loops and/or multiple edges — but rather for the probability $P^n(i,j)$ that a particle initially situated at $i$ which, at each instant of discrete time, jumps randomly to a neighboring vertex, with equal probability of jumping along any edge, will arrive at $j$ at time $n$. We set $P^0(i,j) = \delta_{ij}$. Thus, for each fixed vertex $i$ and each fixed $n \geq 0$, we have a probability mass function $P^n(i,\cdot)$ on the vertex set of $G$, meaning that

$$P^n(i,j) \geq 0 \quad \text{and} \quad \sum_j P^n(i,j) = 1,$$

the sum being over all vertices $j$ of $G$.
This probability mass function in turn defines a probability measure on subsets $S$ of the vertex set, according to

$$P^n(i,S) = \sum_{j \in S} P^n(i,j).$$

Thus $P^n(i,S)$ is the probability that the particle is located at some vertex in the set $S$ at time $n$.
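To make the definition concrete, here is a minimal simulation sketch; the graph, vertex labels, and function name below are hypothetical illustrations rather than notation from the lecture. It estimates $P^n(i,j)$ by repeated sampling, and the estimates over all $j$ form a probability mass function.

```python
import random

# A hypothetical example graph on vertices 0..3, given by adjacency lists;
# an entry appears once per edge, so a multiple edge would simply be a
# repeated entry, making every incident edge equally likely.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}

def estimate_P(graph, i, j, n, trials=20_000, seed=0):
    """Monte Carlo estimate of P^n(i, j): start at i, take n uniformly
    random steps along incident edges, count arrivals at j."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        v = i
        for _ in range(n):
            v = rng.choice(graph[v])  # jump along a uniformly random edge
        hits += v == j
    return hits / trials

# The estimates over all vertices j form a probability mass function.
estimates = [estimate_P(graph, 0, j, 3) for j in range(4)]
print(estimates)
```

Because each trial ends at exactly one vertex, the four estimates sum to $1$, mirroring the identity $\sum_j P^n(i,j) = 1$.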
The number $P^n(i,j)$ is a probabilistic version of the combinatorial number $W^n(i,j)$, and the relationship between these two quantities is not totally straightforward.
Problem 4.1: Prove that $P^n(i,j) > 0$ if and only if $W^n(i,j) > 0$, and show by example that sometimes $P^n(i,j) \neq P^n(j,i)$.
As the above example shows, random walk on $G$ departing from a specified vertex $i$ doesn’t have as much symmetry as its non-random version; this is because it explores all walks beginning at $i$, whereas $W^n(i,j)$ counts walks with specified boundary conditions. Indeed, the precise relationship between the numbers $P^n(i,j)$ and $W^n(i,j)$ is

$$P^n(i,j) = \sum_{i = i_0 \to i_1 \to \dots \to i_n = j} \frac{1}{d(i_0)\, d(i_1) \cdots d(i_{n-1})},$$

the sum being over the $n$-step walks from $i$ to $j$ counted by $W^n(i,j)$; in particular, $P^n(i,j) > 0$ if and only if $W^n(i,j) > 0$, which can only happen when $n \geq \ell(i,j)$, where $\ell(i,j)$ is the length of a geodesic path from $i$ to $j$ in $G$. One could also study random walk on $G$ started at $i$ and “conditioned” to arrive at $j$ after $n$ steps, which means studying a uniformly random sample from the set of walks counted by $W^n(i,j)$, but this is quite different from simple random walk on $G$. Let us note that, in the important special case where $G$ is a $d$-regular graph (see below for the definition of vertex degree), one has the straightforward relation

$$P^n(i,j) = \frac{W^n(i,j)}{d^n},$$
so that $P^n(i,j) = P^n(j,i)$ for regular graphs.
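The regular-graph relation can be checked numerically. The sketch below, a hypothetical example using the $2$-regular $5$-cycle, compares powers of the adjacency matrix, which count walks, against powers of the matrix of one-step probabilities (anticipating the transition operator introduced below).

```python
import numpy as np

# On a d-regular graph, P^n(i,j) = W^n(i,j) / d^n.  Hypothetical check
# on the 5-cycle (2-regular): A is the adjacency matrix, so A^n counts
# n-step walks; T = A / d holds the one-step probabilities.
N, d = 5, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1

T = A / d
n = 7
walks = np.linalg.matrix_power(A, n)   # W^n(i, j)
probs = np.linalg.matrix_power(T, n)   # P^n(i, j)
assert np.allclose(probs, walks / d**n)
assert np.allclose(probs, probs.T)     # symmetry P^n(i,j) = P^n(j,i)
```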
Problem 4.2: Compute $P^n(i,j)$ for all values of $n$, $i$, and $j$ in the case that $G$ is the complete graph $K_N$ on $N$ vertices.
We will approach the study of random walks on graphs from a linear algebraic perspective, as we did with the enumeration of non-random walks. Let $G$ be a graph on $\{1, \dots, N\}$, where $N \in \mathbb{N}$. We will assume that $G$ is connected, meaning that there exists at least one walk between every pair of vertices in $G$. Employing the same construction as in the previous lectures, let $\mathcal{H}$ be a Hilbert space with orthonormal basis $e_1, \dots, e_N$ labeled by the vertices of $G$. We define the transition operator $T \in \operatorname{End} \mathcal{H}$ by its action on the vertex basis, which is as follows:

$$T e_j = \sum_{i=1}^N \frac{m_{ij}}{d(j)}\, e_i, \quad 1 \leq j \leq N,$$
where $m_{ij}$ is the number of edges joining $i$ to $j$ in $G$, and

$$d(j) = \sum_{i=1}^N m_{ij}$$

is the degree of $j$, i.e. the total number of distinct edges incident to the vertex $j$ (a loop contributes $1$ to this count). The meaning of the ratio $\frac{m_{ij}}{d(j)}$ is that it is a conditional probability: it is the probability that the particle is located at vertex $i$ at a specified time instant, given that it was located at vertex $j$ at the time instant immediately prior to this.
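As a sanity check on the definition, one can build the matrix of $T$ from the multiplicities $m_{ij}$ and verify that each column sums to $1$; the multigraph below is a hypothetical example with a double edge, a single edge, and one loop (counted once toward the degree, matching the convention above).

```python
import numpy as np

# Hypothetical multigraph on vertices 0,1,2: a double edge {0,1},
# a single edge {1,2}, and a loop at vertex 2.
m = np.array([[0, 2, 0],
              [2, 0, 1],
              [0, 1, 1]], dtype=float)
deg = m.sum(axis=0)                      # d(j) = sum_i m_ij
T = m / deg                              # T_ij = m_ij / d(j)
assert np.allclose(T.sum(axis=0), 1.0)   # each column is a PMF
```

Column-stochasticity is exactly the statement that, from each vertex $j$, the particle jumps somewhere with total probability $1$.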
The connection between random walks on $G$ and the transition operator $T$ is conceptually quite similar to the relationship between ordinary (i.e. non-random) walks on $G$ and the adjacency operator $A$, as given in Theorem 1 of Lecture 2, but there are some important differences. First, as is clear from the above discussion, the transition operator $T$ is not necessarily selfadjoint, so one cannot invoke the spectral theorem to immediately conclude that it is diagonalizable (though this is the case, as proved below). Another difference is that, in the random walk setting, we would like to allow for the possibility that the point of departure of the walk (i.e. the location of the particle at time zero) is itself random. To encode this algebraically, we consider a vector $v \in \mathcal{H}$ such that

$$\langle v, e_i \rangle \geq 0$$

for all $1 \leq i \leq N$, and

$$\sum_{i=1}^N \langle v, e_i \rangle = 1.$$
The vector $v$ represents the random location of the particle at time zero; note that taking $v$ to be one of the vectors $e_i$ in the vertex basis corresponds to the situation where the initial position of the particle is deterministic. For each $1 \leq j \leq N$ and $n \geq 0$, let $P^n(v,j)$ denote the probability that a particle randomly evolving on $G$ (in the manner described above) is located at vertex $j$ at time $n$, i.e. after $n$ jumps, given that its position at time zero was the vertex $i$ with probability $\langle v, e_i \rangle$. We set $P^0(v,j) = \langle v, e_j \rangle$. In the case that $v = e_i$ is a vector in the vertex basis, we default to our old notation and write $P^n(e_i, j) = P^n(i,j)$.
Theorem 4.1: For each $n \geq 0$, we have

$$P^n(v,j) = \langle T^n v, e_j \rangle, \quad 1 \leq j \leq N.$$
Proof: The proof is by induction on $n$. The case $n = 0$ is immediate. For $n \geq 1$, by the induction hypothesis we have

$$\langle T^n v, e_j \rangle = \langle T\, T^{n-1} v, e_j \rangle = \sum_{i=1}^N \langle T^{n-1} v, e_i \rangle\, \langle T e_i, e_j \rangle = \sum_{i=1}^N P^{n-1}(v,i)\, \frac{m_{ji}}{d(i)}.$$
The $i$th term in this sum is the probability that a particle performing simple random walk with initial distribution $v$ is located at $i$ at time $n-1$, times the conditional probability that it is located at $j$ at time $n$ given that it is located at $i$ at time $n-1$; summing over $i$ gives the unconditional probability that the particle is located at $j$ at time $n$.
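The induction step can be mirrored numerically: the sketch below, on a hypothetical $4$-vertex graph, propagates the mass function one jump at a time using the sum $\sum_i P^{n-1}(v,i)\, m_{ji}/d(i)$ and compares the result with $\langle T^n v, e_j \rangle$.

```python
import numpy as np

# Hypothetical 4-vertex graph given by its edge multiplicities m_ij.
m = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
deg = m.sum(axis=0)
T = m / deg                            # T_ij = m_ij / d(j)
v = np.array([0.5, 0.5, 0.0, 0.0])     # random start: vertex 0 or 1

n = 6
p = v.copy()
for _ in range(n):                     # one jump per iteration
    p = np.array([sum(m[j, i] / deg[i] * p[i] for i in range(4))
                  for j in range(4)])  # the sum from the proof
assert np.allclose(p, np.linalg.matrix_power(T, n) @ v)
assert np.isclose(p.sum(), 1.0)        # P^n(v, .) remains a PMF
```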
Theorem 4.1 means that we can use the transition operator $T$ to analyze simple random walk on $G$ in much the same way as we used the adjacency operator $A$ — provided that we can diagonalize it. We conclude this lecture by showing that this is at least theoretically possible.
Theorem 4.2: There exists a basis of $\mathcal{H}$ consisting of eigenvectors of the transition operator $T$, and moreover every eigenvalue of $T$ is real.
Proof: Let $D \in \operatorname{End} \mathcal{H}$ be the operator defined by

$$D e_i = \sqrt{d(i)}\, e_i, \quad 1 \leq i \leq N.$$
Since by hypothesis $G$ is connected, we have $d(i) > 0$ for each $1 \leq i \leq N$, i.e. $D$ is a vector space automorphism of $\mathcal{H}$ (it is not necessarily a Hilbert space automorphism, because it is not necessarily an isometry; you might want to think about when it is). The inverse of $D$ is simply

$$D^{-1} e_i = \frac{1}{\sqrt{d(i)}}\, e_i, \quad 1 \leq i \leq N.$$
Consider now the operator $S = D^{-1} T D$. This is a “symmetrized” version of the transition operator $T$, in that it is selfadjoint. Indeed, we have

$$\langle S e_j, e_i \rangle = \langle D^{-1} T D e_j, e_i \rangle = \frac{\sqrt{d(j)}}{\sqrt{d(i)}}\, \langle T e_j, e_i \rangle = \frac{\sqrt{d(j)}}{\sqrt{d(i)}} \cdot \frac{m_{ij}}{d(j)} = \frac{m_{ij}}{\sqrt{d(i)\, d(j)}},$$
which is a symmetric function of $i$ and $j$. By the spectral theorem, $\mathcal{H}$ admits an orthonormal basis $f_1, \dots, f_N$ such that

$$S f_k = \lambda_k f_k, \quad 1 \leq k \leq N,$$
with each $\lambda_k \in \mathbb{R}$. But since $T = D S D^{-1}$, this means that

$$T D f_k = D S f_k = \lambda_k D f_k, \quad 1 \leq k \leq N.$$
Since $D$ is invertible, the vectors

$$D f_1, \dots, D f_N$$
are a basis of $\mathcal{H}$, and as we have just seen, each $D f_k$ is an eigenvector of $T$ with the corresponding eigenvalue $\lambda_k$ real.
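The proof of Theorem 4.2 is effectively an algorithm, and can be sketched numerically (the multigraph below is a hypothetical example): form $S = D^{-1} T D$, check that it is selfadjoint, diagonalize it, and verify that the vectors $D f_k$ are eigenvectors of $T$ with real eigenvalues.

```python
import numpy as np

# Hypothetical multigraph on 3 vertices, given by its multiplicities m_ij.
m = np.array([[0, 2, 1],
              [2, 0, 1],
              [1, 1, 0]], dtype=float)
deg = m.sum(axis=0)
T = m / deg                                # transition matrix, T_ij = m_ij/d(j)
D = np.diag(np.sqrt(deg))                  # D e_i = sqrt(d(i)) e_i
D_inv = np.diag(1 / np.sqrt(deg))
S = D_inv @ T @ D                          # symmetrized operator
assert np.allclose(S, S.T)                 # S is selfadjoint

lam, F = np.linalg.eigh(S)                 # real spectrum, orthonormal f_k
for k in range(3):
    w = D @ F[:, k]                        # candidate eigenvector of T
    assert np.allclose(T @ w, lam[k] * w)  # T (D f_k) = lambda_k (D f_k)
```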