Math 202C: Lecture 4

In this Lecture and the next, we will discuss random walks on graphs. This is a randomized version of the discussion we had last week: we ask not for the number W^r(i,j) of r-step walks between two specified vertices i,j of a given graph \mathrm{G} — possibly with loops and/or multiple edges — on \{1,\dots,n\}, but rather for the probability P^r(i,j) that a particle initially situated at i which, at each instant of discrete time, jumps randomly to a neighboring vertex, with equal probability of jumping along any edge, will arrive at j at time r. We set P^0(i,j) := \delta_{ij}. Thus for each fixed vertex i and each fixed r \in \mathbb{Z}_{\geq 0}, we have a probability mass function P^r(i,\cdot) on the vertex set \{1,\dots,n\}, meaning that

\sum\limits_{j=1}^n P^r(i,j)= 1.

This probability mass function in turn defines a probability measure on subsets S \subseteq \{1,\dots,n\} according to

P^r(i,S) = \sum\limits_{j \in S} P^r(i,j).

Thus P^r(i,S) is the probability that the particle is located at some vertex in the set S at time r.
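
To make these definitions concrete, here is a minimal numerical sketch (in Python, using an example graph of my own choosing, the path on four vertices) that estimates P^r(i,j) and P^r(i,S) by directly simulating the walk; vertices are indexed from 0 rather than 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicity matrix of a small example graph: the path 1 - 2 - 3 - 4.
# Vertices are indexed 0,...,3 here rather than 1,...,4.
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
n = M.shape[0]

def empirical_distribution(i, r, trials=50_000):
    """Estimate the probability mass function P^r(i, .) by simulation:
    at each step, jump along an edge at the current vertex chosen uniformly at random."""
    counts = np.zeros(n)
    for _ in range(trials):
        v = i
        for _ in range(r):
            v = rng.choice(n, p=M[v] / M[v].sum())
        counts[v] += 1
    return counts / trials

est = empirical_distribution(i=0, r=3)
print(est)                  # estimates of P^3(1, j) for j = 1, 2, 3, 4
print(est[[2, 3]].sum())    # estimate of P^3(1, S) for S = {3, 4}
```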

The number P^r(i,j) is a probabilistic version of the combinatorial number W^r(i,j), and the relationship between these two quantities is not totally straightforward.

Problem 4.1: Prove that W^r(i,j) = W^r(j,i), and show by example that sometimes P^r(i,j) \neq P^r(j,i).

As the example requested in Problem 4.1 shows, random walk on \mathrm{G} departing from a specified vertex i doesn’t have as much symmetry as its non-random version; this is because the walk explores all walks beginning at i, whereas W^r(i,j) counts walks with both endpoints specified. Indeed, the precise relationship between the numbers P^r(i,j) and W^r(i,j) is

P^r(i,j) = \begin{cases} 0, \text{ if } r < \mathrm{d}(i,j) \\ \frac{W^r(i,j)}{\sum_{k=1}^n W^r(i,k)}, \text{ if } r \geq \mathrm{d}(i,j) \end{cases},

where \mathrm{d}(i,j) is the length of a geodesic path from i to j in \mathrm{G}. One could also study random walk on \mathrm{G} started at i and “conditioned” to arrive at j after r steps, which means studying a uniformly random sample from the set of walks counted by W^r(i,j), but this is quite different from simple random walk on \mathrm{G}. Let us note that, in the important special case where \mathrm{G} is a k-regular graph (see below for the definition of vertex degree), one has the straightforward relation

P^r(i,j) = \frac{1}{k^r}W^r(i,j),

so that P^r(i,j) = P^r(j,i) for regular graphs.
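
The regular case is easy to check numerically. Here is a minimal sketch, with the 2-regular cycle on six vertices as an example of my own choosing: W^r is computed from powers of the adjacency matrix (as in Theorem 1 of Lecture 2), P^r from powers of the transition matrix, and the two agree up to the factor k^r.

```python
import numpy as np

# Cycle graph C_6: 2-regular, so k = 2.
n, k, r = 6, 2, 5
A = np.zeros((n, n), dtype=int)
for v in range(n):
    A[v, (v + 1) % n] = A[v, (v - 1) % n] = 1

W_r = np.linalg.matrix_power(A, r)      # W^r(i, j) = (A^r)_{ij}
T = A / k                               # transition matrix of the walk
P_r = np.linalg.matrix_power(T, r)      # P^r(i, j)

assert np.allclose(P_r, W_r / k**r)     # the relation P^r = W^r / k^r
assert np.allclose(P_r, P_r.T)          # symmetry P^r(i,j) = P^r(j,i) for regular graphs
```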

Problem 4.2: Compute P^r(i,j) for all values of i,j,r in the case that \mathrm{G}=\mathrm{K}(n) is the complete graph on \{1,\dots,n\}.

We will approach the study of random walks on graphs from a linear algebraic perspective, as we did with the enumeration of non-random walks. Let \mathrm{G} be a graph on \{1,\dots,n\}, where n \geq 2. We will assume that \mathrm{G} is connected, meaning that there exists at least one walk between every pair of vertices in \mathrm{G}. Employing the same construction as in the previous lectures, let \mathcal{H}=\mathcal{H}(\mathrm{G}) be a Hilbert space with orthonormal basis \{\mathbf{e}_1,\dots,\mathbf{e}_n\} labeled by the vertices of \mathrm{G}. We define the transition operator T \in \mathrm{End}\mathcal{H} by its action on the vertex basis, which is as follows:

T\mathbf{e}_i = \sum\limits_{j=1}^n \frac{m_{ij}}{d_i}\mathbf{e}_j,

where m_{ij} is the number of edges joining i to j in \mathrm{G}, and d_i is the degree of i, i.e. the total number of distinct edges incident to the vertex i (a loop contributes 1 to this count). The meaning of the ratio \frac{m_{ij}}{d_i} is that it is a conditional probability: it is the probability that the particle is located at vertex j at a specified time instant, given that it was located at vertex i immediately prior to this.
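
In matrix terms, relative to the vertex basis the operator T is represented by the matrix whose (j,i) entry is m_{ij}/d_i, so that each column is a probability mass function. Here is a minimal sketch, using a small example graph of my own choosing with a double edge and a loop:

```python
import numpy as np

# Multiplicity matrix of an example graph on {1, 2, 3}:
# a double edge between 1 and 2, a single edge between 2 and 3,
# and a loop at 3 (which contributes 1 to the degree, per the convention above).
M = np.array([[0, 2, 0],
              [2, 0, 1],
              [0, 1, 1]])
d = M.sum(axis=0)     # degrees: d = (2, 3, 2)

# Matrix of T in the vertex basis: column i holds the conditional
# probabilities m_{ij}/d_i of jumping from vertex i to each vertex j.
T = M / d             # broadcasting divides column i by d_i

assert np.allclose(T.sum(axis=0), 1.0)   # each column is a probability vector
```

With this column-stochastic convention, applying powers of the matrix to a probability vector implements T^r\mathbf{p} exactly as in Theorem 4.1 below.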

The connection between random walks on \mathrm{G} and the transition operator T is conceptually quite similar to the relationship between ordinary (i.e. non-random) walks on \mathrm{G} and the adjacency operator A, as given in Theorem 1 of Lecture 2, but there are some important differences. First, as is clear from the above discussion, the transition operator T is not necessarily selfadjoint, so one cannot invoke the spectral theorem to immediately conclude that it is diagonalizable (though this is the case, as proved below). Another difference is that, in the random walk setting, we would like to allow for the possibility that the point of departure of the walk (i.e. the location of the particle at time zero) is itself random. To encode this algebraically, we consider a vector \mathbf{p} \in \mathcal{H} such that

p_i:=\langle \mathbf{e}_i,\mathbf{p} \rangle \geq 0

for all i \in \{1,\dots,n\} and

\sum\limits_{i=1}^n p_i =1.

The vector \mathbf{p} represents the random location of the particle at time zero; note that taking \mathbf{p} to be one of the vectors in the vertex basis \{\mathbf{e}_1,\dots,\mathbf{e}_n\} corresponds to the situation where the initial position of the particle is deterministic. For each r \in \mathbb{Z}_{\geq 0} and j \in \{1,\dots,n\}, let P^r(\mathbf{p},j) denote the probability that a particle randomly evolving on \mathrm{G} (in the manner described above) is located at vertex j at time r, given that its position at time zero was the vertex i with probability p_i = \langle \mathbf{e}_i,\mathbf{p} \rangle. We set P^0(\mathbf{p},j) := p_j. In the case that \mathbf{p}=\mathbf{e}_i is a vector in the vertex basis, we default to our old notation and write P^r(i,j) = P^r(\mathbf{p},j).

Theorem 4.1: For each r \in \mathbb{Z}_{\geq 0}, we have

P^r(\mathbf{p},j) = \langle T^r\mathbf{p}, \mathbf{e}_j \rangle.

Proof: The proof is by induction on r. The case r=0 is immediate. For r \in \mathbb{N}, by the induction hypothesis we have

T^{r-1}\mathbf{p} = \sum\limits_{i=1}^n P^{r-1}(\mathbf{p},i)\mathbf{e}_i.

Thus

T^r\mathbf{p} = T\left(T^{r-1}\mathbf{p}\right) = \sum\limits_{i=1}^n P^{r-1}(\mathbf{p},i)T\mathbf{e}_i = \sum\limits_{i=1}^n P^{r-1}(\mathbf{p},i) \sum\limits_{j=1}^n \frac{m_{ij}}{d_i} \mathbf{e}_j = \sum\limits_{j=1}^n \left( \sum\limits_{i=1}^n P^{r-1}(\mathbf{p},i) \frac{m_{ij}}{d_i} \right)\mathbf{e}_j,

so that

\langle T^r\mathbf{p},\mathbf{e}_j \rangle = \sum\limits_{i=1}^n P^{r-1}(\mathbf{p},i) \frac{m_{ij}}{d_i}.

The ith term in this sum is the probability that a particle performing simple random walk on \mathrm{G} with initial distribution \mathbf{p} is located at i at time r-1, multiplied by the conditional probability that it then jumps from i to j; summing over i thus gives, by the law of total probability, the probability that the particle is located at j at time r, which is P^r(\mathbf{p},j).

— Q.E.D.
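
Numerically, Theorem 4.1 says that computing P^r(\mathbf{p},\cdot) amounts to taking a matrix power. Here is a minimal sketch, reusing the path graph from before together with an initial distribution of my own choosing:

```python
import numpy as np

# Path graph 1 - 2 - 3 - 4 again; columns of T are indexed by the starting vertex.
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
d = M.sum(axis=0)
T = M / d

p = np.array([0.5, 0.25, 0.25, 0.0])     # an example initial distribution
r = 4

dist = np.linalg.matrix_power(T, r) @ p  # entry j is P^r(p, j) = <T^r p, e_j>
assert np.isclose(dist.sum(), 1.0)       # P^r(p, .) is a probability mass function
print(dist)
```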

Theorem 4.1 means that we can use the transition operator to analyze simple random walk on \mathrm{G} in much the same way as we used the adjacency operator — provided that we can diagonalize it. We conclude this lecture by showing that this is at least theoretically possible.

Theorem 4.2: There exists a basis of \mathcal{H} consisting of eigenvectors of the transition operator T, and moreover every eigenvalue of T is real.

Proof: Let D \in \mathrm{End}\mathcal{H} be the operator defined by

D\mathbf{e}_j = \sqrt{d_j}\,\mathbf{e}_j.

Since by hypothesis \mathrm{G} is connected and n \geq 2, every degree d_j is strictly positive, so D \in \mathrm{GL}(\mathcal{H}), i.e. D is a vector space automorphism (it is not necessarily a Hilbert space automorphism, since it is not necessarily an isometry; you might want to think about when it is). The inverse of D is simply

D^{-1}\mathbf{e}_j = \frac{1}{\sqrt{d_j}} \mathbf{e}_j.

Consider now the operator \tilde{T} = D^{-1}TD. This is a “symmetrized” version of the transition operator T, in that it is selfadjoint. Indeed, we have

\langle \mathbf{e}_i, \tilde{T}\mathbf{e}_j \rangle = \frac{m_{ij}}{\sqrt{d_id_j}},

which is a symmetric function of i and j. By the spectral theorem, \mathcal{H} admits an orthonormal basis \{\tilde{\mathbf{f}}_1,\dots,\tilde{\mathbf{f}}_n\} such that

\tilde{T}\tilde{\mathbf{f}}_i = \lambda_i \tilde{\mathbf{f}}_i, \quad 1 \leq i \leq n,

with each \lambda_i \in \mathbb{R}. But since \tilde{T}=D^{-1}TD, this means that

T(D\tilde{\mathbf{f}}_i) = \lambda_i (D\tilde{\mathbf{f}}_i), \quad 1 \leq i \leq n.

Since D \in \mathrm{GL}(\mathcal{H}), the vectors

\mathbf{f}_i = D\tilde{\mathbf{f}}_i, \quad 1 \leq i \leq n

are a basis of \mathcal{H}, and as we have just seen each is an eigenvector of T, with the corresponding eigenvalue real.

— Q.E.D.
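
The proof of Theorem 4.2 is effectively an algorithm: conjugate T by D to obtain a selfadjoint operator, diagonalize it, and push the resulting orthonormal eigenvectors back through D. Here is a minimal sketch, again on the path graph example:

```python
import numpy as np

M = np.array([[0, 1, 0, 0],    # path graph 1 - 2 - 3 - 4 again
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
d = M.sum(axis=0)
T = M / d                                   # transition matrix (column-stochastic)

D = np.diag(np.sqrt(d))                     # D e_j = sqrt(d_j) e_j
D_inv = np.diag(1 / np.sqrt(d))
T_sym = D_inv @ T @ D                       # the symmetrized operator D^{-1} T D
assert np.allclose(T_sym, T_sym.T)          # entries m_ij / sqrt(d_i d_j)

eigvals, F_sym = np.linalg.eigh(T_sym)      # real spectrum, orthonormal eigenvectors
F = D @ F_sym                               # columns are the eigenvectors f_i = D f~_i of T
assert np.allclose(T @ F, F @ np.diag(eigvals))
```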
