# Math 202C: Lecture 2

It is no secret that one can encode a finite undirected graph $\mathrm{G}$ as a finite-dimensional Hilbert space together with a self-adjoint operator on that space. There are two ways to implement this: functional and formal. Suppose the vertex set of $\mathrm{G}$ is $\{1,\dots,n\}.$ The functional implementation is to take the Hilbert space to be

$\mathbb{C}^{\mathrm{G}}=\{f \colon \{1,\dots,n\} \to \mathbb{C}\},$

the space of complex-valued functions on the vertex set of the graph, equipped with the scalar product

$\langle f,g \rangle = \sum\limits_{i=1}^n \overline{f(i)} g(i),$

and to define the adjacency operator $A$ by

$A\mathbf{1}_j = \sum\limits_{\{i,j\} \in E(\mathrm{G})} \mathbf{1}_i,$

where $\mathbf{1}_i \in \mathbb{C}^\mathrm{G}$ is the indicator function

$\mathbf{1}_i(x) = \begin{cases} 1, \text{ if } x= i\\ 0, \text{ if } x \neq i \end{cases}.$

The formal implementation is to take the Hilbert space to be

$\mathbb{C}\mathrm{G} = \mathrm{Span} \{\mathbf{e}_1,\dots,\mathbf{e}_n\}$

where $\{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ is an orthonormal set of abstract vectors indexed by the vertices of $\mathrm{G},$ and define the adjacency operator by

$A \mathbf{e}_j = \sum\limits_{\{i,j\} \in E(\mathrm{G})} \mathbf{e}_i.$

Both implementations are useful, and it’s wise to stay flexible and not be too wedded to either one. For the time being, we’ll go with the formal one.
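Concretely, in the basis $\{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ the adjacency operator is represented by the familiar adjacency matrix: column $j$ has a $1$ in row $i$ exactly when $\{i,j\}$ is an edge. A minimal sketch in Python (the helper name and the choice of the $4$-cycle as an example are mine, not from the lecture):

```python
def adjacency_matrix(n, edges):
    """Matrix of the adjacency operator A in the basis e_1, ..., e_n.

    Vertices are labeled 1..n; edges is an iterable of unordered pairs (i, j).
    Since A e_j = sum of e_i over edges {i, j}, column j has a 1 in row i
    exactly when {i, j} is an edge; the matrix is symmetric because G is
    undirected.
    """
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i - 1][j - 1] = 1
        A[j - 1][i - 1] = 1
    return A

# Example: the 4-cycle on vertices 1, 2, 3, 4.
C4 = adjacency_matrix(4, [(1, 2), (2, 3), (3, 4), (4, 1)])
```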

The purpose of the above trade-off is to replace combinatorial questions about graphs with algebraic questions about linear operators: since linear algebra is such a powerful tool, this is often very helpful, and is an instructive example of how helpful it can be to “linearize” a problem. In particular, let us consider the problem of counting walks in graphs.

Definition 1: Given a graph $\mathrm{G}$ on $\{1,\dots,n\}$, a pair of vertices $i,j\in \{1,\dots,n\},$ and a nonnegative integer $r,$ an $r$-step walk from $i$ to $j$ in $\mathrm{G}$ is a function $\omega \colon \{0,\dots,r\} \to \{1,\dots,n\}$ such that $\omega(0) = i,$ $\omega(r)=j,$ and $\omega(x),\omega(x+1)$ are adjacent vertices for all $x \in \{0,\dots,r-1\}.$ We write $\Omega^r(i,j)$ for the set of all such functions, and set $W^r(i,j) = |\Omega^r(i,j)|.$
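Definition 1 translates directly into a brute-force counter: extend a walk one step at a time along edges and count the extensions that end at $j$. A minimal sketch in plain Python (the function name is my own; this is only an illustration, not an efficient algorithm):

```python
def count_walks(n, edges, i, j, r):
    """Number of r-step walks from i to j in the graph on {1, ..., n}
    with the given edge set, by direct enumeration as in Definition 1."""
    edge_set = {frozenset(e) for e in edges}

    def extend(current, steps_left):
        # With no steps left, the walk counts iff it has reached j.
        if steps_left == 0:
            return 1 if current == j else 0
        # Otherwise, step to any neighbor and recurse.
        return sum(extend(k, steps_left - 1)
                   for k in range(1, n + 1)
                   if frozenset({current, k}) in edge_set)

    return extend(i, r)
```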

Let us take $\mathrm{G}=K_n,$ the complete graph on $\{1,\dots,n\},$ as a case study.

First, it is possible to count walks in the complete graph by direct combinatorial reasoning, without using any fancy linear algebra: thinking about the problem carefully reduces it to solving a system of two linear equations in two unknowns. Pick an arbitrary vertex of $K_n,$ say the vertex $1.$ Observe that

$\sum\limits_{j=1}^n W^r(1,j) = (n-1)^r,$

since the sum on the left hand side counts all $r$-step walks beginning at $1,$ while the right hand side reflects the fact that such a walk amounts to choosing between $n-1$ options $r$ times. Thinking about the left hand side a bit more, we realize that its terms are really only of two types: there’s the term $W^r(1,1)$ which counts loops of length $r$ based at $1,$ and then there are terms of the form $W^r(1,j)$ with $j \neq 1$ corresponding to $r$-step walks which aren’t loops. Moreover, for any two vertices $j_1,j_2$ both of which are distinct from $1,$ we have that $W^r(1,j_1) = W^r(1,j_2)$ because $K_n$ is “everywhere the same.” More precisely, verify the following.

Problem 2.1: The map

$\Omega^r(1,j_1) \longrightarrow \Omega^r(1,j_2)$

defined by $\omega \mapsto \tau \circ \omega,$ where $\tau \in \mathrm{S}(n)$ is the transposition $\tau = (j_1\ j_2)$ is a bijection.

In view of Problem 2.1, the above summation identity simplifies to

$W^r(1,1) + (n-1)W^r(1,2) = (n-1)^r.$

We thus have a linear equation with two unknowns: if we can find one more equation we’re done. Looking at small values of $r,$ one observes that the relation

$W^r(1,2) = W^r(1,1)+(-1)^{r-1}$

appears to hold, and this can be verified by induction. Indeed, for the inductive step, we write

$W^{r+1}(1,2) = \sum\limits_{j \neq 2} W^r(1,j),$

since an $(r+1)$-step walk from $1$ to $2$ can be at any vertex except the target vertex $2$ at its penultimate step, and also

$W^{r+1}(1,1) = \sum\limits_{j \neq 1} W^r(1,j)$

for exactly the same reason. Subtracting, we get

$W^{r+1}(1,2) - W^{r+1}(1,1) = W^r(1,1)-W^r(1,2) = (-1)^r$

by the induction hypothesis. Thus we obtain the formula

$W^r(1,1) = \frac{(n-1)^r+(-1)^{r}(n-1)}{n},$

for loops, and I leave it to you to complete the mission and obtain the corresponding formula for $W^r(1,2)$.
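The loop formula is easy to sanity-check by brute force for small $n$ and $r$. The following quick numerical check (my own, not part of the argument) compares it against direct recursive counting in $K_n$:

```python
def walks_Kn(n, i, j, r):
    """Count r-step walks from i to j in the complete graph K_n by
    direct recursion: each step moves to any of the other n-1 vertices."""
    if r == 0:
        return int(i == j)
    return sum(walks_Kn(n, k, j, r - 1) for k in range(1, n + 1) if k != i)

def loops_formula(n, r):
    """Closed form W^r(1,1) = ((n-1)^r + (-1)^r (n-1)) / n."""
    return ((n - 1) ** r + (-1) ** r * (n - 1)) // n

# Compare the two for several small cases.
for n in (3, 4, 5):
    for r in range(7):
        assert walks_Kn(n, 1, 1, r) == loops_formula(n, r)
```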

To solve the above problem through the systematic use of linear algebra rather than combinatorial reasoning, we can make use of the following general result.

Theorem 1: Let $\mathrm{G}$ be a finite undirected graph on $\{1,\dots,n\},$ and let $A \in \mathrm{End}\ \mathbb{C}\mathrm{G}$ be its adjacency operator. Then for any $i,j \in \{1,\dots,n\}$ and $r \in \mathbb{Z}_{\geq 0},$ we have

$W^r(i,j) = \langle \mathbf{e}_i,A^r\mathbf{e}_j \rangle.$

Proof: For $r=0,$ this is obvious: since $A^0=I$ is the identity operator, we have $\langle\mathbf{e}_i,A^0\mathbf{e}_j \rangle = \delta_{ij},$ and clearly $W^0(i,j)=\delta_{ij}.$ For $r \in \mathbb{N},$ the result follows from (a special case of) the formula for the matrix elements of a product $A_1\dots A_r$ of linear operators relative to a preferred basis:

$\langle \mathbf{e}_i, A_1 \dots A_r \mathbf{e}_j \rangle = \sum\limits_{\substack{\omega \colon \{0,\dots,r\} \to \{1,\dots,n\} \\ \omega(0)=i, \omega(r)=j}} \langle \mathbf{e}_{\omega(0)}, A_1 \mathbf{e}_{\omega(1)} \rangle \langle \mathbf{e}_{\omega(1)}, A_2 \mathbf{e}_{\omega(2)} \rangle \dots \langle \mathbf{e}_{\omega(r-1)},A_r \mathbf{e}_{\omega(r)} \rangle.$

In the particular case at hand, we have $A_1=\dots=A_r=A,$ the adjacency operator, so that for any $x \in \{1,\dots,r\}$ the scalar product $\langle \mathbf{e}_{\omega(x-1)}, A \mathbf{e}_{\omega(x)} \rangle$ is equal to zero unless $\omega(x-1)$ and $\omega(x)$ are adjacent, in which case it is equal to $1$. Thus we have

$\langle \mathbf{e}_i, A^r \mathbf{e}_j \rangle = \sum\limits_{\omega \in \Omega^r(i,j)} 1 = W^r(i,j).$

— Q.E.D.
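Theorem 1 is also easy to test numerically on any small graph: compare the entries of the matrix power $A^r$ with a brute-force walk count straight from Definition 1. A minimal sketch, using the $4$-cycle as an arbitrary test graph (the helper names are mine):

```python
def mat_mul(X, Y):
    """Product of two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[a][k] * Y[k][b] for k in range(n)) for b in range(n)]
            for a in range(n)]

def mat_pow(A, r):
    """A^r, with A^0 the identity."""
    n = len(A)
    P = [[int(a == b) for b in range(n)] for a in range(n)]
    for _ in range(r):
        P = mat_mul(P, A)
    return P

def count_walks(A, i, j, r):
    """r-step walks from i to j (1-based), straight from Definition 1."""
    if r == 0:
        return int(i == j)
    n = len(A)
    return sum(count_walks(A, k, j, r - 1)
               for k in range(1, n + 1) if A[i - 1][k - 1] == 1)

# Adjacency matrix of the 4-cycle 1-2-3-4-1.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

# W^r(i,j) should equal the (i,j) entry of A^r.
for r in range(5):
    for i in range(1, 5):
        for j in range(1, 5):
            assert mat_pow(C4, r)[i - 1][j - 1] == count_walks(C4, i, j, r)
```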

I’m sure you think you know what’s coming next: an invocation of the Spectral Theorem. However, let’s leave that for next lecture, and end this one by observing that sometimes Theorem 1 is useful simply because of the efficacy of algebraic operations with linear operators. In particular, in the case of $K_n,$ we have $A=J-I,$ where $I,J \in \mathrm{End}\ \mathbb{C}\mathrm{K}_n$ are the operators defined by

$I\mathbf{e}_j = \mathbf{e}_j \quad\text{ and }\quad J\mathbf{e}_j = \sum\limits_{i=1}^n \mathbf{e}_i.$

Clearly, these operators commute, so we can compute $A^r$ by means of the binomial theorem:

$A^r=(J-I)^r = \sum\limits_{s=0}^r {r \choose s} J^s (-I)^{r-s} = \sum\limits_{s=1}^r {r \choose s} (-1)^{r-s} J^s + (-1)^r I.$

Problem 2.2: Prove that $J^s = n^{s-1}J$ for all $s \in \mathbb{N}.$

Using Problem 2.2, we can continue our above computation to

$A^r=(J-I)^r= \frac{1}{n}\sum\limits_{s=1}^r{r \choose s}n^s (-1)^{r-s} J + (-1)^rI,$

and using the binomial theorem again (in the other direction), this is

$A^r=\frac{1}{n}\left((n-1)^r -(-1)^r\right) J + (-1)^rI.$

In particular, we can compute the missing quantity $W^r(1,2)$ from the combinatorial argument above as the scalar product

$\langle \mathbf{e}_1,A^r \mathbf{e}_2 \rangle = \frac{1}{n}\left((n-1)^r -(-1)^r\right).$
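The closed form for $A^r$ can be checked against a direct matrix power for small $n$ and $r$. The following quick numerical check (my own, not part of the argument) verifies that $n\,(A^r)_{ij} = \left((n-1)^r - (-1)^r\right) + n(-1)^r\,\delta_{ij}$:

```python
def mat_mul(X, Y):
    """Product of two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[a][k] * Y[k][b] for k in range(n)) for b in range(n)]
            for a in range(n)]

def mat_pow(A, r):
    """A^r, with A^0 the identity."""
    n = len(A)
    P = [[int(a == b) for b in range(n)] for a in range(n)]
    for _ in range(r):
        P = mat_mul(P, A)
    return P

# For K_n the adjacency matrix is J - I: ones everywhere off the diagonal.
for n in (3, 4, 5):
    A = [[int(a != b) for b in range(n)] for a in range(n)]
    for r in range(6):
        c = (n - 1) ** r - (-1) ** r
        P = mat_pow(A, r)
        for a in range(n):
            for b in range(n):
                # n * (A^r)_{ab} = c + n * (-1)^r on the diagonal, c off it.
                assert n * P[a][b] == c + (n * (-1) ** r if a == b else 0)
```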

Of course, all of this follows from the decomposition $A=J-I$ of the adjacency operator, which is a special feature of $\mathrm{K}_n.$ For general graphs, we really do have to resort to spectral theory, and we will start down this road next lecture.