Math 202C: Lecture 8

We are at the beginning of the theory of harmonic analysis on finite abelian groups, starting with the concrete example of the rank k hypercube group, which we have chosen to define as the permutation group \mathrm{B}(k) generated by the disjoint transpositions

\tau_1=(1\ 2), \tau_2=(3\ 4), \dots, \tau_k = (2k-1\ 2k).
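
For concreteness, here is a minimal Python sketch (my own bookkeeping, not part of the lecture): since the generators commute and square to the identity, an element of \mathrm{B}(k) can be encoded as a bitmask of length k recording which \tau_i occur in it, and the group law becomes bitwise XOR. The helper as_permutation is a made-up name that unpacks a bitmask into the corresponding permutation of \{1,\dots,2k\} in one-line notation.

# Elements of B(k): products of a subset of the commuting involutions
# tau_1, ..., tau_k, so a bitmask m in range(2**k) suffices.
# The group law is bitwise XOR, the identity is 0, and every element
# is its own inverse.
def multiply(m1, m2):
    """Group law of B(k) in the bitmask encoding."""
    return m1 ^ m2

def as_permutation(m, k):
    """One-line notation of the permutation of {1, ..., 2k} encoded by m."""
    perm = list(range(1, 2 * k + 1))
    for i in range(k):
        if m >> i & 1:  # bit i set means tau_{i+1} = (2i+1  2i+2) occurs
            perm[2 * i], perm[2 * i + 1] = perm[2 * i + 1], perm[2 * i]
    return perm

print(as_permutation(0b101, 3))  # tau_1 tau_3 acting on {1,...,6}: [2, 1, 3, 4, 6, 5]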

Typically, analysts think of harmonic analysis on \mathrm{B}(k) (or any other group) as the study of the Hilbert space \mathbb{C}^{\mathrm{B}(k)} of functions

a \colon \mathrm{B}(k) \longrightarrow \mathbb{C}

equipped with the scalar product

\langle a,b \rangle = \sum\limits_{\pi \in \mathrm{B}(k)} \overline{a(\pi)}b(\pi),

so that one has a canonical orthonormal basis \{\delta_\pi\} of \mathbb{C}^{\mathrm{B}(k)} consisting of indicator functions. This space has the structure of an algebra when equipped with the convolution product of functions,

[a*b](\pi) = \sum\limits_{\sigma \in \mathrm{B}(k)} a(\pi\sigma^{-1}) b(\sigma),

which is exactly what one gets from the bilinear extension of the group law of \mathrm{B}(k) lifted to the group element basis:

\delta_\rho * \delta_\sigma = \delta_{\rho\sigma}.

In particular, the convolution product on \mathbb{C}^{\mathrm{B}(k)} is quite different from the pointwise product. This is the right point of view to take when one is dealing with non-discrete groups (e.g. Lie groups), but for finite groups there is another way of writing things out which is often simpler and cleaner, and favored by algebraists.
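
Here is a quick numerical sanity check of the convolution formula, in the bitmask encoding of \mathrm{B}(k) sketched above (group law XOR, every element its own inverse); convolve is a hypothetical helper name, not notation from the lecture.

import numpy as np

def convolve(a, b):
    """[a*b](pi) = sum over sigma of a(pi sigma^{-1}) b(sigma), with B(k)
    encoded as bitmasks 0..2^k-1 and group law XOR."""
    n = len(a)
    out = np.zeros(n, dtype=complex)
    for pi in range(n):
        for sigma in range(n):
            out[pi] += a[pi ^ sigma] * b[sigma]  # pi sigma^{-1} = pi XOR sigma
    return out

# Check delta_rho * delta_sigma = delta_{rho sigma} for one pair in B(3):
k, rho, sigma = 3, 0b011, 0b110
delta = lambda m: np.eye(2 ** k)[m]
assert np.allclose(convolve(delta(rho), delta(sigma)), delta(rho ^ sigma))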

The algebraists’ preferred Hilbert space is \mathbb{C}\mathrm{B}(k), the space of formal \mathbb{C}-linear combinations of permutations \pi \in \mathrm{B}(k),

a = \sum\limits_{\pi \in \mathrm{B}(k)} a(\pi)\pi.

Such linear combinations are of course equivalent to complex-valued functions on the group: you can think of the linear combination a as the generating function of the function \pi \mapsto a(\pi). From this perspective the canonical orthonormal basis of \mathbb{C}\mathrm{B}(k) consists of the permutations \pi \in \mathrm{B}(k) themselves (rather than their indicator functions), which induces the equivalent scalar product,

\langle a,b \rangle = \sum\limits_{\pi \in \mathrm{B}(k)} \overline{\langle \pi, a\rangle} \langle \pi,b \rangle.

The convolution product on \mathbb{C}\mathrm{B}(k) is likewise replaced with the equivalent multiplication (written simply as juxtaposition)

ab = \sum\limits_{(\rho,\sigma) \in \mathrm{B}(k)^2} \langle \rho,a\rangle \langle \sigma,b\rangle \rho\sigma =\sum\limits_{\pi \in \mathrm{B}(k)} \left(\sum\limits_{\sigma \in \mathrm{B}(k)} \langle \pi\sigma^{-1},a\rangle \langle \sigma,b \rangle \right)\pi.

In short, \mathbb{C}^{\mathrm{B}(k)} and \mathbb{C}\mathrm{B}(k) are isomorphic both as Hilbert spaces and as algebras, so the choice to use one rather than the other is a matter only of preference. In practice, it is quite useful to be comfortable with both the analytic and the algebraic way of doing things.

Yet another way of repackaging everything is the following probably too-general perspective. Let us zoom back out to the very beginning, where we started with an arbitrary finite graph \Gamma. The algebraic way to understand this combinatorial object is to replace the vertex set V of \Gamma with the algebra \mathbb{C}\langle x_v \colon v \in V \rangle of polynomials in noncommuting variables x_v indexed by the vertices v of \Gamma. In other words, this is the free algebra of rank |V|. This algebra is graded by degree, and we have a decomposition

\mathbb{C}\langle x_v \colon v \in V \rangle = \bigoplus_{d=0}^\infty \mathbb{C}^{(d)}\langle x_v \colon v \in V \rangle,

with the dth summand being the space of homogeneous polynomials of degree d. Observe that this vector space is isomorphic to the space of complex-valued functions on V^d, i.e. the space of functions on d-tuples of vertices, or equivalently formal linear combinations of d-tuples of vertices. In particular, the linear component \mathbb{C}^{(1)}\langle x_v \colon v \in V\rangle is isomorphic to either/both \mathbb{C}^V and \mathbb{C}V, and the adjacency operator acts on this linear component to capture the edge structure of \Gamma. Indeed, the free algebra is isomorphic to the tensor algebra of the vector space of functions on the vertex set V. In the case where the vertex set V of \Gamma is actually a group \mathrm{G}, we can mod out by the group law and consider the quotient

\mathbb{C}\langle x_g \colon g \in \mathrm{G} \rangle/\langle x_gx_h=x_{gh}\rangle

of the free algebra by the ideal corresponding to the group multiplication. In this quotient every monomial reduces to one of degree 1, so its elements are just functions on the group, or equivalently linear combinations of group elements, but now equipped with the structure of an algebra.

Coming back to reality, the fact that \mathbb{C}\mathrm{B}(k) is an algebra and not just a vector space is reflected in the presence of a distinguished class of operators on the group algebra which have no counterpart for a vector space devoid of multiplication.

Definition 1: Given b \in \mathbb{C}\mathrm{B}(k), the corresponding multiplication operator R_b\in \mathrm{End} \mathbb{C}\mathrm{B}(k) is defined by

R_ba = ab, \quad a \in \mathbb{C}\mathrm{B}(k).

We are using the letter “R” for multiplication operators because later on we will consider the group algebras of noncommutative groups, in which left and right multiplication do not necessarily coincide, and we typically have different left and right multiplication operators L_b and R_b. To put this the Math 202B way, the operator R_b is the image of b in the right regular representation of \mathbb{C}\mathrm{B}(k).

Note that multiplication operators corresponding to pure group elements (as opposed to linear combinations of group elements) are unitary operators on \mathbb{C}\mathrm{B}(k),

\langle \rho,R_\pi\sigma \rangle = \langle \rho,\sigma\pi \rangle = \langle \rho\pi^{-1},\sigma \rangle = \langle R_{\pi}^{-1} \rho,\sigma \rangle.

Moreover, since we are working with the group \mathrm{B}(k), in which every element is an involution, R_\pi is both unitary and self-adjoint, hence has eigenvalues \pm 1.
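
To see Definition 1 and these observations in coordinates, here is a short Python sketch (bitmask encoding again, my own device) that builds the matrix of R_b in the group element basis and checks that each R_{\tau_i} is a symmetric orthogonal matrix with eigenvalues \pm 1.

import numpy as np

def mult_operator(b):
    """Matrix of R_b (right multiplication by b) in the group element basis,
    bitmask encoding: entry (rho, sigma) is <rho, sigma b> = b(sigma^{-1} rho)."""
    n = len(b)
    return np.array([[b[sigma ^ rho] for sigma in range(n)] for rho in range(n)])

k = 3
n = 2 ** k
for i in range(k):
    tau_i = np.eye(n)[1 << i]             # the pure group element tau_{i+1}
    R = mult_operator(tau_i)
    assert np.allclose(R, R.T)            # self-adjoint (real symmetric)
    assert np.allclose(R @ R, np.eye(n))  # an involution, hence unitary
    assert np.allclose(np.abs(np.linalg.eigvalsh(R)), 1.0)  # eigenvalues +1 or -1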

The importance of multiplication operators in our quest to understand the eigenvalues and eigenvectors of the Cayley graph of \mathrm{B}(k) as generated by the transpositions \tau_1,\dots,\tau_k is the following.

Proposition 1: The adjacency operator of the Cayley graph of \mathrm{B}(k) is the multiplication operator R_t, where

t = \sum\limits_{i=1}^k \tau_i.

Proof: For any \rho \in \mathrm{B}(k), we have

R_t\rho = \sum\limits_{i=1}^k \rho\tau_i,

and the permutations \rho\tau_1,\dots,\rho\tau_k are exactly the neighbors of \rho in the Cayley graph, so R_t sends each vertex to the sum of its neighbors, which is precisely the action of the adjacency operator in the group element basis.

— Q.E.D.

In view of Proposition 1, the problem of understanding the eigenvalues and eigenvectors of the Cayley graph is that of understanding the eigenvalues and eigenvectors of

R_t = \sum\limits_{i=1}^k R_{\tau_i}.
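
As a numerical sanity check on Proposition 1, the following sketch builds R_t in the bitmask encoding, compares it with the adjacency matrix of the Cayley graph, in which two bitmasks are adjacent exactly when they differ in a single bit, and prints the spectrum numerically; the structure of this spectrum is what Lecture 9 will explain.

import numpy as np

k = 3
n = 2 ** k

def mult_operator(b):
    """Matrix of R_b in the group element basis (bitmask encoding, group law XOR)."""
    return np.array([[b[s ^ r] for s in range(n)] for r in range(n)])

# t = tau_1 + ... + tau_k as an element of the group algebra:
t = np.zeros(n)
for i in range(k):
    t[1 << i] = 1.0

R_t = mult_operator(t)
# Adjacency matrix of the Cayley graph: rho ~ sigma iff rho sigma^{-1} is some tau_i,
# i.e. iff the bitmasks differ in exactly one bit.
A = np.array([[1.0 if bin(r ^ s).count("1") == 1 else 0.0 for s in range(n)]
              for r in range(n)])
assert np.allclose(R_t, A)
print(sorted(np.linalg.eigvalsh(R_t).round(6)))  # the spectrum, computed numerically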

To solve it, we will work out the spectral analysis of arbitrary multiplication operators on \mathrm{B}(k). The main tool in this endeavor is a second basis of the group algebra, consisting of special linear combinations \chi of group elements with the property that

\langle \rho\sigma,\chi \rangle = \langle \rho,\chi \rangle \langle \sigma,\chi \rangle.

That is, \chi is the generating function of a group homomorphism

\mathrm{B}(k) \to \mathbb{C}^\times,

the multiplicative group of nonzero complex numbers. I like this way of stating things; in the analytic formalism, the words “character” and “homomorphism” mean exactly the same thing, a redundancy, whereas in the algebraic formalism “character” really means “generating function of a homomorphism.”

In Lecture 7, we explicitly worked out all homomorphisms from \mathrm{B}(k) to \mathbb{C}^\times, showing that they take values in the subgroup generated by -1, and are moreover parameterized by the points of \mathrm{B}(k). We also showed that the characters are orthogonal,

\langle \chi_\pi,\chi_\sigma \rangle = 2^k[\pi=\sigma], \quad \pi,\sigma \in \mathrm{B}(k).

In particular, for any a \in \mathbb{C}\mathrm{B}(k), we have the character expansion

a = 2^{-k}\sum\limits_{\pi \in \mathrm{B}(k)} \langle \chi_\pi,a\rangle \chi_\pi.
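
For readers who like to verify such identities on a computer, here is a sketch checking character orthogonality and the Fourier expansion. The explicit formula \chi_\pi(\sigma) = (-1)^{\#\{\text{generators common to } \pi \text{ and } \sigma\}} is the Lecture 7 parameterization as I recall it; since it is not reproduced above, treat it as an assumption of this sketch.

import numpy as np

k = 3
n = 2 ** k

def chi(pi):
    """Coefficients of chi_pi in the group element basis (bitmask encoding).
    Assumed formula: chi_pi(sigma) = (-1)^(number of tau_i occurring in both)."""
    return np.array([(-1.0) ** bin(pi & sigma).count("1") for sigma in range(n)])

# Orthogonality: <chi_pi, chi_sigma> = 2^k [pi = sigma]
for pi in range(n):
    for sigma in range(n):
        assert np.isclose(chi(pi) @ chi(sigma), n * (pi == sigma))

# Fourier expansion: a = 2^{-k} sum over pi of <chi_pi, a> chi_pi
a = np.random.default_rng(0).standard_normal(n)
assert np.allclose(sum((chi(pi) @ a) * chi(pi) for pi in range(n)) / n, a)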

The character expansion of a is also known as its Fourier expansion, with \langle \chi_\pi,a\rangle being the Fourier coefficients of a. The main reason that character expansions are “better” than expansions in the group basis is that the Fourier coefficients of a product ab are just the products of the corresponding Fourier coefficients of a and b.

The underlying reason that characters, which are generating functions of homomorphisms \mathrm{B}(k) \to \{-1,+1\}, are better than the original group elements is that their multiplication is extremely simple.

Theorem 1: For any \rho,\sigma \in \mathrm{B}(k), we have

\chi_\rho\chi_\sigma =0

if \rho \neq \sigma, and

\chi_\rho \chi_\rho = 2^k\chi_\rho.

Proof: Expanding in the permutation basis, we have

\chi_\rho\chi_\sigma = \sum\limits_{\pi_1 \in \mathrm{B}(k)} \left( \sum\limits_{\pi_2 \in \mathrm{B}(k)} \chi_\rho(\pi_1\pi_2^{-1}) \chi_\sigma(\pi_2) \right) \pi_1 = \sum\limits_{\pi_1 \in \mathrm{B}(k)} \left( \sum\limits_{\pi_2 \in \mathrm{B}(k)} \chi_\rho(\pi_1) \chi_\rho(\pi_2) \chi_\sigma(\pi_2) \right) \pi_1,

where we used that \chi_\rho is a homomorphism and \pi_2^{-1}=\pi_2. The result now follows from character orthogonality, which is Theorem 2 in Lecture 7.

— Q.E.D.
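
Here is a quick numerical confirmation of Theorem 1 in the bitmask encoding (chi implements the same assumed Lecture 7 parameterization as in the previous sketch).

import numpy as np

k = 3
n = 2 ** k

def convolve(a, b):
    """Product in the group algebra, bitmask encoding (group law XOR)."""
    out = np.zeros(n)
    for pi in range(n):
        for sigma in range(n):
            out[pi] += a[pi ^ sigma] * b[sigma]
    return out

def chi(pi):
    """Assumed formula: chi_pi(sigma) = (-1)^(number of common tau_i)."""
    return np.array([(-1.0) ** bin(pi & sigma).count("1") for sigma in range(n)])

# chi_rho chi_sigma = 0 if rho != sigma, and chi_rho chi_rho = 2^k chi_rho:
for rho in range(n):
    for sigma in range(n):
        expected = (n if rho == sigma else 0) * chi(rho)
        assert np.allclose(convolve(chi(rho), chi(sigma)), expected)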

An immediate consequence is that the Fourier coefficients of a product are the products of the Fourier coefficients of the factors.

Corollary 1: For any a,b \in \mathbb{C}\mathrm{B}(k), we have

\langle \chi_\pi,ab\rangle = \langle \chi_\pi,a \rangle \langle \chi_\pi,b\rangle.

Proof: On one hand, the character expansion of the product ab is

ab = 2^{-k}\sum\limits_{\pi \in \mathrm{B}(k)} \langle \chi_\pi,ab\rangle \chi_\pi.

On the other hand, we compute the product of a and b via their character expansions:

ab = \left( 2^{-k}\sum\limits_{\rho \in \mathrm{B}(k)} \langle \chi_\rho,a\rangle \chi_\rho\right)\left( 2^{-k}\sum\limits_{\sigma \in \mathrm{B}(k)} \langle \chi_\sigma,b\rangle \chi_\sigma\right) \\= 4^{-k} \sum\limits_{(\rho,\sigma) \in \mathrm{B}(k)^2}\langle \chi_\rho,a\rangle\langle \chi_\sigma,b\rangle \chi_\rho\chi_\sigma \\ = 2^{-k}\sum\limits_{\pi \in \mathrm{B}(k)} \langle\chi_\pi,a\rangle \langle \chi_\pi,b\rangle \chi_\pi,

where we applied Theorem 1 to get the final equality. Since the characters are linearly independent, the result follows.

— Q.E.D.
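
And a matching check of Corollary 1: the Fourier coefficients of a product are the products of the Fourier coefficients (same bitmask encoding and assumed character formula as in the sketches above).

import numpy as np

k = 3
n = 2 ** k

def convolve(a, b):
    """Product in the group algebra, bitmask encoding (group law XOR)."""
    out = np.zeros(n)
    for pi in range(n):
        for sigma in range(n):
            out[pi] += a[pi ^ sigma] * b[sigma]
    return out

def chi(pi):
    """Assumed formula: chi_pi(sigma) = (-1)^(number of common tau_i)."""
    return np.array([(-1.0) ** bin(pi & sigma).count("1") for sigma in range(n)])

rng = np.random.default_rng(1)
a, b = rng.standard_normal(n), rng.standard_normal(n)
ab = convolve(a, b)
for pi in range(n):
    # <chi_pi, ab> = <chi_pi, a> <chi_pi, b>
    assert np.isclose(chi(pi) @ ab, (chi(pi) @ a) * (chi(pi) @ b))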

In Lecture 9, we will apply the above to prove that M \in \mathrm{End} \mathbb{C}\mathrm{B}(k) is a multiplication operator if and only if it acts diagonally in the character basis. We will also find the corresponding eigenvalues, and hence count walks on \mathrm{B}(k). In Lecture 10 we will then analyze the convergence of the random walk on \mathrm{B}(k) to equilibrium.
