Math 202C: Lecture 3

In Lecture 2, we established the following unique factorization result for the symmetric group $\mathrm{S}(d).$

Theorem 1 (Jucys): Every permutation $\pi \in \mathrm{S}(d)$ admits a unique factorization

$\pi = (i_1\ j_1) \dots (i_r\ j_r)$

into transpositions $(i\ j),$ $1 \leq i < j \leq d,$ such that

$j_1 < \dots < j_r.$

Moreover, the number $r$ of factors in this decomposition is $r=|\pi|.$
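Theorem 1 is easy to confirm by machine for small $d$. The following Python sketch (an illustration, not part of the lecture; permutations are encoded 0-based as tuples, so the transpositions are $(i\ j)$ with $0 \leq i < j \leq d-1$) enumerates all strictly monotone products in $\mathrm{S}(4)$ and checks that every permutation arises exactly once, with $r = |\pi|$:

```python
from itertools import product
from collections import Counter

d = 4

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def transposition(i, j):
    t = list(range(d))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def num_cycles(p):
    seen, c = set(), 0
    for x in range(d):
        if x not in seen:
            c += 1
            while x not in seen:
                seen.add(x)
                x = p[x]
    return c

# For each possible j we either skip it or pick some i < j; since the j's
# actually used are then automatically strictly increasing, these choices
# enumerate exactly the strictly monotone products (i_1 j_1)...(i_r j_r).
count = Counter()
for pick in product(*[[None] + list(range(j)) for j in range(1, d)]):
    p, r = tuple(range(d)), 0
    for j, i in enumerate(pick, start=1):
        if i is not None:
            p = compose(p, transposition(i, j))
            r += 1
    count[p] += 1
    assert r == d - num_cycles(p)   # the number of factors equals |pi|

# every permutation of S(4) occurs exactly once
assert len(count) == 24 and set(count.values()) == {1}
```

Note that the enumeration already suggests the theorem: for each $j$ there are $j$ choices (skip, or one of the $j-1$ values of $i$), giving $\prod_{j=2}^d j = d!$ strictly monotone products in total.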

We now consider some algebraic ramifications of Jucys’ result. Let $\mathcal{A}(d)$ denote the group algebra of the symmetric group. As you know from Math 202B, this is the $d!$-dimensional Hilbert space in which $\mathrm{S}(d)$ sits as an orthonormal basis, and which supports the bilinear product inherited from the group law on $\mathrm{S}(d)$. In fact, $\mathbb{C}\mathrm{S}(d)$ is a $*$-algebra, with antilinear involution given by $\pi^*=\pi^{-1}.$ All of this is true for arbitrary groups, and indeed you can view $\mathcal{A}$ as a functor from the category of groups to the category of unital associative $*$-algebras which transports each group $\mathrm{G}$ to its corresponding group algebra $\mathcal{A}(\mathrm{G}).$

Any subset of $\mathrm{S}(d)$ may be identified with a vector in $\mathbb{C}\mathrm{S}(d),$ simply by identifying it with the sum of its elements, and any family of disjoint subsets of $\mathrm{S}(d)$ is linearly independent, since any two of its members are orthogonal in $\mathbb{C}\mathrm{S}(d).$ In particular, for each $\alpha \vdash d$ we may view the corresponding conjugacy class $C_\alpha$ as an element of $\mathbb{C}\mathrm{S}(d)$, the sum of all permutations of cycle type $\alpha,$ and consider the linearly independent set in the group algebra,

$\{ C_\alpha \colon \alpha \vdash d\}.$

Let $\mathcal{Z}(d)$ denote the subspace of $\mathcal{A}(d)$ spanned by the conjugacy classes, the dimension of which is the number $p(d)$ of partitions of $d$. Given $C_\alpha,C_\beta \in \mathcal{Z}(d)$, we have

$C_\alpha C_\beta = \sum\limits_{\pi \in \mathrm{S}(d)} c_{\alpha\beta}^\pi \pi,$

where

$c_{\alpha\beta}^\pi = \left| \{ (\rho,\sigma) \in C_\alpha \times C_\beta \colon \rho\sigma=\pi\}\right|.$

The number $c_{\alpha\beta}^\pi$ in fact depends only on the conjugacy class of $\pi$: if $\rho\sigma=\pi$, then $(\kappa\rho\kappa^{-1})(\kappa\sigma\kappa^{-1})=\kappa\pi\kappa^{-1}$ for any $\kappa \in \mathrm{S}(d).$ Thus $\mathcal{Z}(d)$ is a subalgebra of $\mathbb{C}\mathrm{S}(d)$ called the class algebra, which is a particularly appropriate name for something encountered in an algebra class. We thus have

$C_\alpha C_\beta = \sum\limits_{\gamma \vdash d} c_{\alpha\beta}^\gamma C_\gamma$

where

$c_{\alpha\beta}^\gamma = \left| \{ (\rho,\sigma) \in C_\alpha \times C_\beta \colon \rho\sigma=\pi\}\right|$ for any fixed representative $\pi \in C_\gamma$ (the count is independent of the choice of $\pi$, by the preceding observation).
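For small groups these coefficients can be tabulated by brute force. Here is a quick Python computation (illustrative only; permutations are encoded as 0-based tuples) of the product $C_{(2,1)} C_{(2,1)}$ in $\mathcal{Z}(3)$:

```python
from itertools import permutations, product
from collections import Counter

d = 3

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def cycle_type(p):
    seen, lens = set(), []
    for x in range(d):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = p[y]
                n += 1
            lens.append(n)
    return tuple(sorted(lens, reverse=True))

classes = {}
for p in permutations(range(d)):
    classes.setdefault(cycle_type(p), []).append(p)

alpha = beta = (2, 1)   # the class of transpositions in S(3)

# c_{alpha,beta}^pi for every pi
c = Counter()
for rho, sigma in product(classes[alpha], classes[beta]):
    c[compose(rho, sigma)] += 1

# the coefficient is constant on each conjugacy class...
assert all(len({c[p] for p in members}) == 1 for members in classes.values())
# ...and C_{(2,1)} C_{(2,1)} = 3 C_{(1,1,1)} + 3 C_{(3)} in Z(3)
assert c[classes[(1, 1, 1)][0]] == 3 and c[classes[(3,)][0]] == 3
```

The output confirms $C_{(2,1)}^2 = 3\,C_{(1,1,1)} + 3\,C_{(3)}$: the nine products of two transpositions in $\mathrm{S}(3)$ split as three copies of the identity and three factorizations of each $3$-cycle.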

The numbers $c_{\alpha\beta}^\gamma$ are called the connection coefficients (aka structure constants) of the class algebra $\mathcal{Z}(d)$; they are the entries of the $p(d) \times p(d)$ multiplication table for conjugacy classes. More generally, we can take arbitrary polynomials in conjugacy classes, and write them as linear combinations of conjugacy classes, and this often has enumerative meaning. For example, consider the class $T=C_{(21^{d-2})}$ of transpositions. For any $r \in \mathbb{N}$, we have

$T^r = \sum\limits_{j \in \mathrm{Fun}(r,d)}\ \sum\limits_{\substack{i \in \mathrm{Fun}(r,d) \\ i<j \text{ pointwise}}} (i(1)\ j(1)) \dots (i(r)\ j(r)),$

and each term in the sum corresponds uniquely to an $r$-step walk on the Cayley graph $\Gamma(d)$ of $\mathrm{S}(d)$ beginning at $\iota.$ Thus, writing $W^r(\pi)$ for the number of $r$-step walks on $\Gamma(d)$ from $\iota$ to $\pi,$ we have

$T^r = \sum\limits_{\pi \in \mathrm{S}(d)} W^r(\pi) \pi = \sum_{\alpha \vdash d} W^r(\alpha) C_\alpha,$

which says that the combinatorial problem of counting $r$-step walks on the graph $\Gamma(d)$ from the identity permutation to a permutation of cycle type $\alpha$ is the same thing as the algebraic problem of resolving $T^r \in \mathcal{Z}(d)$ into a linear combination of conjugacy classes.
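For instance, the walk counts for $d=4$, $r=3$ can be computed by repeatedly multiplying by $T$. A brute-force Python sketch (not part of the lecture; permutations are 0-based tuples):

```python
from itertools import permutations
from collections import Counter

d, r = 4, 3
identity = tuple(range(d))

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def cycle_type(p):
    seen, lens = set(), []
    for x in range(d):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = p[y]
                n += 1
            lens.append(n)
    return tuple(sorted(lens, reverse=True))

T = [p for p in permutations(range(d)) if cycle_type(p) == (2, 1, 1)]

# walks[pi] = number of k-step walks from iota to pi, built up step by step
walks = Counter({identity: 1})
for _ in range(r):
    step = Counter()
    for p, w in walks.items():
        for t in T:
            step[compose(p, t)] += w
    walks = step

assert sum(walks.values()) == 6 ** r    # every 3-step walk is counted once

# W^r is constant on conjugacy classes, since T^r is central
by_type = {}
for p in permutations(range(d)):
    by_type.setdefault(cycle_type(p), set()).add(walks[p])
assert all(len(v) == 1 for v in by_type.values())

# each 4-cycle admits 16 = 4^{4-2} minimal factorizations (Denes' formula)
assert by_type[(4,)] == {16}
```

Only the odd classes $(2,1,1)$ and $(4)$ receive nonzero coefficients here, since a $3$-step walk ends at an odd permutation.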

For any $\pi \in \mathrm{S}(d)$, we have

$C_\alpha \pi = \sum\limits_\rho \rho\pi \quad\text{ and }\quad \pi C_\alpha= \sum\limits_\rho \pi\rho,$

where both sums are over $\rho \in C_\alpha$, and the two sums are in fact the same since $(\pi\rho\pi^{-1})\pi = \pi \rho.$ Thus, the class algebra is a subalgebra of the center of the group algebra. Conversely, if $C \in \mathbb{C}\mathrm{S}(d)$ is a central element, then $\pi C \pi^{-1}=C \pi \pi^{-1}=C,$ whence $C$ is a linear combination of conjugacy classes, and we conclude that the class algebra $\mathcal{Z}(d)$ coincides with the center of the group algebra $\mathbb{C}\mathrm{S}(d).$ Again, all of the above is true for any group, and you can view $\mathcal{Z}$ as a functor from groups to commutative, unital $*$-algebras which transports each group $\mathrm{G}$ to its class algebra $\mathcal{Z}(\mathrm{G}).$
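The computation above can be spot-checked in code. Here is a small Python verification (illustrative only; permutations encoded as 0-based tuples) that a class sum commutes with every permutation in $\mathrm{S}(4)$:

```python
from itertools import permutations
from collections import Counter

d = 4

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def cycle_type(p):
    seen, lens = set(), []
    for x in range(d):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = p[y]
                n += 1
            lens.append(n)
    return tuple(sorted(lens, reverse=True))

# the class sum C_alpha for alpha = (2,2)
C = [p for p in permutations(range(d)) if cycle_type(p) == (2, 2)]

# C_alpha * pi = pi * C_alpha as elements of the group algebra
central = all(
    Counter(compose(rho, pi) for rho in C) == Counter(compose(pi, rho) for rho in C)
    for pi in permutations(range(d))
)
assert central
```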

In addition to the conjugacy classes themselves, the levels $L_r$ of the Cayley graph of $\mathrm{S}(d)$ belong to the class algebra, and are $\{0,1\}$-linear combinations of conjugacy classes,

$L_r = \sum\limits_{\substack{\pi \in \mathrm{S}(d) \\ |\pi|=r}} \pi = \sum\limits_{\substack{\alpha \vdash d \\ \ell(\alpha)=d-r}} C_\alpha.$

Lifted to the group algebra, Theorem 1 says that these central elements are given by the sums

$L_r = \sum\limits_{\substack{j \in \mathrm{Fun}(r,d) \\ j \text{ strictly increasing}}}\ \sum\limits_{\substack{i \in \mathrm{Fun}(r,d) \\ i<j \text{ pointwise}}} (i(1)\ j(1)) \dots (i(r)\ j(r)),$

where $\mathrm{Fun}(r,d)$ is the set of functions from $[r]$ to $[d].$ Equivalently, we have

$L_r = \sum\limits_{\substack{ j \in \mathrm{Fun}(r,d) \\ j\text{ strictly increasing}}} Y_{j(1)} \dots Y_{j(r)},$

where $Y_1,Y_2,\dots,Y_d \in \mathbb{C}\mathrm{S}(d)$ are the sums

$Y_j = \sum\limits_{\{i \in [d] \colon i<j\}} (i\ j).$

Note that $Y_1$ is an empty sum, hence equal to the zero element in $\mathbb{C}\mathrm{S}(d).$ A nice way to picture these sums is to take the conjugacy class $T \subset \mathrm{S}(d)$ of transpositions and view it as the $d \times d$ strictly upper triangular matrix over $\mathbb{C}\mathrm{S}(d)$ whose entry in row $i$ and column $j$ is the transposition $(i\ j)$ whenever $i < j,$ and $0 \in \mathbb{C}\mathrm{S}(d)$ otherwise. Then $Y_j$ is simply the $j$th column sum of the matrix $T$. For example, in the case $d=4,$ we have

$\begin{bmatrix} 0 & (1\ 2) & (1\ 3) & (1\ 4) \\ 0 & 0 & (2\ 3) & (2\ 4) \\ 0 & 0 & 0 & (3\ 4) \\ 0 & 0 & 0 & 0\end{bmatrix},$

and

$Y_1 = 0, \qquad Y_2 = (1\ 2), \qquad Y_3 = (1\ 3) + (2\ 3), \qquad Y_4 = (1\ 4) + (2\ 4) + (3\ 4).$

The transposition sums $Y_1,Y_2,\dots,Y_d$ are known as the Young-Jucys-Murphy elements of the group algebra $\mathbb{C}\mathrm{S}(d).$ Note that the YJM elements are non-central, since they correspond to proper subsets of a conjugacy class (namely, the class $T$), and therefore cannot be expressed as linear combinations of conjugacy classes. Nevertheless, we have the following.

Proposition 2: The YJM elements commute with one another.

Proof: Problem set 2.
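While the proof is deferred to the problem set, commutativity is easy to confirm by machine for small $d$. Here is a Python check for $d=4$ (illustrative only; permutations are encoded as 0-based tuples, so the elements below are $Y_1,\dots,Y_4$ relabeled $Y_0,\dots,Y_3$):

```python
from itertools import combinations
from collections import Counter

d = 4

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def transposition(i, j):
    t = list(range(d))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def mul(a, b):
    # product in the group algebra: convolution of coefficient functions
    out = Counter()
    for p, x in a.items():
        for q, y in b.items():
            out[compose(p, q)] += x * y
    return out

# YJM elements, 0-based: Y[0] is the empty sum, Y[j] = sum of (i j) over i < j
Y = [Counter({transposition(i, j): 1 for i in range(j)}) for j in range(d)]

commute = all(mul(Y[a], Y[b]) == mul(Y[b], Y[a])
              for a, b in combinations(range(d), 2))
assert commute
```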

In view of the commutativity of the elements $Y_1,\dots,Y_d$, we may equivalently write Theorem 1 as

$L_r = \sum\limits_{\substack{R \subseteq [d] \\ |R|=r}} \prod\limits_{j \in R} Y_j,$

which says that $L_r$ is the sum of all squarefree monomials of degree $r$ in the commuting elements $Y_1,\dots,Y_d$. But this is a polynomial you know from Math 202B: it is the elementary symmetric polynomial $e_r$ of degree $r$ in $d$ variables. In particular, Theorem 1 is equivalent to the following polynomial identity in the class algebra $\mathcal{Z}(d)$.

Theorem 2: $L_r = e_r(Y_1,\dots,Y_d).$
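Theorem 2 can also be verified mechanically for small $d$. The following Python sketch (an illustration, not a proof; permutations are 0-based tuples, so the indeterminates are $Y_0,\dots,Y_3$ with $Y_0=0$) checks $L_r = e_r(Y_1,\dots,Y_d)$ in $\mathbb{C}\mathrm{S}(4)$ for all $r$:

```python
from itertools import combinations, permutations
from collections import Counter

d = 4
identity = tuple(range(d))

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def transposition(i, j):
    t = list(range(d))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def num_cycles(p):
    seen, c = set(), 0
    for x in range(d):
        if x not in seen:
            c += 1
            while x not in seen:
                seen.add(x)
                x = p[x]
    return c

def mul(a, b):
    # product in the group algebra: convolution of coefficient functions
    out = Counter()
    for p, x in a.items():
        for q, y in b.items():
            out[compose(p, q)] += x * y
    return out

# YJM elements, 0-based: Y[0] = 0, Y[j] = sum of (i j) over i < j
Y = [Counter({transposition(i, j): 1 for i in range(j)}) for j in range(d)]

def e(r):
    # elementary symmetric polynomial e_r evaluated at the YJM elements
    total = Counter()
    for S in combinations(range(d), r):
        term = Counter({identity: 1})
        for j in S:
            term = mul(term, Y[j])
        total += term
    return total

# L_r = sum of all permutations with |pi| = d - (number of cycles) = r
L = {r: Counter({p: 1 for p in permutations(range(d))
                 if d - num_cycles(p) == r}) for r in range(d)}

assert all(+e(r) == +L[r] for r in range(d))   # unary + drops zero entries
```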

As you know from Math 202B, the elementary symmetric polynomials are important because they express the coefficients of a polynomial as functions of its roots (the inverse problem, expressing the roots as functions of the coefficients, is harder). Thus Theorem 2 is equivalent to the following factorization in the class algebra.

Theorem 3: For any $\hbar \in \mathbb{C}$, we have

$\sum\limits_{r=0}^d \hbar^r L_r = \prod\limits_{i=1}^d (\iota + \hbar Y_i).$

In addition to being a direct incarnation of the relationship between the roots of a polynomial and its coefficients, there is another reason why the elementary symmetric polynomials are deserving of their name. Let $X_1,\dots,X_d$ be commuting indeterminates, and consider the corresponding polynomial algebra $\mathbb{C}[X_1,\dots,X_d].$ As you know from Math 202B, polynomials $f \in \mathbb{C}[X_1,\dots,X_d]$ satisfying

$f(X_{\sigma(1)},\dots,X_{\sigma(d)}) = f(X_1,\dots,X_d) \quad\text{for all }\sigma \in \mathrm{S}(d)$

are called symmetric polynomials, and their totality forms a subalgebra of $\mathbb{C}[X_1,\dots,X_d]$ denoted $\mathbb{C}[X_1,\dots,X_d]^{\mathrm{S}(d)}.$ This subalgebra is itself a polynomial algebra.

Theorem (Newton): We have $\mathbb{C}[X_1,\dots,X_d]^{\mathrm{S}(d)} = \mathbb{C}[e_1,\dots,e_d].$

Newton’s theorem is often referred to as the fundamental theorem on symmetric polynomials. A proof is given in Professor Sam’s 202B notes (Theorem 3.3.4). Combining it with Theorem 2, we learn the following: even though the YJM elements $Y_1,\dots,Y_d$ do not themselves belong to the class algebra $\mathcal{Z}(d)$, any symmetric function of them does.

Theorem 4: For any $f \in \mathbb{C}[X_1,\dots,X_d]^{\mathrm{S}(d)},$ the group algebra element $f(Y_1,\dots,Y_d) \in \mathbb{C}\mathrm{S}(d)$ is central.
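As a concrete instance of Theorem 4, one can check numerically that the power sum $p_2(Y_1,\dots,Y_d) = Y_1^2 + \dots + Y_d^2$ is central in $\mathbb{C}\mathrm{S}(4)$, by verifying that its coefficients are constant on conjugacy classes. A Python sketch (illustrative only; permutations as 0-based tuples):

```python
from itertools import permutations
from collections import Counter

d = 4

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def transposition(i, j):
    t = list(range(d))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def cycle_type(p):
    seen, lens = set(), []
    for x in range(d):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = p[y]
                n += 1
            lens.append(n)
    return tuple(sorted(lens, reverse=True))

def mul(a, b):
    # product in the group algebra: convolution of coefficient functions
    out = Counter()
    for p, x in a.items():
        for q, y in b.items():
            out[compose(p, q)] += x * y
    return out

Y = [Counter({transposition(i, j): 1 for i in range(j)}) for j in range(d)]

# power sum p_2(Y), a symmetric polynomial in the YJM elements
p2 = Counter()
for j in range(d):
    p2 += mul(Y[j], Y[j])

# an element of C.S(d) is central iff its coefficients are constant on classes
types = {}
for p in permutations(range(d)):
    types.setdefault(cycle_type(p), set()).add(p2[p])
assert all(len(v) == 1 for v in types.values())
```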

Theorem 4 is quite remarkable, and has many interesting consequences. Here is one. Suppose we weaken our definition of strictly monotone walks on the edge-labeled Cayley graph $\Gamma(d)$ of the symmetric group $\mathrm{S}(d)$ to just plain monotone walks, meaning walks which have the feature that the labels of the edges they traverse form a weakly increasing sequence.

Theorem 5: The number $\vec{W}^r(\pi)$ of $r$-step weakly monotone walks on $\Gamma(d)$ from $\iota$ to $\pi$ depends only on the conjugacy class of $\pi$.

Proof: The complete homogeneous symmetric polynomial of degree $r$ in commuting variables $X_1,\dots,X_d$ is the sum of all monomials of degree $r$ in these variables (hence the name “complete”). Equivalently,

$h_r(X_1,\dots,X_d) = \sum\limits_{\substack{j \in \mathrm{Fun}(r,d) \\ j\text{ weakly increasing}}} X_{j(1)} \dots X_{j(r)}.$

Evaluating $h_r$ on the YJM elements, we obtain

$h_r(Y_1,\dots,Y_d) = \sum\limits_{\substack{j \in \mathrm{Fun}(r,d) \\ j \text{ weakly increasing}}}\ \sum\limits_{\substack{i \in \mathrm{Fun}(r,d) \\ i<j \text{ pointwise}}} (i(1)\ j(1)) \dots (i(r)\ j(r)),$

which is precisely the sum of all $r$-step weakly monotone walks on $\Gamma(d)$ beginning at $\iota.$ Thus,

$h_r(Y_1,\dots,Y_d) = \sum\limits_{\pi \in \mathrm{S}(d)} \vec{W}^r(\pi) \pi.$

But since $h_r$ is a symmetric polynomial, Theorem 4 guarantees that $h_r(Y_1,\dots,Y_d) \in \mathcal{Z}(d)$, so that we must have $\vec{W}^r(\pi)=\vec{W}^r(\kappa\pi\kappa^{-1})$ for all $\pi,\kappa \in \mathrm{S}(d),$ as claimed.

— Q.E.D.

No direct combinatorial proof of Theorem 5 is known.
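Lacking a combinatorial proof, one can at least confirm Theorem 5 numerically in small cases. The following Python sketch (illustrative only; permutations as 0-based tuples) enumerates all weakly monotone $3$-step walks in $\mathrm{S}(4)$ and checks that the endpoint counts are constant on conjugacy classes:

```python
from itertools import combinations_with_replacement, permutations, product
from collections import Counter

d, r = 4, 3

def compose(p, q):
    # (p*q)(x) = p(q(x)): apply q first, then p
    return tuple(p[q[x]] for x in range(d))

def transposition(i, j):
    t = list(range(d))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def cycle_type(p):
    seen, lens = set(), []
    for x in range(d):
        if x not in seen:
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = p[y]
                n += 1
            lens.append(n)
    return tuple(sorted(lens, reverse=True))

# enumerate weakly monotone walks: j weakly increasing, i(t) < j(t) pointwise
walks = Counter()
for js in combinations_with_replacement(range(1, d), r):
    for is_ in product(*[range(j) for j in js]):
        p = tuple(range(d))
        for i, j in zip(is_, js):
            p = compose(p, transposition(i, j))
        walks[p] += 1

# the walk counts depend only on the cycle type of the endpoint
by_type = {}
for p in permutations(range(d)):
    by_type.setdefault(cycle_type(p), set()).add(walks[p])
assert all(len(v) == 1 for v in by_type.values())
```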

Exercise: In the symmetric group $\mathrm{S}(4)$, write down all factorizations of the full cycle $\gamma = (1\ 2\ 3\ 4)$ into three transpositions; identify the factorizations which are weakly monotone, and those which are strictly monotone. Repeat this procedure with the full cycle $\gamma'=(1\ 3\ 2\ 4).$