Category Archives: 202C
Math 202: Function algebras
Let $X$ be a finite set. The function algebra of $X$ is the set $\mathbb{C}^X$ of functions $f \colon X \to \mathbb{C}$, with operations defined pointwise. A basic family of functions are the elementary functions $e_x$, $x \in X$, defined by $e_x(y) = \delta_{xy}$.
Theorem F1. The elementary functions are nonzero pairwise orthogonal projections.
Combining Theorem F1 with Theorem A3, we get that the set $\{e_x \colon x \in X\}$ is a basis of $\mathbb{C}^X$.
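As a concrete sanity check, here is a minimal Python sketch of $\mathbb{C}^X$ for a three-point set, with illustrative helper names (`elem`, `mult`, `conj` are not standard, just convenient); it verifies the content of Theorem F1 directly.

```python
# Model C^X as dicts mapping points of X to numbers, operations pointwise.
X = ["a", "b", "c"]

def elem(x):
    """Elementary function e_x: e_x(y) = 1 if y == x, else 0."""
    return {y: (1 if y == x else 0) for y in X}

def mult(f, g):
    """Pointwise product in C^X."""
    return {y: f[y] * g[y] for y in X}

def conj(f):
    """Pointwise conjugation in C^X."""
    return {y: f[y].conjugate() for y in X}

zero = {y: 0 for y in X}
for x in X:
    assert mult(elem(x), elem(x)) == elem(x)   # e_x is idempotent
    assert conj(elem(x)) == elem(x)            # e_x is selfadjoint
    for y in X:
        if x != y:
            assert mult(elem(x), elem(y)) == zero  # pairwise orthogonal
```

Since the $e_x$ are also visibly linearly independent (each is supported on a single point), they form a basis, as the text asserts.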
The algebra $\mathbb{C}^X$ can be understood completely. In particular, its subalgebras can be explicitly classified. A partition of $X$ is a set $P$ of nonempty disjoint subsets of $X$ whose union is $X$. The elements $B$ of $P$ are called its blocks. The set $\mathsf{P}(X)$ of partitions of $X$ is partially ordered under the refinement order, where $P \leq Q$ if and only if $P$ can be obtained from $Q$ by breaking blocks. Associated to every $P \in \mathsf{P}(X)$ is the subalgebra $\mathbb{C}^X_P$ consisting of functions constant on the blocks of $P$. Note that $\mathbb{C}^X_P$ is isomorphic to $\mathbb{C}^P$.
Theorem F2. The set $\{\mathbb{C}^X_P \colon P \in \mathsf{P}(X)\}$ is the complete collection of subalgebras of $\mathbb{C}^X$. Moreover, $\mathbb{C}^X_P \subseteq \mathbb{C}^X_Q$ if and only if $Q \leq P$.
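To see the isomorphism $\mathbb{C}^X_P \cong \mathbb{C}^P$ concretely, the following sketch (with an illustrative partition `P` of a four-point set) checks that the indicator functions of blocks form a basis of $\mathbb{C}^X_P$ that multiplies exactly like the elementary functions of $\mathbb{C}^P$.

```python
X = [0, 1, 2, 3]
P = [[0, 1], [2], [3]]          # an example partition of X into blocks

def indicator(B):
    """Indicator function 1_B of a block B, as an element of C^X."""
    return {x: (1 if x in B else 0) for x in X}

# Basis of the subalgebra C^X_P: indicator functions of the blocks of P.
basis = [indicator(B) for B in P]

# The block indicators are orthogonal projections: 1_B * 1_B = 1_B and
# 1_B * 1_B' = 0 for distinct blocks, mirroring the elementary functions
# of the function algebra C^P on the set of blocks.
zero = {x: 0 for x in X}
for i, f in enumerate(basis):
    for j, g in enumerate(basis):
        prod = {x: f[x] * g[x] for x in X}
        assert prod == (f if i == j else zero)
```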
This classification theorem can be proved in several ways. A nice argument uses the finite-dimensional Stone-Weierstrass theorem. A subalgebra $\mathcal{A} \subseteq \mathbb{C}^X$ is said to separate points if, for every pair of distinct points $x, y \in X$, there exists a function $f \in \mathcal{A}$ such that $f(x) \neq f(y)$.
Theorem F3. If $\mathcal{A} \subseteq \mathbb{C}^X$ separates points, then $\mathcal{A} = \mathbb{C}^X$.
There is a parallel classification of ideals in $\mathbb{C}^X$. Let us order the set $2^X$ of all subsets of $X$ by inclusion. Associated to each $S \in 2^X$ is the ideal $I_S = \{f \in \mathbb{C}^X \colon f(x) = 0 \text{ for all } x \in X \setminus S\}$.
Theorem F4. The set $\{I_S \colon S \in 2^X\}$ is the complete collection of ideals in $\mathbb{C}^X$. Moreover, $I_S \subseteq I_T$ if and only if $S \subseteq T$.
Just like Theorem F2 shows that we cannot discover any new algebras by looking at subalgebras of function algebras, Theorem F4 reveals that we cannot discover any new algebras by taking quotients of function algebras.
Theorem F5. For any $S \in 2^X$, the quotient algebra $\mathbb{C}^X / I_S$ is isomorphic to the function algebra $\mathbb{C}^{X \setminus S}$.
Now let us classify von Neumann algebra structures on $\mathbb{C}^X$. This means that we will find all Frobenius scalar products on $\mathbb{C}^X$; since $\mathbb{C}^X$ is commutative, there is no distinction between left-Frobenius and right-Frobenius. Equivalently, we classify faithful states on $\mathbb{C}^X$, all of which are tracial. A function $p \in \mathbb{C}^X$ is called a probability distribution if $p(x) \geq 0$ for all $x \in X$, and $\sum_{x \in X} p(x) = 1$.
Theorem F6. States on $\mathbb{C}^X$ are in bijection with probability distributions on $X$: every state is of the form $\varphi(f) = \sum_{x \in X} p(x) f(x)$ for a unique probability distribution $p$. Moreover, $\varphi$ is faithful if and only if $p$ is nonvanishing.
Theorem F6 is equivalent to the statement that all Frobenius scalar products on $\mathbb{C}^X$ are of the form $\langle f, g \rangle = \sum_{x \in X} p(x) \overline{f(x)} g(x)$ for a unique probability distribution $p$ having full support in $X$.
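A short numerical spot-check of this correspondence, under the assumption (stated above) that the scalar product attached to a distribution $p$ is $\langle f, g\rangle = \sum_x p(x)\overline{f(x)}g(x)$: we verify the Frobenius compatibility $\langle fg, h\rangle = \langle g, f^*h\rangle$ for pointwise multiplication, on arbitrary test functions.

```python
# Sketch: the scalar product attached to a probability distribution p on X,
# and a spot-check of the Frobenius property for pointwise multiplication.
X = [0, 1, 2]
p = {0: 0.5, 1: 0.25, 2: 0.25}   # nonvanishing, so the state is faithful

def sp(f, g):
    """<f, g> = sum_x p(x) * conj(f(x)) * g(x)."""
    return sum(p[x] * f[x].conjugate() * g[x] for x in X)

f = {0: 1 + 2j, 1: -1j, 2: 3.0}
g = {0: 2.0, 1: 1 + 1j, 2: -2j}
h = {0: -1.0, 1: 4j, 2: 1 + 0j}

fg = {x: f[x] * g[x] for x in X}                 # pointwise product fg
fstar_h = {x: f[x].conjugate() * h[x] for x in X}  # pointwise product f* h
assert abs(sp(fg, h) - sp(g, fstar_h)) < 1e-12   # <fg, h> == <g, f* h>
```

Faithfulness is visible here too: $\langle f, f\rangle = \sum_x p(x)|f(x)|^2$ vanishes only when $f$ vanishes wherever $p$ does not.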
Going beyond understanding the inner workings of a given function algebra, we can classify all algebras $\mathcal{A}$ which are isomorphic to a function algebra. Clearly, any such $\mathcal{A}$ must be commutative. A basis of $\mathcal{A}$ consisting of pairwise orthogonal projections is called a Fourier basis.
Theorem F7. An algebra is isomorphic to a function algebra if and only if it admits a Fourier basis.
Theorem F7 is very important, and so is its proof. The fact that $\mathbb{C}^X$ admits a Fourier basis is clear (the elementary functions). Conversely, suppose that $\mathcal{A}$ is an abstract algebra in which we are able to find a Fourier basis $\{a_x \colon x \in X\}$ indexed by some finite set $X$. Then every element $a \in \mathcal{A}$ can be expanded in this basis, $a = \sum_{x \in X} \hat{a}(x) a_x$, and the coefficients in this expansion define a function $\hat{a} \in \mathbb{C}^X$. The mapping $\mathcal{F} \colon \mathcal{A} \to \mathbb{C}^X$ defined by $\mathcal{F}(a) = \hat{a}$ is an algebra isomorphism called the Fourier transform. The "the" here is justified by the following.
Theorem F8. The only Fourier basis of $\mathbb{C}^X$ is the basis $\{e_x \colon x \in X\}$ of elementary functions.
Theorem F8 means that if a Fourier transform exists, then it is unique up to relabeling the label set $X$.
Math 202: Abstract Algebras
This is the review post for the part of the course on abstract algebras.
Throughout the Math 202 sequence and on the qualifying exam, the term "algebra" by default refers to a complex vector space $\mathcal{A}$ of positive, finite dimension equipped with a unital, associative, bilinear multiplication, denoted $(a, b) \mapsto ab$, and an antilinear, antimultiplicative involution called conjugation, denoted $a \mapsto a^*$. The fundamental example is $\mathbb{C}$ itself. Two further examples are $\mathbb{C}^n$ with coordinatewise multiplication and conjugation, and $\mathrm{M}_n(\mathbb{C})$ with matrix multiplication and Hermitian conjugation. Both coincide with $\mathbb{C}$ for $n = 1$. Thus, we have an example of a commutative algebra of every dimension, and for algebras of square dimension bigger than one we also have a noncommutative example.
Inside a given algebra $\mathcal{A}$ there are various classes of special elements. The broadest of these is the class of normal elements, which consists of those $a \in \mathcal{A}$ which commute with their conjugate, $aa^* = a^*a$. Two subclasses thereof are the selfadjoint elements, which are equal to their conjugate, $a = a^*$, and the unitary elements, which satisfy $aa^* = a^*a = 1$. Elements which are both selfadjoint and unitary are called reflections, and they satisfy $a^2 = 1$. Selfadjoint elements which can be factored as $a = bb^*$ are called nonnegative, and selfadjoint elements such that $a^2 = a$ are called projections. Two projections $p, q$ which satisfy $pq = qp = 0$ are said to be orthogonal.
Theorem A1. Every $a \in \mathcal{A}$ can be written $a = b + ic$ with $b, c$ selfadjoint. Furthermore, this decomposition is unique, and $a$ is normal if and only if $b$ and $c$ commute.
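Theorem A1 is easy to test numerically in $\mathrm{M}_2(\mathbb{C})$: the selfadjoint "real and imaginary parts" are $b = (a + a^*)/2$ and $c = (a - a^*)/2i$, and for a generic (hence non-normal) matrix, $b$ and $c$ fail to commute. This is an illustration, not a proof.

```python
import numpy as np

# Sketch of Theorem A1 in M_2(C): a = b + i c with b, c selfadjoint,
# and a is normal iff b and c commute.
rng = np.random.default_rng(0)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

b = (a + a.conj().T) / 2        # selfadjoint part
c = (a - a.conj().T) / (2j)     # selfadjoint part

assert np.allclose(b, b.conj().T) and np.allclose(c, c.conj().T)
assert np.allclose(a, b + 1j * c)

# For this generic a, non-normality matches non-commutation of b and c.
normal = np.allclose(a @ a.conj().T, a.conj().T @ a)
commute = np.allclose(b @ c, c @ b)
assert normal == commute
```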
Theorem A2. An algebra is commutative if and only if all its elements are normal.
Theorem A3. Any set of nonzero pairwise orthogonal projections in an algebra $\mathcal{A}$ is linearly independent.
The category $\mathsf{Alg}$ of algebras has algebras as objects. Morphisms are linear transformations $\varphi \colon \mathcal{A} \to \mathcal{B}$ such that $\varphi(ab) = \varphi(a)\varphi(b)$, $\varphi(a^*) = \varphi(a)^*$, and $\varphi(1_{\mathcal{A}}) = 1_{\mathcal{B}}$; these are called algebra homomorphisms. Two algebras $\mathcal{A}$ and $\mathcal{B}$ are said to be isomorphic if there exists a bijective homomorphism between them.
Theorem A4. If $\mathcal{A}$ is a one-dimensional algebra, then there exists a unique isomorphism $\mathcal{A} \to \mathbb{C}$.
Theorem A5. If $\mathcal{A}$ is a two-dimensional algebra, then it is commutative.
Given any algebra homomorphism $\varphi \colon \mathcal{A} \to \mathcal{B}$, the set $\operatorname{Im} \varphi$ is a subspace of $\mathcal{B}$ which contains $1_{\mathcal{B}}$ and is closed under multiplication and conjugation, i.e. it is a subalgebra of $\mathcal{B}$. The set $\operatorname{Ker} \varphi$ is a subspace of $\mathcal{A}$ which does not necessarily contain $1_{\mathcal{A}}$. However, $\operatorname{Ker} \varphi$ is closed under multiplication and conjugation, and in fact for every $k \in \operatorname{Ker} \varphi$ we have $ak, ka \in \operatorname{Ker} \varphi$ for all $a \in \mathcal{A}$, i.e. $\operatorname{Ker} \varphi$ is an ideal in $\mathcal{A}$.
Like the collection of subspaces of a given vector space, we can partially order the set of subalgebras of a given algebra $\mathcal{A}$ by set-theoretic inclusion. This poset has a unique minimal element, $\mathbb{C}1_{\mathcal{A}}$, and a unique maximal element, $\mathcal{A}$ itself. Given two subalgebras $\mathcal{B}$ and $\mathcal{C}$ of $\mathcal{A}$, there is a unique maximal subalgebra contained in both of them, namely $\mathcal{B} \cap \mathcal{C}$. There is also a unique minimal subalgebra containing both $\mathcal{B}$ and $\mathcal{C}$, namely the intersection of all subalgebras containing both. This is called the algebra generated by $\mathcal{B}$ and $\mathcal{C}$, and it is denoted $\langle \mathcal{B}, \mathcal{C} \rangle$.
Theorem A6. We have $\langle \mathcal{B}, \mathcal{C} \rangle = \operatorname{span}\{b_1 c_1 b_2 c_2 \cdots b_k c_k \colon k \in \mathbb{N},\ b_i \in \mathcal{B},\ c_i \in \mathcal{C}\}$.
In fact, for an arbitrary subset $S \subseteq \mathcal{A}$ there is a unique minimal subalgebra containing $S$, namely the intersection of all subalgebras containing it, and this is called the subalgebra generated by $S$, denoted $\langle S \rangle$.
Theorem A7. We have $\langle S \rangle = \operatorname{span}\{s_1 s_2 \cdots s_k \colon k \in \mathbb{N},\ s_i \in S \cup S^* \cup \{1\}\}$, where $S^* = \{s^* \colon s \in S\}$.
An important subalgebra of a given algebra $\mathcal{A}$ is its center, $Z(\mathcal{A}) = \{a \in \mathcal{A} \colon ab = ba \text{ for all } b \in \mathcal{A}\}$. We have $Z(\mathcal{A}) = \mathcal{A}$ if and only if $\mathcal{A}$ is commutative, and $Z(\mathcal{A})$ can be viewed as a measure of how (non)commutative $\mathcal{A}$ is.
In fact, subalgebras always come in pairs: given a subalgebra $\mathcal{B} \subseteq \mathcal{A}$, the corresponding centralizer is $\mathcal{B}' = \{a \in \mathcal{A} \colon ab = ba \text{ for all } b \in \mathcal{B}\}$.
Theorem A8. If $\mathcal{B}$ and $\mathcal{C}$ are subalgebras of $\mathcal{A}$ such that $\mathcal{B} \subseteq \mathcal{C}$, then their centralizers are subalgebras such that $\mathcal{C}' \subseteq \mathcal{B}'$.
One may also consider the set of commutative subalgebras of a given algebra $\mathcal{A}$, ordered by inclusion. The intersection of any family of commutative subalgebras is again a commutative subalgebra. However, for an arbitrary set $S \subseteq \mathcal{A}$ there may not exist a commutative subalgebra containing it; this occurs for example if $S$ consists of a single non-normal element.
Theorem A9. A set $S \subseteq \mathcal{A}$ is contained in a commutative subalgebra of $\mathcal{A}$ if and only if $\langle S \rangle$ is commutative.
The unique minimal element of the poset of commutative subalgebras of $\mathcal{A}$ is still $\mathbb{C}1_{\mathcal{A}}$, but there may be many maximal commutative subalgebras, i.e. commutative subalgebras $\mathcal{B}$ having the property that they are not contained in any strictly larger commutative subalgebra.
Theorem A10. A commutative subalgebra $\mathcal{B} \subseteq \mathcal{A}$ is a maximal commutative subalgebra if and only if $\mathcal{B}' = \mathcal{B}$.
A natural question is whether we can define a scalar product on a given algebra which is compatible with its multiplication and conjugation. Given any scalar product $\langle \cdot, \cdot \rangle$ on $\mathcal{A}$, we can normalize it so that $\langle 1, 1 \rangle = 1$. A scalar product so normalized is said to have the left-Frobenius property if $\langle ab, c \rangle = \langle b, a^* c \rangle$ for all $a, b, c \in \mathcal{A}$. We say that it has the right-Frobenius property if $\langle ab, c \rangle = \langle a, c b^* \rangle$ for all $a, b, c \in \mathcal{A}$. A normalized scalar product which has both the left-Frobenius property and the right-Frobenius property is called a Frobenius scalar product.
The existence of Frobenius scalar products on $\mathcal{A}$ is intimately related to the existence of special linear functionals on $\mathcal{A}$. The reason for this is that given any linear functional $\varphi \colon \mathcal{A} \to \mathbb{C}$, we can define a corresponding Hermitian form on $\mathcal{A}$ by $\langle a, b \rangle = \varphi(a^* b)$, and this form has the left-Frobenius property.
A faithful state is a linear functional $\varphi$ satisfying $\varphi(1) = 1$ and $\varphi(a^* a) \geq 0$, with equality if and only if $a = 0$.
Theorem A.10. Faithful states on $\mathcal{A}$ are in bijection with left-Frobenius scalar products on $\mathcal{A}$.
A trace is a linear functional $\varphi$ which satisfies $\varphi(ab) = \varphi(ba)$ for all $a, b \in \mathcal{A}$.
Theorem A.11. Faithful tracial states on $\mathcal{A}$ are in bijection with Frobenius scalar products on $\mathcal{A}$.
An algebra equipped with a Frobenius scalar product is called a von Neumann algebra. Equivalently, a von Neumann algebra is an algebra $\mathcal{A}$ together with a faithful tracial state.
Math 202C: Lecture 14
Let us begin by reviewing some basic material from Math 202AB. Let $E = \{e_1, \dots, e_n\}$ be an orthonormal basis of a Hilbert space $\mathcal{H}$. The corresponding diagonal subalgebra $\Delta_E$ is the set of all operators which act diagonally on $E$. That is, $A \in \Delta_E$ if and only if $A e_i = \lambda_i e_i$ for each $1 \leq i \leq n$, where $\lambda_i$ is a scalar. Equivalently, $\Delta_E = \operatorname{span}\{P_1, \dots, P_n\}$, where $P_i$ is the orthogonal projection of $\mathcal{H}$ onto the line spanned by $e_i$. A physicist would write this projection operator as $P_i = |e_i\rangle\langle e_i|$, so that in Dirac notation the above decomposition becomes $A = \sum_{i=1}^{n} \lambda_i |e_i\rangle\langle e_i|$. We have that $\Delta_E$ is isomorphic to the function algebra $\mathbb{C}^E$ via $A \mapsto \hat{A}$, $\hat{A}(e_i) = \lambda_i$. This is a particularly transparent instance of the Fourier transform.
We know from Math 202B that $\Delta_E$ is a maximal abelian subalgebra (MASA) of $\operatorname{End} \mathcal{H}$, and moreover that every MASA in $\operatorname{End} \mathcal{H}$ is $\Delta_F$ for some orthonormal basis $F$. You should know why this is true, and moreover you should be able to prove that it is true to someone who does not believe you. Subsequently, you can and should taunt them with your understanding of the algebra $\operatorname{End} \mathcal{H}$.
A basic problem in linear algebra is: given $A \in \operatorname{End} \mathcal{H}$, does there exist an orthonormal basis $E$ such that $A \in \Delta_E$? According to the Spectral Theorem, the answer is "yes" if and only if $A$ is normal. People who live on the wrong side of the tracks refer to this as "unitary diagonalizability," but you should not mock their intellectual poverty: even if you enjoy thinking in austere algebraic terms, an occasional trip to the wrong side of the tracks can be rewarding.
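The wrong-side-of-the-tracks reading of the Spectral Theorem is easy to exercise numerically: for a selfadjoint (hence normal) matrix, an orthonormal eigenbasis exists, and conjugating by the matrix of eigenvectors lands the operator in the diagonal subalgebra of that basis.

```python
import numpy as np

# Sketch: a selfadjoint operator lies in the diagonal subalgebra Delta_E
# of the orthonormal eigenbasis E produced by the Spectral Theorem.
rng = np.random.default_rng(1)
m = rng.standard_normal((3, 3))
a = m + m.T                      # selfadjoint, hence normal

evals, u = np.linalg.eigh(a)     # columns of u: an orthonormal basis E
d = u.conj().T @ a @ u           # a expressed in the basis E
assert np.allclose(d, np.diag(np.diag(d)))   # diagonal: a is in Delta_E
```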
With that polemic out of our system, let us return to our current object of study: the tower $S(1) \subseteq S(2) \subseteq S(3) \subseteq \cdots$ of symmetric groups. For each $n$, choose an arbitrary parameterization of the isomorphism classes of irreducible unitary representations of $S(n)$ by the set $\mathsf{P}(n)$ of partitions of $n$, and for each $\lambda \in \mathsf{P}(n)$ choose an arbitrary representative $V^\lambda$ of the corresponding isomorphism class. These choices being made, we define a graph $\mathsf{Y}$ called the branching graph of the tower of symmetric groups. The vertex set of $\mathsf{Y}$ is the set $\bigsqcup_{n \geq 1} \mathsf{P}(n)$ of all partitions, so $\mathsf{Y}$ is an infinite graph. Moreover, this graph is graded: each $\mathsf{P}(n)$ is an independent set, and every edge of the graph joins a vertex of $\mathsf{P}(n)$ to a vertex of $\mathsf{P}(n+1)$ for some $n$. The precise adjacency relation is the following: if $\mu \in \mathsf{P}(n)$ and $\lambda \in \mathsf{P}(n+1)$, then $\{\mu, \lambda\}$ is an edge of $\mathsf{Y}$ if and only if, when $V^\lambda$ is restricted to a representation of $S(n)$, it contains an irreducible subspace isomorphic as a representation of $S(n)$ to $V^\mu$.
The work that we have done over the last two lectures tells us that the graph we have now constructed is simple: it has no multiple edges. Thus, by construction, for any positive integers $m < n$ and any $\lambda \in \mathsf{P}(n)$, we have an orthogonal decomposition $V^\lambda = \bigoplus_{\pi} V^\lambda_\pi$, where the sum is over the set of geodesics in $\mathsf{Y}$ joining a point of $\mathsf{P}(m)$ to the specified point $\lambda$, and for each such geodesic $\pi$ the summand $V^\lambda_\pi$ is a nonzero subspace of $V^\lambda$ in which $S(m)$ acts irreducibly. Note that while all of these subspaces $V^\lambda_\pi$ are distinct (indeed, pairwise orthogonal), some of them may be isomorphic to one another as representations of $S(m)$; indeed, for any particular $\mu \in \mathsf{P}(m)$, the number of spaces in the above decomposition isomorphic to $V^\mu$ is the number of geodesics in $\mathsf{Y}$ from $\mu$ to $\lambda$. The most extreme case of this is the case $m = 1$, where $V^\lambda = \bigoplus_\pi V^\lambda_\pi$ is an orthogonal decomposition of $V^\lambda$ into one-dimensional subspaces known as the Gelfand-Tsetlin lines of $V^\lambda$. If we choose a unit vector $v_\pi$ from each Gelfand-Tsetlin line $V^\lambda_\pi$, we obtain an orthonormal basis of $V^\lambda$ called a Gelfand-Tsetlin basis. Typically one ignores the phase ambiguity implicit in such a choice and simply calls any such choice "the" Gelfand-Tsetlin basis of $V^\lambda$, which really means that the basis is uniquely defined up to the phase ambiguity. In any case, this gives our first handle on the dimensions of the irreducible representations of the symmetric groups: we now know that $\dim V^\lambda$ equals the number of geodesics in $\mathsf{Y}$ from the unique partition of $1$ to $\lambda$.
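Counting these geodesics is a pleasant recursion: a geodesic arriving at $\lambda$ last passed through a partition obtained by removing one corner box of $\lambda$. Here is a sketch (the function name `paths` is just illustrative) that computes $\dim V^\lambda$ this way for small shapes.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def paths(shape):
    """Number of geodesics in the branching graph from the unique
    partition of 1 up to the partition `shape` (a weakly decreasing tuple)."""
    if sum(shape) <= 1:
        return 1
    total = 0
    for i in range(len(shape)):
        # A box can be removed from row i iff the result is still a partition.
        if shape[i] > (shape[i + 1] if i + 1 < len(shape) else 0):
            smaller = list(shape)
            smaller[i] -= 1
            total += paths(tuple(p for p in smaller if p > 0))
    return total

# Spot-checks against known small dimensions of irreducibles of S(3), S(4):
assert paths((2, 1)) == 2
assert paths((3, 1)) == 3
assert paths((2, 2)) == 2
```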
Now, for each $n$ and every $\lambda \in \mathsf{P}(n)$, let $\mathsf{GT}(\lambda)$ be the subalgebra of $\mathbb{C}S(n)$ consisting of all elements $a$ whose image $\pi^\lambda(a)$ belongs to the diagonal subalgebra of the GT-basis of $V^\lambda$. In other words, the image of $a$ under the Wedderburn isomorphism has the feature that, if we look at it as a block matrix relative to the GT-basis in every irreducible representation, the $\lambda$-block is a diagonal matrix. Thus, if we define the Gelfand-Tsetlin subalgebra of $\mathbb{C}S(n)$ to be $\mathsf{GT}(n) = \bigcap_{\lambda \in \mathsf{P}(n)} \mathsf{GT}(\lambda)$, then we have obtained a maximal commutative subalgebra of $\mathbb{C}S(n)$ consisting of those elements which act diagonally on the GT-basis of every irreducible representation of $S(n)$.
In fact, in "Phase I" of our current project (representation theory of the symmetric groups), we met the Gelfand-Tsetlin algebra in a different guise. Recall that the Jucys-Murphy subalgebra $\mathsf{JM}(n)$ of $\mathbb{C}S(n)$ was defined to be the algebra generated by the Jucys-Murphy elements $J_1, \dots, J_n$ together with the center $Z(n)$ of $\mathbb{C}S(n)$. We had to put in some significant work to show that the Jucys-Murphy specialization of the polynomial algebra is a surjection onto $\mathsf{JM}(n)$. This front-loaded effort is now starting to pay off.
Theorem 14.1. We have $\mathsf{GT}(n) = \mathsf{JM}(n)$.
Proof: First we prove that $\mathsf{JM}(n) \subseteq \mathsf{GT}(n)$. Since we have already shown that $Z(n) \subseteq \mathsf{GT}(n)$, it is sufficient to show that each JM-element $J_k$ belongs to the algebra $\mathsf{GT}(n)$. Since $J_k$ is the difference of the sum of all transpositions in $S(k)$ and the sum of all transpositions in $S(k-1)$, this reduces to showing that the sum $T_k$ of all transpositions in $S(k)$ belongs to $\mathsf{GT}(n)$. To demonstrate this, fix $\lambda \in \mathsf{P}(n)$ and let $V^\lambda = \bigoplus_\pi V^\lambda_\pi$ be its orthogonal decomposition into irreducible $S(k)$-invariant subspaces, as constructed above. On one hand, each vector in the GT-basis of $V^\lambda$ belongs to one of the spaces $V^\lambda_\pi$, and on the other, $T_k$ is central in $\mathbb{C}S(k)$ and hence acts as a scalar operator in each of these spaces by Schur's Lemma.
Now we prove $\mathsf{GT}(n) \subseteq \mathsf{JM}(n)$ using finite-dimensional Stone-Weierstrass and the fact that $\mathsf{GT}(n)$ is isomorphic to the function algebra $\mathbb{C}^{\Pi(n)}$, where $\Pi(n)$ is the set of all geodesics from $\mathsf{P}(1)$ to $\mathsf{P}(n)$ in the branching graph (make sure you understand why). Under this isomorphism, $\mathsf{JM}(n)$ becomes a subalgebra of $\mathbb{C}^{\Pi(n)}$, and in order to invoke Stone-Weierstrass to prove that it is all of $\mathbb{C}^{\Pi(n)}$, we just have to show that it separates points. If $\pi_1$ and $\pi_2$ are distinct geodesics in $\Pi(n)$, then there necessarily exists $k$ such that $\pi_1$ passes through $\mu \in \mathsf{P}(k)$ and $\pi_2$ passes through $\nu \in \mathsf{P}(k)$ with $\mu \neq \nu$. In particular, $V^\mu$ and $V^\nu$ are non-isomorphic irreducible representations of $S(k)$. Now, every element $z \in Z(k)$ acts as a scalar operator in each of these irreducible representations: the eigenvalue of $z$ acting in $V^\mu$ is the central character $\omega_\mu(z)$, and the eigenvalue of $z$ acting in $V^\nu$ is the central character $\omega_\nu(z)$. If it were the case that $\omega_\mu(z) = \omega_\nu(z)$ for all $z \in Z(k)$, this would force $\chi^\mu(z) = \chi^\nu(z)$ for all $z \in Z(k)$, which would contradict the linear independence of characters of non-isomorphic irreducible representations.
Math 202C: Lecture 13
*** Problems due May 3 at 23:59 ***
Let $H \subseteq G$ be finite groups. In Lecture 12, we proved that restriction of irreducible representations of $G$ to $H$ is multiplicity-free if and only if the centralizer of $\mathbb{C}H$ in $\mathbb{C}G$ is a commutative subalgebra of $\mathbb{C}G$. Our objective now is to use this criterion to establish that restriction of irreducible representations of $S(n)$ to $S(n-1)$ is multiplicity-free.
For the moment we stay in the general context where the group $G$ and its subgroup $H$ are arbitrary; eventually we will set $G = S(n)$ and $H = S(n-1)$. By definition, the centralizer of $\mathbb{C}H$ in $\mathbb{C}G$ is the set of all $a \in \mathbb{C}G$ which commute with every $b \in \mathbb{C}H$, and it is clear that this condition is equivalent to $ah = ha$ for all $h \in H$, which is in turn equivalent to $a = hah^{-1}$ for all $h \in H$. To understand those elements $a$ which have the above property, consider the equivalence relation on $G$ defined by $g_1 \sim g_2$ if and only if $g_1 = h g_2 h^{-1}$ for some $h \in H$. That is, equivalence classes under this relation are orbits of $H$ acting on $G$ by conjugation. Let $O_1, \dots, O_m$ be the distinct $H$-conjugacy classes in $G$. In the case where $H = G$, this is just the set of conjugacy classes in $G$ as per usual, making the following a generalization of something we already know.
Problem 13.1. Prove that the set $\{\sum_{g \in O_i} g \colon 1 \leq i \leq m\}$ is an orthogonal basis of the centralizer of $\mathbb{C}H$ in $\mathbb{C}G$.
In the case $H = G$, the centralizer is the center $Z(\mathbb{C}G)$, which is commutative by definition, and the above problem recovers the 202B fact that the center of the convolution algebra of a group has a basis consisting of indicator functions of conjugacy classes. At the other extreme, if $H$ is the subgroup consisting solely of the identity element, then the centralizer is all of $\mathbb{C}G$, which is commutative precisely when $G$ is abelian. In between these two extremes, it may be more difficult to determine whether or not the centralizer is commutative, but there is a particular circumstance which will force this to be the case.
Problem 13.2. Prove that if the class basis consists of selfadjoint elements, then the centralizer is a commutative algebra.
We now specialize to the case $G = S(n)$ and $H = S(n-1)$. We would like to find a concrete indexing of the $S(n-1)$-conjugacy classes in $S(n)$. For permutations, the operation of conjugation has a simple combinatorial meaning: conjugating $\sigma$ by $\pi$ relabels the elements in the cycles of $\sigma$ according to $\pi$. If $\pi$ is confined to $S(n-1)$, then it cannot relabel $n$, hence in order for two permutations $\sigma$ and $\rho$ in $S(n)$ to be $S(n-1)$-conjugates it is necessary that they have the same cycle type and that the cycles containing $n$ in each have the same length.
Problem 13.3. Prove that the above condition is also sufficient for $\sigma$ and $\rho$ to be $S(n-1)$-conjugates.
We can now conclude that $S(n-1)$-conjugacy classes in $S(n)$ are indexed by pairs $(k, \mu)$ consisting of an integer $1 \leq k \leq n$ corresponding to the length of the cycle containing $n$, and a partition $\mu$ of $n-k$ corresponding to the cycle type of the remainder of the permutation. In particular, we get that the dimension of the centralizer is $\sum_{k=1}^{n} p(n-k)$, where $p(m)$ is the number of partitions of $m$. Furthermore, it is easy to see that if an $S(n-1)$-conjugacy class contains a permutation $\sigma$, then it also contains $\sigma^{-1}$, since the inverse of a permutation not only has the same cycle type, but actually the same elements within each cycle. Thus, the centralizer is commutative by Problem 13.2.
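The indexing of $S(n-1)$-conjugacy classes by pairs $(k, \mu)$ can be checked by brute force for a small $n$; the following sketch computes the $H$-conjugation orbits in $S(4)$ directly and compares their count with $\sum_{k=1}^{n} p(n-k)$.

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))             # S(n) as tuples (0-based)
H = [g for g in G if g[n - 1] == n - 1]      # the copy of S(n-1) fixing n

def conj(h, g):
    """The permutation h g h^{-1}, all as tuples mapping i -> t[i]."""
    hinv = [0] * n
    for i, v in enumerate(h):
        hinv[v] = i
    return tuple(h[g[hinv[i]]] for i in range(n))

# Orbits of H acting on G by conjugation.
orbits = set()
for g in G:
    orbits.add(frozenset(conj(h, g) for h in H))

def p(m):
    """Number of partitions of m, by naive recursion on the largest part."""
    def count(m, largest):
        if m == 0:
            return 1
        return sum(count(m - k, k) for k in range(1, min(m, largest) + 1))
    return count(m, m)

# p(3) + p(2) + p(1) + p(0) = 3 + 2 + 1 + 1 = 7 classes expected.
assert len(orbits) == sum(p(n - k) for k in range(1, n + 1))
```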
The significance of this commutativity is that, by Lecture 12, it is equivalent to multiplicity-free branching for irreducible representations of the symmetric groups: the isotypic decomposition of any irreducible representation of $S(n)$, when viewed as a representation of $S(n-1)$, has the form $\bigoplus_\mu V^\mu$ with pairwise non-isomorphic irreducible summands. In Lecture 14, we will use this fact to construct a special orthonormal basis, which we will then exploit mercilessly.
Math 202C: Lecture 12
*** Problems in this lecture due 05/03/2026 at 23:59 ***
Let $\mathcal{A}$ be an algebra. In Week I of Math 202B, we saw that subalgebras of $\mathcal{A}$ always come in pairs: associated to each subalgebra $\mathcal{B}$ is its centralizer $\mathcal{B}'$, the set of elements in $\mathcal{A}$ which commute with every element in $\mathcal{B}$. We used this fact to characterize maximal abelian subalgebras of $\mathcal{A}$ as precisely those commutative subalgebras $\mathcal{B}$ which are equal to their own centralizer, $\mathcal{B}' = \mathcal{B}$. A problem we might have already posed back then is to characterize those subalgebras $\mathcal{B}$ for which $\mathcal{B}'$ is commutative. Today, we will address this question in the case where $\mathcal{A} = \mathbb{C}G$ is the convolution algebra of a finite group $G$ and $\mathcal{B} = \mathbb{C}H$ is the convolution algebra of a subgroup $H \leq G$, viewed as the subalgebra of $\mathbb{C}G$ spanned by the orthogonal set $\{h \colon h \in H\}$.
For any element $a \in \mathbb{C}G$, let $R(a)$ denote its image in the regular representation of $G$. Furthermore, let $\Lambda$ be a set indexing isomorphism classes of irreducible unitary representations of $G$, and for each $\lambda \in \Lambda$ let $\pi^\lambda(a)$ denote the image of $a$ in a representative $V^\lambda$ of the isomorphism class indexed by $\lambda$.
Problem 12.1. Prove that $a \in (\mathbb{C}H)'$ if and only if $\pi^\lambda(a)\pi^\lambda(b) = \pi^\lambda(b)\pi^\lambda(a)$ for each $\lambda \in \Lambda$ and every $b \in \mathbb{C}H$.
Let $\lambda \in \Lambda$ be arbitrary but fixed. Thanks to Problem 12.1, understanding when $(\mathbb{C}H)'$ is commutative reduces to understanding when the algebra $\pi^\lambda((\mathbb{C}H)')$ is commutative. If it were the case that $V^\lambda$ was an irreducible representation of $\mathbb{C}H$, this would be extremely easy, since then Schur's Lemma would force this algebra to be one-dimensional. However, $V^\lambda$ is an irreducible representation of $\mathbb{C}G$, and there is no reason why it should be irreducible as a representation of the subalgebra $\mathbb{C}H$. So, we will have to understand what conditions would cause $\pi^\lambda((\mathbb{C}H)')$ to be commutative by analyzing the behavior of $V^\lambda$ under restriction from $G$ to $H$.
Let $M$ be a set parameterizing isomorphism classes of irreducible unitary representations of $H$ (or equivalently, irreducible linear representations of $\mathbb{C}H$). Consider the isotypic decomposition of $V^\lambda$ as a representation of $H$, which has the form $V^\lambda = \bigoplus_{\mu} W_\mu$, where two summands $W_\mu$ and $W_\nu$ are isomorphic unitary representations of $H$ if and only if $\mu = \nu$. In finer detail, each $W_\mu$ further decomposes as $W_\mu = \bigoplus_{i=1}^{m_\mu} W_\mu^{(i)}$, where $W_\mu^{(i)}$ and $W_\mu^{(j)}$ are isomorphic irreducible unitary representations of $H$, and $m_\mu$ is a positive integer (the multiplicity of $\mu$ in $V^\lambda$).
Problem 12.2. Prove that if $a \in (\mathbb{C}H)'$, then $\pi^\lambda(a)$ maps each $H$-isotypic component $W_\mu$ of $V^\lambda$ into itself.
Problem 12.2 allows us to focus on intertwiners for a fixed $\mu \in M$. Each such operator $T$ may be visualized as a block matrix $[T_{ij}]$, where $T_{ij} \in \operatorname{Hom}_H(W_\mu^{(j)}, W_\mu^{(i)})$. Now, since $W_\mu^{(i)}$ and $W_\mu^{(j)}$ are irreducible unitary representations of $H$, Schur's Lemma tells us that $\operatorname{Hom}_H(W_\mu^{(j)}, W_\mu^{(i)})$ is one-dimensional, so in fact each block $T_{ij}$ can be identified with a scalar $t_{ij} \in \mathbb{C}$, and $T$ itself can be identified with an $m_\mu \times m_\mu$ matrix of complex numbers. We thus have an algebra isomorphism $\pi^\lambda((\mathbb{C}H)') \cong \bigoplus_{\mu} \mathrm{M}_{m_\mu}(\mathbb{C})$, which is important as a standalone fact, but also tells us that $\pi^\lambda((\mathbb{C}H)')$ is commutative if and only if $m_\mu \leq 1$ for each $\mu \in M$. We have thus established the following theorem.
Theorem 12.1. The centralizer $(\mathbb{C}H)'$ is a commutative subalgebra of $\mathbb{C}G$ if and only if the restriction of each irreducible unitary representation of $G$ to a unitary representation of $H$ is multiplicity-free: for every $\lambda$ and $\mu$, we have $m_\mu \leq 1$.
Math 202C: Lecture 11
*** Problems in this lecture due 05/03/2026 at 23:59 ***
In Lecture 10, we proved that the Jucys-Murphy specialization is a surjection of the algebra $\Lambda_n$ of symmetric polynomials onto the center $Z(n)$ of the convolution algebra of the symmetric group $S(n)$. That is, for every central element $z \in Z(n)$ there exists a symmetric polynomial $f \in \Lambda_n$ such that $z = f(J_1, \dots, J_n)$, where $f(J_1, \dots, J_n)$ is the evaluation of $f$ on the Jucys-Murphy elements. Equivalently, combining the fundamental theorem of symmetric polynomials with the fact that $e_k(J_1, \dots, J_n) = L_k$, where $L_k$ is the sum of all elements on level $k$ of the Cayley graph of $S(n)$ as generated by the class of transpositions, we can say that there exists a (not necessarily symmetric) polynomial $g$ such that $z = g(L_1, \dots, L_{n-1})$.
So the two takeaways are:
- Every central element is a symmetric polynomial in the Jucys-Murphy elements $J_1, \dots, J_n$;
- Every central element is a polynomial in the level elements $L_1, \dots, L_{n-1}$.
Symbolically, we have two surjective specializations, $\Lambda_n \twoheadrightarrow Z(n)$ and $\mathbb{C}[y_1, \dots, y_{n-1}] \twoheadrightarrow Z(n)$, the first obtained by substituting $J_1, \dots, J_n$ for the variables of symmetric polynomials in $n$ variables, and the second obtained by substituting $L_1, \dots, L_{n-1}$ for the variables of arbitrary polynomials in $n-1$ variables.
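The identity $e_k(J_1, \dots, J_n) = L_k$ underlying all of this can be verified by direct group-algebra arithmetic for small $n$. Here is a self-contained check in $\mathbb{C}S(3)$ that $e_2(J_1, J_2, J_3)$ is the level-2 sum, i.e. the sum of the two 3-cycles (the permutations at distance 2 from the identity).

```python
from itertools import permutations, combinations

n = 3
G = list(permutations(range(n)))   # S(3) as tuples, 0-based

def compose(s, t):
    """Product st: first apply t, then s."""
    return tuple(s[t[i]] for i in range(n))

def alg_mult(a, b):
    """Multiply two group-algebra elements stored as {permutation: coeff}."""
    out = {}
    for s, x in a.items():
        for t, y in b.items():
            st = compose(s, t)
            out[st] = out.get(st, 0) + x * y
    return {g: v for g, v in out.items() if v != 0}

def transposition(i, k):
    img = list(range(n))
    img[i], img[k] = img[k], img[i]
    return tuple(img)

# Jucys-Murphy elements J_k = sum_{i<k} (i k); J[0] is empty (J_1 = 0).
J = [{transposition(i, k): 1 for i in range(k)} for k in range(n)]

# e_2(J_1, J_2, J_3) = sum over pairs a < b of J_a J_b.
e2 = {}
for a, b in combinations(range(n), 2):
    for g, v in alg_mult(J[a], J[b]).items():
        e2[g] = e2.get(g, 0) + v
e2 = {g: v for g, v in e2.items() if v != 0}

# Level 2 of the Cayley graph: the two 3-cycles of S(3).
three_cycles = {(1, 2, 0): 1, (2, 0, 1): 1}
assert e2 == three_cycles
```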
This begs the question: what happens when we substitute $J_1, \dots, J_n$ for the variables of arbitrary polynomials in $n$ variables? More precisely, recall that $\mathsf{GZ}(n)$ is the commutative subalgebra of $\mathbb{C}S(n)$ generated by $Z(1) \cup \cdots \cup Z(n)$, where $Z(k)$ is the center of the subalgebra of $\mathbb{C}S(n)$ spanned by permutations which fix the points $k+1, \dots, n$. Our original proof of the fact that the JM-elements commute was simply to point out that they lie in the commutative algebra $\mathsf{GZ}(n)$. We may thus regard the JM-specialization as an algebra homomorphism $\mathbb{C}[x_1, \dots, x_n] \to \mathsf{GZ}(n)$ defined by $x_k \mapsto J_k$. Since we know that it surjects $\Lambda_n$ onto $Z(n)$, the natural guess is that it surjects $\mathbb{C}[x_1, \dots, x_n]$ onto $\mathsf{GZ}(n)$.
Theorem 11.1. The JM-specialization is a surjection of $\mathbb{C}[x_1, \dots, x_n]$ onto $\mathsf{GZ}(n)$.
In fact, this is a direct consequence of what we have already established, which gives that the specialization $\Lambda_k \to Z(k)$ is surjective for each $k \leq n$.
Problem 11.1. Write out a careful proof of Theorem 11.1.
Note to those who are perusing one of the recommended texts, namely this one: the material we have established so far takes us up to and through the proof of Theorem 4.4.5, as well as the result there called Olshanski's theorem, although our arguments have been quite different.
Math 202C: Lecture 10
Let $\Gamma$ be a finite connected simple graph with vertex set $V$ and edge set $E$. We pass to the Hilbert space $\mathbb{C}V$ with orthonormal basis $\{v \colon v \in V\}$ and consider the selfadjoint level operators $\Delta_1, \Delta_2, \Delta_3, \dots$, which encode the entire metric structure of $\Gamma$. We have that $\Delta_r$ is the zero operator for all $r$ which exceed the diameter of $\Gamma$, while the nonzero operators $\Delta_r$ form an orthogonal set in $\operatorname{End} \mathbb{C}V$ with respect to the Hilbert-Schmidt scalar product. However, not much is known about the subalgebra of $\operatorname{End} \mathbb{C}V$ generated by the operators $\Delta_r$.
If $\Gamma$ is the Cayley graph of a finite group $G$ corresponding to a symmetric set of generators $S$, we are in a more favorable situation, because the operator $\Delta_1$ is the image of an element of the convolution algebra $\mathbb{C}G$, namely $\sum_{s \in S} s$, in the regular representation. Because the regular representation is faithful, we can view the algebra generated by the level operators as a subalgebra of $\mathbb{C}G$, generated by the selfadjoint sphere sums, and leverage the group structure of $G$ to help us understand the situation. For example, if $G$ is an abelian group, we automatically get that this is a commutative algebra.
Problem 10.1. Let $\Gamma$ be the Cayley graph corresponding to the cyclic group $\mathbb{Z}/m\mathbb{Z}$ of order $m$ with generating set $\{\pm 1\}$. This graph is easy to picture: it is a cycle on $m$ vertices. Show that the level operators of $\Gamma$ generate a commutative algebra, and compute the dimension of this commutative algebra.
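As numerical evidence (not a proof, and not a substitute for doing the problem), one can build the level operators of the cycle graph directly from the distance matrix and confirm that they commute, as they must since the underlying group is abelian.

```python
import numpy as np

# Level operators of the cycle graph on m vertices: (Delta_r)_{ij} = 1
# iff the cycle distance between i and j is exactly r.
m = 7
dist = np.array([[min((i - j) % m, (j - i) % m) for j in range(m)]
                 for i in range(m)])
diam = dist.max()
levels = [(dist == r).astype(float) for r in range(diam + 1)]

# All level operators are circulant, hence pairwise commuting.
for A in levels:
    for B in levels:
        assert np.allclose(A @ B, B @ A)
```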
Now let us consider the Cayley graph of the symmetric group $S(n)$ as generated by the conjugacy class $T$ of transpositions. This is a nonabelian group, and yet the operators $\Delta_r$ commute. This is a consequence of the fact that the sphere sums are sums of class sums $C_\mu$, where $C_\mu$ denotes the sum of the conjugacy class of permutations of cycle type $\mu$, and these belong to the center $Z(n)$ of $\mathbb{C}S(n)$. Thus, the sphere sums generate a subalgebra of $Z(n)$. In fact, we have the following classical result of Farahat-Higman.
Theorem 10.1. The sphere sums generate all of $Z(n)$.
In this lecture we will prove Theorem 10.1, or at least explain why it is true. Our starting point is the first main result of Math 202C, namely that $e_k(J_1, \dots, J_n) = L_k$, where $J_1, \dots, J_n$ are the Jucys-Murphy elements and $e_k$ is the elementary symmetric polynomial of degree $k$. This together with the second main result in Math 202C so far, the fundamental theorem of symmetric polynomials, gives us a specialization $\mathsf{JM} \colon \Lambda_n \to \mathbb{C}S(n)$ of the algebra of symmetric polynomials, defined by $f \mapsto f(J_1, \dots, J_n)$. The image of $\Lambda_n$ under $\mathsf{JM}$ is exactly the algebra generated by the sphere sums, so that Theorem 10.1 is equivalent to the following.
Theorem 10.2. $\mathsf{JM} \colon \Lambda_n \to Z(n)$ is surjective.
Problem 10.2. Carefully explain the equivalence between Theorems 10.1 and 10.2.
In order to prove Theorem 10.2, we would ideally like to explicitly construct a family of symmetric polynomials indexed by partitions $\mu$ such that $\mathsf{JM}$ maps each to the corresponding class sum $C_\mu$. In fact, we know how to do this in three cases: the polynomials $e_0$, $e_1$, $e_{n-1}$ map to the class sums of the identity, the transpositions, and the $n$-cycles under $\mathsf{JM}$. Next, we would like to construct symmetric polynomials mapping to the class of $3$-cycles and to the class of products of two disjoint transpositions; however, at present we only know that $\mathsf{JM}(e_2) = L_2$, the sum of these two class sums.
Before going any further, let us note that the standard labeling of conjugacy classes in $S(n)$ is not ideal for our purposes. Instead, it is much better to define the reduced cycle type of a permutation to be the partition obtained by subtracting $1$ from each part of its cycle type. Then, the decomposition of the levels of the Cayley graph becomes $L_k = \sum_{\mu \vdash k} C_\mu$, where $C_\mu$ is the class sum of permutations in $S(n)$ of reduced cycle type $\mu$. In this language, the data we have so far is $\mathsf{JM}(e_0) = C_\emptyset$, $\mathsf{JM}(e_1) = C_{(1)}$, $\mathsf{JM}(e_{n-1}) = C_{(n-1)}$.
It is definitely not the case that $\mathsf{JM}(e_k) = C_{(k)}$ in general: what is actually true is that $\mathsf{JM}(e_k) = L_k = \sum_{\mu \vdash k} C_\mu$, and the three data points above are precisely those in which a single level actually is a conjugacy class.
Now let us come back to the first unknown case of our problem: we know that $\mathsf{JM}(e_2) = C_{(2)} + C_{(1,1)}$, but we do not yet know that each of the summands in this expression can be expressed as a symmetric polynomial in the JM-elements. The space of homogeneous degree $2$ symmetric polynomials in $n$ variables is two-dimensional, with monomial basis $m_{(1,1)} = e_2$ and $m_{(2)} = \sum_{k=1}^{n} x_k^2$. We already know what $\mathsf{JM}(m_{(1,1)})$ is, so it remains to calculate $\mathsf{JM}(m_{(2)}) = \sum_{k=1}^{n} J_k^2$. Now, $J_k^2 = \sum_{i < k} \sum_{j < k} (i\ k)(j\ k)$. The diagonal terms $i = j$ are just $k-1$ copies of the identity, giving a net contribution of $\sum_{k=1}^{n} (k-1) = \binom{n}{2}$ copies of $C_\emptyset$. The off-diagonal terms are $(i\ k)(j\ k)$ with $i \neq j$, so the product gives one of the two distinct three-cycles on $\{i, j, k\}$, and the opposite product $(j\ k)(i\ k)$ gives the other one, so that $\mathsf{JM}(m_{(2)}) = \binom{n}{2} C_\emptyset + C_{(2)}$. We thus have a system of two linear equations,
$\mathsf{JM}(m_{(1,1)}) = C_{(2)} + C_{(1,1)}, \qquad \mathsf{JM}(m_{(2)}) = \binom{n}{2} C_\emptyset + C_{(2)}$.
The second equation gives us $C_{(2)} = \mathsf{JM}\big(m_{(2)} - \binom{n}{2} e_0\big)$, which is a symmetric polynomial in the JM-elements, and substituting this into the first equation and solving, we get $C_{(1,1)} = \mathsf{JM}\big(m_{(1,1)} - m_{(2)} + \binom{n}{2} e_0\big)$, so that we have successfully expressed each of the conjugacy classes of $S(n)$ of reduced cycle type $(2)$ and $(1,1)$ as a symmetric polynomial in the JM-elements.
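The degree-2 computation above is mechanical enough to verify by computer; this sketch checks in $\mathbb{C}S(4)$ that $\sum_k J_k^2$ equals $\binom{n}{2}$ copies of the identity plus the sum of all 3-cycles.

```python
from itertools import permutations

n = 4
ident = tuple(range(n))

def compose(s, t):
    """Product st: first apply t, then s."""
    return tuple(s[t[i]] for i in range(n))

def transposition(i, k):
    img = list(range(n))
    img[i], img[k] = img[k], img[i]
    return tuple(img)

def alg_mult(a, b):
    """Multiply group-algebra elements stored as {permutation: coeff}."""
    out = {}
    for s, x in a.items():
        for t, y in b.items():
            st = compose(s, t)
            out[st] = out.get(st, 0) + x * y
    return out

J = [{transposition(i, k): 1 for i in range(k)} for k in range(n)]

# m_(2)(J) = sum_k J_k^2 in the group algebra.
m2 = {}
for Jk in J:
    for g, v in alg_mult(Jk, Jk).items():
        m2[g] = m2.get(g, 0) + v
m2 = {g: v for g, v in m2.items() if v != 0}

def cycle_type(g):
    seen, lengths = set(), []
    for i in range(n):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = g[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# Expected: binom(n,2) * identity + (every 3-cycle with coefficient 1).
expected = {g: 1 for g in permutations(range(n)) if cycle_type(g) == (3, 1)}
expected[ident] = expected.get(ident, 0) + n * (n - 1) // 2
assert m2 == expected
```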
For level $3$ in the Cayley graph, the class expansion of the monomial symmetric polynomials $m_{(3)}, m_{(2,1)}, m_{(1,1,1)}$ of degree three can be computed in the same way. If we order the partitions of $3$ as $(3), (2,1), (1,1,1)$, then modulo lower levels the result is a triangular system, which can again be solved for the class sums $C_{(3)}, C_{(2,1)}, C_{(1,1,1)}$.
These computations can, to some extent, be performed in full generality. The result which comes out of the attempt to do so is the following fairly explicit description of the images of all monomial symmetric polynomials under the JM-specialization $\mathsf{JM}$.
Theorem 10.3. For each level $k$ of the Cayley graph, not only do we have $\mathsf{JM}(e_k) = L_k$, but in fact for every partition $\mu \vdash k$ there holds a class expansion
$\mathsf{JM}(m_\mu) = \sum_{\nu} c_{\mu\nu} C_\nu$,
in which the coefficients $c_{\mu\nu}$ are polynomials in $n$. In fact, $c_{\mu\nu}$ is independent of $n$ for $\nu \vdash k$, and modulo lower levels we have
$\mathsf{JM}(m_\mu) \equiv \sum_{\nu \geq \mu} c_{\mu\nu} C_\nu$,
where $\nu \geq \mu$ means that $\nu$ ranges over partitions of $k$ with $\nu$ coarser than $\mu$.
Math 202C: Lecture 9
*** Problems due April 26 at 23:59 ***
Week Four review and preview.
We began with the algebraic approach to graphs, which you can think of as a natural road to go down after Math 202A. Given a finite, connected, simple graph $\Gamma$ with vertex set $V$, we pass to the Hilbert space $\mathbb{C}V$ with orthonormal basis $\{v \colon v \in V\}$. We encode the edge set $E$ as a linear transformation of $\mathbb{C}V$, which by abuse of notation we also denote $E$, because it contains exactly the same information. That is, we define the adjacency operator by $Ev = \sum_w w$, where the sum is over all $w \in V$ such that $\{v, w\} \in E$. The adjacency operator is selfadjoint, and we could equally well define it by $Ew = \sum_v v$, where the sum is over all $v \in V$ such that $\{v, w\} \in E$. Either definition of $E$ is consistent with $E^* = E$.
More generally, for any nonnegative integer $r$, the matrix element $\langle w, E^r v \rangle$ equals the number of $r$-step walks from $v$ to $w$ in the graph $\Gamma$. The linear algebraic approach is to find an orthonormal basis $B$ of $\mathbb{C}V$ such that $E$ belongs to the maximal commutative subalgebra $\Delta_B$ consisting of operators which act diagonally on $B$.
We also observed that the adjacency operator is the first member of a sequence $\Delta_1, \Delta_2, \Delta_3, \dots$ of selfadjoint linear operators on $\mathbb{C}V$, which we called the level operators of the graph $\Gamma$. These operators encode not just adjacency in $\Gamma$, but its complete structure as a finite connected metric space. By definition, $\langle w, \Delta_r v \rangle = 1$ if $d(v, w) = r$ and $0$ otherwise, where $d$ is geodesic distance in $\Gamma$. That is, $\Delta_r$ acts on an input vertex $v$ by summing all vertices $w$ in the sphere of radius $r$ centered at $v$. Equivalently, $\Delta_r v = \sum_{w \colon d(v, w) = r} w$.
We noted that although $\Delta_1, \Delta_2, \Delta_3, \dots$ is a finite family of selfadjoint operators in $\operatorname{End} \mathbb{C}V$, it may not be a commutative family. Indeed, we have $\langle w, \Delta_r \Delta_s v \rangle = |S_r(w) \cap S_s(v)|$, where $S_r(w)$ and $S_s(v)$ are the spheres of radii $r$ and $s$ centered at the vertices $w$ and $v$, respectively. There is no general reason why this would be equal to $\langle w, \Delta_s \Delta_r v \rangle = |S_s(w) \cap S_r(v)|$. Indeed, the subalgebra of $\operatorname{End} \mathbb{C}V$ generated by the selfadjoint operators $\Delta_r$ is in general not well-understood.
Problem 9.1. If $\Gamma$ is a finite connected simple graph of diameter $d$, show that $\{\Delta_1, \dots, \Delta_d\}$ is an orthogonal set in $\operatorname{End} \mathbb{C}V$ with respect to the Hilbert-Schmidt (aka Frobenius) scalar product. In particular, conclude that the algebra generated by the level operators is a subalgebra of dimension at least $d$.
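The orthogonality claimed in Problem 9.1 is easy to see numerically: distinct level operators have disjoint supports as 0-1 matrices, so their Hilbert-Schmidt pairing $\operatorname{Tr}(A^*B)$ vanishes. Here is an illustration on the 3-dimensional hypercube graph, where graph distance is Hamming distance.

```python
import numpy as np
from itertools import combinations

# Level operators of the 3-cube: vertices are bit-triples, and the graph
# distance between two vertices is their Hamming distance.
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
dist = np.array([[sum(x != y for x, y in zip(u, v)) for v in verts]
                 for u in verts])
diam = dist.max()                                   # = 3 for the 3-cube
levels = [(dist == r).astype(float) for r in range(diam + 1)]

# Distinct levels have disjoint supports, hence are Hilbert-Schmidt orthogonal.
for A, B in combinations(levels, 2):
    assert np.trace(A.T @ B) == 0
```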
Although the algebra generated by the selfadjoint operators $\Delta_r$ is not well-understood, the complete sequence of level operators can be conveniently wrapped into a single operator $\mathcal{E}_q = \sum_{r \geq 0} q^r \Delta_r$ (with $\Delta_0$ the identity), called the exponential distance operator of $\Gamma$. The sum is finite because $\Delta_r$ is the zero operator for all $r$ which exceed the diameter of $\Gamma$. Note that $\mathcal{E}_q$ is really a one-parameter family of operators in $\operatorname{End} \mathbb{C}V$ depending on $q$; note also that $\mathcal{E}_q$ is selfadjoint for $q \in \mathbb{R}$. The reason for the name "exponential distance operator" is that $\langle w, \mathcal{E}_q v \rangle = q^{d(v, w)}$, which says that the matrix of $\mathcal{E}_q$ is the entrywise exponential of the distance matrix of $\Gamma$. In recent years, graph theorists have grappled with the spectral theory of $\mathcal{E}_q$ without recognizing its true significance: the exponential distance operator encodes the same information as the distance operator in a way which interfaces more meaningfully with statistical mechanics on graphs. We spent some time exploring this in Week One.
Now we move on to a concrete case of the above in which is a specific family of graphs. Namely, we consider the graphs
in which
is the symmetric group and
is the conjugacy class of transpositions. This looks strange because the edge set of a graph is not a subset of its vertex set; however for Cayley graphs of groups it can be regarded as such. More precisely, two vertices
are adjacent in
if and only if there exists
such that
The fact that we have group law means that we can consider adjacency as induced by multiplication. This same feature propagates up to the level of linear operators, where we can also identify
with the adjacency operator
using convolution in
which is itself induced by multiplication in
. Namely, for any arbitrary subset
we have a corresponding vector in
which in turn gives becomes an operator ,
Applying this recipe to the conjugacy class implements the adjacency operator on
as right multiplication by
More generally,
is the sphere of radius one centered at the identity permutation
If we apply this recipe to the sphere
of radius
centered at
we first get the vector
which in turn becomes the operator
These facts hold for all Cayley graphs. What is special about the case is that all of the spheres
are invariant under conjugation, and in fact are disjoint unions of conjugacy classes in an especially transparent way,
where is the conjugacy class of permutations of cycle type
This immediately implies that
is a central element in and hence that
are commuting selfadjoint operators. In particular, there exists an orthonormal basis
such that
belong to the corresponding diagonal subalgebra
Actually, we can say something stronger. Starting with a subset
, when we make the chain of identifications
the first arrow is quantization and the second is the regular representation. Saying that is conjugation invariant is exactly the same thing as saying that
which in turn means (by Schur’s Lemma) that
acts as a scalar operator with eigenvalue
in every irreducible subrepresentation
of
So, not only are the operators
simultaneously diagonalizable, they are actually simultaneously diagonal in any orthonormal basis
of any irreducible representation
. But this is all abstractly true — what we do not have is a concrete formula for the Fourier transform
of
acting in a given irreducible representation
In particular, we do not have any explicit numerical understanding of the spectrum of the adjacency operator
of the graph
and this is the information that we wish to obtain.
The first step down the path to explicitly diagonalizing the level operators of is to recognize that the central elements
are nonlinear functions of a fundamental family of non-central elements which are even more granular than conjugacy classes. Namely, we decompose
,
where is the set of transpositions of the form
with
These are the Jucys-Murphy subsets of
, and the corresponding algebra elements
give a decomposition
into pairwise orthogonal elements whose norms are
Even though the Jucys-Murphy elements do not lie in the center of the convolution algebra, they commute with one another. Hence, the Jucys-Murphy operators
are commuting selfadjoint operators, and hence are simultaneously diagonalizable. The simple but important way to see this commutativity is to recognize that we have a chain of groups
in which is identified with the subgroup of
consisting of permutations which fix
This gives a chain of subalgebras
in which is the span of all permutations which fix
Each of these subalgebras comes with its own center
and while the commutative subalgebras
are not nested (in fact for
) they commute with one another, and the JM-elements
belong to the commutative subalgebra
generated by their union,
We proved that pairwise commute by showing that they are contained in the algebra
Indeed, we have that
is contained in the commutative algebra generated by in
The key result which marks the transition from linear algebra to polynomial algebra in Math 202C is the following.
Theorem 9.1. We have
where is the elementary symmetric polynomial of degree
A good way to test your understanding of this key result is to solve the following problem.
Problem 9.2. Prove that the exponential distance operator of the Cayley graph
factors as
Deduce from this that is invertible for all complex
satisfying
Theorem 9.1 is what led us into a discussion of polynomials in general, and symmetric polynomials in particular. The algebra of polynomials in
commuting selfadjoint variables has basis
indexed by the set of nonnegative integer vectors
We equip
with the scalar product in which
is orthonormal. We then have the orthogonal decomposition
where
with
We consider the (infinite-dimensional) unitary representation of the symmetric group
in which
is the polynomial algebra as above and
The space of invariants is (by definition) the algebra of symmetric polynomials in
commuting selfadjoint variables. It has the orthogonal decomposition
with . An orthogonal basis for
is given by the monomial symmetric polynomials
where ranges over the set
of nonnegative integer vectors with weakly decreasing coordinates, and
is the set of all
’s whose weakly decreasing rearrangement is
.
Problem 9.3. Give a bijection between
and the set of all
-element subsets of
Express the sum of the elements of
in terms of the sum
of the coordinates of
The elementary symmetric polynomials are defined by
,
where is the vector whose first
coordinates equal
and remaining coordinates equal
For
, we declare
to be the zero polynomial. Let
be the set of nonnegative integer vectors of infinite length whose coordinates are weakly decreasing and eventually equal zero, and define
.
The condition is necessary and sufficient in order that
is not the zero polynomial, and we proved that
is a basis of . This result is called the fundamental theorem of symmetric polynomials.
We are now going to consider a further algebraic consequence of this result. Let be the convolution algebra of the symmetric group
. For each
the symmetric group
is isomorphic to the subgroup of
consisting of permutations which fix the numbers
Thus we can view
as the subalgebra of
with basis
Let
denote the center of
sitting inside
The Jucys-Murphy algebra
is by definition the commutative subalgebra of
generated by the set
Within this subalgebra we have the Jucys-Murphy elements
where is the conjugacy class of transpositions in
One of our first results is that the level sets
are elementary symmetric polynomials evaluated on JM-elements,
Since , combining this with the fundamental theorem of symmetric polynomials gives the following.
Theorem 9.2. For any we have
In words, this says that when you evaluate any symmetric polynomial on the Jucys-Murphy elements and expand this out as a linear combination of permutations, every pair of conjugate permutations appears with the same coefficient.
As an example, let us consider the complete homogeneous symmetric polynomials, which are defined as
,
where the sum is over all partitions of .
Problem 9.4. Prove that
and deduce from this that
where is the number of
-step monotone walks
on the Cayley graph of
Thus, the function counts factorizations
of a given permutation into transpositions subject to the condition
Problem 9.2 shows that depends only on the cycle type of
Although this fact has been known for over fifteen years, no combinatorial explanation has been found.
I will say a bit more about this enumerative problem since I know that some of you are Stirling number aficionados. For any permutation the minimum length of a factorization of
into transpositions is
and this upper bound of course applies to weakly monotone factorizations as well. Moreover, since we know that there exists a unique strictly monotone geodesic
there is at least one weakly monotone geodesic.
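These two facts, the uniqueness of the strictly monotone geodesic and the existence of at least one weakly monotone geodesic, are easy to confirm by brute force on a small case. The sketch below (an illustration, not part of the notes) enumerates monotone factorizations of a full cycle in the symmetric group on 4 letters, where a factorization into transpositions (a_i b_i) with a_i < b_i is monotone when the larger entries b_i weakly (or strictly) increase.

```python
n = 4
identity = tuple(range(n))
full_cycle = tuple((i + 1) % n for i in range(n))   # the n-cycle 0 -> 1 -> ... -> 0

def compose(s, t):
    """(s * t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(n))

def transposition(a, b):
    return tuple(b if j == a else a if j == b else j for j in range(n))

pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]

def count_monotone(target, length, strict=False):
    """Count factorizations target = t_1 * ... * t_length into transpositions
    t_i = (a_i b_i), a_i < b_i, with b_1 <= b_2 <= ... (strict: b_1 < b_2 < ...)."""
    def rec(current, last_b, remaining):
        if remaining == 0:
            return 1 if current == target else 0
        total = 0
        for a, b in pairs:
            if b > last_b or (not strict and b == last_b):
                total += rec(compose(current, transposition(a, b)), b, remaining - 1)
        return total
    return rec(identity, -1, length)

weak = count_monotone(full_cycle, n - 1)                  # weakly monotone geodesics
strict = count_monotone(full_cycle, n - 1, strict=True)   # strictly monotone geodesics
```

Here the geodesic length is one less than the number of letters, since a full cycle has a single cycle.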
Theorem 9.3. The number of weakly monotone factorizations of a full cycle into
transpositions is
where are the central factorial numbers of the second kind.
The central factorial numbers are certain generalizations of Stirling numbers which appear in quite a variety of situations.
The upshot of the above is that we actually know a few things about the algebra for the all-transpositions Cayley graph
of the symmetric group. Not only do we know that the distance operators
, we have a polynomial representation of them in terms of the JM-operators
In particular, we know that the commutative algebra
is a subalgebra of
. Over the course of the next two lectures, we will prove that in fact
We will then pivot and give a completely different spectral description of this algebra in terms of a distinguished orthonormal basis of
, known as its Gelfand-Tsetlin basis.
Math 202C: Lecture 8
As in Lecture 7, let denote the set of nonnegative integer vectors
with weakly decreasing coordinates. Then, each
can be viewed as a partition of the number
having at most parts. Set
We know that the monomial symmetric polynomials
form a basis of the degree component
of the algebra
of symmetric polynomials in
commuting selfadjoint variables
The fundamental theorem of symmetric polynomials gives a second basis of constructed from the elementary symmetric polynomials
Let denote the set of all partitions of
, and observe that we may think of this as the set of all nonnegative integer vectors
with weakly decreasing coordinates whose total sum
equals
We extend the definition of the elementary symmetric polynomial multiplicatively by defining
where by convention . Then, in order for
not to be the zero polynomial it is necessary and sufficient that
which is equivalent to the condition
The fundamental theorem of symmetric polynomials says that
is a basis of . Equivalently, the claim is that
is a basis of
In Lecture 7, we assembled the main ingredients needed to establish this. First, we showed that for any we have
where is the number of
matrices with entries in
whose row sums are encoded by
and whose column sums are encoded by
Note that this is consistent with what we have seen: if
then there cannot be a
-matrix with
columns whose first row sums to
and all the coefficients
are zero. Second (this is a homework problem), we have
unless
in dominance order on
and moreover
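To make the 0-1 matrix description tangible, here is a small brute-force check (an illustration with hypothetical helper names, not the homework solution): it expands a product of elementary symmetric polynomials as a polynomial, extracts the coefficient of a given monomial, and confirms that it equals the number of 0-1 matrices with the prescribed row and column sums.

```python
from itertools import combinations, product
from collections import defaultdict

n = 3  # number of variables (= number of matrix columns)

def e_poly(k):
    """e_k in n variables, stored as {exponent vector: coefficient}."""
    p = defaultdict(int)
    for c in combinations(range(n), k):
        p[tuple(1 if i in c else 0 for i in range(n))] += 1
    return dict(p)

def multiply(p, q):
    r = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            r[tuple(x + y for x, y in zip(a, b))] += ca * cb
    return dict(r)

def e_lam(lam):
    """The product e_{lam_1} e_{lam_2} ... as a polynomial."""
    p = {(0,) * n: 1}
    for part in lam:
        p = multiply(p, e_poly(part))
    return p

def count_01_matrices(rows, cols):
    """Brute-force count of 0-1 matrices with given row and column sums."""
    count = 0
    for bits in product([0, 1], repeat=len(rows) * n):
        M = [bits[i * n:(i + 1) * n] for i in range(len(rows))]
        if all(sum(r) == s for r, s in zip(M, rows)) and \
           all(sum(col) == s for col, s in zip(zip(*M), cols)):
            count += 1
    return count

lam, mu = (2, 1), (1, 1, 1)
coeff = e_lam(lam).get(mu, 0)            # coefficient of x^mu in e_lam
matrices = count_01_matrices(lam, mu)    # rows summing to lam, columns to mu
```

In this instance the first row of the matrix picks two of the three columns and the second row takes the remaining one, so both counts equal 3.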
Now let us complete the proof of the fundamental theorem. From the above, for any such that
we have
where we are suppressing the variables to lighten the notation. Now let
be such that
Since transposition of Young diagrams is an involution, this is equivalent to
, and the above becomes
Now in the above I am simply going to replace the letter with
because this helps me keep things straight in my mind (this has no mathematical content, it’s purely psychological). Making this letter replacement, the above is
Now, splitting off the term of the sum corresponding to , we have
where the sum on the right hand side is now over partitions which are strictly less than
in dominance order on
First we show that spans
Suppose otherwise. Then, there must be some monomial symmetric polynomial
which cannot be represented as a linear combination of the elementary polynomials
Let us choose such an
with
minimal in the dominance order on
Then, by our formula above,
and the summation on the right does lie in the span of , which gives a contradiction.
Now we show that is linearly independent. Indeed, suppose
where not all coefficients equal zero. Choose a nonzero coefficient
with
maximal in dominance order. Then, in the monomial expansion of the left hand side, the coefficient of
is exactly
, which forces
a contradiction.
This completes the proof that is a basis of the finite-dimensional Hilbert space
Since the argument was formulated for arbitrary degree
we conclude that
is a basis of the infinite-dimensional algebra . Finally, because
is defined multiplicatively, this is equivalent to
showing that the algebra of symmetric polynomials in variables
is isomorphic to the algebra
of all polynomials in variables