*** Problems in this lecture due April 19 at 23:59 ***
In Lecture 5, we constructed the polynomial algebra $\mathbb{C}[x_1,\dots,x_n]$ and the symmetric polynomial algebra $\mathbb{C}[x_1,\dots,x_n]^{S_n}$. The basic combinatorial objects underlying these constructions are integer compositions and integer partitions. Let

$$\mathrm{Comp}(n) = \{\alpha = (\alpha_1,\dots,\alpha_n) \colon \alpha_i \in \mathbb{Z}_{\geq 0}\}$$

denote the set of compositions with $n$ parts, zero parts allowed. Then,

$$\mathrm{Comp}(n) = \bigsqcup_{d=0}^{\infty} \mathrm{Comp}(n,d),$$

with

$$\mathrm{Comp}(n,d) = \{\alpha \in \mathrm{Comp}(n) \colon |\alpha| = d\},$$

where $|\alpha| = \alpha_1 + \dots + \alpha_n$. Now we quantize, meaning that we replace $\mathrm{Comp}(n)$ with the Hilbert space $\mathcal{H}(n)$ having orthonormal basis

$$\{v_\alpha \colon \alpha \in \mathrm{Comp}(n)\}.$$

The disjoint union decomposition of the infinite set $\mathrm{Comp}(n)$ then becomes an orthogonal direct sum decomposition

$$\mathcal{H}(n) = \bigoplus_{d=0}^{\infty} \mathcal{H}(n,d)$$

of the infinite-dimensional space $\mathcal{H}(n)$ into finite-dimensional subspaces

$$\mathcal{H}(n,d) = \operatorname{span}\{v_\alpha \colon \alpha \in \mathrm{Comp}(n,d)\}.$$

Moreover, we have a natural notion of multiplication in $\mathcal{H}(n)$ defined by

$$v_\alpha v_\beta = v_{\alpha+\beta},$$

which has the feature that

$$\mathcal{H}(n,c)\,\mathcal{H}(n,d) \subseteq \mathcal{H}(n,c+d).$$
On one hand, the infinite-dimensional algebra $\mathcal{H}(n)$ is similar to the convolution algebra of an abelian group: if

$$f = \sum_{\alpha \in \mathrm{Comp}(n)} \hat{f}(\alpha)\, v_\alpha \quad\text{and}\quad g = \sum_{\beta \in \mathrm{Comp}(n)} \hat{g}(\beta)\, v_\beta,$$

then

$$fg = \sum_{\gamma \in \mathrm{Comp}(n)} \left(\sum_{\alpha+\beta=\gamma} \hat{f}(\alpha)\,\hat{g}(\beta)\right) v_\gamma.$$

Indeed, $\mathcal{H}(n)$ is precisely the convolution algebra of finitely-supported functions on the infinite abelian monoid $(\mathbb{Z}_{\geq 0}^n, +)$. On the other hand, $\mathcal{H}(n)$ is something quite familiar and intuitive: the algebra of polynomials in $n$ commuting variables. Indeed, under the identification

$$v_\alpha \longleftrightarrow x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n},$$

we have

$$v_\alpha v_\beta \longleftrightarrow x^\alpha x^\beta = x^{\alpha+\beta}.$$

In this sense, what the familiar algebra of polynomials in $n$ commuting variables “really” is is the convolution algebra of the commutative monoid $(\mathbb{Z}_{\geq 0}^n, +)$.
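To see that this is more than a slogan, here is a minimal Python sketch (the dictionary encoding and the name `convolve` are mine, purely for illustration) that multiplies polynomials in $n$ commuting variables as nothing but a convolution of finitely-supported functions on $\mathbb{Z}_{\geq 0}^n$:

```python
# A polynomial in n commuting variables, viewed as a finitely-supported
# function on the monoid Z_{>=0}^n: keys are exponent vectors alpha,
# values are the coefficients f(alpha).

def convolve(f, g):
    # (fg)(gamma) = sum over alpha + beta = gamma of f(alpha) g(beta)
    h = {}
    for alpha, a in f.items():
        for beta, b in g.items():
            gamma = tuple(x + y for x, y in zip(alpha, beta))
            h[gamma] = h.get(gamma, 0) + a * b
    return h

# Example with n = 2: (x1 + x2)(x1 - x2) = x1^2 - x2^2
f = {(1, 0): 1, (0, 1): 1}   # x1 + x2
g = {(1, 0): 1, (0, 1): -1}  # x1 - x2
print(convolve(f, g))        # {(2, 0): 1, (1, 1): 0, (0, 2): -1}
```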
The symmetric group $S_n$ acts on $\mathrm{Comp}(n)$ according to

$$\pi \cdot (\alpha_1,\dots,\alpha_n) = (\alpha_{\pi^{-1}(1)},\dots,\alpha_{\pi^{-1}(n)}),$$

and this permutation action respects the decomposition of $\mathrm{Comp}(n)$ into the finite components $\mathrm{Comp}(n,d)$. This permutation action gives rise to a unitary representation

$$U \colon S_n \longrightarrow \mathcal{U}(\mathcal{H}(n))$$

of $S_n$, where $U$ is the group homomorphism defined by

$$U(\pi)\, v_\alpha = v_{\pi \cdot \alpha}.$$

Equivalently, thinking of $\mathcal{H}(n)$ as the algebra of polynomials in $n$ variables $x_1,\dots,x_n$, this is

$$[U(\pi)\, p](x_1,\dots,x_n) = p(x_{\pi(1)},\dots,x_{\pi(n)}).$$

The algebra of symmetric polynomials in $n$ variables is the space $\mathcal{H}(n)^{S_n}$ of $S_n$-invariant vectors in this unitary representation, i.e. polynomials with the property that

$$U(\pi)\, p = p \quad \text{for all } \pi \in S_n.$$
As explained in Lecture 5, it is not too difficult to construct a basis of $\mathcal{H}(n)^{S_n}$. Indeed, one need only observe that orbits of the action of $S_n$ on $\mathrm{Comp}(n)$ are naturally indexed by the set

$$\mathrm{Y}(n) = \{\text{partitions with at most } n \text{ parts}\}.$$

For a fixed positive integer $d$, the set

$$\mathrm{Y}(n,d) = \mathrm{Comp}(n,d)/S_n$$

may be identified with the set of partitions of $d$ having at most $n$ parts. Then, associated to each $\lambda \in \mathrm{Y}(n,d)$ is the corresponding orbit sum

$$m_\lambda = \sum_{\alpha \in S_n \cdot \lambda} v_\alpha,$$

and $\{m_\lambda \colon \lambda \in \mathrm{Y}(n,d)\}$ is an orthogonal basis of $\mathcal{H}(n,d)^{S_n}$. This is the basis of monomial symmetric polynomials, which have the form

$$m_\lambda = \sum_{\alpha \in S_n \cdot \lambda} x^\alpha.$$

So while $m_\lambda$ is not in general a monomial, it is a polynomial having the feature that any of its constituent monomials determines all the others.
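In the same exponent-vector encoding as the sketch above, the orbit sum $m_\lambda$ is easy to generate: pad $\lambda$ with zeros to length $n$ and collect its $S_n$-orbit.

```python
# m_lambda as an orbit sum: one basis vector v_alpha for each distinct
# rearrangement alpha of the padded partition lambda
from itertools import permutations

def monomial_sym(lam, n):
    alpha = tuple(lam) + (0,) * (n - len(lam))
    orbit = set(permutations(alpha))  # the S_n-orbit of lambda in Comp(n)
    return {a: 1 for a in orbit}      # exponent vector -> coefficient

# m_{(2,1)} in n = 3 variables:
# x1^2 x2 + x1^2 x3 + x2^2 x1 + x2^2 x3 + x3^2 x1 + x3^2 x2
print(sorted(monomial_sym((2, 1), 3)))
```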
Problem 6.1. Give necessary and sufficient conditions under which $m_\lambda$ actually is a monomial.
Because $m_\lambda$ is not in general a monomial, it is not in general the case that

$$m_\lambda m_\mu = m_{\lambda+\mu}.$$

This is somewhat analogous to the fact that convolution of conjugacy classes in the center of the convolution algebra of a noncommutative group is determined by, but not the same as, multiplication of group elements. As in that context, the natural question is to compute the connection coefficients of the monomial basis of $\mathcal{H}(n)^{S_n}$,

$$m_\lambda m_\mu = \sum_{\nu} c_{\lambda\mu}^{\nu}\, m_\nu.$$
In support of this analogy, consider the following.
Problem 6.2. Show that for any $\lambda, \mu \in \mathrm{Y}(n)$, the coefficient of the monomial $x^{\lambda+\mu}$ in the product $m_\lambda m_\mu$ is $1$.
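To build intuition for Problem 6.2, it helps to expand a few products $m_\lambda m_\mu$ back into the monomial basis by brute force. Here is a rough sketch reusing `convolve` and `monomial_sym` from above; the leading-term extraction works because the lexicographically largest exponent vector appearing in a symmetric polynomial is always weakly decreasing, hence a partition.

```python
# Expand a symmetric polynomial in the basis of monomial symmetric
# polynomials by repeatedly stripping off the lex-leading orbit sum.
def expand_in_monomial_basis(f, n):
    coeffs = {}
    f = {a: c for a, c in f.items() if c != 0}
    while f:
        alpha = max(f)  # lexicographically largest exponent vector
        nu = tuple(x for x in sorted(alpha, reverse=True) if x != 0)
        c = f[alpha]
        coeffs[nu] = c
        for a, b in monomial_sym(nu, n).items():
            f[a] = f.get(a, 0) - c * b
        f = {a: cc for a, cc in f.items() if cc != 0}
    return coeffs

lam, mu = (2, 1), (1, 1)
product = convolve(monomial_sym(lam, 3), monomial_sym(mu, 3))
print(expand_in_monomial_basis(product, 3))
# {(3, 2): 1, (3, 1, 1): 2, (2, 2, 1): 2} -- the coefficient of
# m_{lambda+mu} = m_{(3,2)} is 1, consistent with Problem 6.2
```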
Taking the analogy to its natural conclusion, remember the a priori surprising fact that the center of the convolution algebra, whose multiplication appears quite complicated in the class basis, is actually isomorphic to a pointwise function algebra. In particular, in the abelian case we have $\mathbb{C}[G] \cong \mathbb{C}^{\hat{G}}$, the algebra of functions on the dual group $\hat{G}$ under pointwise multiplication.
Theorem 6.1 (Newton). The algebra $\mathbb{C}[x_1,\dots,x_n]^{S_n}$ of symmetric polynomials in $n$ variables is isomorphic to the algebra $\mathbb{C}[x_1,\dots,x_n]$ of polynomials in $n$ variables.
Note that this is an infinite-dimensional phenomenon: the proper subalgebra $\mathbb{C}[x_1,\dots,x_n]^{S_n}$ of $\mathbb{C}[x_1,\dots,x_n]$ is isomorphic to the full algebra in which it resides. In particular, the isomorphism is definitely not the tautological inclusion.
Theorem 6.1 is called the fundamental theorem of symmetric polynomials, and as indicated above it is generally attributed to Newton. Among many other achievements, Newton laid the foundations for classical mechanics based on his theory of gravity, and developed a theory of optics which involved sticking a knitting needle into his own eye to demonstrate that color is an intrinsic property of light as opposed to an artifact of perception. Now that’s applied mathematics. Given that symmetric polynomials were good enough for Newton, it’s a bit hard to take complaints about our subject matter being “too pure” seriously. But ok, nowadays we’re doing data science, not physical science — why would you care about reconciling gravitation with quantum mechanics when you can make a living applying principal components analysis to an endless supply of large rectangular matrices?
From the data science perspective, Newton’s theorem says that the elementary symmetric polynomials form a universal coordinate system for polynomial features of unordered data. More precisely, his result is that $\mathbb{C}[x_1,\dots,x_n]^{S_n}$ is isomorphic to $\mathbb{C}[e_1,\dots,e_n]$, where

$$e_k = e_k(x_1,\dots,x_n) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq n} x_{i_1} x_{i_2} \cdots x_{i_k}.$$

Here the sum is over all strictly increasing sequences of $k$ indices, so $e_k$ is the zero polynomial for $k > n$. It is notationally convenient to define $e_0$ to be the constant polynomial $1$.
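Continuing with the same encoding, the elementary symmetric polynomials are one loop over $k$-element index sets, and the vanishing for $k > n$ is visible immediately:

```python
# e_k in n variables: the sum of all squarefree monomials of degree k
from itertools import combinations

def elementary_sym(k, n):
    f = {}
    for S in combinations(range(n), k):
        alpha = tuple(1 if i in S else 0 for i in range(n))
        f[alpha] = 1
    return f  # the empty dict (zero polynomial) whenever k > n

print(elementary_sym(2, 3))  # x1 x2 + x1 x3 + x2 x3
print(elementary_sym(4, 3))  # {} : e_4 is the zero polynomial in 3 variables
```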
Then, for any partition $\lambda$, i.e. for any vector

$$\lambda = (\lambda_1, \lambda_2, \lambda_3, \dots)$$

from the set $\mathrm{Y}$ of infinite integer vectors with weakly decreasing coordinates eventually equal to zero, we can extend the definition of the elementary symmetric polynomials by declaring

$$e_\lambda = e_{\lambda_1} e_{\lambda_2} e_{\lambda_3} \cdots,$$

or equivalently

$$e_\lambda = \prod_{i=1}^{\ell(\lambda)} e_{\lambda_i},$$

where $\ell(\lambda)$ is the number of nonzero coordinates of $\lambda$. The condition required for this product to be nonzero is that $\lambda_1 \leq n$. Now, the set of partitions with largest part at most $n$ is in bijection with the set of partitions with at most $n$ parts via transposing Young diagrams,

$$\lambda \longmapsto \lambda'.$$
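Transposition is also easy to express in code: the parts of $\lambda'$ are the column lengths of the diagram of $\lambda$. A one-line sketch:

```python
# Conjugate partition: lambda'_i = number of parts of lambda exceeding i - 1
def transpose(lam):
    return tuple(sum(1 for part in lam if part > i) for i in range(lam[0])) if lam else ()

print(transpose((4, 2, 1)))  # (3, 2, 1, 1): largest part and number of parts swap roles
```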
Thus, Newton’s theorem is equivalent to the following.
Theorem 6.2. For any $d \in \mathbb{Z}_{\geq 0}$, the set

$$\{e_{\lambda'} \colon \lambda \in \mathrm{Y}(n,d)\}$$

is a vector space basis of $\mathcal{H}(n,d)^{S_n}$.
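Before the proof, you can watch Theorem 6.2 happen in a small case using the sketches above (`convolve`, `monomial_sym`, `elementary_sym`, `transpose`, `expand_in_monomial_basis`): for each $\lambda \in \mathrm{Y}(3,4)$, the expansion of $e_{\lambda'}$ has coefficient $1$ on $m_\lambda$, so the change-of-basis matrix is triangular with unit diagonal and the $e_{\lambda'}$ do form a basis.

```python
# e_{lambda'} for the partitions of d = 4 with at most n = 3 parts,
# expanded in the monomial basis of symmetric polynomials
def e_of_partition(lam, n):
    f = {(0,) * n: 1}  # the constant polynomial 1
    for part in lam:
        f = convolve(f, elementary_sym(part, n))
    return f

n = 3
for lam in [(4,), (3, 1), (2, 2), (2, 1, 1)]:  # the set Y(3,4)
    print(lam, "->", expand_in_monomial_basis(e_of_partition(transpose(lam), n), n))
```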
In Lecture 7, we will prove Theorem 6.2 by giving a combinatorial interpretation of the coefficients $M_{\lambda\mu}$ in the expansion

$$e_\lambda = \sum_{\mu} M_{\lambda\mu}\, m_\mu$$

of the (extended) elementary symmetric polynomials in terms of the monomial symmetric polynomials $m_\mu$. In fact, the value of this computation extends beyond its immediate application to Newton’s theorem. Indeed, the expansion above is universal in the sense that it remains valid under any specialization, i.e. under any algebra homomorphism

$$\mathbb{C}[x_1,\dots,x_n] \longrightarrow \mathcal{A}$$

into a commutative algebra $\mathcal{A}$. In particular, we have

$$e_\lambda(J_1,\dots,J_n) = \sum_{\mu} M_{\lambda\mu}\, m_\mu(J_1,\dots,J_n),$$

where $J_1,\dots,J_n$ are the Jucys-Murphy elements in the convolution algebra $\mathbb{C}[S_n]$
of the symmetric group. Recall that we established the commutativity of the JM-elements by proving that they lie in the commutative subalgebra $\mathrm{JM}(n)$ of $\mathbb{C}[S_n]$ generated by the tower of centers

$$Z(\mathbb{C}[S_1]),\ Z(\mathbb{C}[S_2]),\ \dots,\ Z(\mathbb{C}[S_n]),$$

which we named the JM-subalgebra of $\mathbb{C}[S_n]$. We then showed that

$$e_k(J_1,\dots,J_n) = \mathrm{L}_k,$$

where $\mathrm{L}_k$ denotes the sum of all permutations in $S_n$ with exactly $n-k$ cycles, i.e. the levels of the symmetric group are precisely the elementary symmetric polynomials specialized on the JM-elements.
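Here is a quick computational sanity check of this identity in $S_4$, in the same do-it-yourself style as the polynomial sketches above: the group algebra product is again just a convolution, now over a noncommutative group.

```python
# Check e_k(J_1,...,J_n) = L_k in S_4, where L_k is the sum of all
# permutations with exactly n - k cycles
from itertools import permutations, combinations

n = 4
perms = list(permutations(range(n)))  # one-line notation, 0-indexed

def compose(p, q):
    # group law: (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def gmul(f, g):
    # convolution product in C[S_n]; elements are {permutation: coefficient}
    h = {}
    for p, a in f.items():
        for q, b in g.items():
            r = compose(p, q)
            h[r] = h.get(r, 0) + a * b
    return h

# Jucys-Murphy elements: J[k] = sum of the transpositions (i k), i < k,
# so that J[0] is the zero element of the group algebra
J = [{transposition(i, k): 1 for i in range(k)} for k in range(n)]

def e_k_of_J(k):
    # e_k specialized on the commuting family J_1, ..., J_n
    total = {}
    for S in combinations(range(n), k):
        prod = {tuple(range(n)): 1}  # the identity permutation
        for t in S:
            prod = gmul(prod, J[t])
        for p, a in prod.items():
            total[p] = total.get(p, 0) + a
    return total

def num_cycles(p):
    seen, c = set(), 0
    for i in range(n):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return c

for k in range(n):
    level_k = {p: 1 for p in perms if num_cycles(p) == n - k}
    lhs = {p: a for p, a in e_k_of_J(k).items() if a != 0}
    print(k, lhs == level_k)  # expect True for every k
```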
Next week, we shall see that essentially the same argument used to prove Newton’s theorem can be retooled to prove the apparently much more sophisticated result that $J_1,\dots,J_n$ not only lie in the algebra $\mathrm{JM}(n)$, but in fact generate it. This will essentially complete Phase I of our development of the representation theory of the symmetric group. Phase II will be to show that $\mathrm{JM}(n)$ has a completely different spectral interpretation as the so-called Gelfand-Tsetlin subalgebra of $\mathbb{C}[S_n]$. As discovered by Okounkov and Vershik, the fact that the Jucys-Murphy and Gelfand-Tsetlin subalgebras of $\mathbb{C}[S_n]$ are one and the same completely unlocks the representation theory of the symmetric groups.
For now, here is something concrete to think about.
Problem 6.3. Show that $e_k(0, 1, 2, \dots, n-1)$ is equal to the number of permutations in $S_n$ consisting of $n-k$ disjoint cycles.
For context, recall that the evaluation of the power sums

$$p_k(1, 2, \dots, n) = 1^k + 2^k + \dots + n^k$$

is the classical problem of summing powers of consecutive numbers. At some point in your life you undoubtedly learned that

$$1 + 2 + \dots + n = \frac{n(n+1)}{2},$$

and this may have led you to discover that

$$1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6},$$

which probably means that you tried to understand what was going on with $p_k(1, 2, \dots, n)$ in general. The problem is “solved” by Faulhaber’s formula, which is not very satisfying. The situation with $e_k(0, 1, \dots, n-1)$ is much cleaner, and may help to mend this childhood trauma.
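If you want to experiment before proving anything, a brute-force check along the following lines confirms the identity of Problem 6.3 for small $n$:

```python
# Compare e_k(0, 1, ..., n-1) with the number of permutations of
# {1, ..., n} having exactly n - k disjoint cycles
from itertools import combinations, permutations
from math import prod

def e_k(values, k):
    # elementary symmetric polynomial evaluated at the given numbers
    return sum(prod(c) for c in combinations(values, k))

def cycle_count(p):
    seen, c = set(), 0
    for i in range(len(p)):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return c

for n in range(1, 7):
    for k in range(n):
        count = sum(1 for p in permutations(range(n)) if cycle_count(p) == n - k)
        assert e_k(range(n), k) == count
print("verified for n <= 6")
```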
