Here is a link to the slides from today’s Zoom lecture: https://web.goodnotes.com/s/GPePlT3BSSbzDsGPUrqG2G#page-5
Note that some of this material was typed up towards the end of the Lecture 5 notes.
Let $\mathrm{H}(N)$ be the real vector space of $N \times N$ Hermitian matrices with the Hilbert-Schmidt scalar product $\langle A, B \rangle = \operatorname{Tr} AB$. Let
$$\lambda \colon \mathrm{H}(N) \to \mathbb{R}^N$$
be the function from Hermitian matrices to standard Euclidean space defined by sending $A$ to the vector $\lambda(A) = (\lambda_1(A), \ldots, \lambda_N(A))$ of its eigenvalues listed in non-increasing order.
Theorem 5.1 (Hoffman-Wielandt Inequality). For any $A, B \in \mathrm{H}(N)$ we have
$$\| \lambda(A) - \lambda(B) \| \leq \| A - B \|.$$
Proof: By the spectral theorem, there are unitary matrices $U, V \in \mathrm{U}(N)$ and non-increasing vectors $a, b \in \mathbb{R}^N$ such that
$$A = U \Delta(a) U^*, \qquad B = V \Delta(b) V^*,$$
where $\Delta(x)$ denotes the diagonal matrix with diagonal entries $x_1, \ldots, x_N$. The square of the Hilbert-Schmidt norm of the difference is
$$\| A - B \|^2 = \operatorname{Tr} A^2 + \operatorname{Tr} B^2 - 2 \operatorname{Tr} AB = \|a\|^2 + \|b\|^2 - 2 \operatorname{Tr} U \Delta(a) U^* V \Delta(b) V^*,$$
and
$$\| \lambda(A) - \lambda(B) \|^2 = \|a\|^2 + \|b\|^2 - 2 \langle a, b \rangle.$$
Therefore, if we can show
$$\operatorname{Tr} U \Delta(a) U^* V \Delta(b) V^* \leq \langle a, b \rangle$$
holds for all $U, V \in \mathrm{U}(N)$, we will have proved the claimed inequality (make sure you understand why).
Now,
$$\operatorname{Tr} U \Delta(a) U^* V \Delta(b) V^* = \sum_{j,k=1}^N a_j b_k |W_{jk}|^2, \qquad W = U^* V,$$
and $[\,|W_{jk}|^2\,]_{j,k=1}^N$ is a doubly stochastic matrix. Therefore, the maximum value of
$$\sum_{j,k=1}^N a_j b_k |W_{jk}|^2$$
over all $W \in \mathrm{U}(N)$ is bounded by the maximum value of the function
$$f(S) = \sum_{j,k=1}^N a_j b_k S_{jk}$$
as $S$ ranges over all doubly stochastic matrices. Since $f$ is a linear function on the Birkhoff polytope, it achieves its maximum value at an extreme point of this convex set, so at a permutation matrix. The vertex corresponding to the identity matrix gives
$$f(I) = \sum_{j=1}^N a_j b_j = \langle a, b \rangle,$$
and since the coordinates of $a$ and $b$ are nonincreasing, the identity permutation is the optimal matching.
– QED
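If you would like to see Theorem 5.1 in action, here is a minimal NumPy sketch (the Gaussian test matrices and the size $N = 5$ are arbitrary choices of mine, not part of the theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

def random_hermitian(n):
    # A Hermitian matrix with Gaussian entries (any test matrices would do).
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

def lam(H):
    # The map lambda(.): eigenvalues listed in non-increasing order.
    return np.sort(np.linalg.eigvalsh(H))[::-1]

A, B = random_hermitian(N), random_hermitian(N)
lhs = np.linalg.norm(lam(A) - lam(B))   # ||lambda(A) - lambda(B)||
rhs = np.linalg.norm(A - B)             # Hilbert-Schmidt (Frobenius) norm ||A - B||
print(lhs, "<=", rhs)                   # the inequality of Theorem 5.1
```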
Now let $X$ be a random Hermitian matrix.
Theorem 5.2. For a random Hermitian matrix $X$, the following are equivalent:
(1) $X$ is unitarily invariant;
(2) $U X U^*$ and $X$ have the same distribution for every $U \in \mathrm{U}(N)$;
(3) $X$ is equal in law to $V \Delta(\lambda) V^*$, where $\lambda$ is a random vector distributed as the non-increasing eigenvalue vector of $X$, and $V$ is an independent Haar-distributed random unitary matrix.
We have already proved the equivalence of (1) and (2).
Problem 5.2. Complete the proof of Theorem 5.2.
At this point we have enough information to explain how the characteristic function of a unitarily invariant random Hermitian matrix determines the joint distribution of its eigenvalues.
Theorem 5.3. The characteristic function of a unitarily invariant random Hermitian matrix is equal to the quantum characteristic function of its eigenvalues.
Obviously, this statement needs to be explained.
Let $\lambda$ be a real random vector whose distribution $\mu$ in $\mathbb{R}^N$ is arbitrary, and let $V$ be an independent random unitary matrix whose distribution in $\mathrm{U}(N)$ is uniform (Haar) measure. Then
$$X = V \Delta(\lambda) V^*$$
is a unitarily invariant random Hermitian matrix, and by Theorem 5.2 every unitarily invariant random Hermitian matrix is equal in law to a matrix constructed in this way.
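In code, the construction looks as follows. This is a sketch: the QR-with-phase-correction recipe below is a standard way to sample Haar measure, and the Gaussian draw is just a placeholder for the arbitrary eigenvalue distribution $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

def haar_unitary(n):
    # QR of a complex Ginibre matrix; correcting by the phases of diag(R)
    # makes the resulting Q Haar-distributed on U(n).
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

lam = rng.normal(size=N)               # placeholder draw from mu
V = haar_unitary(N)
X = V @ np.diag(lam) @ V.conj().T      # unitarily invariant by construction
```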
Because $\mathrm{H}(N)$ is non-compact, there is no such thing as a uniformly random Hermitian matrix. The matrix $X = V \Delta(\lambda) V^*$ is as close as we can get: it is uniform conditional on the distribution of its eigenvalues. If $P$ is the distribution of $X$ in $\mathrm{H}(N)$, a random sample from $P$ is obtained by sampling a vector from the distribution $\mu$ on $\mathbb{R}^N$ and then choosing a uniformly random point on the $\mathrm{U}(N)$-orbit in $\mathrm{H}(N)$ which contains this vector.
In terms of characteristic functions, this says that
$$\Phi_X(\Delta(x)) = \int_{\mathbb{R}^N} K(x, \lambda)\, \mu(\mathrm{d}\lambda),$$
where
$$K(x, \lambda) = \int_{\mathrm{U}(N)} e^{i \operatorname{Tr} \Delta(x) U \Delta(\lambda) U^*}\, \mathrm{d}U$$
is the characteristic function of a uniformly random Hermitian matrix from the orbit containing $\lambda$, viewed as a diagonal matrix. Thus, $K(x, \lambda)$ is the characteristic function of a uniformly random Hermitian matrix with prescribed eigenvalues. This defines a bounded symmetric kernel
$$K \colon \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{C},$$
which we call the quantum Fourier kernel, because it is a superposition of standard $N$-dimensional Fourier kernels: writing $\operatorname{Tr} \Delta(x) U \Delta(\lambda) U^* = \langle x, S \lambda \rangle$ with $S = [\,|U_{jk}|^2\,]$ doubly stochastic, we have
$$K(x, \lambda) = \int_{\mathrm{U}(N)} e^{i \langle x, S \lambda \rangle}\, \mathrm{d}U.$$
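Because $K(x, \lambda)$ is itself a Haar expectation, it can be estimated by simple Monte Carlo. Here is a sketch (the sampler is the same QR recipe as above, and the function name `K_mc` and the sample count are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    # Same Haar sampler as in the previous sketch.
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def K_mc(x, lam, samples=20000):
    # Monte Carlo estimate of K(x, lam) = E exp(i Tr Delta(x) U Delta(lam) U*).
    total = 0j
    for _ in range(samples):
        U = haar_unitary(len(x))
        M = U @ np.diag(lam) @ U.conj().T
        total += np.exp(1j * np.dot(x, np.diag(M).real))   # Tr Delta(x) M
    return total / samples
```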
Problem 5.3. Prove that indeed $|K(x, \lambda)| \leq 1$ and $K(x, \lambda) = K(\lambda, x)$. Moreover, show that $K(x, \lambda)$ is invariant under independent permutations of the coordinates of $x$ and $\lambda$.
If $K$ is the quantum Fourier kernel, then
$$\mathbb{E}\, K(x, \lambda) = \int_{\mathbb{R}^N} K(x, y)\, \mu(\mathrm{d}y)$$
deserves to be called the quantum characteristic function of the random vector $\lambda$, or the quantum Fourier transform of the probability measure $\mu$.
What we have shown in this lecture is that for any unitarily invariant random Hermitian matrix $X$ we have
$$\Phi_X(\Delta(x)) = \mathbb{E}\, K(x, \lambda),$$
where $\lambda$ is any $N$-dimensional random vector whose components are the eigenvalues of $X$ (this is what Theorem 5.3 means). We can also state the above as
$$\Phi_X(\Delta(x)) = \mathbb{E}\, e^{i \langle x, d \rangle},$$
where $d$ is the diagonal of the random matrix $X$.
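As a sanity check, the two expressions can be compared numerically for a concrete unitarily invariant ensemble. Here is a sketch using the GUE (my choice of test ensemble) and the Haar sampler from above:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
x = np.array([0.3, -0.7, 1.1])          # arbitrary test point

def gue(n):
    # GUE: the classic unitarily invariant test ensemble.
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

def haar_unitary(n):
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

diag_side, eig_side = 0j, 0j
trials = 20000
for _ in range(trials):
    X = gue(N)
    diag_side += np.exp(1j * np.dot(x, np.diag(X).real))   # E e^{i<x, d>}
    lam = np.linalg.eigvalsh(X)                            # eigenvalues of X
    U = haar_unitary(N)                                    # one-sample estimate
    M = U @ np.diag(lam) @ U.conj().T                      # of E K(x, lambda)
    eig_side += np.exp(1j * np.dot(x, np.diag(M).real))
print(diag_side / trials, eig_side / trials)   # agree up to Monte Carlo error
```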
The conclusion to be drawn from all of this is that figuring out how to analyze the eigenvalues of a unitarily invariant random Hermitian matrix $X$ given its characteristic function $\Phi_X$ is exactly the same thing as figuring out how to analyze an arbitrary random real vector $\lambda$ given its quantum characteristic function $\mathbb{E}\, K(x, \lambda)$.
Before telling you how we will do this, let me tell you how we won't. The main reason the Hoffman-Wielandt theorem was presented in this lecture is that the proof makes you think about maximizing the function
$$U \mapsto \operatorname{Tr} \Delta(x) U \Delta(\lambda) U^*$$
over unitary matrices $U \in \mathrm{U}(N)$. This action is exactly what appears in the exponent of the quantum Fourier kernel:
$$K(x, \lambda) = \int_{\mathrm{U}(N)} e^{i \operatorname{Tr} \Delta(x) U \Delta(\lambda) U^*}$$
(I am going to stop writing the Haar measure explicitly in Haar integrals, so just $\int_{\mathrm{U}(N)}$ rather than $\int_{\mathrm{U}(N)} \cdots\, \mathrm{d}U$). Now, the method of stationary phase predicts that $K(x, \lambda)$ should be approximated by contributions from the local maxima of the action, which means that this integral should localize to a sum over permutations. This turns out to be exactly correct, not just a good approximation, and the reasons for this are rooted in symplectic geometry. The end result is the following determinantal formula for the quantum Fourier kernel.
Theorem 5.4 (HCIZ Formula). For any $x, \lambda \in \mathbb{R}^N$ with no repeated coordinates we have
$$K(x, \lambda) = \left( \prod_{p=1}^{N-1} p! \right) \frac{\det \left[ e^{i x_j \lambda_k} \right]_{j,k=1}^N}{i^{N(N-1)/2} \prod_{1 \leq j < k \leq N} (x_k - x_j) \prod_{1 \leq j < k \leq N} (\lambda_k - \lambda_j)}.$$
This is a very interesting formula which makes manifest all the symmetries of the quantum Fourier kernel claimed in Problem 5.3 (but, you should solve that problem without using the determinantal formula). It is also psychologically satisfying in that it shows exactly how the quantum Fourier kernel is built out of classical one-dimensional Fourier kernels. However, it is not really good for much else. We will explain this further in the coming lectures. But first we will adapt all of the above to random rectangular matrices.
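For what it is worth, the determinantal formula is easy to evaluate numerically. Here is a direct transcription (the function name `K_hciz` is mine, and the formula requires distinct coordinates, exactly as in the theorem):

```python
import numpy as np
from math import factorial

def K_hciz(x, lam):
    # Determinantal (HCIZ) evaluation of the quantum Fourier kernel;
    # valid only when neither x nor lam has repeated coordinates.
    n = len(x)
    E = np.exp(1j * np.outer(x, lam))   # the matrix [exp(i x_j lam_k)]
    def vandermonde(v):
        return np.prod([v[k] - v[j] for j in range(n) for k in range(j + 1, n)])
    c = 1.0
    for p in range(1, n):
        c *= factorial(p)               # the constant prod_{p=1}^{n-1} p!
    return c * np.linalg.det(E) / (
        1j ** (n * (n - 1) // 2) * vandermonde(x) * vandermonde(lam))
```

For small $N$ and well-separated coordinates, this agrees with the Monte Carlo estimator `K_mc` sketched earlier up to sampling error.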
We have defined the characteristic function of a random Hermitian matrix $X$ according to the usual recipe for random variables taking values in a Euclidean space: it is the expectation
$$\Phi_X(A) = \mathbb{E}\, e^{i \langle A, X \rangle},$$
where the argument $A$ ranges over the real vector space $\mathrm{H}(N)$ of $N \times N$ Hermitian matrices, which carries the Hilbert-Schmidt scalar product
$$\langle A, B \rangle = \operatorname{Tr} AB.$$
Equivalently,
$$\Phi_X(A) = \int_{\mathrm{H}(N)} e^{i \langle A, H \rangle}\, P(\mathrm{d}H),$$
where $P$ is the distribution of $X$ in $\mathrm{H}(N)$. With this definition in hand, we can consider special classes of random Hermitian matrices carved out by clauses of the form
“A random Hermitian matrix is said to be (descriptor) if its characteristic function satisfies (property).”
A natural construction of this form arises from the Spectral Theorem. The set $\mathrm{U}(N)$ of $N \times N$ unitary matrices is a group under matrix multiplication, called the unitary group of rank $N$. The group $\mathrm{U}(N)$ acts on the Euclidean space $\mathrm{H}(N)$ by conjugation, which is a group homomorphism
$$\rho \colon \mathrm{U}(N) \to \mathrm{O}(\mathrm{H}(N))$$
from $\mathrm{U}(N)$ into the orthogonal group of the Euclidean space $\mathrm{H}(N)$, defined by
$$\rho(U) A = U A U^*, \qquad U \in \mathrm{U}(N),\ A \in \mathrm{H}(N).$$
Problem 4.1. Prove that $\rho$ really is a group homomorphism from $\mathrm{U}(N)$ into the orthogonal transformations of the Euclidean space $\mathrm{H}(N)$.
One formulation of the Spectral Theorem says: the conjugation orbit
$$\{ U A U^* : U \in \mathrm{U}(N) \}$$
of any Hermitian matrix $A$ contains a diagonal matrix, which is unique up to simultaneous permutations of its rows and columns.
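In NumPy terms this is a one-liner (a trivial but useful check; `eigh` is the workhorse here):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (G + G.conj().T) / 2                # a Hermitian matrix

w, U = np.linalg.eigh(A)                # A = U diag(w) U*
assert np.allclose(A, U @ np.diag(w) @ U.conj().T)
# The diagonal matrix diag(w) lies on the conjugation orbit of A, and is
# unique up to simultaneously permuting its rows and columns.
```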
Definition 4.1. A random Hermitian matrix $X$ is said to be unitarily invariant if its characteristic function $\Phi_X$ satisfies
$$\Phi_X(U A U^*) = \Phi_X(A)$$
for all $A \in \mathrm{H}(N)$ and all $U \in \mathrm{U}(N)$. That is, $X$ is unitarily invariant if its characteristic function $\Phi_X$ is constant on each $\mathrm{U}(N)$-orbit of $\mathrm{H}(N)$.
Unitary invariance is a basic symmetry with many ramifications. First of all, it leads to a massive reduction in complexity by allowing us to view the characteristic function of $X$ as a function of $N$ real variables rather than a function of $N^2$ real variables. Indeed, for any arbitrary Hermitian matrix $A$, the Spectral Theorem says that
$$A = U \Delta U^*$$
for some unitary matrix $U$ and some diagonal matrix $\Delta$. Unitary invariance of the characteristic function then gives
$$\Phi_X(A) = \Phi_X(U \Delta U^*) = \Phi_X(\Delta).$$
The diagonal Hermitian matrix $\Delta = \Delta(x)$ can be viewed as a vector $x \in \mathbb{R}^N$, so that the characteristic function of a unitarily invariant random Hermitian matrix $X$ can be viewed as a continuous function
$$\Phi_X \colon \mathbb{R}^N \to \mathbb{D}$$
on $N$-dimensional Euclidean space taking values in the closed unit disc $\mathbb{D} = \{ z \in \mathbb{C} : |z| \leq 1 \}$.
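Numerically, the reduction looks like this: for a unitarily invariant $X$, the estimated characteristic function at a full Hermitian argument $A$ agrees with its value at the diagonal matrix of eigenvalues of $A$. A sketch, with GUE as my choice of test ensemble and `phi_mc` as my name for the estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3

def gue(n):
    # GUE as a convenient unitarily invariant ensemble.
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

def phi_mc(A, samples=20000):
    # Monte Carlo estimate of Phi_X(A) = E exp(i <A, X>) = E exp(i Tr A X).
    return np.mean([np.exp(1j * np.trace(A @ gue(N)).real)
                    for _ in range(samples)])

A = gue(N)                              # a fixed Hermitian test argument
D = np.diag(np.linalg.eigvalsh(A))      # its diagonal form Delta
print(phi_mc(A), phi_mc(D))             # agree up to Monte Carlo error
```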
Problem 4.2. Prove that the characteristic function of a unitarily invariant random Hermitian matrix is a symmetric function on $\mathbb{R}^N$.
Now we discuss the probabilistic ramifications of unitary invariance, i.e. what Definition 4.1 implies about the distribution of $X$.
Theorem 4.1. The distribution of a unitarily invariant random Hermitian matrix $X$ is completely determined by the joint distribution of its diagonal matrix elements $X_{11}, \ldots, X_{NN}$, which are exchangeable real random variables.
Proof: For $\Delta(x)$ real diagonal, we have
$$\langle \Delta(x), X \rangle = \operatorname{Tr} \Delta(x) X = \sum_{j=1}^N x_j X_{jj}.$$
Consequently, we have
$$\Phi_X(\Delta(x)) = \mathbb{E}\, e^{i \sum_{j=1}^N x_j X_{jj}},$$
which is exactly the characteristic function of the real random vector $(X_{11}, \ldots, X_{NN})$. Thus, the characteristic function of $X$ is determined by its restriction to diagonal matrices (by unitary invariance), hence by the joint distribution of the diagonal matrix elements, and the characteristic function determines the distribution. The exchangeability of $X_{11}, \ldots, X_{NN}$ follows from Problem 4.2.
-QED
It is important to note that exchangeable random variables are identically distributed, but not necessarily independent.
Problem 4.3. Prove that the diagonal matrix elements of a unitarily invariant random Hermitian matrix $X$ are independent if and only if the distribution $P$ of $X$ is a Gaussian measure on $\mathrm{H}(N)$.
Now we characterize unitary invariance as a feature of the distribution of a random Hermitian matrix.
Theorem 4.2. A random Hermitian matrix $X$ is unitarily invariant if and only if $X$ and $U X U^*$ have the same distribution for every $U \in \mathrm{U}(N)$.
Proof: Note that for any random Hermitian matrix $X$ and any deterministic unitary matrix $U$, we have
$$\Phi_{U X U^*}(A) = \mathbb{E}\, e^{i \operatorname{Tr} A U X U^*} = \mathbb{E}\, e^{i \operatorname{Tr} (U^* A U) X} = \Phi_X(U^* A U)$$
by cyclic invariance of the trace. If $X$ is unitarily invariant, this becomes
$$\Phi_{U X U^*}(A) = \Phi_X(A),$$
which is equivalent to equidistribution of $X$ and $U X U^*$. Conversely, if $X$ and $U X U^*$ have the same distribution for every $U$, then their characteristic functions are equal, and the same computation shows that $\Phi_X$ is constant on $\mathrm{U}(N)$-orbits.
-QED
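Here is a quick numerical illustration of Theorem 4.2 in the same spirit as the earlier sketches (the fixed unitary $U$ below is an arbitrary choice, and GUE is again my test ensemble):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 3

def gue(n):
    # GUE: a standard unitarily invariant ensemble.
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

# Any fixed deterministic unitary will do for the test.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
A = gue(N)                              # a fixed test argument

def phi(sampler, samples=20000):
    # Empirical characteristic function at the argument A.
    return np.mean([np.exp(1j * np.trace(A @ sampler()).real)
                    for _ in range(samples)])

print(phi(lambda: gue(N)))                          # Phi_X(A)
print(phi(lambda: U @ gue(N) @ U.conj().T))         # Phi_{UXU*}(A), agrees
```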
Theorem 4.2 is quite important for the following reason. Define a random linear operator $T$ acting on $\mathbb{C}^N$ by
$$T v = X v, \qquad v \in \mathbb{C}^N.$$
The matrix of $T$ in the standard basis $e_1, \ldots, e_N$ of $\mathbb{C}^N$ is $X$. The matrix of $T$ in some other orthonormal basis $f_1, \ldots, f_N$ is the matrix $U^* X U$, where $U$ is the matrix of the linear transformation defined by $U e_j = f_j$. Unitary invariance says that $U^* X U$ has the same distribution as $X$ – the random matrix of the random operator $T$ has the same law for any choice of orthonormal basis in the Hilbert space $\mathbb{C}^N$. In this sense, unitarily invariant random Hermitian matrices are precisely those random matrices which correspond to canonically defined random Hermitian operators on finite-dimensional Hilbert space.
Lecture slides: https://share.goodnotes.com/s/Rhz5cq25WnLksLyCWa1ML5
Problem 3.1. True or false: the set of unistochastic matrices is a connected subset of the Birkhoff polytope.
Let $V$ be a Euclidean space and $\mathcal{B}(V)$ its Borel sigma algebra.
Definition 1.1. A random variable taking values in $V$ is a measurable function
$$X \colon \Omega \to V$$
from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into $(V, \mathcal{B}(V))$.
Every random variable $X$ leads a double life: it is both a function into $V$ and a measure on $V$.
Definition 1.2. The distribution of $X$ is the probability measure $P$ on Borel sets $B \in \mathcal{B}(V)$ defined by
$$P(B) = \mathbb{P}(X \in B).$$
Strictly speaking, it is not correct to conflate $X$ with its distribution $P$, since one could have a second random variable $Y$ which is distinct from $X$ but has the same distribution – when this happens we say that $X$ and $Y$ are "equal in distribution", or "equal in law." I heard the "double life" descriptor in a probability course taught by David Steinsaltz, and I think it is helpful provided one keeps the above caveat in mind. In fact I will go one step further and say that $X$ leads a triple life – it is also a complex-valued function on $V$.
Definition 1.3. The characteristic function of $X$ is the function $\Phi_X \colon V \to \mathbb{C}$ defined by the expectation
$$\Phi_X(v) = \mathbb{E}\, e^{i \langle v, X \rangle}.$$
That is, the characteristic function of a random variable is the Fourier transform of its distribution.
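As a one-dimensional illustration (my choice of example, not part of the text): the empirical characteristic function of a standard Gaussian sample matches the closed form $e^{-t^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=200000)             # a standard Gaussian sample

for t in (0.5, 1.0, 2.0):
    emp = np.mean(np.exp(1j * t * X))   # empirical E exp(itX)
    print(t, emp, np.exp(-t ** 2 / 2))  # closed form for the Gaussian
```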
Problem 1.1. Prove that the characteristic function of a random variable is a uniformly continuous function from $V$ into the closed unit disc in $\mathbb{C}$.
The characteristic function of $X$ faithfully encodes its distribution: if $Y$ is another random variable, then $X$ and $Y$ are equal in law if and only if they have the same characteristic function. This is a famous result in probability theory – a proof can be found in virtually any textbook on the subject. What makes characteristic functions so useful is that they transform probability into analysis, specifically harmonic analysis, which allows us to apply analytic methods to probabilistic questions. In particular, the famous Lévy Continuity Theorem characterizes convergence in distribution for sequences of random variables in terms of the corresponding sequence of characteristic functions. This provides an efficient approach to the classical limit theorems of probability theory, the Law of Large Numbers and the Central Limit Theorem. Again, you will find an exposition of this approach in any standard probability text.
Random Matrix Theory is one of the most active areas of research in contemporary probability theory. It is a beautiful blend of algebra and probability which enjoys striking connections with many other fields, including combinatorics, number theory, physics, statistics, and something called “data science.” Strangely, RMT makes no use of characteristic functions – I am not aware of a single textbook on random matrices which even defines the characteristic function of a random matrix, despite the fact that this is simply a special case of the above construction.
To be concrete, let us take $V = \mathrm{H}(N)$ to be the real vector space of $N \times N$ Hermitian matrices equipped with the scalar product
$$\langle A, B \rangle = \operatorname{Tr} AB,$$
where $\operatorname{Tr}$ denotes the standard matrix trace.
Problem 1.2. Prove that the above really does define a scalar product on the space of Hermitian matrices, and write down an orthonormal basis.
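For orientation, here is a numerical check of one natural candidate basis (this is a hint, not the requested proof; the particular basis below is my suggestion):

```python
import numpy as np

N = 3
basis = []
# Diagonal matrix units E_jj.
for j in range(N):
    E = np.zeros((N, N), dtype=complex)
    E[j, j] = 1
    basis.append(E)
# For j < k: symmetric and antisymmetric off-diagonal combinations.
for j in range(N):
    for k in range(j + 1, N):
        S = np.zeros((N, N), dtype=complex)
        S[j, k] = S[k, j] = 1 / np.sqrt(2)
        basis.append(S)
        T = np.zeros((N, N), dtype=complex)
        T[j, k], T[k, j] = 1j / np.sqrt(2), -1j / np.sqrt(2)
        basis.append(T)

# Gram matrix under <A, B> = Tr(AB): should be the N^2 x N^2 identity.
G = np.array([[np.trace(P @ Q).real for Q in basis] for P in basis])
print(np.allclose(G, np.eye(N * N)))    # True
```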
Everything we have said so far about random variables taking values in a general Euclidean space is perfectly valid in the special case where $V = \mathrm{H}(N)$. Thus, there is no definitional obstruction to analyzing a random variable $X$ taking values in $\mathrm{H}(N)$ via its characteristic function $\Phi_X$. But there is a conceptual issue: random matrices are random variables which lead not just a triple life, but a quadruple life. More precisely, a random Hermitian matrix $X$ has associated to it not just a measure $P$ and a function $\Phi_X$, but also an operator $T$ defined by matrix multiplication:
$$T v = X v, \qquad v \in \mathbb{C}^N.$$
The random operator $T$ is a geometric object, a random linear transformation of $\mathbb{C}^N$, and as such has geometric invariants such as eigenvalues and eigenvectors.
The statistical behavior of the geometric invariants of $T$ is encoded in the characteristic function $\Phi_X$ of its matrix $X$. However, the characteristic function "sees" $X$ as a random vector in the $N^2$-dimensional Euclidean space $\mathrm{H}(N)$, not as a random operator on the $N$-dimensional Hilbert space $\mathbb{C}^N$, and the question of how to extract geometric information from $\Phi_X$ does not have an obvious or easy answer. Because of this, researchers in random matrix theory long ago abandoned characteristic functions. The subject has developed its own specific tools which have been very successful but have also distanced it from other parts of probability theory.
In recent years the question of whether a Fourier-analytic approach to random matrices might be possible after all has begun to be reconsidered. I do not claim to have a satisfactory answer to this question, but we will explore it in this topics course and perhaps see the beginnings of an answer. To get the most out of this endeavor, it would be best to absorb the standard approach to random matrix theory at the same time. The course notes by Todd Kemp and Terence Tao are both excellent resources which are freely available online.