Math 289A: Lecture 13

Let (B_N)_{N=1}^\infty be a sequence of deterministic Hermitian matrices, and let b_{N1},\dots,b_{NN} be any enumeration of the eigenvalues of B_N \in\mathbb{H}_N. Consider the corresponding sequence (X_N)_{N=1}^\infty of random Hermitian matrices defined by X_N=U_NB_NU_N^*, where U_N is a random unitary matrix whose distribution in \mathbb{U}_N is Haar measure. We then have a corresponding triangular array of real random variables,

\begin{matrix} X_{11} & {} & {} \\ X_{21} & X_{22} & {} \\ X_{31} & X_{32} & X_{33} \\ \vdots & \vdots & \vdots \end{matrix},

whose Nth row

X_{N1}=X_N(1,1), \dots, X_{NN} = X_N(N,N)

consists of the diagonal elements of the random matrix X_N. In terms of our input data B_N, we have

X_{Nj} = \sum\limits_{k=1}^N |U_N(j,k)|^2 b_{Nk}.
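This identity is easy to check numerically. Below is a minimal sketch in NumPy (the helper name `haar_unitary` is mine): a Haar-distributed unitary is obtained from the QR decomposition of a complex Ginibre matrix with the standard phase correction, and the diagonal of X_N = U_N B_N U_N^* is compared against the weighted sums of eigenvalues.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary: QR of a complex Ginibre matrix,
    with the usual phase correction from the diagonal of R."""
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(0)
N = 8
b = np.linspace(-1.0, 1.0, N)   # eigenvalues b_{N1}, ..., b_{NN}
B = np.diag(b)                  # we may take B_N diagonal in its eigenbasis
U = haar_unitary(N, rng)
X = U @ B @ U.conj().T          # X_N = U_N B_N U_N^*

W = np.abs(U) ** 2              # W(j,k) = |U_N(j,k)|^2, a doubly stochastic matrix
print(np.allclose(np.diag(X).real, W @ b))         # X_{Nj} = sum_k |U_N(j,k)|^2 b_{Nk} -> True
print(np.isclose(np.trace(X).real, b.sum()))       # Tr X_N = Tr B_N -> True
```

Note that the weight matrix W is doubly stochastic, which is ultimately why the diagonal entries are exchangeable.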

This is superficially similar to the setup for the Central Limit Theorem, but it is different: X_{N1},\dots,X_{NN} are exchangeable but not independent (make sure you understand why). Thus our first objective is simply to understand the N \to \infty asymptotic distribution of X_{N1}, the (1,1)-matrix element of a uniformly random Hermitian matrix with spectrum B_N.

In this lecture I will use the angled brackets notation favored by physicists to denote expectation. I will also use the angled bracket with a subscript “c” for cumulants (the subscript could also stand for connected). So for example the variance of X_{N1} is

\langle X_{N1}^2\rangle_c = \langle X_{N1}^2\rangle - \langle X_{N1}\rangle\langle X_{N1}\rangle.

Problem 13.1. Prove that \langle X_{N1} \rangle = \frac{1}{N}\mathrm{Tr}\,B_N. Also show that for d > 1 we have \langle (X_{N1} + x)^d\rangle_c = \langle X_{N1}^d\rangle_c for any constant x (this translation invariance of the higher cumulants is a general property of cumulants).

Our Optimized Leading Cumulants Formula says that for any 1 \leq d \leq N we have

\langle X_{N1}^d \rangle_c = N^{2-d} \sum\limits_{\alpha,\beta \vdash d} \frac{p_\beta(b_{N1},\dots,b_{NN})}{N^{\ell(\alpha)+\ell(\beta)}}L_N(\alpha,\beta),

where

p_\beta(b_{N1},\dots,b_{NN}) = \prod\limits_{i=1}^{\ell(\beta)} \mathrm{Tr}(B_N^{\beta_i})

and

L_N(\alpha,\beta) = (-1)^{\ell(\alpha)+\ell(\beta)} \sum_{g=0}^\infty N^{-2g}\vec{H}_g(\alpha,\beta)

is \pm 1 times a convergent positive series.
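As a consistency check (my own verification, not part of the formula's derivation), the case d = 1 recovers the mean from Problem 13.1. The only partition of 1 is \alpha = \beta = (1), and since there are no transpositions available in S_1, the only monotone factorization is the trivial genus-zero one, so \vec{H}_0((1),(1)) = 1 and \vec{H}_g((1),(1)) = 0 for g \geq 1, whence L_N((1),(1)) = 1:

```latex
\langle X_{N1}\rangle
  = \langle X_{N1}\rangle_c
  = N^{2-1}\,\frac{p_{(1)}(b_{N1},\dots,b_{NN})}{N^{1+1}}\,L_N\big((1),(1)\big)
  = N \cdot \frac{\mathrm{Tr}\,B_N}{N^{2}} \cdot 1
  = \frac{1}{N}\,\mathrm{Tr}\,B_N .
```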

What is nice about the Optimized Leading Cumulants Formula is that it almost immediately suggests the possibility of Gaussian limiting behavior: because of the factor N^{2-d} in the formula, it looks like cumulants of degree d \geq 3 should vanish in the N \to \infty limit, which is the Gaussian signature. To actually establish this we need to determine how the rest of the formula behaves as N grows large.

This is not so difficult. We have the finite sum

\sum\limits_{\alpha,\beta \vdash d} \frac{p_\beta(b_{N1},\dots,b_{NN})}{N^{\ell(\alpha)+\ell(\beta)}}L_N(\alpha,\beta),

and the number of terms in the sum has no dependence on N. The quantity L_N(\alpha,\beta) is O(1) as N \to \infty (make sure you understand why). This leaves the fraction

\frac{p_\beta(b_{N1},\dots,b_{NN})}{N^{\ell(\alpha)+\ell(\beta)}}.

The denominator is literally just a power of N. As for the numerator, we have the bound

|p_\beta(b_{N1},\dots,b_{NN})| \leq N^{\ell(\beta)} \|B_N\|^d,

where \|B_N\| = \max |b_{Nj}| is the spectral radius of B_N. Thus, we have

\left| \frac{p_\beta(b_{N1},\dots,b_{NN})}{N^{\ell(\alpha)+\ell(\beta)}} \right| \leq \frac{\|B_N\|^d}{N^{\ell(\alpha)}} \leq \frac{\|B_N\|^d}{N}.
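The inequality on the numerator is easy to test numerically. A small sketch (the partition list is hardcoded for d = 4) checking |p_\beta| \leq N^{\ell(\beta)} \|B_N\|^d on random spectra:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 10, 4
partitions_of_4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]

for _ in range(100):
    b = rng.uniform(-5, 5, size=N)             # spectrum of a Hermitian B_N
    norm = np.max(np.abs(b))                   # ||B_N|| = spectral radius
    for beta in partitions_of_4:
        p_beta = np.prod([np.sum(b ** k) for k in beta])   # prod_i Tr(B_N^{beta_i})
        # |Tr B_N^k| <= N ||B_N||^k for each part, and the parts sum to d
        assert abs(p_beta) <= N ** len(beta) * norm ** d
print("bound holds")
```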

We therefore have the cumulant bound

\left| \langle X_{N1}^d\rangle_c\right| \leq t_d N^{1-d} \|B_N\|^d

where t_d>0 depends only on d.

Theorem 13.1. Suppose the input sequence B_N of Hermitian matrices satisfies \|B_N\| \leq M\sqrt{N} for some constant M and all N \in \mathbb{N}, and suppose that \gamma_2 = \lim_N N^{-2} \mathrm{Tr} \tilde{B}_N^2 exists, where \tilde{B}_N = B_N - (N^{-1}\mathrm{Tr} B_N)I_N is the traceless part of B_N. Then the centered random variable Y_N = X_{N1} - N^{-1}\mathrm{Tr} B_N converges in distribution to a centered Gaussian of variance \gamma_2.

Proof: Subtracting the scalar matrix (N^{-1}\mathrm{Tr} B_N)I_N from B_N replaces X_{N1} by Y_N, so we may run the preceding analysis with \tilde{B}_N in place of B_N; note that \|\tilde{B}_N\| \leq 2\|B_N\| \leq 2M\sqrt{N}, since |N^{-1}\mathrm{Tr} B_N| \leq \|B_N\|. By Problem 13.1, the first cumulant of Y_N is N^{-1}\mathrm{Tr}\,\tilde{B}_N = 0 for every N \in \mathbb{N}. Plugging the norm bound into the cumulant bound above gives \left|\langle Y_N^d \rangle_c\right| \leq t_d (2M)^d N^{1-d/2}, so all cumulants of degree d \geq 3 converge to zero. For d = 2, the terms of the Optimized Leading Cumulants Formula with \beta = (1,1) vanish because \mathrm{Tr}\,\tilde{B}_N = 0, and the only remaining term that is not o(1) is the (\alpha,\beta) = ((2),(2)) term, which equals N^{-2}\mathrm{Tr}\,\tilde{B}_N^2 \,(1 + O(N^{-2})) \to \gamma_2. Thus the cumulants of Y_N converge to those of a centered Gaussian Y of variance \gamma_2, and Y_N \to Y in distribution by the Moment Method.

QED
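The theorem can be watched in action with a Monte Carlo sketch, again using the Dirichlet description of the first row of |U_N|^2 under Haar measure (the particular spectrum below is my choice). Taking half the eigenvalues equal to +\sqrt{N} and half equal to -\sqrt{N} gives \mathrm{Tr}\, B_N = 0 and N^{-2}\mathrm{Tr}\, B_N^2 = 1, and the exact variance of X_{N1} works out to N/(N+1) \to 1, so Y_N = X_{N1} should look approximately standard Gaussian for large N:

```python
import numpy as np

rng = np.random.default_rng(3)
N, samples = 400, 20_000
# half the spectrum at +sqrt(N), half at -sqrt(N): ||B_N|| = sqrt(N), Tr B_N = 0
b = np.sqrt(N) * np.concatenate([np.ones(N // 2), -np.ones(N // 2)])

w = rng.dirichlet(np.ones(N), size=samples)   # first row of |U_N|^2 under Haar measure
y = w @ b                                      # Monte Carlo samples of Y_N = X_{N1}

mean, var = y.mean(), y.var()
excess_kurtosis = (y ** 4).mean() / var ** 2 - 3.0   # vanishes for a Gaussian
print(mean, var, excess_kurtosis)   # approximately 0, 1, 0
```

The near-zero excess kurtosis is the numerical shadow of the vanishing fourth cumulant, exactly as the cumulant bound predicts.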
