Math 202B: Lecture 7

*** Problems in this lecture due Feb. 1 ***

Let \mathcal{A} be an algebra. In Lecture 6, we introduced the notion of a Frobenius scalar product on \mathcal{A}. This is by definition a scalar product on \mathcal{A} which is compatible with its algebra structure in the sense that

\langle B,A^*C\rangle =\langle AB,C\rangle = \langle A,CB^*\rangle

holds for all A,B,C \in \mathcal{A}. The first equality above is called the left Frobenius identity, and the second is called the right Frobenius identity. By scaling, we may assume that the multiplicative unit I \in \mathcal{A} is a unit vector with respect to the corresponding norm, and we build this normalization condition into the definition of a Frobenius scalar product.

The upshot of Lecture 6 is that the existence of a Frobenius scalar product on \mathcal{A} is equivalent to the existence of a special kind of linear functional on \mathcal{A}, namely a faithful tracial state.

Theorem 7.1. An algebra \mathcal{A} admits a Frobenius scalar product if and only if it admits a faithful tracial state.

Proof: In Lecture 6, we showed that if \tau is a faithful tracial state on \mathcal{A} then

\langle A,B \rangle_\tau = \tau(A^*B)

defines a Frobenius scalar product on \mathcal{A}. Conversely, suppose we have a Frobenius scalar product on \mathcal{A} and define a corresponding linear functional by

\tau(A) = \langle I,A \rangle.

Applying the left Frobenius identity, we have

\tau(A^*A) = \langle I,A^*A\rangle = \langle A,A \rangle \geq 0,

with equality if and only if A = 0_\mathcal{A}. Together with the normalization \tau(I) = \langle I,I \rangle = 1, this shows that \tau is a faithful state on \mathcal{A}. Furthermore, the left Frobenius identity gives

\tau(AB) = \langle I,AB\rangle = \langle A^*I,B \rangle=\langle A^*,B\rangle

and the right Frobenius identity gives

\tau(BA) = \langle I,BA \rangle = \langle IA^*,B\rangle =\langle A^*,B\rangle,

which shows that \tau is a trace. \square
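The correspondence in Theorem 7.1 can be sanity-checked numerically in the concrete case \mathcal{A} = \mathrm{End}(\mathbb{C}^n): taking the normalized Hilbert-Schmidt scalar product (which we will meet later in this lecture) as the Frobenius scalar product, the functional \tau(A) = \langle I,A\rangle is precisely the normalized trace. A quick check in Python with NumPy (an illustration, not part of the formal development):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Frobenius scalar product on End(V): <A,B> = tr(A* B) / n (normalized Hilbert-Schmidt)
def frob(A, B):
    return np.trace(A.conj().T @ B) / n

# tau(A) = <I, A> recovers the normalized trace
tau = lambda A: frob(np.eye(n), A)
assert np.isclose(tau(A), np.trace(A) / n)

# tracial: tau(AB) = tau(BA)
assert np.isclose(tau(A @ B), tau(B @ A))

# faithful and positive: tau(A* A) = <A,A> >= 0, zero only for A = 0
assert tau(A.conj().T @ A).real > 0
```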

Note that the above argument shows that the existence of a scalar product on \mathcal{A} satisfying only the left Frobenius identity is equivalent to the existence of a faithful, but not necessarily tracial, state on \mathcal{A}.

Definition 7.2. A von Neumann algebra is an algebra \mathcal{A} equipped with a Frobenius scalar product. Equivalently, a von Neumann algebra is an algebra \mathcal{A} equipped with a faithful tracial state.

In Lecture 6, we classified states on the function algebra \mathcal{F}(X) of a finite set X, showing that they are in bijection with probability measures on X. Under this bijection, faithful states correspond to probability measures whose support is all of X. The tracial condition is automatic because \mathcal{F}(X) is commutative.

Problem 7.1. Show that the normalized L^2-scalar product

\langle A,B \rangle = \frac{1}{|X|}\sum\limits_{x \in X} \overline{A(x)}B(x)

is a Frobenius scalar product on \mathcal{F}(X). Which probability measure on X does it correspond to?
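As a warm-up for this problem (a numerical sanity check only, not a solution), one can verify the two Frobenius identities for the normalized L^2-scalar product on randomly generated functions, modeling \mathcal{F}(X) for |X| = n as complex arrays of length n with pointwise multiplication:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A, B, C = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))

# normalized L^2 product on F(X); functions multiply pointwise, A* = conj(A)
ip = lambda A, B: np.vdot(A, B) / n

# left and right Frobenius identities
assert np.isclose(ip(B, np.conj(A) * C), ip(A * B, C))
assert np.isclose(ip(A, C * np.conj(B)), ip(A * B, C))
# the unit (the constant function 1) is a unit vector
assert np.isclose(ip(np.ones(n), np.ones(n)), 1.0)
```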

As we have stressed from the beginning of Math 202B, \mathcal{F}(X) is the fundamental example of a commutative algebra. The fundamental example of a noncommutative algebra is \mathcal{E}(X) = \mathrm{End}\,\mathcal{F}(X), the algebra of linear operators on the Hilbert space \mathcal{F}(X)=L^2(X). In this lecture, we will classify states, faithful states, and faithful tracial states on \mathcal{E}(X).

For notational purposes, it is convenient to view \mathcal{F}(X) as a Hilbert space V containing the finite set X as an orthonormal basis – this is the algebraist’s notation, where we identify the elementary function E_x with the point x itself, so that the decomposition

A = \sum\limits_{x \in X} A(x)E_x

of a function A on X is identified with a formal linear combination

A = \sum\limits_{x \in X} A(x)\, x

of the points of X. Then, \mathcal{E}(X) = \mathrm{End}(V) is the vector space of all linear operators on the Hilbert space V. We are now considering not just the vector space structure of \mathrm{End}(V)=\mathcal{E}(X), but its algebra structure, where multiplication is composition and conjugation is adjoint.

Let us briefly review the basic aspects of \mathrm{End}(V) familiar from Math 202A, where we analyzed its vector space structure. In particular, a basis \{E_{yx} \colon x,y \in X\} of \mathrm{End}(V) is given by the elementary operators defined by

E_{yx}v = y \langle x,v\rangle, \quad v \in V,

and the expansion of any A \in \mathrm{End}(V) in the elementary basis is

A=\sum\limits_{x,y \in X} \langle y,Ax \rangle E_{yx},

where the scalar product is that in the underlying Hilbert space V. This is nothing more or less than saying that the matrix of the elementary operator E_{yx} with respect to the orthonormal basis X \subset V is the elementary matrix with a single 1 in row y and column x and all other entries equal to 0, and that every matrix can be written as a linear combination of elementary matrices. The advantage of doing things our way is that we don’t need to choose an ordering of the basis X and keep track of indices.

For the purposes of Math 202B, we also want to know how the elementary operators behave with respect to conjugation and multiplication.

Proposition 7.3. We have

E_{yx}^*=E_{xy} \quad\text{and}\quad E_{zy}E_{xw} = \langle y,x\rangle E_{zw}.

Proof: Compare two calculations: first

\langle w,E_{yx}v\rangle = \langle w,y \langle x,v\rangle\rangle= \langle x,v \rangle \langle w,y\rangle,

and second

\langle E_{xy}w,v \rangle = \langle x \langle y,w\rangle,v\rangle=\langle x,v \rangle \overline{\langle y,w\rangle}=\langle x,v \rangle \langle w,y\rangle.

The fact that these two computations produce the same result proves that E_{yx}^*=E_{xy}. For the multiplication rule, we have

E_{zy}E_{xw}v=E_{zy}x\langle w,v \rangle = z\langle y,x\rangle\langle w,v\rangle,

and also

\langle y,x\rangle E_{zw}v=\langle y,x\rangle z \langle w,v\rangle,

which coincide. \square
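In matrix form, Proposition 7.3 is easy to check directly: E_{yx} is the matrix \mathrm{outer}(e_y,e_x). A small numerical confirmation (a Python/NumPy sketch, with X modeled as indices):

```python
import numpy as np

n = 3          # model X as {0, 1, 2}; the basis vectors are the rows of the identity
e = np.eye(n)

def E(y, x):
    # elementary operator E_{yx} v = y <x, v>: a single 1 in row y, column x
    return np.outer(e[y], e[x])

# conjugation rule: E_{yx}* = E_{xy}
assert np.allclose(E(1, 0).conj().T, E(0, 1))
# multiplication rule: E_{zy} E_{xw} = <y,x> E_{zw}
assert np.allclose(E(2, 1) @ E(1, 0), E(2, 0))           # <y,x> = 1 when y = x
assert np.allclose(E(2, 1) @ E(0, 2), np.zeros((n, n)))  # <y,x> = 0 when y != x
```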

From Proposition 7.3, we get that \{E_{xx} \colon x \in X\} is a set of orthogonal selfadjoint idempotents which span the space of operators acting diagonally on the basis X. So, we have associated to every finite set X three algebras,

\mathcal{F}(X) = \mathrm{Span}\{E_x \colon x \in X\},

\mathcal{E}(X)=\mathrm{Span}\{E_{yx} \colon x,y \in X\},

\mathcal{D}(X) = \mathrm{Span}\{E_{xx} \colon x \in X\},

related as follows.

Problem 7.2. Prove that \mathcal{D}(X) is isomorphic to \mathcal{F}(X), and that \mathcal{D}(X) is a maximal abelian subalgebra of \mathcal{E}(X) = \mathrm{End}\, \mathcal{F}(X).

You may wish to ponder the above, rewrite it in various ways, think of matrices versus operators, etc. At some point I want to be able to make statements like “consider the maximal abelian subalgebra of the symmetric group algebra consisting of all operators acting diagonally in the Young basis,” and I want you to have the muscles required to lift this heavy statement off the board and drop it in your head.

Coming back to Frobenius scalar products, in Math 202A we put a scalar product on \mathrm{End}(V)=\mathcal{E}(X) by declaring the elementary basis to be orthonormal.

Definition 7.4. The Hilbert-Schmidt scalar product \langle \cdot,\cdot\rangle_{HS} on \mathrm{End}(V) is the scalar product in which \{E_{yx} \colon x,y \in X\} is an orthonormal basis,

\langle E_{zy},E_{xw} \rangle_{HS} = \langle z,x\rangle \langle y,w\rangle.

As we showed in Math 202A, the above definition leads easily to the following formula for the Hilbert-Schmidt scalar product of any two operators in terms of the scalar product in the underlying Hilbert space:

\langle A,B \rangle_{HS}= \sum\limits_{x \in X} \langle Ax,Bx\rangle.

We used this scalar product for various linear algebraic purposes. Now we want to show that, up to a minor detail, the Hilbert-Schmidt scalar product on \mathrm{End}(V) is a Frobenius scalar product, and in fact it is the only Frobenius scalar product on the full operator algebra \mathrm{End}(V). The minor detail is that

\langle I,I \rangle_{HS} = \sum\limits_{x \in X} \langle E_{xx},E_{xx}\rangle_{HS}=\dim V,

so that the identity operator I \in \mathrm{End}(V) is not a unit vector in the Hilbert-Schmidt norm \|A\|_{HS}=\sqrt{\langle A,A\rangle_{HS}}. Therefore, we will normalize and define

\langle A,B\rangle_F := \frac{1}{\dim V}\langle A,B\rangle_{HS}, \quad A,B \in \mathrm{End}(V).
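As a concrete check (a Python/NumPy illustration, with the orthonormal basis X modeled as the standard basis vectors, so that Ax is the x-th column of A): the sum formula for \langle \cdot,\cdot\rangle_{HS} agrees with \mathrm{tr}(A^*B), and the normalization makes I a unit vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# <A,B>_HS = sum over the orthonormal basis of <Ax, Bx>, which equals tr(A* B)
hs = sum(np.vdot(A[:, x], B[:, x]) for x in range(n))
assert np.isclose(hs, np.trace(A.conj().T @ B))

# ||I||_HS^2 = dim V, so the normalized product makes I a unit vector
frob = lambda A, B: np.trace(A.conj().T @ B) / n
assert np.isclose(frob(np.eye(n), np.eye(n)), 1.0)
```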

We will prove the following on Monday.

Theorem 7.5. The normalized Hilbert-Schmidt scalar product \langle \cdot,\cdot \rangle_F is the unique Frobenius scalar product on \mathrm{End}(V).

Math 202B: Lecture 6

The main objects of study in Math 202B are finite-dimensional algebras. Unlike the scalar product on a Hilbert space, which was the primary focus of Math 202A, the product in an algebra is vector-valued rather than scalar-valued. The question arises as to whether we can unify the two by introducing a scalar product on a given algebra \mathcal{A}, so that it is also a Hilbert space.

Certainly, the answer is yes: since \mathcal{A} is in particular a finite-dimensional vector space, we may simply choose a vector space basis of \mathcal{A} and equip \mathcal{A} with the scalar product in which this basis is orthonormal. However, this scalar product has nothing to do with the algebra structure on \mathcal{A}. We would prefer a Hilbert space structure on \mathcal{A} which interfaces meaningfully with the algebra structure.

For example, we might want to find a scalar product on \mathcal{A} such that the multiplicative identity I \in \mathcal{A} is a unit vector in the corresponding norm. This is easy: choose a basis of \mathcal{A} which contains I and apply the above construction. But our notion of a scalar product on \mathcal{A} which is compatible with the algebra structure will be much more demanding than this.

Definition 6.1. A Frobenius scalar product on \mathcal{A} is a scalar product which satisfies

\langle B,A^*C\rangle =\langle AB,C \rangle = \langle A,CB^*\rangle.

Definition 6.1 describes a scalar product on \mathcal{A} which satisfies two identities, called the left Frobenius identity and the right Frobenius identity. If \mathcal{A} is the endomorphism algebra of a finite-dimensional Hilbert space, we know that such a scalar product exists from Math 202A, where we constructed the Frobenius scalar product on \mathrm{End}(V) using the scalar product on the underlying Hilbert space V. The question we address now is whether such a scalar product can be obtained more generally, when \mathcal{A} is not necessarily the endomorphism algebra of a Hilbert space.

To explore this question, our first step is to choose a linear functional on \mathcal{A} rather than a linear basis in \mathcal{A}. Indeed, associated to every linear functional

\tau \colon \mathcal{A} \longrightarrow \mathbb{C}

is a sesquilinear form

\langle \cdot,\cdot \rangle_\tau \colon \mathcal{A} \times \mathcal{A} \longrightarrow \mathbb{C}

defined by

\langle A,B \rangle_\tau = \tau(A^*B).

Here is the computation verifying sesquilinearity. First,

\langle \alpha_1A_1+\alpha_2A_2,\beta_1B_1+\beta_2B_2\rangle_\tau = \tau((\alpha_1A_1+\alpha_2A_2)^*(\beta_1B_1+\beta_2B_2))=\tau(\overline{\alpha}_1\beta_1A_1^*B_1+\overline{\alpha}_1\beta_2A_1^*B_2+\overline{\alpha}_2\beta_1A_2^*B_1+\overline{\alpha}_2\beta_2A_2^*B_2),

which uses both antilinearity of conjugation and bilinearity of multiplication in \mathcal{A}. Second, linearity of \tau gives

\tau(\overline{\alpha}_1\beta_1A_1^*B_1+\overline{\alpha}_1\beta_2A_1^*B_2+\overline{\alpha}_2\beta_1A_2^*B_1+\overline{\alpha}_2\beta_2A_2^*B_2)=\overline{\alpha}_1\beta_1\tau(A_1^*B_1)+\overline{\alpha}_1\beta_2\tau(A_1^*B_2)+\overline{\alpha}_2\beta_1\tau(A_2^*B_1)+\overline{\alpha}_2\beta_2\tau(A_2^*B_2).

Third, remembering the definition of \langle \cdot,\cdot \rangle_\tau gives

\langle \alpha_1A_1+\alpha_2A_2,\beta_1B_1+\beta_2B_2\rangle_\tau =\overline{\alpha}_1\beta_1\langle A_1,B_1\rangle_\tau+\overline{\alpha}_1\beta_2\langle A_1,B_2\rangle_\tau+\overline{\alpha}_2\beta_1\langle A_2,B_1\rangle_\tau+\overline{\alpha}_2\beta_2\langle A_2,B_2\rangle_\tau,

which is sesquilinearity.

Since a scalar product is a Hermitian sesquilinear form, we want it to be the case that

\langle A,B \rangle_\tau=\tau(A^*B)

coincides with

\overline{\langle B,A \rangle_\tau}=\overline{\tau(B^*A)}.

Since conjugation is antimultiplicative, we have

\overline{\tau(B^*A)}=\overline{\tau((A^*B)^*)},

and we see that the property we really need from \tau is

\tau(A^*)=\overline{\tau(A)}.

So a linear functional \tau which yields a Hermitian form on \mathcal{A} via the recipe \langle A,B \rangle_\tau=\tau(A^*B) must have this special homomorphism-like feature. There is no guarantee that such a functional exists.

Problem 6.1. Show that \tau(A^*)=\overline{\tau(A)} if and only if \tau(X) \in \mathbb{R} for selfadjoint X.

We have now shown how to construct a Hermitian form \langle \cdot,\cdot \rangle_\tau on \mathcal{A} using a linear functional \tau on \mathcal{A} which has the extra feature \tau(A^*)=\overline{\tau(A)}. We also want this form to be nonnegative, meaning that

\langle A,A \rangle_\tau=\tau(A^*A)

is a nonnegative real number. This is in fact a stronger assumption than \tau(A^*)=\overline{\tau(A)}, as Evan pointed out in lecture.

Problem 6.2. Show that \tau(A^*A) \geq 0 for all A \in \mathcal{A} implies \tau(A^*)=\overline{\tau(A)} for all A \in \mathcal{A}.

Linear functionals on an algebra which are normalized and nonnegative have a special name.

Definition 6.2. A linear functional \tau \colon \mathcal{A} \to \mathbb{C} is called a state if it satisfies \tau(I_\mathcal{A})=1 and \tau(A^*A) \geq 0 for all A \in \mathcal{A}. If moreover \tau(A^*A) =0 implies A=0_\mathcal{A}, then \tau is called a faithful state.

Problem 6.3. Finish the proof that if \tau is a faithful state on \mathcal{A} then \langle \cdot,\cdot \rangle_\tau is a scalar product on \mathcal{A}, and I_\mathcal{A} is a unit vector in the corresponding norm.

Now comes the question of whether the scalar product \langle \cdot,\cdot \rangle_\tau on \mathcal{A} induced by a faithful state \tau \colon \mathcal{A} \to \mathbb{C} is a Frobenius scalar product, as per Definition 6.1. Let us see: we have

\langle AB,C \rangle_\tau = \tau((AB)^*C)=\tau(B^*(A^*C))=\langle B,A^*C\rangle_\tau,

so we get the left Frobenius identity for free. For the right Frobenius identity, we need it to be the case that

\tau(B^*(A^*C))=\tau((A^*C)B^*),

so we require yet more from \tau.

Definition 6.3. A linear functional \tau \colon \mathcal{A} \to \mathbb{C} is called a trace if it satisfies \tau(AB)=\tau(BA) for all A,B \in \mathcal{A}.

Of course, if \mathcal{A} is a commutative algebra then every linear functional is a trace. If not, there is no reason why a trace need exist.

Definition 6.4. A von Neumann algebra is a pair (\mathcal{A},\tau) consisting of an algebra \mathcal{A} together with a faithful tracial state \tau.
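For the model noncommutative example \mathcal{A} = \mathrm{End}(\mathbb{C}^n), the normalized trace \tau(A)=\mathrm{tr}(A)/n is a faithful tracial state, while a generic linear functional, such as evaluation of a single matrix entry, fails to be tracial. A quick numerical check (a Python/NumPy sketch, not part of the formal development):

```python
import numpy as np

n = 2
tau = lambda M: np.trace(M) / n          # normalized trace on End(C^n)
assert np.isclose(tau(np.eye(n)), 1.0)   # normalized: tau(I) = 1

# elementary matrices E_{12}, E_{21}
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
assert np.isclose(tau(A @ B), tau(B @ A))   # tracial, even though AB != BA

# the entry functional M -> M[0,0] is linear but not a trace
phi = lambda M: M[0, 0]
assert phi(A @ B) != phi(B @ A)             # 1 vs 0
```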

Recall that in Math 202B all algebras are assumed finite-dimensional unless stated otherwise; the same convention applies to von Neumann algebras. Thus, while infinite-dimensional von Neumann algebras are very interesting objects which have been and continue to be much-studied, they are not on our menu.

We have one example of a von Neumann algebra from Math 202A: the algebra \mathcal{E}(X)=\mathrm{End}\, \mathcal{F}(X) of linear operators on the function algebra of a finite set X (or equivalently, the endomorphism algebra of any finite-dimensional Hilbert space V, since V contains an orthonormal basis X). In Math 202B, we will soon see a whole new class of von Neumann algebras, namely convolution algebras of finite groups. Abstractly, we can characterize von Neumann algebras as follows: \mathcal{A} is a von Neumann algebra (i.e. admits a faithful tracial state) if and only if it is isomorphic to a subalgebra of \mathrm{End}(V) for some Hilbert space V. We will prove this next lecture, and this characterization will motivate our quest to classify the subalgebras of \mathrm{End}(V).

To end this lecture, let us consider the existence question for faithful states for our tamest example algebra, namely the function algebra \mathcal{F}(X) of a finite set X. As we have seen, this algebra is very easy to analyze and we can classify faithful states on \mathcal{F}(X) without much difficulty.

Problem 6.4. Show that states on \mathcal{F}(X) are in bijection with probability measures on X. (Hint: think about expected value).

For an abstract, possibly noncommutative algebra \mathcal{A} we cannot make any concrete statements about the existence of states and traces without assuming that \mathcal{A} has additional attributes. However, assuming such functionals exist we can make an important statement about the region of the space of linear functionals on \mathcal{A} which they occupy.

Problem 6.5. Let \mathcal{A} be an algebra such that the sets

\{\text{states on }\mathcal{A}\} \supseteq \{\text{faithful states on }\mathcal{A}\} \supseteq \{\text{faithful tracial states on }\mathcal{A}\}

are nonempty. Show that they are convex subsets of the linear dual of \mathcal{A}.

Assuming the set of states on \mathcal{A} is nonempty, it is a convex set whose extreme points are called pure states.

Problem 6.6. Classify the pure states on \mathcal{F}(X), and show that they are precisely the algebra homomorphisms \mathcal{F}(X) \to \mathbb{C}. (Hint: this will help you to understand the general principle that if expectation is a multiplicative functional, then the underlying distribution must be a delta measure).

Math 202B: Lecture 5

***All problems in this lecture are due 01/20 at 23:59***

***No lecture on 01/16***

By now you should have a reasonably good feeling for subalgebras. Unfortunately, this is only half the battle. Let \mathcal{A} and \mathcal{B} be algebras, and let \Phi \colon \mathcal{A} \to \mathcal{B} be an algebra homomorphism. Since \mathcal{A} and \mathcal{B} are vector spaces, and since \Phi is a linear transformation, \mathrm{Ker}(\Phi) is a subspace of \mathcal{A} and \mathrm{Im}(\Phi) is a subspace of \mathcal{B}. Since \Phi is not just a linear map but an algebra homomorphism, one hopes that its kernel and image will have additional structure.

Problem 5.1. Prove that \mathrm{Im}(\Phi) is a subalgebra of \mathcal{B}.

It is not necessarily true that \mathrm{Ker}(\Phi) is a subalgebra of \mathcal{A}, because it is not necessarily true that \Phi(I_\mathcal{A})=0_\mathcal{B}. However, the other subalgebra properties, namely closure under conjugation and closure under multiplication, do hold for \mathrm{Ker}(\Phi). In fact, \mathrm{Ker}(\Phi) is not just closed under multiplication – it has the stronger property that AK and KA lie in \mathrm{Ker}(\Phi) for any A \in \mathcal{A} and any K \in \mathrm{Ker}(\Phi). This observation leads to the consideration of subspaces of \mathcal{A} which have this absorption property.

Definition 5.1. A subspace \mathcal{J} of \mathcal{A} is said to be an ideal if it is closed under conjugation and absorbs multiplication, in the sense that AJ and JA lie in \mathcal{J} for any J \in \mathcal{J} and all A \in \mathcal{A}.

Subalgebras of \mathcal{A} are smaller algebras embedded in \mathcal{A}. They inherit the algebra structure of \mathcal{A} under restriction. Ideals on the other hand are special subspaces of \mathcal{A} which can be used to produce smaller algebras by collapsing rather than restricting. More precisely, since an ideal \mathcal{J} in \mathcal{A} is a subspace we can form the quotient vector space \mathcal{A}/\mathcal{J} whose points are the translates of \mathcal{J},

[A]=\{A+J \colon J \in \mathcal{J}\}.

The zero vector in the quotient space is [0_\mathcal{A}]=\mathcal{J}, and linear combinations are defined by

\alpha_1[A_1]+\alpha_2[A_2]=[\alpha_1 A_1 + \alpha_2 A_2].

The linear transformation

\Pi \colon \mathcal{A} \longrightarrow \mathcal{A}/\mathcal{J}

defined by \Pi(A) = [A] has kernel \mathcal{J}, so by the rank-nullity theorem

\dim \mathcal{A}/\mathcal{J}=\dim \mathcal{A}-\dim \mathcal{J}.

The fact that \mathcal{J} is not just a subspace but an ideal allows us to go further and put an algebra structure on \mathcal{A}/\mathcal{J} by defining conjugation as [A]^*=[A^*] and multiplication as [A][B]=[AB].

Problem 5.2. Check that \mathcal{A}/\mathcal{J} really is an algebra.

We conclude that in our quest to understand the structure of a given algebra \mathcal{A}, we have to understand not just its subalgebras but also its ideals. Thankfully, in many ways ideals are simpler and easier to understand than subalgebras. For example, recall that we have decided to view the set of all subalgebras of \mathcal{A} as a poset under inclusion, making it an induced subposet of the lattice of subspaces of \mathcal{A}, but not an induced sublattice: the max of two subalgebras of \mathcal{A} is not just the span of their union, but the algebra generated by their union. Concerning the poset of ideals of \mathcal{A}, this is an induced sublattice of the lattice of all subspaces without needing any additional constructions.

Problem 5.3. Given two ideals \mathcal{J},\mathcal{K} in \mathcal{A}, show that

\mathrm{Span}(\mathcal{J} \cup \mathcal{K})= \mathcal{J}+\mathcal{K}=\{J+K \colon J \in \mathcal{J},\ K \in \mathcal{K}\}.

As we saw in Lecture 4, we have a correspondence between partitions of X and subalgebras of \mathcal{F}(X). The combinatorial objects which parameterize ideals of \mathcal{F}(X) are simpler — they are just subsets of X. Given a subset S \subseteq X, we define a corresponding ideal in \mathcal{F}(X) by

\mathcal{I}(S) = \{A \in \mathcal{F}(X) \colon A(x)=0 \text{ for all }x \in S\}.

Thus \mathcal{I}(S) is the set of all functions in \mathcal{F}(X) which vanish on every point of S. Hence, we call \mathcal{I}(S) the vanishing ideal of S.
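In coordinates, the vanishing ideal is easy to model: represent A \in \mathcal{F}(X) as the array of its values, so that membership in \mathcal{I}(S) means vanishing at the indices in S, and the absorption property becomes visible under pointwise multiplication. A small Python sketch (the set X and the values are made up for illustration):

```python
import numpy as np

# model X = {0,...,4} and S = {1, 3}; functions are arrays of values
S = {1, 3}

def in_vanishing_ideal(A, S):
    # A lies in I(S) iff A(x) = 0 for all x in S
    return all(A[x] == 0 for x in S)

J = np.array([3.0, 0.0, -1.0, 0.0, 2.0])   # vanishes on S
A = np.array([2.0, 5.0, 7.0, 1.0, -4.0])   # arbitrary
assert in_vanishing_ideal(J, S)
assert in_vanishing_ideal(A * J, S)        # absorption: AJ still vanishes on S
assert in_vanishing_ideal(np.conj(J), S)   # closed under conjugation
assert not in_vanishing_ideal(A, S)
```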

Problem 5.4. Prove that \mathcal{I}(S) really is an ideal in \mathcal{F}(X). Moreover, show that for subsets S \leq T of X we have \mathcal{I}(T) \leq \mathcal{I}(S).

A nice feature of vanishing ideals in \mathcal{F}(X) is that they are coordinate spaces relative to the elementary basis \{E_x \colon x \in X\}.

Problem 5.5. Show that the elementary functions \{E_x \colon x \in X\backslash S\} form a basis of \mathcal{I}(S). Moreover, show that the quotient algebra \mathcal{F}(X)/\mathcal{I}(S) is isomorphic to the function algebra \mathcal{F}(S).

We can now view S \mapsto \mathcal{I}(S) as an order-preserving function from the lattice of subsets of X to the lattice of ideals of \mathcal{F}(X).

Problem 5.6. Show that the mapping S \mapsto \mathcal{I}(S) is injective.

With the above problems solved, we will have an isomorphism between the lattice of subsets of X and the lattice of ideals of \mathcal{F}(X) once we show surjectivity.

Theorem 5.2. The mapping S \mapsto \mathcal{I}(S) is surjective.

Proof: Given an arbitrary ideal \mathcal{J} of \mathcal{F}(X), we need to construct a corresponding point set S \subseteq X such that \mathcal{I}(S) = \mathcal{J}. A natural candidate is the variety defined by \mathcal{J}, which by definition is the set

V(\mathcal{J}) = \{x \in X \colon A(x)=0 \text{ for all }A \in \mathcal{J}\}

of all points in X at which every function A \in \mathcal{J} vanishes. By construction, we have \mathcal{J} \leq \mathcal{I}(V(\mathcal{J})), and we need to show that the reverse inclusion also holds.

Let S = V(\mathcal{J}). In order to show \mathcal{I}(S) \leq \mathcal{J}, it is sufficient to show that E_x \in \mathcal{J} for all x \in X\backslash S. This is because we know that the set \{E_x \colon x \in X \backslash S\} is a basis of \mathcal{I}(S). So, fix a point x \in X \backslash S. Since S is the variety cut out by \mathcal{J}, there must exist a function F_x \in \mathcal{J} which does not vanish at x, else x would be a point of S. Furthermore, by scaling we can assume without loss of generality that F_x(x)=1. We do not have any information about the values of the function F_x at any other points. However, multiplying F_x by the elementary function E_x produces something very simple, namely E_xF_x=E_x. Since F_x \in \mathcal{J} and \mathcal{J} is an ideal, we have E_x \in \mathcal{J}. \square
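The key step in the proof, that multiplying a function satisfying F_x(x)=1 by the elementary function E_x kills all other values, can be seen at a glance numerically (a Python sketch with made-up values):

```python
import numpy as np

# X = {0,...,4}, x = 2; F_x is any function with F_x(x) = 1
x = 2
E_x = np.zeros(5); E_x[x] = 1.0
F_x = np.array([0.3, -2.0, 1.0, 4.5, 0.0])   # arbitrary elsewhere, F_x(2) = 1

# pointwise product: E_x F_x = E_x
assert np.allclose(E_x * F_x, E_x)
```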

The above proof showed that for any ideal \mathcal{J} in \mathcal{F}(X), we have

\mathcal{I}(V(\mathcal{J}))=\mathcal{J}.

This is a very simple finite set version of Hilbert’s Nullstellensatz, a basic theorem of algebraic geometry on vanishing sets of polynomial ideals which will be discussed in Math 202C.

Math 202B: Lecture 4

***All problems assigned in this lecture are due January 20th at 23:59***

Let us begin this lecture with a physical way of thinking about the function algebra \mathcal{F}(X) of a finite set X. Imagine that X is the set of all possible outcomes of an experiment, for example a chemical reaction whose yield depends on environmental factors, the proportions in which the constituents are mixed, and so forth. After the experiment has been performed, we want to determine what substance has been produced. From this point of view, \mathcal{F}(X) represents the collection of all numerical measurements or “observables” of the yield that can theoretically be computed. For example, T \in \mathcal{F}(X) assigns to each x \in X its temperature T(x), H \in \mathcal{F}(X) assigns to each x \in X its specific heat H(x), etc.

In practice (as opposed to in theory), our experiment is performed in a laboratory with limited resources. For example, we may have access to a thermometer but not to a Bunsen burner, so we can compute T(x) but not H(x). The set of all measurements which we can perform in our laboratory is then a proper subalgebra \mathcal{A} of \mathcal{F}(X) which contains T but not H. A natural question now is: given a proper subalgebra \mathcal{A} <\mathcal{F}(X) of accessible observables, can we compute enough measurements to distinguish between any two outcomes? In operational terms, \mathcal{A} separates the points of X if our laboratory is advanced enough that we can perform at least one measurement which distinguishes any two distinct outcomes.

Definition 4.1. We say that \mathcal{A} separates the points of X if, for any distinct x,y \in X, there is A \in \mathcal{A} such that A(x) \neq A(y).

Unfortunately, as soon as our laboratory is limited in any way, operationally indistinguishable outcomes exist.

Theorem 4.2. A subalgebra \mathcal{A} \leq \mathcal{F}(X) separates the points of X if and only if \mathcal{A}=\mathcal{F}(X).

Proof: One direction is clear: if \mathcal{A}=\mathcal{F}(X) then we have access to the set \{E_x \colon x \in X\} of all elementary functions, and for any distinct points x,y \in X we have E_x(x)=1 and E_x(y)=0.

Conversely, suppose that \mathcal{A} \leq \mathcal{F}(X) is a subalgebra which separates X. Pick an arbitrary point x \in X. Then, for each y \in X\backslash \{x\} there exists a function F_y \in \mathcal{A} such that

\alpha = F_y(x) \quad\text{and}\quad \beta = F_y(y)

are distinct numbers. The centered and scaled function

\tilde{F}_y(z) = \frac{F_y(z)-\beta}{\alpha-\beta}

then satisfies

\tilde{F}_y(x) =1 \quad\text{and}\quad \tilde{F}_y(y)=0.

We thus have the factorization

E_x =\prod\limits_{y \in X\backslash \{x\}} \tilde{F}_y.

Since \mathcal{A} is closed under products, we have shown that \mathcal{A} contains the elementary function E_x. Since x \in X was arbitrary, we have shown that \mathcal{A} contains the elementary basis \{E_x \colon x \in X\}. \square
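The centering-and-scaling construction in the proof is effectively Lagrange interpolation. Here is a small Python sketch recovering E_x from a separating family; for illustration we take X = \{0,\dots,4\} and the single separating function F(z)=z, so that F_y = F for every y:

```python
import numpy as np

# X = {0,...,4}; the function F(z) = z separates the points of X
n, x = 5, 2
F = np.arange(n, dtype=float)

# for each y != x, center and scale F to get F_tilde_y with values 1 at x, 0 at y
E_x = np.ones(n)
for y in range(n):
    if y != x:
        alpha, beta = F[x], F[y]
        E_x = E_x * (F - beta) / (alpha - beta)

# the product of the F_tilde_y is the elementary function E_x
expected = np.zeros(n); expected[x] = 1.0
assert np.allclose(E_x, expected)
```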

The category of finite sets is a full subcategory of the category whose objects are compact Hausdorff spaces and whose morphisms are continuous functions — give a finite set the discrete topology, in which every singleton set is open. The fact that the algebra of all continuous functions on a compact Hausdorff space can separate any two disjoint closed sets holds in this larger topological category (this is Urysohn’s Lemma). The opposite direction, that a separating subalgebra \mathcal{A} is equal to the algebra of continuous functions, has to be weakened to the statement that \mathcal{A} is uniformly dense in the algebra of all continuous functions (this is the Stone-Weierstrass theorem).

Let us now generalize Theorem 4.2 within the category of finite sets.

Definition 4.2. A partition of X is a set \mathfrak{p} of disjoint nonempty subsets of X whose union is X. The elements of \mathfrak{p} are referred to as its “blocks.”

Let \mathfrak{P}(X) denote the set of all partitions of X. Note that \mathfrak{P}(X) is in bijection with the set of equivalence relations on X — a partition \mathfrak{p} defines an equivalence relation on X in which points are equivalent if they are elements of the same block, and conversely an equivalence relation on X defines a partition whose blocks are equivalence classes.

We make \mathfrak{P}(X) into a poset as follows: for each \mathfrak{p},\mathfrak{q} \in \mathfrak{P}(X), we declare \mathfrak{p} \leq \mathfrak{q} if and only if \mathfrak{q} can be obtained by partitioning blocks of \mathfrak{p}. Equivalently, every block of \mathfrak{q} is contained in a block of \mathfrak{p}. This is called the refinement order on \mathfrak{P}(X). If \mathfrak{p} \leq \mathfrak{q}, we say that \mathfrak{q} is finer than \mathfrak{p}, or equivalently that \mathfrak{p} is coarser than \mathfrak{q}. Going one step further, we can make \mathfrak{P}(X) into a lattice, where \max(\mathfrak{p},\mathfrak{q}) is the coarsest partition (weakly) finer than both \mathfrak{p} and \mathfrak{q}, and \min(\mathfrak{p},\mathfrak{q}) is the finest partition (weakly) coarser than both \mathfrak{p} and \mathfrak{q}.

There is a natural mapping from the lattice of partitions of X to the lattice of subalgebras of \mathcal{F}(X) — for each \mathfrak{p} \in \mathfrak{P}(X), declare \mathcal{A}(\mathfrak{p}) to be the set of all functions in \mathcal{F}(X) which are constant on the blocks of \mathfrak{p}.
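Here is a quick Python sketch (with an illustrative partition) checking that functions constant on the blocks of \mathfrak{p} are closed under the algebra operations, i.e. that \mathcal{A}(\mathfrak{p}) is at least a plausible candidate for a subalgebra:

```python
import numpy as np

# X = {0,...,5}, partition p with blocks {0,1}, {2,3,4}, {5}
blocks = [[0, 1], [2, 3, 4], [5]]

def in_Ap(A):
    # A lies in A(p) iff it takes a single value on every block
    return all(len({A[i] for i in b}) == 1 for b in blocks)

A = np.array([1.0, 1.0, 7.0, 7.0, 7.0, 0.0])
B = np.array([2.0, 2.0, -1.0, -1.0, -1.0, 3.0])
assert in_Ap(A) and in_Ap(B)
assert in_Ap(A * B)        # closed under pointwise multiplication
assert in_Ap(A + B)        # closed under linear combinations
assert in_Ap(np.conj(A))   # closed under conjugation
```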

Problem 4.1. Prove that \mathcal{A}(\mathfrak{p}) really is a subalgebra of \mathcal{F}(X), and moreover that the mapping \mathfrak{p} \mapsto \mathcal{A}(\mathfrak{p}) is injective.

Problem 4.2. Prove that \mathfrak{p} \leq \mathfrak{q} implies \mathcal{A}(\mathfrak{p}) \leq \mathcal{A}(\mathfrak{q}), meaning that our mapping from \mathfrak{P}(X) to the poset of subalgebras of \mathcal{F}(X) is an order homomorphism.

Problem 4.3. Work a bit harder and show that the mapping \mathfrak{p} \mapsto \mathcal{A}(\mathfrak{p}) is a lattice homomorphism.

Problem 4.4. Prove that \mathcal{A}(\mathfrak{p}) is isomorphic to \mathcal{F}(\mathfrak{p}).

We can make use of Problem 4.4 as follows. Let us say that a function A \in \mathcal{A}(\mathfrak{p}) separates the blocks of \mathfrak{p} if, for any two distinct blocks P and Q of \mathfrak{p}, the constant functions A|_P and A|_Q are distinct. This generalizes Definition 4.1, which is the case where \mathfrak{p} is the partition of X with |X| blocks. The corresponding generalization of Theorem 4.2 is the following.

Theorem 4.3. If \mathcal{A} is a subalgebra of \mathcal{A}(\mathfrak{p}) which separates the blocks of \mathfrak{p}, then \mathcal{A}=\mathcal{A}(\mathfrak{p}).

Proof: By Problem 4.4, this reduces to Theorem 4.2. (Make sure you understand this). \square

At this point the following classification theorem is essentially complete.

Theorem 4.4. The lattice of subalgebras of \mathcal{F}(X) is isomorphic to the lattice of partitions of X.

Proof: It remains only to show that our method of assigning subalgebras to partitions is surjective. Let \mathcal{A} be an arbitrary subalgebra of \mathcal{F}(X). Define an equivalence relation on X by

x \sim y \quad \iff \quad A(x)=A(y) \text{ for all }A \in \mathcal{A},

and let \mathfrak{p} be the partition of X whose blocks are the corresponding equivalence classes. Then, by definition of \mathfrak{p} we have \mathcal{A} \leq \mathcal{A}(\mathfrak{p}). Furthermore, by definition of \mathfrak{p} we have that \mathcal{A} separates the blocks of \mathfrak{p}. By Theorem 4.3, we have \mathcal{A}=\mathcal{A}(\mathfrak{p}). \square

Math 202B: Lecture 3

Definition 3.1. A linear transformation \Phi \colon \mathcal{A} \to \mathcal{B} from one algebra to another is said to be an algebra homomorphism if it respects conjugation,

\Phi(A^*)=\Phi(A)^*, \quad A \in \mathcal{A},

respects multiplication,

\Phi(A_1A_2) = \Phi(A_1)\Phi(A_2), \quad A_1,A_2 \in \mathcal{A},

and is unital,

\Phi(I_\mathcal{A})=I_\mathcal{B}.

We can now define the category of algebras.

Definition 3.2. The category \mathbf{Alg} has algebras as its objects and algebra homomorphisms as its morphisms.

Going forward, we will almost exclusively work in the full subcategory \mathbf{FAlg} of \mathbf{Alg} whose objects are finite-dimensional algebras. In order to lighten the terminology, when we say “algebra” we will mean a finite-dimensional algebra, and when dealing with infinite-dimensional objects we will explicitly say “infinite-dimensional algebra.”

Now we come to a basic class of algebras attached to finite sets: function algebras. These will be our model examples of commutative algebras.

Definition 3.3. The function algebra \mathcal{F}(X) of a finite set X is the vector space of functions A \colon X \to \mathbb{C} with conjugation and multiplication defined by

A^*(x)=\overline{A(x)} \quad\text{and}\quad [AB](x)=A(x)B(x).

We are already quite familiar with \mathcal{F}(X) as a vector space, since when equipped with the scalar product

\langle A,B \rangle = \sum\limits_{x\in X} \overline{A(x)}B(x)

it is the model example of a Hilbert space from Math 202A. In particular, we already know that the set \{E_x \colon x \in X\} consisting of the elementary functions E_x(y) = \delta_{xy} forms an orthonormal basis of the Hilbert space \mathcal{F}(X). Now we are taking the next step of equipping \mathcal{F}(X) with a vector product as well as a scalar product, hence promoting our Math 202A quantization functor to a functor

\mathcal{F} \colon \mathbf{FSet} \longrightarrow \mathbf{FAlg}.

When viewing \mathcal{F}(X) as an algebra rather than a Hilbert space, we can ask for the classification of selfadjoint elements and unitary elements (since \mathcal{F}(X) is commutative, all elements are normal). It is straightforward to see that A \in \mathcal{F}(X) is selfadjoint if and only if it is real-valued, and unitary if and only if it is circle-valued (meaning |A(x)|=1 for all x \in X). It is also clear that the group I(\mathcal{F}(X)) of invertible elements in \mathcal{F}(X) consists of the non-vanishing functions on X.
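These characterizations are easy to check numerically. The following Python sketch (functions on X modeled as dictionaries, a convenience rather than course notation) implements the pointwise operations and the two element classes:

```python
import cmath

# Sketch: the pointwise operations of F(X), with selfadjoint = real-valued
# and unitary = circle-valued, as described above.

def conj(A):            # A*(x) = conjugate of A(x)
    return {x: complex(v).conjugate() for x, v in A.items()}

def mult(A, B):         # [AB](x) = A(x) B(x)
    return {x: A[x] * B[x] for x in A}

def is_selfadjoint(A):  # equivalent to A being real-valued
    return all(abs(complex(v).imag) < 1e-12 for v in A.values())

def is_unitary(A):      # equivalent to A being circle-valued
    return all(abs(abs(v) - 1) < 1e-12 for v in A.values())

X = [0, 1, 2]
A = {x: float(x) for x in X}             # real-valued, hence selfadjoint
U = {x: cmath.exp(1j * x) for x in X}    # circle-valued, hence unitary
assert is_selfadjoint(A) and is_unitary(U)
assert is_unitary(mult(U, conj(U)))      # U U* is the constant function 1
```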

From the algebra point of view, the elementary basis of \mathcal{F}(X) is orthogonal with respect to the vector product (rather than the scalar product) in the sense that

E_xE_y = \delta_{xy}E_x.

In particular, the elementary functions are idempotent in \mathcal{F}(X),

E_x^2=E_x.

Being real-valued, the elementary functions are also selfadjoint. For abstract algebras, a basis with these properties has a special name.

Definition 3.4. A Fourier basis in an algebra \mathcal{A} is a basis \{F^\lambda \colon \lambda \in \Lambda\} of selfadjoint orthogonal idempotents,

(F^\lambda)^*=F^\lambda \quad\text{and}\quad F^\lambda F^\mu = \delta_{\lambda\mu}F^\lambda.

Thanks once again to Lani for paying close attention to definitions in real time and pointing out that selfadjointness should be built into this definition.

Clearly, a necessary condition for an algebra \mathcal{A} to admit a Fourier basis is that it is commutative. However, this condition is not sufficient, and algebras which do admit a Fourier basis are characterized by the following simple but important theorem.

Theorem 3.5. An algebra \mathcal{A} admits a Fourier basis if and only if it is isomorphic to a function algebra.

Proof: We have already seen that a function algebra admits a Fourier basis, namely its elementary basis. Thus, if \mathcal{A} is isomorphic to \mathcal{F}(X) for some set X via an isomorphism \Phi, then \{\Phi^{-1}(E_x) \colon x \in X\} gives a Fourier basis of \mathcal{A}.

Conversely, suppose \mathcal{A} admits a Fourier basis \{F^\lambda \colon \lambda \in \Lambda\}, and consider the vector space isomorphism

\Phi \colon \mathcal{A} \longrightarrow \mathcal{F}(\Lambda)

defined by

\Phi(F^\lambda) = E_\lambda, \quad \lambda \in \Lambda.

We need to check that this vector space isomorphism is an algebra homomorphism. First,

\Phi(F^\lambda F^\mu) = \Phi(\delta_{\lambda\mu}F^\lambda)=\delta_{\lambda\mu}E_\lambda = E_\lambda E_\mu=\Phi(F^\lambda)\Phi(F^\mu),

so \Phi respects multiplication. Second,

\Phi((F^\lambda)^*)=\Phi(F^\lambda)=E_\lambda=E_\lambda^*=\Phi(F^\lambda)^*,

so \Phi respects conjugation. Third, note that in the function algebra \mathcal{F}(\Lambda) the multiplicative identity is

I_{\mathcal{F}(\Lambda)} = \sum\limits_{\lambda \in \Lambda} E_\lambda.

We leave it as an exercise to show that any Fourier basis is necessarily a partition of unity, meaning that

I_\mathcal{A}=\sum\limits_{\lambda \in \Lambda} F^\lambda.

Granting the exercise, \Phi(I_\mathcal{A})=\sum\limits_{\lambda \in \Lambda}\Phi(F^\lambda)=\sum\limits_{\lambda \in \Lambda}E_\lambda=I_{\mathcal{F}(\Lambda)}, so \Phi is unital. \square

The algebra isomorphism \Phi constructed in the proof of the preceding theorem is called the Fourier transform on \mathcal{A}. It exists precisely when \mathcal{A} admits a basis of selfadjoint orthogonal idempotents. Thus, if we are given a commutative algebra \mathcal{A} whose multiplication is defined in some horribly convoluted way, but we can construct a Fourier basis in \mathcal{A}, then we are able to recognize that this convoluted structure is actually no more complicated than pointwise multiplication of functions. We will soon do this for convolution algebras of finite abelian groups.
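As a tiny preview of this program, consider the two-dimensional algebra with basis e,g and multiplication determined by g^2=e (informally, the convolution algebra of the two-element group; we have not yet defined convolution algebras, so treat this as an illustrative assumption). The vectors F_\pm=(e\pm g)/2 form a Fourier basis, as the following Python sketch verifies:

```python
import numpy as np

# Sketch: elements a*e + b*g are stored as coefficient pairs (a, b),
# with multiplication determined by the relation g*g = e.

def mul(u, v):
    a, b = u
    c, d = v
    # (a e + b g)(c e + d g) = (ac + bd) e + (ad + bc) g
    return np.array([a * c + b * d, a * d + b * c])

E = np.array([1.0, 0.0])         # multiplicative unit e
F_plus = np.array([0.5, 0.5])    # F+ = (e + g)/2
F_minus = np.array([0.5, -0.5])  # F- = (e - g)/2

assert np.allclose(mul(F_plus, F_plus), F_plus)     # idempotent
assert np.allclose(mul(F_minus, F_minus), F_minus)  # idempotent
assert np.allclose(mul(F_plus, F_minus), 0)         # orthogonal
assert np.allclose(F_plus + F_minus, E)             # partition of unity
```

In the coordinates (F_+,F_-), the convoluted-looking multiplication g^2=e is just pointwise multiplication of pairs of numbers.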

Math 202B: Lecture 2

In Lecture 1, we classified commutative algebras in terms of normal elements. In this lecture, we move beyond the commutative/noncommutative dichotomy and introduce a way to quantify the degree of commutativity of a given algebra \mathcal A. This method is based on measuring the dimension of a certain subalgebra of \mathcal A, so we begin by defining subalgebras.

Definition 2.1 A subspace \mathcal B \subseteq \mathcal A is called a subalgebra if it is closed under multiplication and conjugation, and contains I=I_\mathcal{A}.

A subalgebra \mathcal B of \mathcal A is by definition a subspace, so it contains the additive identity 0_{\mathcal A} and is closed under taking linear combinations. However, being a subalgebra is a strictly stronger condition than being a subspace, just as being an algebra is stronger than being a vector space: Definition 2.1 is equivalent to saying that \mathcal B is a subspace of \mathcal A which is itself an algebra. In particular, the zero subspace \{0_{\mathcal A}\} is not a subalgebra; the smallest subalgebra of \mathcal A is

\mathbb C I = \{\alpha I\colon \alpha \in \mathbb C\}.

Problem 2.1 Prove that if \mathcal B is a one-dimensional subalgebra of \mathcal A, then \mathcal B = \mathbb C I.

The minimal subalgebra \mathbb C I is commutative, and in fact each of its elements commutes with every element of \mathcal A. We can consider the set of all elements in \mathcal{A} which have the “commutes with everything” property.

Definition 2.2 The center of \mathcal A is

Z(\mathcal A) = \{ Z \in \mathcal A : AZ = ZA \text{ for all } A \in \mathcal A \}.

Clearly, \mathbb C I_{\mathcal A} \subseteq Z(\mathcal A).

Proposition 2.3 Z(\mathcal A) is a subalgebra of \mathcal A.

Definition 2.4 The commutativity index of an algebra is the dimension of its center.

A minimally commutative/maximally noncommutative algebra \mathcal{A} has \dim Z(\mathcal{A})=1, which forces Z(\mathcal{A})=\mathbb{C}I. At the other extreme, \dim Z(\mathcal{A})=\dim \mathcal{A} forces Z(\mathcal{A})=\mathcal{A} provided \mathcal{A} is finite-dimensional, so this measurement of commutativity is most useful in the category of finite-dimensional algebras. We have not yet restricted to finite-dimensional algebras, but we will soon do so.
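When an algebra is realized concretely as matrices (an assumption made here purely for illustration), the commutativity index can be computed by linear algebra: the center is the nullspace of the map Z \mapsto (BZ-ZB) as B runs over a basis. A Python sketch:

```python
import numpy as np

# Sketch: dim Z(A) for an algebra given by a matrix basis, as the nullity
# of the stacked commutator maps Z -> B Z - Z B.

def center_dimension(basis):
    n = basis[0].shape[0]
    rows = []
    for B in basis:
        # vec(B Z - Z B) = (I (x) B - B^T (x) I) vec(Z)
        rows.append(np.kron(np.eye(n), B) - np.kron(B.T, np.eye(n)))
    M = np.vstack(rows)
    return n * n - np.linalg.matrix_rank(M)

# The matrix units E_ij form a basis of M_2(C); its center is C*I, dimension 1.
basis = [np.zeros((2, 2)) for _ in range(4)]
for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    basis[k][i, j] = 1.0
print(center_dimension(basis))  # 1
```

So M_2(\mathbb{C}) is minimally commutative in the sense above, consistent with its center being the scalar matrices.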

Definition 2.5 The centralizer of \mathcal B in \mathcal A is

Z(\mathcal B,\mathcal A) = \{ A \in \mathcal A : AB = BA \text{ for all } B \in \mathcal B \},

the set of all elements of \mathcal{A} which commute with every element of \mathcal{B}.

Problem 2.2 Prove that Z(\mathcal B,\mathcal A) is a subalgebra of \mathcal A. Show that if \mathcal B \subseteq \mathcal C, then Z(\mathcal C,\mathcal A) \subseteq Z(\mathcal B,\mathcal A).

We now consider the set of all subalgebras of \mathcal A, ordered by inclusion:

\mathcal B \le \mathcal C \iff \mathcal B \subseteq \mathcal C.

This is a sub-poset of the lattice \mathcal L(\mathcal A) of all subspaces of \mathcal A, which only sees the vector-space structure. The poset of subalgebras of \mathcal{A} is strictly smaller than the lattice of subspaces, because we lose the zero-dimensional subspace and all one-dimensional subspaces except \mathbb{C}I. Furthermore, if we wish to promote the poset of subalgebras to a lattice we must work a bit harder.

Problem 2.3 Let \mathfrak F be a family of subalgebras of \mathcal A.

  1. Show that \bigcap_{\mathcal B \in \mathfrak F} \mathcal B is a subalgebra.
  2. Show that if all \mathcal B \in \mathfrak F are commutative, then so is the intersection.

For any subset X \subseteq \mathcal A, let

\mathfrak{F}(X) = \{ \mathcal B \subseteq \mathcal A : X \subseteq \mathcal B \text{ and } \mathcal B \text{ is a subalgebra} \}.

Definition 2.6 The subalgebra generated by X is

\mathrm{alg}(X) = \bigcap\limits_{\mathcal B \in \mathfrak{F}(X)} \mathcal B.

For subalgebras \mathcal B, \mathcal C \subseteq \mathcal A, define:

  • \min(\mathcal B,\mathcal C) = \mathcal B \cap \mathcal C,
  • \max(\mathcal B,\mathcal C) = \mathrm{alg}(\mathcal B \cup \mathcal C).

These operations make the poset of subalgebras into a lattice.
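For algebras realized as matrices (again an assumption made only for illustration), \mathrm{alg}(X) can be computed directly: start from the span of the generators together with I and their adjoints, and repeatedly adjoin pairwise products until the dimension stabilizes. A Python sketch:

```python
import numpy as np

# Sketch: dimension of the subalgebra alg(S) generated by matrices S.
# Close span(S, I, adjoints) under multiplication; track the dimension as
# the rank of the flattened spanning set.

def alg_dimension(generators):
    n = generators[0].shape[0]
    mats = [np.eye(n, dtype=complex)]
    for g in generators:
        mats += [g.astype(complex), g.conj().T.astype(complex)]
    rank = np.linalg.matrix_rank(np.array([m.flatten() for m in mats]))
    while True:
        products = [a @ b for a in mats for b in mats]
        new_rank = np.linalg.matrix_rank(
            np.array([m.flatten() for m in mats + products]))
        if new_rank == rank:
            return rank
        mats += products
        rank = new_rank

# A single matrix unit generates all of M_2(C).
E01 = np.array([[0.0, 1.0], [0.0, 0.0]])
print(alg_dimension([E01]))  # 4
```

The spanning set grows quickly, so this is a sketch for small examples, not an efficient algorithm.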

We can restrict even further to the poset of commutative subalgebras of \mathcal{A}, ordered by inclusion. If \mathcal{A} is itself commutative this restriction is vacuous, but if not then things change substantially. In particular, the poset of commutative subalgebras of \mathcal{A} no longer contains \mathcal{A} as a maximal element.

Definition 2.7 A maximal abelian subalgebra of \mathcal{A}, or MASA, is a commutative subalgebra \mathcal{B} which is not contained in any strictly larger commutative subalgebra: if \mathcal{C} is a commutative subalgebra with \mathcal{B} \leq \mathcal{C} then \mathcal{B}=\mathcal{C}.

In the context of commutative subalgebras we can modify Definition 2.6 as follows. Given any set X \subseteq \mathcal{A}, we consider the family

\mathfrak{F}(X) = \{ \mathcal C \subseteq \mathcal A : X \subseteq \mathcal C \text{ and } \mathcal C \text{ is a commutative subalgebra} \}.

Definition 2.8 The commutative subalgebra of \mathcal{A} generated by X is

\mathrm{calg}(X) = \bigcap\limits_{\mathcal{C} \in \mathfrak{F}(X)} \mathcal{C}.

Returning to MASAs, the following characterization is very useful.

Problem 2.4. Prove that \mathcal{B} is a MASA in \mathcal{A} if and only if Z(\mathcal{B},\mathcal{A})=\mathcal{B}. Furthermore, show that any two distinct MASAs are incomparable.

Math 202B: Lecture 1

Definition 1.1. An algebra is a complex vector space \mathcal{A} of positive dimension equipped with an associative, bilinear, unital multiplication and an antilinear, antimultiplicative, involutive conjugation.

Let us unpack this definition.

Vector space structure. First of all, \mathcal{A} is a complex vector space. We will denote vectors in this space by uppercase Roman letters

A,B,C,\dots

and use lowercase Greek letters

\alpha,\beta,\gamma,\dots

for scalars in \mathbb{C}.

Multiplication. Multiplication is a map

\mathcal{A}\times\mathcal{A}\to\mathcal{A}

whose values are denoted by concatenating its arguments,

(A,B)\mapsto AB.

Associativity means that the symbol ABC is unambiguous, because its two possible interpretations coincide:

(AB)C=A(BC).

We do not assume that multiplication is commutative – there may exist elements A,B\in\mathcal{A} such that AB\neq BA. When no such elements exist, we say that \mathcal{A} is a commutative algebra.

Bilinearity means that multiplication interacts with the vector space structure according to the rule

(\alpha_1A_1+\alpha_2A_2)(\beta_1B_1+\beta_2B_2)=\alpha_1\beta_1A_1B_1+\alpha_1\beta_2A_1B_2+\alpha_2\beta_1A_2B_1+\alpha_2\beta_2A_2B_2.

Problem 1.1 Let 0_\mathcal{A} denote the zero vector in \mathcal{A}. Prove that for all A\in\mathcal{A} we have

A0_\mathcal{A}=0_\mathcal{A}A=0_\mathcal{A}.

Unital means that there exists an element I\in\mathcal{A} such that IA=AI=A for all A\in\mathcal{A}. Any such element is called a multiplicative unit. Since \dim\mathcal{A}>0, the multiplicative unit is distinct from the additive unit 0_\mathcal{A}. Moreover, the multiplicative unit is unique.

Problem 1.2 Let I,J\in\mathcal{A} be multiplicative units. Prove that I=J.

Henceforth, we write I_\mathcal{A} for the unique multiplicative unit in \mathcal{A}. When no confusion is possible, we simply write I.

An element A\in\mathcal{A} is said to be invertible if there exists B\in\mathcal{A} such that AB=BA=I_\mathcal{A}. When this holds, we say that B is the inverse of A, and write B=A^{-1} and A=B^{-1}.

Problem 1.3 Suppose A,B,C\in\mathcal{A} satisfy AB=BA=I and AC=CA=I. Prove that B=C.

Multiplication in \mathcal{A} can be concrete and numerical. Let

\{E_x \colon x\in X\}

be a vector space basis of \mathcal{A} indexed by a nonempty set X. Any elements A,B\in\mathcal{A} can be written as linear combinations

A=\sum_{x\in X}\alpha_xE_x,\qquad B=\sum_{y\in X}\beta_yE_y

with all but finitely many terms equal to 0_\mathcal{A}. By bilinearity,

AB=\sum_{x,y\in X}\alpha_x\beta_yE_xE_y.

Each product of basis vectors can be expanded as

E_xE_y=\sum_{z\in X}\gamma_{xyz}E_z

for uniquely determined scalars \gamma_{xyz}\in\mathbb{C}. Thus multiplication in \mathcal{A} is completely determined by the scalars

\gamma_{xyz}, \quad x,y,z\in X,

which are called the connection coefficients or structure constants of the basis \{E_x \colon x \in X\}. From a computational perspective, it is desirable to choose a basis for which many of these coefficients vanish. This idea underlies Strassen’s algorithm for matrix multiplication.
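Structure constants can be extracted mechanically once the algebra is realized as matrices (an illustrative assumption): expand each product of basis elements back in the basis by least squares. Here is a Python sketch, using the diagonal-matrix model of a function algebra, where \gamma_{xyz}=\delta_{xy}\delta_{xz}:

```python
import numpy as np

# Sketch: recover the structure constants gamma[x, y, z] of a matrix basis
# by expanding each product E_x E_y back in the basis (least squares, since
# the flattened basis vectors need not form a square matrix).

def structure_constants(basis):
    k = len(basis)
    B = np.array([E.flatten() for E in basis]).T  # columns = flattened basis
    gamma = np.zeros((k, k, k))
    for x in range(k):
        for y in range(k):
            prod = (basis[x] @ basis[y]).flatten()
            gamma[x, y] = np.linalg.lstsq(B, prod, rcond=None)[0]
    return gamma

# Diagonal-matrix model of a function algebra: E_x E_y = delta_{xy} E_x,
# so gamma_{xyz} = delta_{xy} delta_{xz}.
basis = [np.diag([1.0 if i == x else 0.0 for i in range(3)]) for x in range(3)]
g = structure_constants(basis)
assert np.allclose(g[0, 0], [1.0, 0.0, 0.0]) and np.allclose(g[0, 1], 0.0)
```

Note how sparse the elementary basis is: only k of the k^3 coefficients are nonzero, which is exactly the computational virtue mentioned above.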

An element P \in \mathcal{A} is said to be idempotent if P^2=P. This is equivalent to saying that the coefficients in the expansion

P=\sum\limits_{x \in X} \pi_x E_x

satisfy

\sum\limits_{x,y \in X} \pi_x\pi_y\gamma_{xyz}=\pi_z

for each z \in X.
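As a sanity check, here is a Python sketch of this coefficient equation in the model \gamma_{xyz}=\delta_{xy}\delta_{xz} (the elementary basis of a function algebra), where the equation reduces to \pi_z^2=\pi_z, i.e. every coefficient is 0 or 1:

```python
# Sketch: test the idempotent coefficient equation
# sum_{x,y} pi_x pi_y gamma_{xyz} = pi_z for each z.

def is_idempotent(pi, gamma):
    k = len(pi)
    return all(
        abs(sum(pi[x] * pi[y] * gamma[x][y][z]
                for x in range(k) for y in range(k)) - pi[z]) < 1e-12
        for z in range(k))

# gamma_{xyz} = delta_{xy} delta_{xz}: nonzero only when x == y == z.
k = 3
gamma = [[[1.0 if x == y == z else 0.0 for z in range(k)]
          for y in range(k)] for x in range(k)]
assert is_idempotent([1.0, 0.0, 1.0], gamma)      # 0/1 coefficients
assert not is_idempotent([0.5, 0.0, 0.0], gamma)  # 0.25 != 0.5
```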

Problem 1.4 Prove that a two-dimensional algebra must be commutative.



Conjugation. Conjugation is a function \mathcal{A}\to\mathcal{A} denoted by A\mapsto A^*. Antilinearity means that conjugation and the vector space structure interact according to the rule

(\alpha A+\beta B)^*=\overline{\alpha}A^*+\overline{\beta}B^*.

Antimultiplicativity means that conjugation and multiplication interact according to the rule

(AB)^*=B^*A^*.

Involutive means that conjugation is two-periodic,

(A^*)^*=A.

Let I(\mathcal{A}) denote the set of invertible elements in \mathcal{A}.

Problem 1.5 Prove that:

  1. I(\mathcal{A}) is a group under multiplication.
  2. A is invertible if and only if A^* is invertible.
  3. (A^*)^{-1}=(A^{-1})^*.

Problem 1.6 Let \{E_x:x\in X\} be a vector space basis of \mathcal{A}. Prove that \{E_x^*:x\in X\} is also a vector space basis of \mathcal{A}.

Special classes of elements. There are three special classes of elements in \mathcal{A}.

An element X\in\mathcal{A} is said to be selfadjoint if X^*=X. The set S(\mathcal{A}) of selfadjoint elements in \mathcal{A} forms a real vector space.

An element U\in\mathcal{A} is said to be unitary if it is invertible and U^{-1}=U^*. The set of unitary elements is denoted U(\mathcal{A}) and is called the unitary group of \mathcal{A}.

Problem 1.7 Prove that U(\mathcal{A}) is a subgroup of I(\mathcal{A}).

An element A\in\mathcal{A} is said to be normal if it commutes with its conjugate: A^*A=AA^*.

Problem 1.8 Prove that every A\in\mathcal{A} can be uniquely expressed in the form A=X+iY, where X,Y\in S(\mathcal{A}). The selfadjoint elements X and Y are called the real and imaginary parts of A, respectively.

Problem 1.9 Prove that A\in\mathcal{A} is normal if and only if its real and imaginary parts commute.

Problem 1.10 Prove that \mathcal{A} is commutative if and only if all its elements are normal.
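Problems 1.8 and 1.9 can be explored numerically. The sketch below uses matrices as a stand-in for an abstract algebra (an illustrative choice; nothing in the axioms requires it):

```python
import numpy as np

# Sketch: the real/imaginary decomposition of Problem 1.8 and the normality
# criterion of Problem 1.9, with matrices standing in for abstract elements.

def real_imag_parts(A):
    X = (A + A.conj().T) / 2     # selfadjoint "real part"
    Y = (A - A.conj().T) / (2j)  # selfadjoint "imaginary part"
    return X, Y

A = np.array([[1.0 + 2.0j, 3.0], [0.0, 4.0j]])
X, Y = real_imag_parts(A)
assert np.allclose(X, X.conj().T) and np.allclose(Y, Y.conj().T)
assert np.allclose(A, X + 1j * Y)
# This A is not normal, and correspondingly its parts do not commute:
assert not np.allclose(A.conj().T @ A, A @ A.conj().T)
assert not np.allclose(X @ Y, Y @ X)
```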

Math 202B: Lecture 0

Welcome to Math 202B at UCSD, Winter quarter 2026. Here is a New Year’s problem you can keep in the back of your mind over the course of the course. Of course, let me know if you solve it.

Problem 0: Prove that 26 is the only positive integer nestled between a square and a cube.

The basic parameters of Math 202B are the same as those in Math 202A: weekly problem sets due on Sundays at 23:59 via GradeScope together with a final exam, with a 70/30 split. The final exam is scheduled for 03/20 at 15:00, and if you cannot sit for the exam at the appointed time you should not enroll in the course.

Math 202A began in the classical computational category \mathbf{FSet} whose objects are finite sets with morphisms being functions. This is in contrast to the quantum computational category \mathbf{FHil} whose objects are finite-dimensional Hilbert spaces with linear transformations as morphisms. We considered the quantization functor \mathcal{F} from \mathbf{FSet} to \mathbf{FHil} which sends a finite set X to the Hilbert space \mathcal{F}(X) of complex-valued functions on X with the pointwise operations and the L^2-scalar product. We then found a miraculous tool, the Singular Value Decomposition, which completely describes all morphisms in \mathrm{Hom}(\mathcal{F}(X),\mathcal{F}(Y)) for any two finite sets X and Y. In the case X=Y, the SVD gave us the Spectral Theorem for normal operators in \mathcal{E}(X)=\mathrm{End}\mathcal{F}(X)=\mathrm{Hom}(\mathcal{F}(X),\mathcal{F}(X)).

In retrospect, we now recognize that Math 202A was actually about two quantization functors departing from the classical computational category, the Schroedinger functor \mathcal{F} and the Heisenberg functor \mathcal{E}, both of which land in \mathbf{FHil}. In Math 202B, we recognize something new as well: both quantizations \mathcal{F}(X) and \mathcal{E}(X) come with vector products as well as scalar products. The product of two functions in \mathcal{F}(X) is defined pointwise, and the product of two endomorphisms in \mathcal{E}(X) is defined by composing them.

We now recognize that Schroedinger and Heisenberg are actually telling us to think about a subcategory of \mathbf{FHil}, namely the category \mathbf{FAlg} of finite-dimensional algebras. Math 202B is all about the category \mathbf{FAlg}. We will begin by defining algebras precisely and developing their basic theory axiomatically. Given an algebra \mathcal{A} in this category, two natural goals are to classify its subalgebras and measure its commutativity, and we will formulate these goals rigorously.

Our two basic examples, \mathcal{F}(X) and \mathcal{E}(X), are two opposite extremes: the former is fully commutative and the latter is maximally noncommutative. The classification of subalgebras of \mathcal{F}(X) is elementary, while a complete description of subalgebras of \mathcal{E}(X) is more involved and requires the development of new linear algebraic concepts and methods you may not have seen before.

Once we have said everything there is to say about \mathcal{F}(X) and \mathcal{E}(X), we will consider the question of what lies between these two extremes. In this range we find a beautiful class of algebras constructed from finite groups. Namely, if X carries a group law, then it becomes possible to associate a third algebra to it, the convolution algebra \mathcal{C}(X). As Hilbert spaces, \mathcal{C}(X)=\mathcal{F}(X), but as algebras the two are very different: multiplication of functions in \mathcal{C}(X) is given by convolution rather than pointwise product. The commutativity index of \mathcal{C}(X) is the number of conjugacy classes X contains, so \mathcal{C}(X) can be as commutative as \mathcal{F}(X) but cannot be as noncommutative as \mathcal{E}(X).

When X is an abelian group, \mathcal{C}(X) is isomorphic to \mathcal{F}(X) via an extremely useful map called the Discrete Fourier Transform (DFT), which is perhaps the most widely applied algebra isomorphism there is. When X is nonabelian, \mathcal{C}(X) is isomorphic to a subalgebra of \mathcal{E}(X) via a noncommutative generalization of the DFT whose construction will occupy much of the course and lead us into a new realm where linear algebra and group theory interact in many remarkable ways.
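The abelian case can already be demonstrated with standard tools: for the cyclic group \mathbb{Z}_n, the DFT carries convolution to the pointwise product. Here is a Python sketch using NumPy's FFT (realizing \mathcal{C}(\mathbb{Z}_n) as arrays of length n is an illustrative choice):

```python
import numpy as np

# Sketch: on the cyclic group Z_n, the DFT turns convolution of functions
# into the pointwise product of their transforms.

def convolve(a, b):
    n = len(a)
    return np.array([sum(a[j] * b[(k - j) % n] for j in range(n))
                     for k in range(n)])

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
lhs = np.fft.fft(convolve(a, b))     # DFT of the convolution
rhs = np.fft.fft(a) * np.fft.fft(b)  # pointwise product of the DFTs
assert np.allclose(lhs, rhs)
```

In the language of this course, the assertion checks that the DFT is an algebra isomorphism from \mathcal{C}(\mathbb{Z}_8) to \mathcal{F}(\mathbb{Z}_8).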

There is no official textbook, but as we move through the course you should regularly consult the following texts:

  1. Algebras of Linear Transformations by Farenick, for the structure theory of finite-dimensional operator algebras.
  2. Linear Representations of Finite Groups by Serre, for the conceptual backbone of representation theory.
  3. Representation Theory of the Symmetric Groups by Ceccherini-Silberstein et al, for a detailed treatment of symmetric groups as the fundamental nonabelian example.
  4. The Symmetric Group by Sagan, for combinatorial viewpoints which make the algebra we encounter more concrete.
DATE   TOPIC                  MODALITY
01/05  Algebras I             In Person
01/07  Algebras II            In Person
01/09  Algebras III           In Person
01/12  Function Algebras I    In Person
01/14  Function Algebras II   In Person
01/16  NONE                   No Class
01/19  NONE                   No Class
01/21  Operator Algebras I    In Person
01/23  Operator Algebras II   In Person
01/26  Operator Algebras III  In Person
01/28  Operator Algebras IV   In Person
01/30  Operator Algebras V    In Person
02/02  Operator Algebras VI   In Person
02/04  Group Algebras         In Person
02/06  Class Algebras         In Person
02/09  States and Traces      In Person
02/11  Review I               Online
02/13  Review II              Online
02/16  NONE                   No Class
02/18  Fourier I              In Person
02/20  Fourier II             In Person
02/23  Fourier III            In Person
02/25  Representations I      In Person
02/27  Representations II     In Person
03/02  Representations III    In Person
03/04  Representations IV     In Person
03/06  Symmetric Group I      In Person
03/09  Symmetric Group II     In Person
03/11  Symmetric Group III    In Person
03/13  Symmetric Group IV     In Person
Schedule subject to change, check back regularly.