Math 202B: Lecture 1

This course is the second quarter of Math 202, a three-quarter graduate course sequence in applied algebra at UCSD. Briefly, the 202 sequence is arranged as follows.

Math 202A (Fall): vectors and transformations.

Math 202B (Winter): algebras and representations.

Math 202C (Spring): tensors and invariants.

In Math 202B, the term “vector space” will always mean a finite-dimensional complex vector space.

Definition 1.1. An algebra is a vector space \mathcal{A} of positive dimension equipped with an associative, bilinear, unital multiplication and an antilinear, antimultiplicative, involutive conjugation.

The prototypical example of an algebra is \mathcal{A}=\mathbb{C}, the complex number system, elements of which are called scalars and denoted by lower-case Greek letters,

\alpha,\beta,\gamma, \dots, \omega.

There are some exceptions: integers like zero and one are denoted 0 and 1 as usual, and we write i for the imaginary unit. Elements of a general algebra \mathcal{A} are denoted by upper-case Roman letters,

A,B,C,\dots,Z.

Multiplication in \mathcal{A} is a function \mathcal{A} \times \mathcal{A} \to \mathcal{A} whose values are denoted by concatenating its arguments: (A,B) \mapsto AB. Associativity means that the symbol ABC is unambiguous because its two possible meanings coincide:

(AB)C = A(BC).

Bilinearity means that multiplication in \mathcal{A} interacts with its vector space structure according to the rule

(\alpha_1A_1+\alpha_2A_2)(\beta_1B_1+\beta_2B_2) = \alpha_1\beta_1A_1B_1+\alpha_1\beta_2A_1B_2 + \alpha_2\beta_1A_2B_1+\alpha_2\beta_2A_2B_2.

We do not assume multiplication is commutative.
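As a concrete illustration (using the algebra of 2\times 2 complex matrices, which we take on faith here as an example of Definition 1.1, with matrix product as multiplication and conjugate transpose as conjugation), multiplication can indeed fail to commute:

```python
import numpy as np

# In the algebra of 2x2 complex matrices, multiplication is
# associative and bilinear but not commutative.
A = np.array([[0, 1], [0, 0]], dtype=complex)  # the matrix unit E_12
B = np.array([[0, 0], [1, 0]], dtype=complex)  # the matrix unit E_21

# AB and BA are different projections:
assert np.allclose(A @ B, [[1, 0], [0, 0]])
assert np.allclose(B @ A, [[0, 0], [0, 1]])
assert not np.allclose(A @ B, B @ A)

# Associativity: (AB)C = A(BC) for any third matrix C.
C = np.array([[1, 2], [3, 4]], dtype=complex)
assert np.allclose((A @ B) @ C, A @ (B @ C))
```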

Problem 1.1. Let 0_\mathcal{A} denote the zero vector in \mathcal{A}. Prove that A0_\mathcal{A}=0_\mathcal{A}A=0_\mathcal{A} for all A \in \mathcal{A}.

Later, when we are more familiar with algebras and there is less chance of confusion, we will sometimes omit the subscript and write 0 for the zero vector in a general algebra \mathcal{A}, as it will generally be clear from context whether this symbol represents a scalar or a vector.

Unital means that there exists a vector I \in \mathcal{A} such that

IA=AI=A

for all A \in \mathcal{A}. Any such vector is called a multiplicative unit. Note that because the dimension of \mathcal{A} is positive, any multiplicative unit I is distinct from the additive unit 0_\mathcal{A}. In fact, there is only one multiplicative unit.

Problem 1.2. Let I,J be multiplicative units in \mathcal{A}. Prove that I=J.

Henceforth we write I_\mathcal{A} for the unique multiplicative unit. Later on, we may omit the subscript and simply write I for the multiplicative unit if it causes no confusion to do so. An element A \in \mathcal{A} is said to be invertible if there exists B \in \mathcal{A} such that AB=BA=I_\mathcal{A}.

Problem 1.3. Suppose A,B,C \in \mathcal{A} are such that AB=BA=I_\mathcal{A} and AC=CA=I_\mathcal{A}. Prove that B=C.

When AB=BA=I_\mathcal{A} we say that B is the inverse of A, and that A is the inverse of B. This is written B=A^{-1} and A=B^{-1}.

Multiplication in an algebra can be described numerically as follows. Let \{E_x \colon x \in X\} be a vector space basis of \mathcal{A} indexed by the points of some finite nonempty set X. Then, A,B \in \mathcal{A} can be represented as linear combinations

A = \sum\limits_{x \in X} \alpha_x E_x\quad\text{and}\quad B = \sum\limits_{x \in X}\beta_x E_x.

According to bilinearity we have

AB = \sum\limits_{x,y \in X} \alpha_x\beta_y E_xE_y.

Each product of basis vectors can also be resolved into a linear combination of basis vectors,

E_xE_y = \sum\limits_{z \in X} \gamma_{xyz} E_z.

As the indices x,y,z range over X we get a three-dimensional array [\gamma_{xyz}] of complex numbers called the multiplication tensor of \mathcal{A} relative to the basis \{E_x \colon x \in X\}. The elements of this three-tensor are called the connection coefficients of \mathcal{A} relative to this basis. This set of (\dim \mathcal{A})^3 numbers completely determines multiplication in \mathcal{A}, since

AB = \sum\limits_{x,y,z \in X} \alpha_x\beta_y\gamma_{xyz}E_z.

From a practical perspective, one would like to find a vector space basis of \mathcal{A} such that the corresponding multiplication tensor is sparse, i.e. many connection coefficients are zero, so that the computational cost of performing multiplication is minimized. This idea of exploiting structure in the multiplication tensor underlies Strassen’s algorithm for fast matrix multiplication.
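The multiplication tensor can be computed mechanically once a basis is chosen. The following sketch (assuming the 2\times 2 matrix algebra with the matrix-unit basis, an example not fixed by the text) recovers \gamma_{xyz} by expanding each product E_xE_y in the basis:

```python
import numpy as np

# Multiplication tensor of the 2x2 matrix algebra relative to the
# matrix-unit basis E_0 = E11, E_1 = E12, E_2 = E21, E_3 = E22.
n = 2
basis = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        basis.append(E)

d = len(basis)  # dim A = n^2 = 4
# Columns of M are the basis vectors, flattened; solving M c = v
# expands a vector v in the basis.
M = np.stack([E.ravel() for E in basis], axis=1)

gamma = np.zeros((d, d, d), dtype=complex)
for x in range(d):
    for y in range(d):
        prod = (basis[x] @ basis[y]).ravel()
        gamma[x, y, :] = np.linalg.solve(M, prod)

# The relation E_ij E_kl = delta_jk E_il makes the tensor sparse:
# only 8 of the 64 connection coefficients are nonzero.
assert np.count_nonzero(gamma) == 8
# For instance E_12 E_21 = E_11, i.e. gamma[1, 2, 0] = 1:
assert gamma[1, 2, 0] == 1
```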

Problem 1.4. Prove that a two-dimensional algebra must be commutative.

Conjugation is a function \mathcal{A} \to \mathcal{A} whose values are denoted by a superscript asterisk: A \mapsto A^*. Antilinearity means that conjugation interacts with the vector space operations according to the rule

(\alpha A +\beta B)^* = \overline{\alpha}A^* + \overline{\beta}B^*.

Antimultiplicativity means that conjugation interacts with multiplication according to the rule

(AB)^*=B^*A^*.

Involutive means that conjugation is 2-periodic,

(A^*)^*=A.

Just like multiplication, conjugation in \mathcal{A} can be described with respect to a linear basis \{E_x \colon x \in X\}. Indeed, for each basis vector we can write its conjugate as

E_x^* = \sum\limits_{y \in X}\eta_{xy}E_y.

This gives a two-dimensional array which completely describes conjugation in \mathcal{A}, the conjugation tensor [\eta_{xy}] relative to the basis \{E_x \colon x \in X\}. Indeed, for any

A=\sum\limits_{x \in X} \alpha_x E_x,

we have

A^*=\sum\limits_{x,y \in X}\overline{\alpha}_x \eta_{xy}E_y.
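The conjugation tensor can be computed the same way as the multiplication tensor. In the following sketch (again assuming the 2\times 2 matrix algebra with matrix-unit basis, an illustrative choice not fixed by the text), \eta turns out to be a permutation matrix, since E_{ij}^* = E_{ji}:

```python
import numpy as np

# Conjugation tensor eta[x, y] recording E_x^* = sum_y eta[x, y] E_y,
# for the matrix-unit basis E_0 = E11, E_1 = E12, E_2 = E21, E_3 = E22.
n = 2
basis = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        basis.append(E)

d = len(basis)
M = np.stack([E.ravel() for E in basis], axis=1)

eta = np.zeros((d, d), dtype=complex)
for x in range(d):
    star = basis[x].conj().T.ravel()  # conjugate transpose of E_x
    eta[x, :] = np.linalg.solve(M, star)

# E_ij^* = E_ji, so eta swaps the indices of E_12 and E_21 and fixes
# the diagonal matrix units.
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
assert np.allclose(eta, swap)
```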

Problem 1.5. Prove that the set I(\mathcal{A}) of invertible elements in an algebra \mathcal{A} is a multiplicative group. Moreover, prove that I(\mathcal{A}) is closed under conjugation: A is invertible if and only if A^* is invertible, and in fact (A^*)^{-1} = (A^{-1})^*.

In any algebra \mathcal{A}, we define the following element classes:

  • Selfadjoint: X^*=X.
  • Idempotent: P^2=P.
  • Unitary: U^*U=UU^*=I.
  • Normal: A^*A=AA^*.

Problem 1.6. Prove that the set H(\mathcal{A}) of all selfadjoint elements in an algebra \mathcal{A} is an additive group, and in fact a real vector space. Show that every A \in \mathcal{A} can be written uniquely in the form A= X+iY with X,Y selfadjoint. We say that X is the real part of A, and that Y is its imaginary part.
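The decomposition asked for in Problem 1.6 has explicit formulas, X = (A+A^*)/2 and Y = (A-A^*)/(2i), which can be checked numerically. A sketch in the matrix algebra (an example not fixed by the text, with conjugation given by the conjugate transpose):

```python
import numpy as np

# Decompose A = X + iY with X, Y selfadjoint, via
#   X = (A + A^*)/2,   Y = (A - A^*)/(2i).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

X = (A + A.conj().T) / 2
Y = (A - A.conj().T) / (2j)

assert np.allclose(X, X.conj().T)   # X is selfadjoint
assert np.allclose(Y, Y.conj().T)   # Y is selfadjoint
assert np.allclose(A, X + 1j * Y)   # A = X + iY
```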

Definition 1.2. A nonzero selfadjoint idempotent P \in \mathcal{A} is called a projection. Projections P,Q \in \mathcal{A} are said to be orthogonal if PQ=0_\mathcal{A}.

Sets of pairwise orthogonal projections play an important role in the study of algebras.

Theorem 1.1. Any set of pairwise orthogonal projections in an algebra \mathcal{A} is linearly independent.

Proof: Let \{E_x \colon x \in X\} be a set of pairwise orthogonal projections in \mathcal{A} indexed by the elements of some set X. Thus each E_x \neq 0_\mathcal{A} and E_xE_y = \delta_{xy}E_x, where \delta_{xy} is the Kronecker delta. Let

A = \sum\limits_{x \in X} \alpha_x E_x

be a vector in the span of \{E_x \colon x \in X\}. Then, for any y \in X we have

AE_y = \sum\limits_{x \in X}\alpha_xE_xE_y =\alpha_yE_y.

Thus if A=0_\mathcal{A}, we must have \alpha_x = 0 for each x \in X.

-QED

According to Theorem 1.1, the maximum cardinality of a set of pairwise orthogonal projections in \mathcal{A} is \dim \mathcal{A}.

Definition 1.3. A basis of \mathcal{A} consisting of pairwise orthogonal projections is called a Fourier basis.

If \mathcal{A} admits a Fourier basis, it is a commutative algebra, and the corresponding conjugation and multiplication tensors are as simple as possible: \eta_{xy}=\delta_{xy} and \gamma_{xyz}=\delta_{xy}\delta_{yz}, the two- and three-dimensional analogues of the identity matrix. In this sense, algebras which admit a Fourier basis are the simplest algebras.

Theorem 1.2. Let \{E_x \colon x \in X\} be a Fourier basis of \mathcal{A}. Then,

I_\mathcal{A}= \sum\limits_{x \in X} E_x.

Proof: Take any A \in \mathcal{A} and let

A = \sum\limits_{x \in X} \alpha_x E_x

be its expansion in the given basis. Then, we have

\left(\sum_{x \in X} E_x\right)A = \sum\limits_{x,y \in X}\alpha_yE_xE_y = \sum\limits_{x \in X}\alpha_xE_x=A

and

A\left(\sum\limits_{y \in X} E_y\right) = \sum\limits_{x,y \in X} \alpha_xE_xE_y = \sum\limits_{x \in X} \alpha_x E_x = A.

By uniqueness of the multiplicative unit in \mathcal{A}, we conclude that \sum\limits_{x \in X} E_x = I_\mathcal{A}.

-QED
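Both the definition of a Fourier basis and Theorem 1.2 can be checked numerically in a simple commutative example. A sketch, assuming the algebra of n\times n diagonal matrices (pointwise multiplication in disguise), whose diagonal matrix units form a Fourier basis:

```python
import numpy as np

# The diagonal matrix units E_x = diag(0, ..., 1, ..., 0) are nonzero
# selfadjoint idempotents, and distinct ones multiply to zero.
n = 3
E = [np.diag([1.0 if k == x else 0.0 for k in range(n)]) for x in range(n)]

for x in range(n):
    assert np.allclose(E[x] @ E[x], E[x])       # idempotent
    assert np.allclose(E[x], E[x].conj().T)     # selfadjoint
    for y in range(n):
        if x != y:
            assert np.allclose(E[x] @ E[y], 0)  # pairwise orthogonal

# Consistent with Theorem 1.2, the Fourier basis sums to the unit:
assert np.allclose(sum(E), np.eye(n))
```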

Just as selfadjoint elements in \mathcal{A} are analogous to real numbers, unitary elements in \mathcal{A} are analogous to complex numbers of modulus one.

Problem 1.8. Prove that the set U(\mathcal{A}) of all unitary elements in \mathcal{A} is a subgroup of I(\mathcal{A}). We call U(\mathcal{A}) the unitary group of \mathcal{A}.
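The group properties asked for in this problem are easy to sanity-check numerically. A sketch in the matrix algebra (an illustrative example, not fixed by the text), where unitaries are the usual unitary matrices:

```python
import numpy as np

# Products and inverses of unitary matrices are unitary, consistent
# with U(A) being a subgroup of the invertible elements.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
V = np.diag([1.0, 1j])

for W in (U, V, U @ V, np.linalg.inv(U)):
    assert np.allclose(W.conj().T @ W, np.eye(2))
    assert np.allclose(W @ W.conj().T, np.eye(2))
```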

As for normal elements, these are in bijection with pairs of commuting selfadjoint elements.

Theorem 1.3. Given A \in \mathcal{A}, let A=X+iY be its decomposition into real and imaginary parts. Then A is normal if and only if X and Y commute.

Proof: Suppose first that X and Y are commuting selfadjoint elements. We will prove that A=X+iY is normal. We have

A^*A = (X+iY)^*(X+iY) = (X-iY)(X+iY) = XX +iXY-iYX+YY

and

AA^*= (X+iY)(X+iY)^* = (X+iY)(X-iY) = XX-iXY+iYX+YY,

so

A^*A-AA^*=i(XY-YX)-i(YX-XY)=0.

Now suppose that A=X+iY is a normal element. We have

XY = \frac{A+A^*}{2}\frac{A-A^*}{2i} = \frac{AA-AA^*+A^*A-A^*A^*}{4i} = \frac{AA-A^*A^*}{4i}

and

YX = \frac{A-A^*}{2i}\frac{A+A^*}{2}=\frac{AA+AA^*-A^*A-A^*A^*}{4i} = \frac{AA-A^*A^*}{4i}.

The two expressions agree: XY=YX.

-QED
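Theorem 1.3 can be illustrated numerically in the matrix algebra (an example not fixed by the text): building A = X+iY from commuting selfadjoint matrices yields a normal element, while breaking the commutativity breaks normality.

```python
import numpy as np

# Commuting selfadjoint parts give a normal element:
X = np.diag([1.0, 2.0, 3.0])
Y = np.diag([4.0, 5.0, 6.0])          # diagonal, so X and Y commute
A = X + 1j * Y
assert np.allclose(A.conj().T @ A, A @ A.conj().T)   # A is normal

# Perturb Y to remain selfadjoint but no longer commute with X:
Y2 = Y.copy()
Y2[0, 1] = Y2[1, 0] = 1.0
B = X + 1j * Y2
assert not np.allclose(X @ Y2, Y2 @ X)               # [X, Y2] != 0
assert not np.allclose(B.conj().T @ B, B @ B.conj().T)  # B is not normal
```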

Commutativity of real and imaginary parts characterizes normality at the level of elements. Normality itself characterizes commutativity at the level of algebras.

Theorem 1.4. An algebra is commutative if and only if all its elements are normal.

Proof: One direction is obvious: if \mathcal{A} is a commutative algebra, then certainly every element commutes with its conjugate.

Conversely, suppose that every element of \mathcal{A} is normal. Let X,Y \in \mathcal{A} be any two selfadjoint elements, and set A=X+iY. Then, since A is normal, we have

A^*A-AA^* =2i(XY-YX)=0,

which shows that XY=YX. Since X,Y were arbitrary selfadjoint elements of \mathcal{A}, we have shown that any two selfadjoint elements of \mathcal{A} commute. It remains to show that arbitrary A_1,A_2 \in \mathcal{A} commute even if they are not selfadjoint. To this end, write A_1=X_1+iY_1 and A_2=X_2+iY_2, where X_1,Y_1,X_2,Y_2 are selfadjoint and thus commute with one another. Then

A_1A_2=(X_1+iY_1)(X_2+iY_2) = (X_1X_2-Y_1Y_2)+i(X_1Y_2+Y_1X_2)

and

A_2A_1=(X_2+iY_2)(X_1+iY_1)=(X_2X_1-Y_2Y_1)+i(X_2Y_1+Y_2X_1)

are equal.

-QED

Now let us consider functions between possibly different algebras \mathcal{A} and \mathcal{B}.

Definition 1.4. A linear transformation \mathsf{T} \colon \mathcal{A} \to \mathcal{B} is said to be an algebra homomorphism if

\mathsf{T}(I_\mathcal{A}) = I_\mathcal{B}

and

\mathsf{T}(A_1A_2)=\mathsf{T}(A_1)\mathsf{T}(A_2), \quad \text{for all }A_1,A_2 \in \mathcal{A},

and moreover

\mathsf{T}(A^*)=\mathsf{T}(A)^*, \quad \text{for all }A \in \mathcal{A}.

We say that \mathcal{A} and \mathcal{B} are isomorphic if there is \mathsf{T} \colon \mathcal{A} \to \mathcal{B} which is both a vector space isomorphism and an algebra homomorphism; such a map is called an algebra isomorphism.

The word “isomorphic” means “same shape” in Greek. Two objects which have the same shape need not be the same in all ways, and similarly saying that two algebras are isomorphic should not be taken to mean that they are the same set. To emphasize this distinction, one writes \mathcal{A} \simeq \mathcal{B} to indicate that \mathcal{A} and \mathcal{B} are isomorphic algebras.

Problem 1.9. Prove that every one-dimensional algebra \mathcal{A} is isomorphic to the complex number system \mathbb{C}.

As stipulated above, all vector spaces (and hence all algebras) in Math 202B are defined over \mathbb{C}. You may wonder about algebras with real scalars, and as we now explain these can be naturally included in our framework. Let \mathcal{B} be a real algebra, i.e. a finite-dimensional vector space over \mathbb{R} together with an associative, bilinear, unital multiplication and a linear, antimultiplicative, involutive conjugation.

Definition 1.5. The complexification of \mathcal{B} is the algebra \mathcal{A} whose elements A are ordered pairs of elements X,Y \in \mathcal{B}. We write A=(X,Y) as A=X+iY and define algebraic operations in \mathcal{A} from those in \mathcal{B} as follows: for \alpha,\beta \in \mathbb{R} and X_1,X_2,Y_1,Y_2 \in \mathcal{B} we declare

(X_1+iY_1)+(X_2+iY_2) = (X_1+X_2) + i(Y_1+Y_2),

(\alpha + i\beta)(X+iY) = (\alpha X-\beta Y)+i(\beta X + \alpha Y),

(X_1+iY_1)(X_2+iY_2) = (X_1X_2-Y_1Y_2) + i(X_1Y_2+Y_1X_2),

(X+iY)^*=X^*-iY^*.
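A minimal sketch of Definition 1.5, taking the real algebra \mathcal{B} to be the n\times n real matrices with transpose as conjugation (an illustrative choice; the class and method names below are hypothetical). Only addition, multiplication, and conjugation are implemented; scalar multiplication follows the same pattern.

```python
import numpy as np

class Complexification:
    """Elements X + iY of the complexification, stored as pairs (X, Y)
    of real matrices."""

    def __init__(self, X, Y):
        self.X = np.asarray(X, dtype=float)
        self.Y = np.asarray(Y, dtype=float)

    def __add__(self, other):
        # (X1 + iY1) + (X2 + iY2) = (X1 + X2) + i(Y1 + Y2)
        return Complexification(self.X + other.X, self.Y + other.Y)

    def __mul__(self, other):
        # (X1 + iY1)(X2 + iY2) = (X1X2 - Y1Y2) + i(X1Y2 + Y1X2)
        return Complexification(self.X @ other.X - self.Y @ other.Y,
                                self.X @ other.Y + self.Y @ other.X)

    def star(self):
        # (X + iY)^* = X^* - iY^*, with ^* = transpose in this real algebra
        return Complexification(self.X.T, -self.Y.T)

# The construction reproduces ordinary complex matrix arithmetic:
A = Complexification([[1, 2], [3, 4]], [[0, 1], [1, 0]])
B = Complexification([[0, 1], [1, 0]], [[2, 0], [0, 2]])
C = A * B
expected = (A.X + 1j * A.Y) @ (B.X + 1j * B.Y)
assert np.allclose(C.X + 1j * C.Y, expected)

# Conjugation is antimultiplicative: (AB)^* = B^* A^*.
lhs, rhs = (A * B).star(), B.star() * A.star()
assert np.allclose(lhs.X, rhs.X) and np.allclose(lhs.Y, rhs.Y)
```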

Problem 1.10. Prove that Definition 1.5 does indeed define an algebra in the sense of Definition 1.1.

We say that an element of the complexification \mathcal{A} of a real algebra \mathcal{B} is real if it has the form A = X +i0_\mathcal{B} for some X \in \mathcal{B}.

Theorem 1.5. The complexification \mathcal{A} of a real algebra \mathcal{B} is commutative if every real element of \mathcal{A} is selfadjoint.

Proof: Let A_1=X_1+i0_\mathcal{B} and A_2=X_2+i0_\mathcal{B} be real elements of \mathcal{A}. Then, the product A_1A_2 = X_1X_2+i0_\mathcal{B} is also a real element of \mathcal{A}. By hypothesis, A_1,A_2, and A_1A_2 are selfadjoint elements of \mathcal{A}, and therefore

A_1A_2=(A_1A_2)^* =A_2^*A_1^*=A_2A_1.

Now, an arbitrary element A = X+iY of \mathcal{A} can be written as A=A_1+iA_2 with A_1=X+i0_\mathcal{B} and A_2=Y+i0_\mathcal{B} real elements. These are selfadjoint by hypothesis, so A_1 is the real part of A and A_2 is its imaginary part. Since A_1,A_2 commute by the above, A is normal, by Theorem 1.3. Thus every element of \mathcal{A} is normal, hence \mathcal{A} is commutative by Theorem 1.4.

-QED
