Math 202B: Lecture 1

Definition 1.1. An algebra is a complex vector space \mathcal{A} of positive dimension equipped with an associative, bilinear, unital multiplication, and an antilinear, antimultiplicative, involutive conjugation.

Let us unpack this definition.

Vector space structure. First of all, \mathcal{A} is a complex vector space. We will denote vectors in this space by uppercase Roman letters

A,B,C,\dots

and use lowercase Greek letters

\alpha,\beta,\gamma,\dots

for scalars in \mathbb{C}.

Multiplication. Multiplication is a map

\mathcal{A}\times\mathcal{A}\to\mathcal{A}

whose values are denoted by concatenating its arguments,

(A,B)\mapsto AB.

Associativity means that the symbol ABC is unambiguous, because its two possible interpretations coincide:

(AB)C=A(BC).

We do not assume that multiplication is commutative: there may exist elements A,B\in\mathcal{A} such that AB\neq BA. When no such elements exist, we say that \mathcal{A} is a commutative algebra.
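As a concrete illustration of noncommutativity (not part of the formal development), one may take the algebra of 2\times 2 complex matrices with the usual matrix product; the sketch below represents matrices as nested Python lists.

```python
# Two 2x2 matrices illustrating AB != BA.
# M_2(C) is used here purely as an example algebra.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

AB = matmul(A, B)   # [[1, 0], [0, 0]]
BA = matmul(B, A)   # [[0, 0], [0, 1]]
print(AB != BA)     # True: multiplication is not commutative
```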

Bilinearity means that multiplication interacts with the vector space structure according to the rule

(\alpha_1A_1+\alpha_2A_2)(\beta_1B_1+\beta_2B_2)=\alpha_1\beta_1A_1B_1+\alpha_1\beta_2A_1B_2+\alpha_2\beta_1A_2B_1+\alpha_2\beta_2A_2B_2.

Problem 1.1 Let 0_\mathcal{A} denote the zero vector in \mathcal{A}. Prove that for all A\in\mathcal{A} we have

A0_\mathcal{A}=0_\mathcal{A}A=0_\mathcal{A}.

Unital means that there exists an element I\in\mathcal{A} such that IA=AI=A for all A\in\mathcal{A}. Any such element is called a multiplicative unit. Since \dim\mathcal{A}>0, the multiplicative unit is distinct from the additive unit 0_\mathcal{A}. Moreover, the multiplicative unit is unique.

Problem 1.2 Let I,J\in\mathcal{A} be multiplicative units. Prove that I=J.

Henceforth, we write I_\mathcal{A} for the unique multiplicative unit in \mathcal{A}. When no confusion is possible, we simply write I.

An element A\in\mathcal{A} is said to be invertible if there exists B\in\mathcal{A} such that AB=BA=I_\mathcal{A}. When this holds, we say that B is the inverse of A, and write B=A^{-1} and A=B^{-1}.
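For a concrete instance (again using 2\times 2 matrices as an illustrative example, which is my choice rather than the source's), the shear A=\begin{pmatrix}1&1\\0&1\end{pmatrix} is invertible, with inverse \begin{pmatrix}1&-1\\0&1\end{pmatrix}; the sketch below verifies both defining equations.

```python
# Verifying AB = BA = I for a concrete invertible element of M_2(C).
# This is an illustrative sketch, not part of the formal development.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]      # the multiplicative unit of M_2(C)
A = [[1, 1], [0, 1]]
Ainv = [[1, -1], [0, 1]]  # candidate inverse

print(matmul(A, Ainv) == I and matmul(Ainv, A) == I)  # True
```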

Problem 1.3 Suppose A,B,C\in\mathcal{A} satisfy AB=BA=I and AC=CA=I. Prove that B=C.

Multiplication in \mathcal{A} can be concrete and numerical. Let

\{E_x:x\in X\}

be a vector space basis of \mathcal{A} indexed by a nonempty set X. Any elements A,B\in\mathcal{A} can be written as linear combinations

A=\sum_{x\in X}\alpha_xE_x,\qquad B=\sum_{y\in X}\beta_yE_y

with all but finitely many coefficients equal to zero. By bilinearity,

AB=\sum_{x,y\in X}\alpha_x\beta_yE_xE_y.

Each product of basis vectors can be expanded as

E_xE_y=\sum_{z\in X}\gamma_{xyz}E_z

for uniquely determined scalars \gamma_{xyz}\in\mathbb{C}. Thus multiplication in \mathcal{A} is completely determined by the scalars

\gamma_{xyz}, \quad x,y,z\in X,

which are called the connection coefficients or structure constants of the basis \{E_x \colon x \in X\}. From a computational perspective, it is desirable to choose a basis for which many of these coefficients vanish. This idea underlies Strassen’s algorithm for matrix multiplication.
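For the algebra of 2\times 2 matrices with the matrix-unit basis E_{ij} (an example I am supplying; the source works with an abstract basis), the multiplication rule E_{ij}E_{kl}=\delta_{jk}E_{il} shows that most structure constants vanish. The sketch below tabulates the nonzero ones.

```python
# Structure constants of M_2(C) in the matrix-unit basis E_(i,j),
# offered as a concrete illustration; names are ad hoc.
# E_(i,j) E_(k,l) = E_(i,l) if j == k, and 0 otherwise.

X = [(i, j) for i in range(2) for j in range(2)]  # basis index set

gamma = {}
for (i, j) in X:
    for (k, l) in X:
        if j == k:
            # the only nonzero coefficients: gamma_{(i,j),(k,l),(i,l)} = 1
            gamma[((i, j), (k, l), (i, l))] = 1

print(len(gamma))  # 8 nonzero coefficients out of 4 * 4 * 4 = 64
```

The sparsity visible here (8 nonzero coefficients out of 64) is the kind of structure that fast multiplication algorithms exploit.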

An element P \in \mathcal{A} is said to be idempotent if P^2=P. This is equivalent to saying that the coefficients in the expansion

P=\sum\limits_{x \in X} \pi_x E_x

satisfy

\sum\limits_{x,y \in X} \pi_x\pi_y\gamma_{xyz}=\pi_z

for each z \in X.
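The coefficient condition above can be checked numerically. The sketch below (an illustration under my running assumption of M_2(\mathbb{C}) with the matrix-unit basis) verifies it for P=E_{(0,0)}, which is idempotent since E_{(0,0)}E_{(0,0)}=E_{(0,0)}.

```python
# Checking sum_{x,y} pi_x pi_y gamma_{xyz} = pi_z for each z, with
# P = E_(0,0) in M_2(C) and the matrix-unit basis (an ad hoc sketch).

X = [(i, j) for i in range(2) for j in range(2)]  # basis index set

def gamma(x, y, z):
    """Structure constants of M_2(C): E_(i,j) E_(k,l) = [j == k] E_(i,l)."""
    (i, j), (k, l) = x, y
    return 1 if j == k and z == (i, l) else 0

pi = {x: (1 if x == (0, 0) else 0) for x in X}  # coefficients of P = E_(0,0)

ok = all(sum(pi[x] * pi[y] * gamma(x, y, z) for x in X for y in X) == pi[z]
         for z in X)
print(ok)  # True: E_(0,0) is idempotent
```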

Problem 1.4 Prove that a two-dimensional algebra must be commutative.



Conjugation. Conjugation is a function \mathcal{A}\to\mathcal{A} denoted by A\mapsto A^*. Antilinearity means that conjugation and the vector space structure interact according to the rule

(\alpha A+\beta B)^*=\overline{\alpha}A^*+\overline{\beta}B^*.

Antimultiplicativity means that conjugation and multiplication interact according to the rule

(AB)^*=B^*A^*.

Involutive means that conjugation is two-periodic,

(A^*)^*=A.
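All three axioms are satisfied by the conjugate-transpose operation on matrices, the standard involution on M_2(\mathbb{C}) (again an example of my choosing). The sketch below checks antimultiplicativity for a particular pair of matrices.

```python
# Verifying (AB)* = B* A* for the conjugate-transpose involution
# on 2x2 complex matrices (an illustrative sketch).

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def star(A):
    """Conjugate transpose: the standard involution on M_2(C)."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

A = [[1 + 2j, 0], [3, 1j]]
B = [[0, 1j], [1, 2]]

print(star(matmul(A, B)) == matmul(star(B), star(A)))  # True
```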

Let I(\mathcal{A}) denote the set of invertible elements in \mathcal{A}.

Problem 1.5 Prove that:

  1. I(\mathcal{A}) is a group under multiplication.
  2. A is invertible if and only if A^* is invertible.
  3. (A^*)^{-1}=(A^{-1})^*.

Problem 1.6 Let \{E_x:x\in X\} be a vector space basis of \mathcal{A}. Prove that \{E_x^*:x\in X\} is also a vector space basis of \mathcal{A}.

Special classes of elements. There are three special classes of elements in \mathcal{A}.

An element X\in\mathcal{A} is said to be selfadjoint if X^*=X. The set S(\mathcal{A}) of selfadjoint elements in \mathcal{A} forms a real vector space.

An element U\in\mathcal{A} is said to be unitary if it is invertible and U^{-1}=U^*. The set of unitary elements is denoted U(\mathcal{A}) and is called the unitary group of \mathcal{A}.

Problem 1.7 Prove that U(\mathcal{A}) is a subgroup of I(\mathcal{A}).

An element A\in\mathcal{A} is said to be normal if it commutes with its conjugate: A^*A=AA^*.
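In M_2(\mathbb{C}) with the conjugate-transpose involution (my running example, not the source's), the rotation \begin{pmatrix}0&-1\\1&0\end{pmatrix} is normal while the nilpotent shift \begin{pmatrix}0&1\\0&0\end{pmatrix} is not, as the sketch below confirms.

```python
# Normal vs non-normal elements of M_2(C) under conjugate transpose
# (an illustrative sketch).

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def star(A):
    """Conjugate transpose: the standard involution on M_2(C)."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

U = [[0, -1], [1, 0]]  # a rotation: unitary, hence normal
N = [[0, 1], [0, 0]]   # the nilpotent shift: not normal

print(matmul(star(U), U) == matmul(U, star(U)))  # True
print(matmul(star(N), N) == matmul(N, star(N)))  # False
```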

Problem 1.8 Prove that every A\in\mathcal{A} can be uniquely expressed in the form A=X+iY, where X,Y\in S(\mathcal{A}). The selfadjoint elements X and Y are called the real and imaginary parts of A, respectively.

Problem 1.9 Prove that A\in\mathcal{A} is normal if and only if its real and imaginary parts commute.

Problem 1.10 Prove that \mathcal{A} is commutative if and only if all its elements are normal.
