This course is the second quarter of Math 202, a three-quarter graduate course sequence in applied algebra at UCSD. Briefly, the 202 sequence is arranged as follows.
Math 202A (Fall): vectors and transformations.
Math 202B (Winter): algebras and representations.
Math 202C (Spring): tensors and invariants.
In Math 202B, the term “vector space” will always mean a finite-dimensional complex vector space.
Definition 1.1. An algebra is a vector space of positive dimension equipped with an associative, bilinear, unital multiplication and an antilinear, antimultiplicative, involutive conjugation.
The prototypical example of an algebra is $\mathbb{C}$, the complex number system, elements of which are called scalars and denoted by lower-case Greek letters, $\alpha, \beta, \gamma, \dots$. There are some exceptions: integers like zero and one are denoted $0$ and $1$ as usual, and we write $i$ for the imaginary unit. Elements of a general algebra $\mathcal{A}$ are denoted by upper-case Roman letters, $A, B, C, \dots$.
Multiplication in $\mathcal{A}$ is a function $\mathcal{A} \times \mathcal{A} \to \mathcal{A}$ whose values are denoted by concatenating its arguments: $(A, B) \mapsto AB$. Associativity means that the symbol $ABC$ is unambiguous because its two possible meanings coincide:

$$(AB)C = A(BC).$$

Bilinearity means that multiplication in $\mathcal{A}$ interacts with its vector space structure according to the rule

$$(\alpha_1 A_1 + \alpha_2 A_2)(\beta_1 B_1 + \beta_2 B_2) = \alpha_1\beta_1\, A_1B_1 + \alpha_1\beta_2\, A_1B_2 + \alpha_2\beta_1\, A_2B_1 + \alpha_2\beta_2\, A_2B_2.$$
We do not assume multiplication is commutative.
Problem 1.1. Let $0_{\mathcal{A}}$ denote the zero vector in $\mathcal{A}$. Prove that $0_{\mathcal{A}} A = A 0_{\mathcal{A}} = 0_{\mathcal{A}}$ for all $A \in \mathcal{A}$.
Later, when we are more familiar with algebras and there is less chance of confusion, we will sometimes omit the subscript and write $0$ for the zero vector in a general algebra $\mathcal{A}$, as it will generally be clear from context whether this symbol represents a scalar or a vector.
Unital means that there exists a vector $I \in \mathcal{A}$ such that $IA = AI = A$ for all $A \in \mathcal{A}$. Any such vector is called a multiplicative unit. Note that because the dimension of $\mathcal{A}$ is positive, any multiplicative unit $I$ is distinct from the additive unit $0_{\mathcal{A}}$: if $I = 0_{\mathcal{A}}$, then $A = IA = 0_{\mathcal{A}}$ for all $A \in \mathcal{A}$, forcing $\mathcal{A}$ to be zero-dimensional. In fact, there is only one multiplicative unit.
Problem 1.2. Let $I_1, I_2$ be multiplicative units in $\mathcal{A}$. Prove that $I_1 = I_2$.
Henceforth we write $I_{\mathcal{A}}$ for the unique multiplicative unit. Later on, we may omit the subscript and simply write $I$ for the multiplicative unit if it causes no confusion to do so. An element $A \in \mathcal{A}$ is said to be invertible if there exists $B \in \mathcal{A}$ such that $AB = BA = I_{\mathcal{A}}$.
Problem 1.3. Suppose $A, B_1, B_2 \in \mathcal{A}$ are such that $AB_1 = B_1A = I_{\mathcal{A}}$ and $AB_2 = B_2A = I_{\mathcal{A}}$. Prove that $B_1 = B_2$.
When $AB = BA = I_{\mathcal{A}}$, we say that $B$ is the inverse of $A$, and that $A$ is the inverse of $B$. This is written $B = A^{-1}$ and $A = B^{-1}$.
Multiplication in an algebra can be described numerically as follows. Let $E = \{E_x \colon x \in X\}$ be a vector space basis of $\mathcal{A}$ indexed by the points of some finite nonempty set $X$. Then, any $A, B \in \mathcal{A}$ can be represented as linear combinations

$$A = \sum_{x \in X} a_x E_x, \qquad B = \sum_{y \in X} b_y E_y.$$

According to bilinearity we have

$$AB = \sum_{x, y \in X} a_x b_y\, E_x E_y.$$

Each product of basis vectors can also be resolved into a linear combination of basis vectors,

$$E_x E_y = \sum_{z \in X} c_{xy}^z E_z.$$

As the indices range over $X$, we get a three-dimensional array $[c_{xy}^z]$ of complex numbers called the multiplication tensor of $\mathcal{A}$ relative to the basis $E$. The elements of this three-tensor are called the connection coefficients of $\mathcal{A}$ relative to the basis $E$. This set of $|X|^3$ numbers completely determines multiplication in $\mathcal{A}$, since

$$AB = \sum_{z \in X} \bigg( \sum_{x, y \in X} a_x b_y c_{xy}^z \bigg) E_z.$$

From a practical perspective, one would like to find a vector space basis of $\mathcal{A}$ such that the corresponding multiplication tensor is sparse, i.e. many connection coefficients are zero, so that the computational cost of performing multiplication is minimized; exploiting economical representations of the multiplication tensor in this way is the basic idea behind Strassen's algorithm for matrix multiplication.
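To make this concrete, here is a small numerical sketch (my own illustration, not part of the notes): it computes the connection coefficients of the algebra of $2 \times 2$ complex matrices relative to the basis of matrix units, then multiplies two elements purely through the tensor. The helper names (`coords`, `multiply_via_tensor`) are hypothetical.

```python
import numpy as np

n = 2
# Basis of matrix units E_{ij}: a 1 in position (i, j), zeros elsewhere.
basis = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        basis.append(E)
dim = len(basis)  # |X| = n^2

def coords(A):
    # Coordinates of A in the matrix-unit basis: just the flattened entries.
    return A.reshape(-1)

# Connection coefficients c[x, y, z], defined by E_x E_y = sum_z c[x, y, z] E_z.
c = np.zeros((dim, dim, dim), dtype=complex)
for x in range(dim):
    for y in range(dim):
        c[x, y] = coords(basis[x] @ basis[y])

def multiply_via_tensor(a, b):
    # (AB)_z = sum_{x, y} a_x b_y c[x, y, z]
    return np.einsum('x,y,xyz->z', a, b, c)

A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
B = np.random.randn(n, n) + 1j * np.random.randn(n, n)
assert np.allclose(multiply_via_tensor(coords(A), coords(B)), coords(A @ B))
```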
Problem 1.4. Prove that a two-dimensional algebra must be commutative.
Conjugation is a function $\mathcal{A} \to \mathcal{A}$ whose values are denoted by a superscript asterisk: $A \mapsto A^*$. Antilinearity means that conjugation interacts with the vector space operations according to the rule

$$(\alpha A + \beta B)^* = \overline{\alpha}\, A^* + \overline{\beta}\, B^*.$$

Antimultiplicativity means that conjugation interacts with multiplication according to the rule

$$(AB)^* = B^* A^*.$$

Involutive means that conjugation is 2-periodic,

$$(A^*)^* = A.$$
Just like multiplication, conjugation in $\mathcal{A}$ can be described with respect to a linear basis $E = \{E_x \colon x \in X\}$. Indeed, for each basis vector we can write its conjugate as

$$E_x^* = \sum_{y \in X} d_x^y E_y.$$

This gives a two-dimensional array $[d_x^y]$ which completely describes conjugation in $\mathcal{A}$, the conjugation tensor relative to the basis $E$. For any $A = \sum_{x \in X} a_x E_x$, we then have

$$A^* = \sum_{y \in X} \bigg( \sum_{x \in X} \overline{a_x}\, d_x^y \bigg) E_y.$$
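Continuing the matrix sketch above (again my own illustration): conjugation in the matrix algebra is the conjugate transpose, and its conjugation tensor relative to the matrix-unit basis is the permutation array exchanging $E_{ij}$ and $E_{ji}$.

```python
# Conjugation tensor d[x, y], defined by E_x^* = sum_y d[x, y] E_y.
d = np.zeros((dim, dim), dtype=complex)
for x in range(dim):
    d[x] = coords(basis[x].conj().T)

def conjugate_via_tensor(a):
    # (A^*)_y = sum_x conj(a_x) d[x, y]
    return np.einsum('x,xy->y', a.conj(), d)

assert np.allclose(conjugate_via_tensor(coords(A)), coords(A.conj().T))
```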
Problem 1.5. Prove that the set $G(\mathcal{A})$ of invertible elements in an algebra $\mathcal{A}$ is a multiplicative group. Moreover, prove that $G(\mathcal{A})$ is closed under conjugation: $A$ is invertible if and only if $A^*$ is invertible, and in fact $(A^*)^{-1} = (A^{-1})^*$.
In any algebra $\mathcal{A}$, we define the following element classes:
- Selfadjoint: $A^* = A$.
- Idempotent: $A^2 = A$.
- Unitary: $A^*A = AA^* = I_{\mathcal{A}}$.
- Normal: $A^*A = AA^*$.
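For concreteness, one might encode these classes for the matrix algebra as simple numpy predicates (a hedged sketch; the function names are mine, and $^*$ is taken to be the conjugate transpose):

```python
import numpy as np

def star(A):
    return A.conj().T  # conjugation in the matrix algebra

def is_selfadjoint(A):
    return np.allclose(star(A), A)

def is_idempotent(A):
    return np.allclose(A @ A, A)

def is_unitary(A):
    I = np.eye(A.shape[0], dtype=complex)
    return np.allclose(star(A) @ A, I) and np.allclose(A @ star(A), I)

def is_normal(A):
    return np.allclose(star(A) @ A, A @ star(A))
```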
Problem 1.6. Prove that the set of all selfadjoint elements in an algebra $\mathcal{A}$ is an additive group, and in fact a real vector space. Show that every $A \in \mathcal{A}$ can be written uniquely in the form $A = X + iY$ with $X, Y$ selfadjoint. We say that $X$ is the real part of $A$, and that $Y$ is its imaginary part.
Definition 1.2. A nonzero selfadjoint idempotent is called a projection. Projections $P, Q$ are said to be orthogonal if $PQ = QP = 0_{\mathcal{A}}$.
Sets of pairwise orthogonal projections play an important role in the study of algebras.
Theorem 1.1. Any set of pairwise orthogonal projections in an algebra is linearly independent.
Proof: Let $\{P_x \colon x \in X\}$ be a set of pairwise orthogonal projections in $\mathcal{A}$, indexed by the elements of some set $X$. Thus the $P_x$ are such that

$$P_x P_y = \delta_{xy} P_x,$$

where $\delta_{xy}$ is the Kronecker delta. Let

$$A = \sum_{x \in X} a_x P_x$$

be a vector in the span of $\{P_x \colon x \in X\}$. Then, for any $y \in X$, we have

$$P_y A = \sum_{x \in X} a_x P_y P_x = a_y P_y.$$

Thus if $A = 0_{\mathcal{A}}$ we must have $a_y P_y = 0_{\mathcal{A}}$, and hence $a_y = 0$ because $P_y \neq 0_{\mathcal{A}}$, for each $y \in X$. QED
According to Theorem 1.1, the maximum cardinality of a set of pairwise orthogonal projections in $\mathcal{A}$ is $\dim \mathcal{A}$.
Definition 1.3. A basis of $\mathcal{A}$ consisting of pairwise orthogonal projections is called a Fourier basis.
If $\mathcal{A}$ admits a Fourier basis, it is a commutative algebra, and the corresponding conjugation and multiplication tensors are the two- and three-dimensional identity arrays: $d_x^y = \delta_{xy}$ and $c_{xy}^z = \delta_{xy}\delta_{xz}$. In this sense, algebras which admit a Fourier basis are the simplest algebras.
Theorem 1.2. Let $\{P_x \colon x \in X\}$ be a Fourier basis of $\mathcal{A}$. Then,

$$I_{\mathcal{A}} = \sum_{x \in X} P_x.$$

Proof: Take any $A \in \mathcal{A}$ and let $A = \sum_{y \in X} a_y P_y$ be its expansion in the given basis. Then, we have

$$\bigg( \sum_{x \in X} P_x \bigg) A = \sum_{x, y \in X} a_y P_x P_y = \sum_{y \in X} a_y P_y = A,$$

and similarly $A \big( \sum_{x \in X} P_x \big) = A$. By uniqueness of the multiplicative unit in $\mathcal{A}$, we conclude that $\sum_{x \in X} P_x = I_{\mathcal{A}}$. QED
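As a concrete check (my own example, not from the notes): in $\mathbb{C}^n$ with entrywise multiplication and conjugation, the standard basis vectors are pairwise orthogonal projections, so they form a Fourier basis, and their sum is the unit $(1, \dots, 1)$, as Theorem 1.2 predicts.

```python
import numpy as np

n = 4
P = [np.eye(n, dtype=complex)[x] for x in range(n)]  # standard basis of C^n

for x in range(n):
    assert np.allclose(P[x].conj(), P[x])            # selfadjoint: P_x^* = P_x
    for y in range(n):
        expected = P[x] if x == y else np.zeros(n)
        assert np.allclose(P[x] * P[y], expected)    # P_x P_y = delta_{xy} P_x

assert np.allclose(sum(P), np.ones(n))               # the sum is the unit of C^n
```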
Just as selfadjoint elements in $\mathcal{A}$ are analogous to real numbers, unitary elements in $\mathcal{A}$ are analogous to complex numbers of modulus one.
Problem 1.8. Prove that the set $U(\mathcal{A})$ of all unitary elements in $\mathcal{A}$ is a subgroup of $G(\mathcal{A})$. We call $U(\mathcal{A})$ the unitary group of $\mathcal{A}$.
As for normal elements, these are in bijection with pairs of commuting selfadjoint elements.
Theorem 1.3. Given $A \in \mathcal{A}$, let $A = X + iY$ be its decomposition into real and imaginary parts. Then $A$ is normal if and only if $X$ and $Y$ commute.
Proof: Suppose first that $X$ and $Y$ are commuting selfadjoint elements. We will prove that $A = X + iY$ is normal. We have

$$AA^* = (X + iY)(X - iY) = X^2 + Y^2 + i(YX - XY)$$

and

$$A^*A = (X - iY)(X + iY) = X^2 + Y^2 + i(XY - YX),$$

so $XY = YX$ forces $AA^* = A^*A$.

Now suppose that $A$ is a normal element. Then the two expressions above agree:

$$X^2 + Y^2 + i(YX - XY) = X^2 + Y^2 + i(XY - YX),$$

which simplifies to $2\,YX = 2\,XY$, i.e. $XY = YX$. QED
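A quick numerical sanity check of Theorem 1.3 (my own illustration, in the matrix algebra): split a matrix into its real and imaginary parts and compare "normal" against "parts commute".

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
X = (A + A.conj().T) / 2     # real part: selfadjoint, with A = X + iY
Y = (A - A.conj().T) / (2j)  # imaginary part: also selfadjoint

normal = np.allclose(A @ A.conj().T, A.conj().T @ A)
parts_commute = np.allclose(X @ Y, Y @ X)
assert normal == parts_commute  # both False for generic A; both True if, say, A = A^*
```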
Commutativity of real and imaginary parts characterizes normalcy at the level of elements. Normalcy itself characterizes commutativity at the level of algebras.
Theorem 1.4. An algebra is commutative if and only if all its elements are normal.
Proof: One direction is obvious: if $\mathcal{A}$ is a commutative algebra, then certainly every element commutes with its conjugate.

Conversely, suppose that every element of $\mathcal{A}$ is normal. Let $X, Y \in \mathcal{A}$ be any two selfadjoint elements, and set $A = X + iY$. Then, since $A$ is normal, we have $AA^* = A^*A$, which shows that $XY = YX$ by Theorem 1.3. Since $X, Y$ were arbitrary selfadjoint elements of $\mathcal{A}$, we have shown that any two selfadjoint elements of $\mathcal{A}$ commute. It remains to show that $A, B \in \mathcal{A}$ commute even if they are not selfadjoint. We can write $A = X_1 + iY_1$ and $B = X_2 + iY_2$, where $X_1, Y_1, X_2, Y_2$ are selfadjoint and thus commute with one another. Thus,

$$AB = (X_1 + iY_1)(X_2 + iY_2) = X_1X_2 - Y_1Y_2 + i(X_1Y_2 + Y_1X_2)$$

and

$$BA = (X_2 + iY_2)(X_1 + iY_1) = X_2X_1 - Y_2Y_1 + i(X_2Y_1 + Y_2X_1)$$

are equal. QED
Now let us consider functions between possibly different algebras $\mathcal{A}$ and $\mathcal{B}$.
Definition 1.4. A linear transformation $\varphi \colon \mathcal{A} \to \mathcal{B}$ is said to be an algebra homomorphism if

$$\varphi(I_{\mathcal{A}}) = I_{\mathcal{B}}$$

and

$$\varphi(A_1 A_2) = \varphi(A_1)\varphi(A_2) \quad \text{for all } A_1, A_2 \in \mathcal{A},$$

and moreover

$$\varphi(A^*) = \varphi(A)^* \quad \text{for all } A \in \mathcal{A}.$$

We say that $\mathcal{A}$ and $\mathcal{B}$ are isomorphic if there is $\varphi \colon \mathcal{A} \to \mathcal{B}$ which is both a vector space isomorphism and an algebra homomorphism; such a map is called an algebra isomorphism.
The word “isomorphic” means “same shape” in Greek. Two objects which have the same shape need not be the same in all ways, and similarly saying that two algebras are isomorphic should not be taken to mean that they are the same set. To emphasize this distinction, one writes $\mathcal{A} \cong \mathcal{B}$ to indicate that $\mathcal{A}$ and $\mathcal{B}$ are isomorphic algebras.
Problem 1.9. Prove that every one-dimensional algebra is isomorphic to the complex number system $\mathbb{C}$.
As stipulated above, all vector spaces (and hence all algebras) in Math 202B are defined over $\mathbb{C}$. You may wonder about algebras with real scalars, and as we now explain these can be naturally included in our framework. Let $\mathcal{A}$ be a real algebra, i.e. a finite-dimensional vector space over $\mathbb{R}$ together with an associative, bilinear, unital multiplication and a linear, antimultiplicative, involutive conjugation.
Definition 1.5. The complexification of $\mathcal{A}$ is the algebra $\mathcal{A}_{\mathbb{C}}$ whose elements are ordered pairs $(A, B)$ of elements $A, B \in \mathcal{A}$. We write $(A, B)$ as $A + iB$ and define algebraic operations in $\mathcal{A}_{\mathbb{C}}$ from those in $\mathcal{A}$ as follows: for $A_1 + iB_1, A_2 + iB_2 \in \mathcal{A}_{\mathbb{C}}$ and $\alpha + i\beta \in \mathbb{C}$, we declare

$$(A_1 + iB_1) + (A_2 + iB_2) = (A_1 + A_2) + i(B_1 + B_2),$$
$$(\alpha + i\beta)(A_1 + iB_1) = (\alpha A_1 - \beta B_1) + i(\beta A_1 + \alpha B_1),$$
$$(A_1 + iB_1)(A_2 + iB_2) = (A_1A_2 - B_1B_2) + i(A_1B_2 + B_1A_2),$$
$$(A_1 + iB_1)^* = A_1^* - iB_1^*.$$
Problem 1.10. Prove that Definition 1.5 does indeed define an algebra in the sense of Definition 1.1.
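Here is a hedged sketch of Definition 1.5 in code (the class and method names are my own): the complexification of a real algebra, instantiated with real $n \times n$ matrices, where conjugation is the transpose. The sanity check at the end verifies that the declared multiplication matches ordinary complex matrix multiplication under $A + iB \mapsto A + iB$.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Complexified:
    A: np.ndarray  # element of the real algebra ("real part")
    B: np.ndarray  # element of the real algebra ("imaginary part")

    def __add__(self, other):
        return Complexified(self.A + other.A, self.B + other.B)

    def __mul__(self, other):
        # (A1 + iB1)(A2 + iB2) = (A1 A2 - B1 B2) + i(A1 B2 + B1 A2)
        return Complexified(self.A @ other.A - self.B @ other.B,
                            self.A @ other.B + self.B @ other.A)

    def star(self):
        # (A + iB)^* = A^* - iB^*, with ^* = transpose in this real algebra
        return Complexified(self.A.T, -self.B.T)

A1, B1, A2, B2 = (np.random.randn(3, 3) for _ in range(4))
P = Complexified(A1, B1) * Complexified(A2, B2)
assert np.allclose(P.A + 1j * P.B, (A1 + 1j * B1) @ (A2 + 1j * B2))
```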
We say that an element of the complexification of a real algebra $\mathcal{A}$ is real if it has the form $A + i0_{\mathcal{A}}$ for some $A \in \mathcal{A}$.
Theorem 1.5. The complexification of a real algebra $\mathcal{A}$ is commutative if every real element of $\mathcal{A}_{\mathbb{C}}$ is selfadjoint.
Proof: Let $A_1 + i0_{\mathcal{A}}$ and $A_2 + i0_{\mathcal{A}}$ be real elements of $\mathcal{A}_{\mathbb{C}}$. Then, the product

$$(A_1 + i0_{\mathcal{A}})(A_2 + i0_{\mathcal{A}}) = A_1A_2 + i0_{\mathcal{A}}$$

is also a real element of $\mathcal{A}_{\mathbb{C}}$. By hypothesis, these real elements are selfadjoint elements of $\mathcal{A}_{\mathbb{C}}$, and therefore

$$A_1A_2 + i0_{\mathcal{A}} = \big((A_1 + i0_{\mathcal{A}})(A_2 + i0_{\mathcal{A}})\big)^* = (A_2 + i0_{\mathcal{A}})^*(A_1 + i0_{\mathcal{A}})^* = A_2A_1 + i0_{\mathcal{A}},$$

so any two real elements of $\mathcal{A}_{\mathbb{C}}$ commute. Now writing $C = A + iB \in \mathcal{A}_{\mathbb{C}}$, we have that the real part of $C$ is $A + i0_{\mathcal{A}}$ and its imaginary part is $B + i0_{\mathcal{A}}$, both of which are selfadjoint by hypothesis. Since these real elements commute, $C$ is normal, by Theorem 1.3. Thus every element of $\mathcal{A}_{\mathbb{C}}$ is normal, hence $\mathcal{A}_{\mathbb{C}}$ is commutative by Theorem 1.4. QED