Math 31AH: Lecture 26

For a long time, we have been skirting around the issue of whether or not it is possible to multiply vectors in a general vector space \mathbf{V}. We gave two answers to this question which are not really answers at all: we discussed ways to multiply vectors to get products which do not lie in the same vector space as their factors.

First, we showed that it is possible to multiply vectors in \mathbf{V} in such a way that the product of any two vectors \mathbf{v},\mathbf{w} is a number \langle \mathbf{v},\mathbf{w} \rangle. This sort of multiplication is what we termed a “bilinear form” on \mathbf{V}. The best bilinear forms are those which satisfy the scalar product axioms, because these allow us to talk about lengths of vectors and angles between vectors in \mathbf{V}. However, the bilinear form concept doesn’t answer the original question about multiplying vectors, because the product of \mathbf{v} and \mathbf{w} belongs to the vector space \mathbb{R}, which is probably not \mathbf{V}.

Second, we found that it is possible to multiply vectors in \mathbf{V} in such a way that the product of any two vectors \mathbf{v}_1,\mathbf{v}_2 is a tensor, namely the tensor \mathbf{v}_1 \otimes \mathbf{v}_2. This is useful because it ultimately led us to a related product, the wedge product \mathbf{v} \wedge \mathbf{w}, which allowed us to efficiently characterize linear independence and to introduce a notion of volume in \mathbf{V}. However, it again doesn’t answer the original question about multiplying vectors, because the product of \mathbf{v}_1 and \mathbf{v}_2 belongs to the vector space \mathbf{V} \otimes \mathbf{V}, which is definitely not \mathbf{V}.

Today, we will finally investigate the question of how to multiply two vectors to get a vector in the same space. We now have the tools to discuss this quite precisely.

Definition 1: Given a vector space \mathbf{V}, a multiplication in \mathbf{V} is a linear transformation

M \colon \mathbf{V} \otimes \mathbf{V} \to \mathbf{V}.

It is reasonable to refer to an arbitrary linear transformation M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) as a multiplication because every such M possesses the fundamental property of multiplication that we refer to as bilinearity: it satisfies the FOIL identity

M\left((x_1\mathbf{v}_1 + y_1\mathbf{w}_1) \otimes (x_2\mathbf{v}_2 + y_2\mathbf{w}_2)\right)\\ =x_1x_2M(\mathbf{v}_1 \otimes \mathbf{v}_2) + x_1y_2M(\mathbf{v}_1\otimes \mathbf{w}_2) + y_1x_2M(\mathbf{w}_1 \otimes \mathbf{v}_2) + y_1y_2M(\mathbf{w}_1 \otimes \mathbf{w}_2).

Indeed, this is true precisely because \mathbf{V} \otimes \mathbf{V} was constructed as the vector space of all “unevaluated” products of vectors multiplied according to an unspecified bilinear multiplication \otimes, and the linear transformation M performs the missing evaluation.

We now see that there are many ways to multiply vectors — too many. Indeed, suppose \mathbf{V} is an n-dimensional vector space, and let E=\{\mathbf{e}_1,\dots,\mathbf{e}_n\} be a basis in \mathbf{V}. Then, a basis for \mathbf{V} \otimes \mathbf{V} is given by E \otimes E = \{\mathbf{e}_i \otimes \mathbf{e}_j \colon 1 \leq i,j \leq n\}, and hence every multiplication M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) uniquely corresponds to an n \times n^2 table of numbers, namely the matrix [M]_{E \otimes E,E}. But not all of these make for interesting multiplication rules. For example, we could choose M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) to be the zero transformation, which sends every tensor in \mathbf{V} \otimes \mathbf{V} to the zero vector \mathbf{0}_\mathbf{V} in \mathbf{V}. This is a rule for multiplying vectors in \mathbf{V}, but it is accurately described as “trivial.” We would like to find nontrivial multiplication rules which mimic our experience multiplying real numbers.
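Concretely, such a table of numbers can be stored as an array of "structure constants" and applied to coordinate vectors using bilinearity. The following is a minimal sketch of this idea in Python; the names and the array layout are illustrative choices, not notation from the lecture.

```python
import numpy as np

# A multiplication M : V (x) V -> V is determined by structure constants
# c[i, j, k], where M(e_i (x) e_j) = sum over k of c[i, j, k] e_k.
def multiply(c, v, w):
    """Apply the bilinear multiplication encoded by c to the coordinate
    vectors v and w, expanding by bilinearity (the FOIL identity)."""
    return np.einsum("i,j,ijk->k", v, w, c)

# Example: the "trivial" multiplication sends every tensor to the zero vector.
n = 3
c_trivial = np.zeros((n, n, n))
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(multiply(c_trivial, v, w))  # the zero vector in V
```

Any choice of the n^3 numbers c[i, j, k] gives a valid (if usually uninteresting) multiplication; the rest of the lecture is about choosing them well.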

Definition 2: A normed division algebra is a pair (\mathbf{V},M) consisting of a Euclidean space \mathbf{V} together with a multiplication M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) which has the following properties:

  1. There is a vector \mathbf{1} \in \mathbf{V} such that M(\mathbf{1} \otimes \mathbf{v})= M(\mathbf{v} \otimes \mathbf{1}) = \mathbf{v} for all \mathbf{v} \in \mathbf{V}.
  2. For every \mathbf{v} \in \mathbf{V}\backslash \{\mathbf{0}\}, there is a corresponding \mathbf{w} \in \mathbf{V} such that M(\mathbf{v} \otimes \mathbf{w})=M(\mathbf{w} \otimes \mathbf{v}) = \mathbf{1}.
  3. For every \mathbf{v}_1,\mathbf{v}_2 \in \mathbf{V}, we have \|M(\mathbf{v}_1 \otimes \mathbf{v}_2)\| = \|\mathbf{v}_1\| \|\mathbf{v}_2\|.

The axioms above are very natural, and reflect familiar properties of the real number system. The first stipulates that a normed division algebra should contain a multiplicative unit \mathbf{1} analogous to the real number 1 in the sense that multiplication by it does nothing. The second says that any nonzero element in our algebra should have a multiplicative inverse: multiplying an element by its inverse produces the unit element \mathbf{1}. The third says that our algebra has a norm analogous to the absolute value of a real number, in that the norm of a product of two vectors is the product of their norms.

Example 1: Let \mathbf{V} be a one-dimensional Euclidean space with orthonormal basis E=\{\mathbf{1}\}. Let M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) be the linear transformation uniquely determined by M(\mathbf{1} \otimes \mathbf{1})=\mathbf{1}. Then (\mathbf{V},M) is a normed division algebra (very easy exercise: check that the axioms are satisfied).

Further examining Example 1, we see that the multiplication of arbitrary vectors \mathbf{v}_1,\mathbf{v}_2 \in \mathbf{V} is given by

M(\mathbf{v}_1 \otimes \mathbf{v}_2) = M\left( (\langle \mathbf{1},\mathbf{v}_1\rangle \mathbf{1}) \otimes (\langle \mathbf{1},\mathbf{v}_2\rangle \mathbf{1})\right) = \langle \mathbf{1},\mathbf{v}_1\rangle\langle \mathbf{1},\mathbf{v}_2\rangle M(\mathbf{1} \otimes \mathbf{1}) = \langle \mathbf{1},\mathbf{v}_1\rangle\langle \mathbf{1},\mathbf{v}_2\rangle\mathbf{1}.

So, to multiply two vectors in \mathbf{V}, we simply multiply their coordinates relative to the basis E=\{\mathbf{1}\} using multiplication of real numbers. Thus \mathbf{V} is essentially the same as \mathbb{R}, with the unit vector \mathbf{1} playing the role of the number 1. More precisely, the linear transformation T \colon \mathbf{V} \to \mathbb{R} uniquely determined by T(\mathbf{1})=1 is a vector space isomorphism which respects multiplication, i.e. an algebra isomorphism. In fact, thinking a bit more about this example, we find that every one-dimensional normed division algebra is isomorphic to \mathbb{R}.

Now we construct something new: a two-dimensional normed division algebra. Let \mathbf{V} be a 2-dimensional Euclidean space with orthonormal basis E=\{\mathbf{1},\mathbf{i}\}. Let M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) be the linear transformation defined by

M(\mathbf{1} \otimes \mathbf{1}) = \mathbf{1}, \quad M(\mathbf{1} \otimes \mathbf{i}) = \mathbf{i}, \quad M(\mathbf{i} \otimes \mathbf{1}) = \mathbf{i}, \quad M(\mathbf{i} \otimes \mathbf{i}) = -\mathbf{1}.

Thus for any two vectors \mathbf{v}_1 = x_1\mathbf{1} + y_1\mathbf{i} and \mathbf{v}_2 = x_2\mathbf{1} + y_2\mathbf{i} we have

M(\mathbf{v}_1 \otimes \mathbf{v}_2) \\ = M\left((x_1\mathbf{1} + y_1\mathbf{i}) \otimes (x_2\mathbf{1} + y_2\mathbf{i}) \right) \\ =x_1x_2 M(\mathbf{1} \otimes \mathbf{1}) + x_1y_2 M(\mathbf{1} \otimes \mathbf{i}) + x_2y_1 M(\mathbf{i} \otimes \mathbf{1}) + y_1y_2 M(\mathbf{i} \otimes \mathbf{i}) \\ = x_1x_2\mathbf{1} + x_1y_2 \mathbf{i} + x_2y_1 \mathbf{i} - y_1y_2\mathbf{1} \\ = (x_1x_2- y_1y_2)\mathbf{1} + (x_1y_2+x_2y_1)\mathbf{i}.

One nice aspect of M that is clear from the above computation is that M(\mathbf{v}_1 \otimes \mathbf{v}_2) = M(\mathbf{v}_2 \otimes \mathbf{v}_1), meaning that M defines a commutative multiplication (this is an extra property not required by the normed division algebra axioms).
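The computation above can be carried out numerically by encoding the four defining products as structure constants. This is a sketch, with illustrative names; the basis order (\mathbf{1}, \mathbf{i}) is an assumption of the encoding.

```python
import numpy as np

# Structure constants for the 2-dimensional algebra above, basis order (1, i):
# M(1 (x) 1) = 1, M(1 (x) i) = i, M(i (x) 1) = i, M(i (x) i) = -1.
c = np.zeros((2, 2, 2))
c[0, 0, 0] = 1.0   # 1 * 1 = 1
c[0, 1, 1] = 1.0   # 1 * i = i
c[1, 0, 1] = 1.0   # i * 1 = i
c[1, 1, 0] = -1.0  # i * i = -1

def multiply(v, w):
    # Expands to (x1*x2 - y1*y2, x1*y2 + x2*y1), matching the computation above.
    return np.einsum("i,j,ijk->k", v, w, c)

v1 = np.array([1.0, 2.0])   # the vector 1 + 2i
v2 = np.array([3.0, -1.0])  # the vector 3 - i
print(multiply(v1, v2))     # [5. 5.], i.e. 5 + 5i
print(multiply(v2, v1))     # the same vector: the multiplication is commutative
```

Swapping the two arguments returns the same product, which is the commutativity observed above.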

Theorem 1: The algebra (\mathbf{V},M) constructed above is a normed division algebra.

Proof: We have to check the axioms. First, for any vector \mathbf{v}=x\mathbf{1}+y\mathbf{i}, we directly compute that

M(\mathbf{1} \otimes \mathbf{v}) = \mathbf{v} = M(\mathbf{v} \otimes \mathbf{1}),

so that \mathbf{1} is a multiplicative identity. Second, we have to show that \mathbf{v} has a multiplicative inverse, provided \mathbf{v} \neq \mathbf{0}. Let \mathbf{v}^*= x\mathbf{1}-y\mathbf{i}. We then have

M(\mathbf{v} \otimes \mathbf{v}^*) = (x^2+y^2)\mathbf{1} = \|\mathbf{v}\|^2\mathbf{1}.

Now \|\mathbf{v}\| \neq 0 since \mathbf{v} \neq \mathbf{0}, and hence we have that

M(\mathbf{v} \otimes \frac{1}{\|\mathbf{v}\|^2}\mathbf{v}^*) = \mathbf{1},

which shows that \mathbf{v} has the multiplicative inverse \frac{1}{\|\mathbf{v}\|^2}\mathbf{v}^*. Third and finally, we have

\|M(\mathbf{v}_1 \otimes \mathbf{v}_2)\|^2 \\ = \|(x_1x_2- y_1y_2)\mathbf{1} + (x_1y_2+x_2y_1)\mathbf{i}\|^2\\ = (x_1x_2- y_1y_2)^2 + (x_1y_2+x_2y_1)^2 \\ = x_1^2x_2^2 - 2x_1x_2y_1y_2 + y_1^2y_2^2 + x_1^2y_2^2 + 2x_1x_2y_1y_2 + x_2^2y_1^2 \\ = (x_1^2+y_1^2)(x_2^2+y_2^2) \\= \|\mathbf{v}_1\|^2\|\mathbf{v}_2\|^2,

and hence, taking square roots,
\|M(\mathbf{v}_1 \otimes \mathbf{v}_2)\|= \|\mathbf{v}_1\|\|\mathbf{v}_2\|.

— Q.E.D.
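The three axioms verified in the proof can also be spot-checked numerically. Here is a minimal sketch in Python, writing vectors as coordinate pairs (x, y) relative to the basis (\mathbf{1}, \mathbf{i}); the function names are illustrative.

```python
import numpy as np

def mult(v, w):
    """The product (x1*x2 - y1*y2, x1*y2 + x2*y1) computed above."""
    (x1, y1), (x2, y2) = v, w
    return np.array([x1*x2 - y1*y2, x1*y2 + x2*y1])

one = np.array([1.0, 0.0])
v = np.array([3.0, 4.0])  # the vector 3 + 4i, with norm 5

# Axiom 1: the vector 1 is a two-sided multiplicative unit.
assert np.allclose(mult(one, v), v) and np.allclose(mult(v, one), v)

# Axiom 2: v* / ||v||^2 is a multiplicative inverse of v.
v_inv = np.array([v[0], -v[1]]) / np.linalg.norm(v)**2
assert np.allclose(mult(v, v_inv), one)

# Axiom 3: the norm is multiplicative.
w = np.array([1.0, -2.0])
assert np.isclose(np.linalg.norm(mult(v, w)),
                  np.linalg.norm(v) * np.linalg.norm(w))
print("all three axioms check out for these vectors")
```

Of course, a numeric check on sample vectors is not a proof; the algebra above is.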

You have probably recognized by now that the above construction has produced the algebra of complex numbers (it is fine if you were not previously familiar with this term). Indeed, taking our Euclidean space \mathbf{V} to be \mathbf{V}=\mathbb{R}^2 with orthonormal basis \mathbf{1}=(1,0) and \mathbf{i}=(0,1) gives a simple visualization of this algebra as a rule for multiplying vectors in the Euclidean plane. The complex number system contains and enlarges the real number system, in the sense that \mathbb{R}^2 contains the 1-dimensional subspace

\mathrm{Span}\{\mathbf{1}\} = \{(x,0) \colon x \in \mathbb{R}\},

which is isomorphic to \mathbb{R}. In this context one usually uses the symbol \mathbb{C} instead of \mathbb{R}^2 to indicate that we are considering \mathbb{R}^2 to be not just a vector space, but a normed division algebra with the multiplication described above.

It makes a lot of sense to recalibrate your understanding of the word “number” so that it means “element of \mathbb{C}.” Indeed, complex numbers behave just like ordinary real numbers in all the ways that matter: you can add, subtract, multiply, and divide complex numbers in just the way you do real numbers. In order to psychologically prime ourselves for thinking of complex numbers as numbers rather than vectors, we follow the usual notational tradition of un-bolding them. So we just write z \in \mathbb{C} to indicate that z is a complex number, and we write z=x+yi where x,y are ordinary real numbers and i is the “imaginary” unit. Technically, all these symbols mean exactly what they meant above, they’ve just been un-bolded. So, the product of two complex numbers z_1=x_1+y_1i and z_2=x_2+y_2i is

z_1z_2 = (x_1x_2-y_1y_2) + (x_1y_2+x_2y_1)i.

It’s also customary to denote the norm of a complex number using just single lines, and to call it the “absolute value:”

|z| = |x+yi| = \sqrt{x^2+y^2}.

Once we enlarge our understanding of numbers from real to complex, it becomes natural to modify our concept of vector space accordingly. Namely, a complex vector space \mathbf{V} is a set together with two operations, vector addition and scalar multiplication, which satisfy exactly the same axioms as Definition 1 in Lecture 1, except with \mathbb{C} replacing \mathbb{R}. We will discuss further consequences of the passage from real vector spaces to complex vector spaces in the next lecture.

Before finishing this lecture, let us briefly consider a natural question which, historically, was one of the main motivating questions in the development of algebra: what other normed division algebras might exist? This question was first considered in detail by the Irish mathematician William Rowan Hamilton in the 1800s. In modern terms, Hamilton’s goal was the following: given a 3-dimensional Euclidean space \mathbf{V}, he wanted to find a multiplication rule M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) which would turn \mathbf{V} into a normed division algebra. The three-dimensional case is of clear interest due to the three physical dimensions of our world; Hamilton was looking for what he called “spatial numbers.” Unfortunately, he wasn’t able to find what he was looking for, because it doesn’t exist. After a long period of trying without results, in 1843 he suddenly realized that his desired construction could be performed in four dimensions, which led him to a new normed division algebra which he called the quaternions.

To construct the quaternions, let \mathbf{V} be a 4-dimensional Euclidean space with orthonormal basis E=\{\mathbf{1},\mathbf{i},\mathbf{j},\mathbf{k}\}, and let M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) be the multiplication defined by the table

Hamilton’s multiplication table:

\begin{array}{c|cccc} M & \mathbf{1} & \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \hline \mathbf{1} & \mathbf{1} & \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \mathbf{i} & \mathbf{i} & -\mathbf{1} & \mathbf{k} & -\mathbf{j} \\ \mathbf{j} & \mathbf{j} & -\mathbf{k} & -\mathbf{1} & \mathbf{i} \\ \mathbf{k} & \mathbf{k} & \mathbf{j} & -\mathbf{i} & -\mathbf{1} \end{array}

In this table, the first row and column contain the basis vectors, and each internal cell contains the result of applying M to the tensor product of the corresponding row and column basis vectors. This turns out to give a normed division algebra; however, as you can see from the above table, this algebra is noncommutative. It is denoted \mathbb{H}, in Hamilton’s honor (and also because the symbol \mathbb{Q} is already taken).
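Expanding the table by bilinearity gives an explicit formula for the product of two quaternions, which can be sketched in Python as follows. The coordinate order (1, i, j, k) and the function name are illustrative choices.

```python
import numpy as np

def qmult(p, q):
    """Multiply quaternions p = a + bi + cj + dk, stored as arrays (a, b, c, d),
    using Hamilton's rules i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

print(qmult(i, j))  # the vector k
print(qmult(j, i))  # the vector -k: the multiplication is noncommutative

# The norm is still multiplicative, as the axioms require.
p, q = np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, -1.0, 2.0, 0.0])
assert np.isclose(np.linalg.norm(qmult(p, q)),
                  np.linalg.norm(p) * np.linalg.norm(q))
```

The pair of print statements exhibits the noncommutativity directly: ij = k while ji = -k.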

It turns out that, in addition to \mathbb{R},\mathbb{C},\mathbb{H}, there is only one more normed division algebra. This algebra is called the octonions, because it consists of a multiplication rule for eight dimensional vectors; it is traditionally denoted \mathbb{O}. It was proved by Adolf Hurwitz that these four constitute the complete list of normed division algebras.

Every time we move up the list of normed division algebras, we lose something. In passing from \mathbb{R} to \mathbb{C}, we lose the fact that the real numbers are ordered: for any two distinct real numbers, it makes sense to say which is smaller and which is larger, but this doesn’t make sense for complex numbers. When we move from the complex numbers to the quaternions, we lose commutativity. When we move from quaternions to the octonions, things get even worse and we lose associativity. This means the following. You may notice that in our definition of algebras, we have only talked about multiplying two vectors. Of course, once we can multiply two, we’d like to multiply three, and four, etc. A multiplication M \in \mathrm{Hom}(\mathbf{V} \otimes \mathbf{V},\mathbf{V}) is said to be associative if

M\left(M(\mathbf{v}_1 \otimes \mathbf{v}_2) \otimes \mathbf{v}_3\right) = M\left(\mathbf{v}_1 \otimes M(\mathbf{v}_2 \otimes \mathbf{v}_3)\right) \quad \text{for all } \mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3 \in \mathbf{V}.

For associative algebras, unambiguously defining the product of any finite number of vectors is not a problem. However, for octonions, this is not the case.
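The quaternions, unlike the octonions, are associative. This can be spot-checked numerically on random triples; the sketch below re-derives quaternion multiplication from Hamilton's rules, with illustrative names throughout.

```python
import numpy as np

def qmult(p, q):
    """Quaternion product for coordinates (a, b, c, d) in the basis (1, i, j, k)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

# Check (pq)r = p(qr) on random triples of quaternions.
rng = np.random.default_rng(0)
for _ in range(100):
    p, q, r = rng.standard_normal((3, 4))
    assert np.allclose(qmult(qmult(p, q), r), qmult(p, qmult(q, r)))
print("quaternion multiplication is associative on all sampled triples")
```

A sampled check like this is evidence rather than proof, but associativity of \mathbb{H} does hold in general, whereas the analogous check for octonion multiplication would fail.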
