For a long time, we have been skirting around the issue of whether or not it is possible to multiply vectors in a general vector space $\mathbf{V}$. We gave two answers to this question which are not really answers at all: we discussed ways to multiply vectors to get products which do not lie in the same vector space as their factors.

First, we showed that it is possible to multiply vectors in $\mathbf{V}$ in such a way that the product of any two vectors is a number. This sort of multiplication is what we termed a “bilinear form” on $\mathbf{V}$. The best bilinear forms are those which satisfy the scalar product axioms, because these allow us to talk about lengths of vectors and angles between vectors in $\mathbf{V}$. However, the bilinear form concept doesn’t answer the original question about multiplying vectors, because the product $\langle \mathbf{v}, \mathbf{w} \rangle$ of $\mathbf{v}$ and $\mathbf{w}$ belongs to the vector space $\mathbb{R}$, which is probably not $\mathbf{V}$.

Second, we found that it is possible to multiply vectors in $\mathbf{V}$ in such a way that the product of any two vectors is a tensor, namely the tensor $\mathbf{v} \otimes \mathbf{w}$. This is useful because it ultimately led us to a related product, the wedge product $\mathbf{v} \wedge \mathbf{w}$, which allowed us to efficiently characterize linear independence and to introduce a notion of volume in $\mathbf{V}$. However, it again doesn’t answer the original question about multiplying vectors, because the product $\mathbf{v} \otimes \mathbf{w}$ of $\mathbf{v}$ and $\mathbf{w}$ belongs to the vector space $\mathbf{V} \otimes \mathbf{V}$, which is definitely not $\mathbf{V}$.

Today, we will finally investigate the question of how to multiply two vectors to get a vector in the same space. We now have the tools to discuss this quite precisely.

**Definition 1:** Given a vector space $\mathbf{V}$, a **multiplication** in $\mathbf{V}$ is a linear transformation

$$M \colon \mathbf{V} \otimes \mathbf{V} \longrightarrow \mathbf{V}.$$

It is reasonable to refer to an arbitrary linear transformation $M \colon \mathbf{V} \otimes \mathbf{V} \to \mathbf{V}$ as a multiplication because every such $M$ possesses the fundamental property of multiplication that we refer to as bilinearity: it satisfies the FOIL identity

$$M\big((\mathbf{v}_1 + \mathbf{v}_2) \otimes (\mathbf{w}_1 + \mathbf{w}_2)\big) = M(\mathbf{v}_1 \otimes \mathbf{w}_1) + M(\mathbf{v}_1 \otimes \mathbf{w}_2) + M(\mathbf{v}_2 \otimes \mathbf{w}_1) + M(\mathbf{v}_2 \otimes \mathbf{w}_2).$$

Indeed, this is true precisely because $\mathbf{V} \otimes \mathbf{V}$ was constructed as the vector space of all “unevaluated” products of vectors multiplied according to an unspecified bilinear multiplication, and the linear transformation $M$ performs the missing evaluation.

We now see that there are many ways to multiply vectors — too many. Indeed, suppose $\mathbf{V}$ is an $n$-dimensional vector space, and let $\mathbf{e}_1, \dots, \mathbf{e}_n$ be a basis in $\mathbf{V}$. Then, a basis for $\mathbf{V} \otimes \mathbf{V}$ is given by the tensors $\mathbf{e}_i \otimes \mathbf{e}_j$, $1 \leq i, j \leq n$, and hence every multiplication $M$ uniquely corresponds to an $n \times n^2$ table of numbers, namely the matrix of $M$ relative to these bases. But not all of these make for interesting multiplication rules. For example, we could choose $M$ to be the zero transformation, which sends every tensor in $\mathbf{V} \otimes \mathbf{V}$ to the zero vector in $\mathbf{V}$. This is a rule for multiplying vectors in $\mathbf{V}$, but it is accurately described as “trivial.” We would like to find nontrivial multiplication rules which mimic our experience multiplying real numbers.
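To make this correspondence concrete, here is a minimal numerical sketch (the array name `c` and function `multiply` are my own choices, not notation from the lecture): a multiplication on an $n$-dimensional space is stored as an $n \times n \times n$ array of structure constants, and the zero array encodes the trivial multiplication.

```python
import numpy as np

# Sketch: a multiplication M is determined by the vectors M(e_i ⊗ e_j).
# Storing their coordinates gives an n x n x n array c with
# M(e_i ⊗ e_j) = sum_k c[i, j, k] e_k; reshaping c to shape (n*n, n)
# recovers the matrix of M relative to these bases.
n = 2
c = np.zeros((n, n, n))  # the zero array encodes the trivial multiplication

def multiply(v, w, c):
    """Apply the bilinear multiplication encoded by c to coordinate vectors."""
    return np.einsum("i,j,ijk->k", v, w, c)

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])
print(multiply(v, w, c))  # every product is the zero vector: [0. 0.]
```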

**Definition 2:** A **normed division algebra** is a pair $(\mathbf{V}, M)$ consisting of a Euclidean space $\mathbf{V}$ together with a multiplication $M \colon \mathbf{V} \otimes \mathbf{V} \to \mathbf{V}$ which has the following properties:

- There is a vector $\mathbf{1} \in \mathbf{V}$ such that $M(\mathbf{1} \otimes \mathbf{v}) = M(\mathbf{v} \otimes \mathbf{1}) = \mathbf{v}$ for all $\mathbf{v} \in \mathbf{V}$.
- For every $\mathbf{v} \neq \mathbf{0}$ there is a corresponding $\mathbf{v}^{-1} \in \mathbf{V}$ such that $M(\mathbf{v} \otimes \mathbf{v}^{-1}) = M(\mathbf{v}^{-1} \otimes \mathbf{v}) = \mathbf{1}$.
- For every $\mathbf{v}, \mathbf{w} \in \mathbf{V}$ we have $\|M(\mathbf{v} \otimes \mathbf{w})\| = \|\mathbf{v}\|\,\|\mathbf{w}\|$.

The axioms above are very natural, and reflect familiar properties of the real number system. The first stipulates that a normed division algebra should contain a multiplicative unit $\mathbf{1}$, analogous to the real number $1$ in the sense that multiplication by it does nothing. The second says that any nonzero element in our algebra should have a multiplicative inverse: multiplying an element by its inverse produces the unit element $\mathbf{1}$. The third says that our algebra has a norm analogous to the absolute value of a real number, in that the norm of a product of two vectors is the product of their norms.

**Example 1:** Let $\mathbf{V}$ be a one-dimensional Euclidean space with orthonormal basis $\mathbf{e}$. Let $M$ be the linear transformation uniquely determined by $M(\mathbf{e} \otimes \mathbf{e}) = \mathbf{e}$. Then $(\mathbf{V}, M)$ is a normed division algebra (very easy exercise: check that the axioms are satisfied).

Further examining Example 1, we see that the multiplication of arbitrary vectors $\mathbf{v} = x\mathbf{e}$ and $\mathbf{w} = y\mathbf{e}$ is given by

$$M(\mathbf{v} \otimes \mathbf{w}) = M(x\mathbf{e} \otimes y\mathbf{e}) = xy\, M(\mathbf{e} \otimes \mathbf{e}) = xy\, \mathbf{e}.$$

So, to multiply two vectors in $\mathbf{V}$, we simply multiply their coordinates relative to the basis $\mathbf{e}$ using multiplication of real numbers. Thus $(\mathbf{V}, M)$ is essentially the same as $\mathbb{R}$, with the unit vector $\mathbf{e}$ playing the role of the number $1$. More precisely, the linear transformation $T \colon \mathbf{V} \to \mathbb{R}$ uniquely determined by $T(\mathbf{e}) = 1$ is a vector space isomorphism which respects multiplication, i.e. an algebra isomorphism. In fact, thinking a bit more about this example, we find that every one-dimensional normed division algebra is isomorphic to $\mathbb{R}$.

Now we construct something new: a two-dimensional normed division algebra. Let $\mathbf{V}$ be a $2$-dimensional Euclidean space with orthonormal basis $\mathbf{1}, \mathbf{i}$. Let $M \colon \mathbf{V} \otimes \mathbf{V} \to \mathbf{V}$ be the linear transformation defined by

$$M(\mathbf{1} \otimes \mathbf{1}) = \mathbf{1}, \qquad M(\mathbf{1} \otimes \mathbf{i}) = M(\mathbf{i} \otimes \mathbf{1}) = \mathbf{i}, \qquad M(\mathbf{i} \otimes \mathbf{i}) = -\mathbf{1}.$$

Thus for any two vectors $\mathbf{v} = x_1\mathbf{1} + y_1\mathbf{i}$ and $\mathbf{w} = x_2\mathbf{1} + y_2\mathbf{i}$ we have

$$M(\mathbf{v} \otimes \mathbf{w}) = (x_1 x_2 - y_1 y_2)\mathbf{1} + (x_1 y_2 + x_2 y_1)\mathbf{i}.$$

One nice aspect of $M$ that is clear from the above computation is that $M(\mathbf{v} \otimes \mathbf{w}) = M(\mathbf{w} \otimes \mathbf{v})$, meaning that $M$ defines a commutative multiplication (this is an extra property not required by the normed division algebra axioms).
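For readers who like to check such computations numerically, here is a small sketch in coordinates (the function name is my own), which also exhibits the commutativity:

```python
# Sketch (function name mine): the two-dimensional multiplication in
# coordinates, with a vector v = x*1 + y*i stored as the pair (x, y).
def complex_multiply(v, w):
    x1, y1 = v
    x2, y2 = w
    # M(v ⊗ w) = (x1*x2 - y1*y2)*1 + (x1*y2 + x2*y1)*i
    return (x1 * x2 - y1 * y2, x1 * y2 + x2 * y1)

v, w = (1.0, 2.0), (3.0, 4.0)
print(complex_multiply(v, w))  # (-5.0, 10.0)
print(complex_multiply(w, v))  # (-5.0, 10.0): the same, so the product is commutative
```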

**Theorem 1:** The algebra $(\mathbf{V}, M)$ constructed above is a normed division algebra.

*Proof:* We have to check the axioms. First, for any vector $\mathbf{v} = x\mathbf{1} + y\mathbf{i}$ we directly compute that

$$M(\mathbf{1} \otimes \mathbf{v}) = M(\mathbf{v} \otimes \mathbf{1}) = x\mathbf{1} + y\mathbf{i} = \mathbf{v},$$

so that $\mathbf{1}$ is a multiplicative identity. Second, we have to show that $\mathbf{v} = x\mathbf{1} + y\mathbf{i}$ has a multiplicative inverse, provided $\mathbf{v} \neq \mathbf{0}$. Let $\mathbf{w} = a\mathbf{1} + b\mathbf{i}$. We then have

$$M(\mathbf{v} \otimes \mathbf{w}) = (xa - yb)\mathbf{1} + (xb + ya)\mathbf{i}.$$

Now since $\mathbf{v} \neq \mathbf{0}$ and hence $x^2 + y^2 \neq 0$, we have that taking

$$a = \frac{x}{x^2 + y^2}, \qquad b = \frac{-y}{x^2 + y^2}$$

yields $M(\mathbf{v} \otimes \mathbf{w}) = \mathbf{1}$,

which shows that $\mathbf{v}$ has the multiplicative inverse

$$\mathbf{v}^{-1} = \frac{x}{x^2 + y^2}\mathbf{1} - \frac{y}{x^2 + y^2}\mathbf{i}.$$

Third and finally, for $\mathbf{v} = x_1\mathbf{1} + y_1\mathbf{i}$ and $\mathbf{w} = x_2\mathbf{1} + y_2\mathbf{i}$ we have

$$\|M(\mathbf{v} \otimes \mathbf{w})\|^2 = (x_1 x_2 - y_1 y_2)^2 + (x_1 y_2 + x_2 y_1)^2 = (x_1^2 + y_1^2)(x_2^2 + y_2^2),$$

whence

$$\|M(\mathbf{v} \otimes \mathbf{w})\| = \|\mathbf{v}\|\,\|\mathbf{w}\|.$$

— Q.E.D.
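The three axiom checks in the proof can also be run numerically. The following sketch (all names mine) verifies the unit, inverse, and norm axioms for sample vectors:

```python
import math

# Sketch (names mine): checking the three normed-division-algebra axioms
# for the two-dimensional multiplication, vectors stored as pairs (x, y).
def mult(v, w):
    x1, y1 = v
    x2, y2 = w
    return (x1 * x2 - y1 * y2, x1 * y2 + x2 * y1)

def norm(v):
    return math.hypot(v[0], v[1])

one = (1.0, 0.0)
v = (3.0, -4.0)
w = (2.0, 5.0)

# Axiom 1: the unit element.
assert mult(one, v) == v and mult(v, one) == v

# Axiom 2: the inverse v^{-1} = (x*1 - y*i) / (x^2 + y^2).
d = v[0] ** 2 + v[1] ** 2
v_inv = (v[0] / d, -v[1] / d)
p = mult(v, v_inv)
assert math.isclose(p[0], 1.0) and abs(p[1]) < 1e-12

# Axiom 3: the norm is multiplicative.
assert math.isclose(norm(mult(v, w)), norm(v) * norm(w))
print("all three axioms hold for these samples")
```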

You have probably recognized by now that the above construction has produced the algebra of complex numbers (it is fine if you were not previously familiar with this term). Indeed, taking our Euclidean space to be $\mathbb{R}^2$ with orthonormal basis $\mathbf{1} = (1, 0)$ and $\mathbf{i} = (0, 1)$ gives a simple visualization of this algebra as a rule for multiplying vectors in the Euclidean plane. The complex number system contains and enlarges the real number system, in the sense that it contains the $1$-dimensional subspace

$$\{x\mathbf{1} \colon x \in \mathbb{R}\},$$

which is isomorphic to $\mathbb{R}$. In this context one usually uses the symbol $\mathbb{C}$ instead of $\mathbf{V}$, to indicate that we are considering $\mathbb{R}^2$ to be not just a vector space, but a normed division algebra with the multiplication described above.

It makes a lot of sense to recalibrate your understanding of the word “number” so that it means “element of $\mathbb{C}$.” Indeed, complex numbers behave just like ordinary real numbers in all the ways that matter: you can add, subtract, multiply, and divide complex numbers in just the way you do real numbers. In order to psychologically prime ourselves for thinking of complex numbers as numbers rather than vectors, we follow the usual notational tradition of un-bolding them. So we just write $z \in \mathbb{C}$ to indicate that $z$ is a complex number, and we write $z = x + yi$, where $x, y$ are ordinary real numbers and $i$ is the “imaginary” unit. Technically, all these symbols mean exactly what they meant above; they’ve just been un-bolded. So, the product of two complex numbers $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ is

$$z_1 z_2 = (x_1 x_2 - y_1 y_2) + (x_1 y_2 + x_2 y_1) i.$$

It’s also customary to denote the norm of a complex number using just single lines, and to call it the “absolute value”:

$$|z| = |x + yi| = \sqrt{x^2 + y^2}.$$
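Python’s built-in `complex` type implements exactly this arithmetic, with `abs` computing the absolute value, which gives a quick sanity check of the product formula:

```python
# Python's complex literals use j for the imaginary unit.
z1 = 1 + 2j
z2 = 3 + 4j
print(z1 * z2)      # (-5+10j): matches (x1*x2 - y1*y2) + (x1*y2 + x2*y1)i
print(abs(3 + 4j))  # 5.0: the absolute value sqrt(3**2 + 4**2)
```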

Once we enlarge our understanding of numbers from real to complex, it becomes natural to modify our concept of vector space accordingly. Namely, a **complex vector space** is a set together with two operations, vector addition and scalar multiplication, which satisfy exactly the same axioms as Definition 1 in Lecture 1, except with $\mathbb{C}$ replacing $\mathbb{R}$. We will discuss further consequences of the passage from real vector spaces to complex vector spaces in the next lecture.

Before finishing this lecture, let us briefly consider a natural question which, historically, was one of the main motivating questions in the development of algebra: what other normed division algebras might exist? This question was first considered in detail by the Irish mathematician William Rowan Hamilton in the 1800s. In modern terms, Hamilton’s goal was the following: given a $3$-dimensional Euclidean space $\mathbf{V}$, he wanted to find a multiplication rule $M$ which would turn $\mathbf{V}$ into a normed division algebra. The three-dimensional case is of clear interest due to the three physical dimensions of our world; Hamilton was looking for what he called “spatial numbers.” Unfortunately, he wasn’t able to find what he was looking for, because it doesn’t exist. After a long period of trying without results, in 1843 he suddenly realized that his desired construction could be performed in four dimensions, which led him to a new normed division algebra which he called the quaternions.

To construct the quaternions, let $\mathbf{V}$ be a $4$-dimensional Euclidean space with orthonormal basis $\mathbf{1}, \mathbf{i}, \mathbf{j}, \mathbf{k}$, and let $M \colon \mathbf{V} \otimes \mathbf{V} \to \mathbf{V}$ be the multiplication defined by the table

| $M$ | $\mathbf{1}$ | $\mathbf{i}$ | $\mathbf{j}$ | $\mathbf{k}$ |
| --- | --- | --- | --- | --- |
| $\mathbf{1}$ | $\mathbf{1}$ | $\mathbf{i}$ | $\mathbf{j}$ | $\mathbf{k}$ |
| $\mathbf{i}$ | $\mathbf{i}$ | $-\mathbf{1}$ | $\mathbf{k}$ | $-\mathbf{j}$ |
| $\mathbf{j}$ | $\mathbf{j}$ | $-\mathbf{k}$ | $-\mathbf{1}$ | $\mathbf{i}$ |
| $\mathbf{k}$ | $\mathbf{k}$ | $\mathbf{j}$ | $-\mathbf{i}$ | $-\mathbf{1}$ |

In this table, the first row and column contain the basis vectors, and the internal cells contain the result of applying $M$ to the tensor product of the corresponding basis vectors: for example, the cell in row $\mathbf{i}$ and column $\mathbf{j}$ records $M(\mathbf{i} \otimes \mathbf{j}) = \mathbf{k}$. This turns out to give a normed division algebra; however, as you can see from the above table, this algebra is noncommutative. It is denoted $\mathbb{H}$ in Hamilton’s honor (and also because the symbol $\mathbb{Q}$ is already taken).
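The table can be turned directly into code. In the sketch below (function name mine), a quaternion $a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ is stored as the tuple `(a, b, c, d)`, and the product is expanded term by term using the table, making the noncommutativity visible:

```python
# Sketch (function name mine): quaternion multiplication, expanding
# (a1 + b1 i + c1 j + d1 k)(a2 + b2 i + c2 j + d2 k) via the table above.
def quat_multiply(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # coefficient of 1
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # coefficient of i
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # coefficient of j
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # coefficient of k
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(quat_multiply(i, j))  # (0, 0, 0, 1)  = k
print(quat_multiply(j, i))  # (0, 0, 0, -1) = -k, so the algebra is noncommutative
```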

It turns out that, in addition to $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{H}$, there is only one more normed division algebra. This algebra is called the octonions, because it consists of a multiplication rule for eight-dimensional vectors; it is traditionally denoted $\mathbb{O}$. It was proved by Adolf Hurwitz that these four constitute the complete list of normed division algebras.

Every time we move up the list of normed division algebras, we lose something. In passing from $\mathbb{R}$ to $\mathbb{C}$, we lose the fact that the real numbers are ordered: for any two distinct real numbers, it makes sense to say which is smaller and which is larger, but this doesn’t make sense for complex numbers. When we move from the complex numbers to the quaternions, we lose commutativity. When we move from the quaternions to the octonions, things get even worse and we lose associativity. This means the following. You may notice that in our definition of algebras, we have only talked about multiplying two vectors. Of course, once we can multiply two, we’d like to multiply three, and four, etc. A multiplication $M$ is said to be **associative** if

$$M\big(M(\mathbf{u} \otimes \mathbf{v}) \otimes \mathbf{w}\big) = M\big(\mathbf{u} \otimes M(\mathbf{v} \otimes \mathbf{w})\big) \quad \text{for all } \mathbf{u}, \mathbf{v}, \mathbf{w} \in \mathbf{V}.$$

For associative algebras, unambiguously defining the product of any finite number of vectors is not a problem. For the octonions, however, this is not the case: the value of a product of three or more octonions can depend on how it is parenthesized.
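Associativity of the quaternions can at least be spot-checked numerically. The sketch below (names mine) tests the identity on random triples of quaternions; a similar test run against an octonion multiplication table would fail.

```python
import random

# Sketch (names mine): spot-checking M(M(u ⊗ v) ⊗ w) = M(u ⊗ M(v ⊗ w))
# for quaternions, stored as coordinate tuples (a, b, c, d).
def quat_multiply(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

random.seed(0)
for _ in range(100):
    u, v, w = [tuple(random.uniform(-1, 1) for _ in range(4)) for _ in range(3)]
    left = quat_multiply(quat_multiply(u, v), w)
    right = quat_multiply(u, quat_multiply(v, w))
    assert all(abs(a - b) < 1e-9 for a, b in zip(left, right))
print("associativity holds on all sampled triples")
```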