In this lecture we continue the study of Euclidean spaces. Let $\mathbf{V}$ be a vector space, and let $\langle \cdot, \cdot \rangle$ be a scalar product on $\mathbf{V}$, as defined in Lecture 4. The following definition generalizes the concept of perpendicularity to the setting of an arbitrary Euclidean space.

**Definition 1:** Vectors $\mathbf{v}, \mathbf{w} \in \mathbf{V}$ are said to be **orthogonal** if $\langle \mathbf{v}, \mathbf{w} \rangle = 0$. More generally, we say that $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is an **orthogonal set** if $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = 0$ for all $i \neq j$.

Observe that the zero vector $\mathbf{0}$ is orthogonal to every vector $\mathbf{v} \in \mathbf{V}$, by the third scalar product axiom. Let us check that orthogonality of nonzero abstract vectors does indeed generalize perpendicularity of geometric vectors.

**Proposition 1:** Two nonzero vectors $\mathbf{v}, \mathbf{w} \in \mathbf{V}$ are orthogonal if and only if the angle between them is $\frac{\pi}{2}$.

*Proof:* By definition, the angle between the nonzero vectors $\mathbf{v}$ and $\mathbf{w}$ is the unique number $\theta \in [0, \pi]$ which solves the equation

$$\cos \theta = \frac{\langle \mathbf{v}, \mathbf{w} \rangle}{\|\mathbf{v}\| \|\mathbf{w}\|}.$$

If the angle between $\mathbf{v}$ and $\mathbf{w}$ is $\frac{\pi}{2}$, then

$$\langle \mathbf{v}, \mathbf{w} \rangle = \|\mathbf{v}\| \|\mathbf{w}\| \cos \frac{\pi}{2} = 0.$$

Conversely, if $\langle \mathbf{v}, \mathbf{w} \rangle = 0$, then

$$\|\mathbf{v}\| \|\mathbf{w}\| \cos \theta = 0.$$

Since $\mathbf{v}$ and $\mathbf{w}$ are nonzero, we have $\|\mathbf{v}\| > 0$ and $\|\mathbf{w}\| > 0$, and we can divide through by $\|\mathbf{v}\| \|\mathbf{w}\|$ to obtain

$$\cos \theta = 0.$$

The unique solution of this equation in the interval $[0, \pi]$ is $\theta = \frac{\pi}{2}$. — Q.E.D.
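
To make this concrete, here is a quick numerical check of Proposition 1: a minimal sketch assuming NumPy, with the ordinary dot product on $\mathbb{R}^2$ playing the role of the scalar product.

```python
import numpy as np

def angle(v, w):
    """Angle in [0, pi] between nonzero vectors, from cos(theta) = <v,w> / (|v||w|)."""
    c = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against rounding error

v = np.array([1.0, 2.0])
w = np.array([-2.0, 1.0])                  # <v, w> = -2 + 2 = 0
print(np.isclose(angle(v, w), np.pi / 2))  # True: orthogonal iff angle is pi/2
```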

In Lecture 4, we proved that any two nonzero vectors separated by a nonzero angle are linearly independent. This is not true for three or more vectors: for example, if $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are the vectors $(1,0)$, $(0,1)$, $(1,1)$ in $\mathbb{R}^2$, respectively, then each pair of them is separated by a positive angle ($\frac{\pi}{2}$, $\frac{\pi}{4}$, and $\frac{\pi}{4}$),

but

$$\mathbf{v}_1 + \mathbf{v}_2 - \mathbf{v}_3 = \mathbf{0}.$$

So, separation by a positive angle is generally not enough to guarantee the linear independence of a given set of vectors. However, orthogonality is.
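
The failure of linear independence in this example is easy to verify numerically (a sketch assuming NumPy):

```python
import numpy as np

v1, v2, v3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
# Each pair of these vectors is separated by a positive angle...
print(np.allclose(v1 + v2 - v3, 0))  # ...yet v1 + v2 - v3 = 0: linearly dependent
```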

**Proposition 2:** If $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is an orthogonal set of nonzero vectors, then $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is linearly independent.

*Proof:* Let $a_1, \dots, a_n$ be scalars such that

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \dots + a_n \mathbf{v}_n = \mathbf{0}.$$

Let us take the scalar product with $\mathbf{v}_1$ on both sides of this equation, to get

$$\langle a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \dots + a_n \mathbf{v}_n, \mathbf{v}_1 \rangle = \langle \mathbf{0}, \mathbf{v}_1 \rangle = 0.$$

Using the scalar product axioms, we thus have

$$a_1 \langle \mathbf{v}_1, \mathbf{v}_1 \rangle + a_2 \langle \mathbf{v}_2, \mathbf{v}_1 \rangle + \dots + a_n \langle \mathbf{v}_n, \mathbf{v}_1 \rangle = 0.$$

Now, since $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is an orthogonal set, all terms on the left-hand side are zero except for the first term, which is $a_1 \langle \mathbf{v}_1, \mathbf{v}_1 \rangle$. We thus have

$$a_1 \langle \mathbf{v}_1, \mathbf{v}_1 \rangle = 0.$$

Now, since $\mathbf{v}_1 \neq \mathbf{0}$, we have $\langle \mathbf{v}_1, \mathbf{v}_1 \rangle > 0$, and thus we can divide through by $\langle \mathbf{v}_1, \mathbf{v}_1 \rangle$ in the above equation to get

$$a_1 = 0.$$

Repeating the above argument with $\mathbf{v}_2$ in place of $\mathbf{v}_1$ yields $a_2 = 0$. In general, using the same argument for each $\mathbf{v}_i$, $1 \leq i \leq n$, we get $a_i = 0$ for all $i$. Thus $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is a linearly independent set. — Q.E.D.
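
Proposition 2 can also be observed numerically: the Gram matrix of an orthogonal set of nonzero vectors is diagonal with positive diagonal entries, so the set has full rank. A minimal sketch, assuming NumPy and the dot product on $\mathbb{R}^3$:

```python
import numpy as np

# Rows: mutually orthogonal, nonzero (and deliberately not normalized) vectors.
V = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])
print(V @ V.T)                         # Gram matrix: diag(2, 2, 4)
print(np.linalg.matrix_rank(V) == 3)   # True: full rank, hence independent
```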

One consequence of Proposition 2 is that, if $\mathbf{V}$ is an $n$-dimensional vector space, and $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is an orthogonal set of nonzero vectors in $\mathbf{V}$, then $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is a basis of $\mathbf{V}$. In general, a basis of a vector space which is also an orthogonal set is called an **orthogonal basis.** In many ways, orthogonal bases are better than bases which are not orthogonal sets. One manifestation of this is the very useful fact that coordinates relative to an orthogonal basis are easily expressed as scalar products.

**Proposition 3:** Let $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ be an orthogonal basis in $\mathbf{V}$. For any $\mathbf{v} \in \mathbf{V}$, the unique representation of $\mathbf{v}$ as a linear combination of vectors in $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is

$$\mathbf{v} = \frac{\langle \mathbf{v}, \mathbf{e}_1 \rangle}{\langle \mathbf{e}_1, \mathbf{e}_1 \rangle}\, \mathbf{e}_1 + \dots + \frac{\langle \mathbf{v}, \mathbf{e}_n \rangle}{\langle \mathbf{e}_n, \mathbf{e}_n \rangle}\, \mathbf{e}_n.$$

Equivalently, we have

$$\mathbf{v} = \|\mathbf{v}\| \cos \theta_1\, \frac{\mathbf{e}_1}{\|\mathbf{e}_1\|} + \dots + \|\mathbf{v}\| \cos \theta_n\, \frac{\mathbf{e}_n}{\|\mathbf{e}_n\|},$$

where, for each $1 \leq i \leq n$, $\theta_i$ is the angle between $\mathbf{v}$ and $\mathbf{e}_i$.

*Proof:* Let $\mathbf{v} \in \mathbf{V}$ be any vector, and let

$$\mathbf{v} = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \dots + a_n \mathbf{e}_n$$

be its unique representation as a linear combination of vectors from $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$. Taking the inner product with the basis vector $\mathbf{e}_i$ on both sides of this decomposition, we get

$$\langle \mathbf{v}, \mathbf{e}_i \rangle = \langle a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \dots + a_n \mathbf{e}_n, \mathbf{e}_i \rangle.$$

Using the scalar product axioms, we can expand the right-hand side as

$$\langle a_1 \mathbf{e}_1 + \dots + a_n \mathbf{e}_n, \mathbf{e}_i \rangle = \sum_{j=1}^{n} a_j \langle \mathbf{e}_j, \mathbf{e}_i \rangle = \sum_{j=1}^{n} a_j \delta_{ji} \langle \mathbf{e}_i, \mathbf{e}_i \rangle = a_i \langle \mathbf{e}_i, \mathbf{e}_i \rangle,$$

where $\delta_{ji}$ is the Kronecker delta, which equals $1$ if $j = i$ and equals $0$ if $j \neq i$. We thus have

$$\langle \mathbf{v}, \mathbf{e}_i \rangle = a_i \langle \mathbf{e}_i, \mathbf{e}_i \rangle.$$

Now, since $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is a linearly independent set, $\mathbf{e}_i \neq \mathbf{0}$, and hence $\langle \mathbf{e}_i, \mathbf{e}_i \rangle > 0$. Solving for the coordinate $a_i$, we thus have

$$a_i = \frac{\langle \mathbf{v}, \mathbf{e}_i \rangle}{\langle \mathbf{e}_i, \mathbf{e}_i \rangle}.$$

Since $\langle \mathbf{v}, \mathbf{e}_i \rangle = \|\mathbf{v}\| \|\mathbf{e}_i\| \cos \theta_i$, where $\theta_i$ is the angle between $\mathbf{v}$ and the basis vector $\mathbf{e}_i$, this may equivalently be written

$$a_i = \frac{\|\mathbf{v}\| \cos \theta_i}{\|\mathbf{e}_i\|},$$

which completes the proof. — Q.E.D.
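
Here is a small numerical illustration of the coordinate formula, as a sketch assuming NumPy, with the dot product on $\mathbb{R}^2$ as the scalar product and an orthogonal (but not orthonormal) basis chosen for the example:

```python
import numpy as np

# An orthogonal, non-normalized basis of R^2.
e1, e2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
v = np.array([3.0, 5.0])

# a_i = <v, e_i> / <e_i, e_i>
a1 = np.dot(v, e1) / np.dot(e1, e1)
a2 = np.dot(v, e2) / np.dot(e2, e2)
print(np.allclose(a1 * e1 + a2 * e2, v))  # True: the formula recovers v
```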

The formulas in Proposition 3 become even simpler if $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is an orthogonal basis in which every vector has length $1$, i.e.

$$\|\mathbf{e}_1\| = \|\mathbf{e}_2\| = \dots = \|\mathbf{e}_n\| = 1.$$

Such a basis is called an **orthonormal basis**. According to Proposition 3, if $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is an orthonormal basis in $\mathbf{V}$, then for any $\mathbf{v} \in \mathbf{V}$ we have

$$\mathbf{v} = \langle \mathbf{v}, \mathbf{e}_1 \rangle\, \mathbf{e}_1 + \dots + \langle \mathbf{v}, \mathbf{e}_n \rangle\, \mathbf{e}_n,$$

or equivalently

$$\mathbf{v} = \|\mathbf{v}\| \cos \theta_1\, \mathbf{e}_1 + \dots + \|\mathbf{v}\| \cos \theta_n\, \mathbf{e}_n.$$

The first of these formulas is important in that it gives an algebraically efficient way to calculate coordinates relative to an orthonormal basis: to calculate the coordinates of a vector $\mathbf{v}$, just compute its scalar product with each of the basis vectors. The second formula is important because it provides geometric intuition: it says that the coordinates of $\mathbf{v}$ relative to an orthonormal basis are the lengths of the *orthogonal projections* of $\mathbf{v}$ onto the lines (i.e., one-dimensional subspaces) spanned by each of the basis vectors. Indeed, thinking of the case where $\mathbf{v}$ and $\mathbf{e}_i$ are geometric vectors, the quantity $\|\mathbf{v}\| \cos \theta_i$ is the length of the orthogonal projection of the vector $\mathbf{v}$ onto the line spanned by $\mathbf{e}_i$, as in the figure below.
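
The following sketch (assuming NumPy) illustrates both formulas at once: coordinates relative to an orthonormal basis of $\mathbb{R}^2$ are computed as scalar products, and then checked against the projection lengths $\|\mathbf{v}\| \cos \theta_i$.

```python
import numpy as np

# An orthonormal basis of R^2 (the orthogonal basis above, normalized).
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0]) / np.sqrt(2)
v = np.array([3.0, 5.0])

a = np.array([np.dot(v, e1), np.dot(v, e2)])  # a_i = <v, e_i>

# Each coordinate equals |v| cos(theta_i), the (signed) length of the
# orthogonal projection of v onto the line spanned by e_i.
theta = [np.arccos(np.dot(v, e) / np.linalg.norm(v)) for e in (e1, e2)]
print(np.allclose(a, np.linalg.norm(v) * np.cos(theta)))  # True
```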

An added benefit of orthonormal bases is that they reduce abstract scalar products to the familiar dot product of geometric vectors. More precisely, suppose that $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is an orthonormal basis of $\mathbf{V}$. Let $\mathbf{v}, \mathbf{w}$ be vectors in $\mathbf{V}$, and let

$$\mathbf{v} = a_1 \mathbf{e}_1 + \dots + a_n \mathbf{e}_n, \qquad \mathbf{w} = b_1 \mathbf{e}_1 + \dots + b_n \mathbf{e}_n$$

be their representations relative to $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$. Then, we may evaluate the scalar product of $\mathbf{v}$ and $\mathbf{w}$ as

$$\langle \mathbf{v}, \mathbf{w} \rangle = \Big\langle \sum_{i=1}^{n} a_i \mathbf{e}_i,\ \sum_{j=1}^{n} b_j \mathbf{e}_j \Big\rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i b_j \langle \mathbf{e}_i, \mathbf{e}_j \rangle = \sum_{i=1}^{n} a_i b_i.$$

In words, the scalar product $\langle \mathbf{v}, \mathbf{w} \rangle$ equals the dot product of the coordinate vectors of $\mathbf{v}$ and $\mathbf{w}$ relative to an orthonormal basis of $\mathbf{V}$.
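
As a quick illustration, a sketch assuming NumPy: take a rotated (hence still orthonormal) basis of $\mathbb{R}^2$ and check that the scalar product of two vectors equals the dot product of their coordinate vectors.

```python
import numpy as np

# Rotate the standard basis of R^2 by an angle phi: still orthonormal.
phi = 0.7
e1 = np.array([np.cos(phi), np.sin(phi)])
e2 = np.array([-np.sin(phi), np.cos(phi)])

v, w = np.array([3.0, 5.0]), np.array([-2.0, 4.0])
a = np.array([np.dot(v, e1), np.dot(v, e2)])   # coordinates of v
b = np.array([np.dot(w, e1), np.dot(w, e2)])   # coordinates of w
print(np.isclose(np.dot(v, w), np.dot(a, b)))  # True: <v, w> = a . b
```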

This suggests the following definition.

**Definition 2:** Euclidean spaces $(\mathbf{V}, \langle \cdot, \cdot \rangle_{\mathbf{V}})$ and $(\mathbf{W}, \langle \cdot, \cdot \rangle_{\mathbf{W}})$ are said to be **isomorphic** if there exists an isomorphism $T \colon \mathbf{V} \to \mathbf{W}$ which has the additional feature that

$$\langle T\mathbf{v}_1, T\mathbf{v}_2 \rangle_{\mathbf{W}} = \langle \mathbf{v}_1, \mathbf{v}_2 \rangle_{\mathbf{V}} \quad \text{for all } \mathbf{v}_1, \mathbf{v}_2 \in \mathbf{V}.$$

Our calculation above makes it seem likely that any two $n$-dimensional Euclidean spaces $\mathbf{V}$ and $\mathbf{W}$ are isomorphic, just as any two $n$-dimensional vector spaces are. Indeed, we can prove this immediately if we can claim that both $\mathbf{V}$ and $\mathbf{W}$ contain orthonormal bases. In this case, let $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ be an orthonormal basis in $\mathbf{V}$, let $\{\mathbf{f}_1, \dots, \mathbf{f}_n\}$ be an orthonormal basis in $\mathbf{W}$, and define $T \colon \mathbf{V} \to \mathbf{W}$ to be the unique linear transformation that transforms $\mathbf{e}_i$ into $\mathbf{f}_i$ for each $1 \leq i \leq n$. Then $T$ is an isomorphism of vector spaces by the same argument as in Lecture 2, and it also satisfies $\langle T\mathbf{v}_1, T\mathbf{v}_2 \rangle_{\mathbf{W}} = \langle \mathbf{v}_1, \mathbf{v}_2 \rangle_{\mathbf{V}}$ (make sure you understand why).
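
To see the construction in action, here is a sketch (assuming NumPy) built around a hypothetical example: $\mathbf{V} = \mathbb{R}^2$ with the weighted scalar product $\langle \mathbf{x}, \mathbf{y} \rangle_{\mathbf{V}} = 2x_1y_1 + 3x_2y_2$, $\mathbf{W} = \mathbb{R}^2$ with the ordinary dot product, and $T$ the linear map sending an orthonormal basis of $\mathbf{V}$ to the standard basis of $\mathbf{W}$.

```python
import numpy as np

# V = R^2 with <x, y>_V = 2*x1*y1 + 3*x2*y2; W = R^2 with the dot product.
M = np.diag([2.0, 3.0])
def ip_V(x, y):
    return x @ M @ y

# {(1/sqrt(2), 0), (0, 1/sqrt(3))} is orthonormal in V; T maps it to the
# standard basis of W, which in matrix form is T(x) = (sqrt(2) x1, sqrt(3) x2).
def T(x):
    return np.array([np.sqrt(2.0) * x[0], np.sqrt(3.0) * x[1]])

x, y = np.array([1.0, -2.0]), np.array([4.0, 0.5])
print(np.isclose(ip_V(x, y), np.dot(T(x), T(y))))  # True: T preserves <., .>
```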

But how can we be sure that every $n$-dimensional Euclidean space $\mathbf{V}$ actually does contain an orthonormal basis? Certainly, we know that $\mathbf{V}$ contains a basis $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$, but this basis might not be orthonormal. Luckily, there is a fairly simple algorithm which takes as input a finite linearly independent set of vectors, and outputs a linearly independent orthogonal set of the same size, which we can then “normalize” by dividing each vector in the output set by its norm. This algorithm is called the Gram-Schmidt algorithm, and you are encouraged to familiarize yourself with it — it’s not too complicated, and is based entirely on material covered in this lecture. In this course, we only need to know that the Gram-Schmidt algorithm exists, so that we can claim any finite-dimensional Euclidean space has an orthonormal basis. We won’t bother analyzing the internal workings of the Gram-Schmidt algorithm, and will treat it as a black box to facilitate geometric thinking in abstract Euclidean spaces. More on this in Lecture 6.
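
For the curious, here is a minimal sketch of the classical Gram-Schmidt procedure, assuming NumPy and using the ordinary dot product in place of the abstract scalar product; treat it as one possible peek inside the black box described above.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list of vectors.

    Each vector has its components along the previously accepted directions
    subtracted off, leaving an orthogonal set with the same span; normalizing
    at the end yields an orthonormal set.
    """
    ortho = []
    for v in vectors:
        u = v.astype(float)
        for e in ortho:
            u = u - (np.dot(v, e) / np.dot(e, e)) * e  # remove component along e
        ortho.append(u)
    return [u / np.linalg.norm(u) for u in ortho]

basis = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                      np.array([1.0, 0.0, 1.0]),
                      np.array([0.0, 1.0, 1.0])])
B = np.array(basis)
print(np.allclose(B @ B.T, np.eye(3)))  # True: the output is orthonormal
```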
