# Author Archives: Jonathan Novak

## Math 31AH: Lecture 8

In Lecture 4, we introduced the notion of a Euclidean space, which is a vector space together with a scalar product defined on it. In a Euclidean space, we can use the scalar product to define notions of vector length, distance between two vectors, and angle between two vectors. In short, while a vector space alone is a purely algebraic object, we can do Euclidean geometry in a vector space with a scalar product. This realization is extremely useful since it gives us a way to think geometrically about vectors which may not be at all like geometric vectors; for example, they could be functions, as in Assignment 3.

For better or worse, it turns out that Euclidean geometry, as useful as it is in this generalized setup, is not sufficient to describe the world around us. Mathematically, this means that we must sometimes think about non-Euclidean geometry. At the level of linear algebra, this comes down to opening ourselves up to thinking about general bilinear forms, which extend the scalar product concept in that they might fail to satisfy the symmetry and positivity axioms. An important example is the geometry of special relativity. In this physical theory, the vector space $\mathbb{R}^4$ is taken to model spacetime, with the first three coordinates of a vector corresponding to its position in space, and the last coordinate being its position in time. It turns out that the geometry of spacetime is governed by a "fake" scalar product, called the Lorentz form, which is defined by

So, physicists are telling us that in order to understand the geometry of spacetime we have to think about a strange version of the usual dot product on $\mathbb{R}^4$, which is made by taking the usual dot product of the spatial coordinates, and then subtracting the product of the time coordinates — typical, they always do this kind of thing. The Lorentz form is definitely not a scalar product, since the length of a vector can be negative:

Still, mathematically, there’s no reason we can’t consider such fake scalar products as a legitimate generalization of the scalar product concept.
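As a minimal numeric sketch of the discussion above (using NumPy, with hypothetical example vectors), the Lorentz form dots the three spatial coordinates and subtracts the product of the time coordinates:

```python
import numpy as np

def lorentz_form(x, y):
    """Lorentz ("fake" scalar) product on R^4: dot the three spatial
    coordinates, then subtract the product of the time coordinates."""
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2] - x[3]*y[3]

# A purely "timelike" vector has negative Lorentz length-squared,
# so positivity fails and the form is not a genuine scalar product:
t = np.array([0.0, 0.0, 0.0, 1.0])
print(lorentz_form(t, t))  # -1.0
```

Bilinearity and symmetry still hold; only the positivity axiom is lost.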

**Definition 1:** A function is said to be a **bilinear form** if, for all vectors and all scalars we have

So, a bilinear form is just a "weak" scalar product, which might fail two out of three of the scalar product axioms.

In this lecture, we will see that the set of all bilinear forms that can be defined on an -dimensional vector space can be viewed as the set of all tables of real numbers with rows and columns, or in other words matrices. In fact, it is not difficult to come to this realization — we just have to pick a basis in order to describe a given bilinear form as a matrix. Things get a bit tricky, however, when we want to compare the two matrices which describe the same bilinear form relative to different bases.

Let’s start with something easier.

**Definition 2**: A function is said to be a **linear form** if, for all vectors and all scalars we have

Now suppose that is a linear form on an -dimensional vector space Then, in order to be able to compute the number for any vector it is sufficient to know how to calculate the numbers

where is a basis of Indeed, in order to compute from this information, we simply write as a linear combination of basis vectors,

and then compute

Note that this has a very simple description in terms of the usual dot product in namely

Equivalently, the number is computed as the product of a $1 \times n$ matrix and an $n \times 1$ matrix:

We can write this more succinctly as the matrix equation

where and are the only things they could be based on context. The matrix in this equation is referred to as the matrix of the linear form relative to the basis and its entries are just the values of the form on each of the basis vectors. Not too complicated.
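A minimal numeric sketch of this matrix description (hypothetical numbers, using NumPy): the row of values of the form on the basis vectors, times the column of coordinates, gives the value of the form.

```python
import numpy as np

# Hypothetical linear form on R^3: a is the 1 x n row of values f(e_j)
# on the basis vectors, x is the n x 1 column of coordinates of v.
a = np.array([[2.0, -1.0, 5.0]])
x = np.array([[1.0], [4.0], [0.0]])

# f(v) is the product of the 1 x n matrix and the n x 1 matrix:
value = (a @ x).item()
print(value)  # 2*1 + (-1)*4 + 5*0 = -2.0
```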

Essentially the same idea works for bilinear forms: in order to know how to compute the number for any two vectors it is sufficient to know how to compute the numbers

relative to a basis Indeed, if we have access to this table of numbers, then to compute for given we first write these vectors as linear combinations of basis vectors,

and then calculate using bilinearity:

Once again, the result of this calculation can be expressed in terms of matrices, namely as the product of three matrices: a $1 \times n$ matrix, an $n \times n$ matrix, and an $n \times 1$ matrix. Here’s how this looks:

This formula is often written

where the symbols are the only things they could possibly be based on context. In particular, the matrix is referred to as the matrix of the bilinear form relative to the basis
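Evaluating a bilinear form from its matrix can be sketched numerically as follows (hypothetical matrix and coordinate vectors; the matrix entry in row i, column j is the value of the form on the i-th and j-th basis vectors):

```python
import numpy as np

# Hypothetical bilinear form on R^2 with matrix A, A[i, j] = B(e_i, e_j).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
x = np.array([[1.0], [2.0]])   # coordinates of v
y = np.array([[4.0], [5.0]])   # coordinates of w

# B(v, w) = (row x^T) (matrix A) (column y), a 1 x 1 matrix:
value = (x.T @ A @ y).item()

# The same number, written as the double sum over the table of values:
assert value == sum(x[i, 0] * A[i, j] * y[j, 0]
                    for i in range(2) for j in range(2))
```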

Now we come to the issue of dependence on the choice of basis. This is easily worked out for linear forms, but is a little more complex for bilinear forms.

Let and be two bases in the same vector space and let be a linear form on Let

be the matrices which represent the form relative to the two bases We want to discern the relationship between these two matrices. We follow the same Marie Kondo-approved “out with the old, in with the new” strategy as in Lecture 4: we write vectors of the “old” basis as linear combinations of the vectors of the new basis

Now we evaluate the linear form on both sides of each of these vector equations, to get the scalar equations

These scalar equations can be written as the single matrix equation

or more briefly as

where And that’s it — that’s change of basis for linear forms.
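This rule can be spot-checked numerically. In the sketch below (hypothetical bases and form), the transition matrix S holds, in its columns, the coordinates of the old basis vectors relative to the new basis, matching the convention of Lecture 4:

```python
import numpy as np

# Columns of E are the "old" basis vectors of R^2, columns of F the "new"
# ones.  Column j of S solves F @ S[:, j] = E[:, j].
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
F = np.array([[2.0, 0.0],
              [1.0, 1.0]])
S = np.linalg.solve(F, E)

# A concrete linear form f(v) = c . v, and its rows of values in each basis.
c = np.array([3.0, -2.0])
a = c @ E   # a_j = f(e_j): matrix of f relative to the old basis
b = c @ F   # b_i = f(f_i): matrix of f relative to the new basis

# Change of basis for linear forms: old matrix = S-transpose times new matrix.
assert np.allclose(a, S.T @ b)
```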

Although the end result is slightly more complicated, the strategy for working out the relationship between the matrices and representing the same bilinear form relative to two (possibly) different bases and is the same: out with the old, in with the new. Just as in the case of a linear form, the first step is to write

Now we consider the numbers We have

Although it may take a little bit of experimentation (try it out for ), the above is fairly easily seen to be equivalent to the matrix equation

where is the transpose of the matrix
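A numeric spot-check of this change-of-basis rule for bilinear forms (hypothetical bases and form; same transition-matrix convention as above, with S holding the coordinates of the old basis vectors relative to the new basis):

```python
import numpy as np

# B(v, w) = v^T M w defines a bilinear form on R^2; E and F hold an "old"
# and a "new" basis in their columns.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
F = np.array([[2.0, 0.0],
              [1.0, 1.0]])
S = np.linalg.solve(F, E)   # coordinates of old basis vectors in the new basis

M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
A_old = E.T @ M @ E         # A_old[i, j] = B(e_i, e_j)
A_new = F.T @ M @ F         # A_new[i, j] = B(f_i, f_j)

# Old matrix = S-transpose times new matrix times S:
assert np.allclose(A_old, S.T @ A_new @ S)
```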

That’s it for this lecture, and next time we will do more interesting things with bilinear forms, aka generalized scalar products. Although the above change of basis formulas are presented in any standard course in linear algebra, my personal opinion is that they aren’t too important. If you find them easy to remember, excellent; more important is the ability to re-derive them whenever you want, since this means you understand why they are what they are. My hope is that you will understand the meaning of linear and bilinear forms conceptually, which doesn’t require calculating their matrices relative to a particular basis.

To drive the above point home, let us close this lecture by remarking that there’s no need to stop at bilinear forms. Why not keep going to trilinear forms? Indeed, for any positive integer $k$ one may define a $k$-linear form on a given vector space to be any real-valued function of $k$ arguments on

which is a linear function of each argument. Conceptually, this isn’t any more complicated than a bilinear form. However, to represent such a function we need to use a $k$-dimensional array of numbers, which is often referred to as a $k$-dimensional tensor. In particular, a $1$-dimensional tensor is a list, and a $2$-dimensional tensor is a matrix. In general, change of basis formulas for $k$-dimensional tensors are quite messy and not very meaningful.
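A minimal sketch of the tensor representation (hypothetical random data): a trilinear form on a 2-dimensional space is a 2 × 2 × 2 array, and its value on three vectors is a triple sum, which `numpy.einsum` evaluates directly.

```python
import numpy as np

# T[i, j, k] = B(e_i, e_j, e_k) represents a trilinear form on R^2.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2))
x = rng.standard_normal(2)
y = rng.standard_normal(2)
z = rng.standard_normal(2)

# The value B(x, y, z) is the triple sum of T[i, j, k] x_i y_j z_k:
value = np.einsum('ijk,i,j,k->', T, x, y, z)

# Spot-check linearity in the first argument, the other two held fixed:
u = rng.standard_normal(2)
lhs = np.einsum('ijk,i,j,k->', T, 2.0 * x + u, y, z)
rhs = 2.0 * value + np.einsum('ijk,i,j,k->', T, u, y, z)
assert np.isclose(lhs, rhs)
```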

## Math 31AH: Lecture 7

In Lecture 5, we considered the question of how a vector in a Euclidean space can be represented as a linear combination of the vectors in an orthonormal basis of We worked out the answer to this question: the coordinates of are given by taking the scalar product with each vector in the orthonormal basis:

Equivalently, using our algebraic definition of the angle between two vectors in a Euclidean space, this can be written as

where is the angle between and This led us to think of the vector as the “projection” of onto the one-dimensional subspace of In what sense is the vector the “projection” of the vector onto the “line” ? Our geometric intuition concerning projections suggests that this construction should have two properties: first, the vector should be the element of which is closest to and second, the vector should be orthogonal to (This would be a good time to draw yourself a diagram, or to consult the diagram in Lecture 5). We want to prove that these two features, which characterize the geometric notion of projection, actually hold in the setting of an arbitrary Euclidean space. Let us consider this in the following slightly more general setup, where the line is replaced by an arbitrary finite-dimensional subspace. Here’s a motivating and suggestive picture.

We first develop some general features of subspaces of Euclidean spaces, which amount to the statement that they always come in complementary pairs. More precisely, let us consider the subset of consisting of all those vectors in which are perpendicular to every vector in the subspace

**Proposition 1:** is a subspace of

*Proof:* Since the zero vector is orthogonal to everything, we have It remains to demonstrate that is closed under taking linear combinations. For any any and any we have

— Q.E.D.

**Proposition 2:** We have

*Proof:* Since both and contain the zero vector (because they’re subspaces), their intersection also contains the zero vector. Now let Then, is orthogonal to itself, i.e. By the scalar product axioms, the only vector with this property is

— Q.E.D.

Propositions 1 and 2 make no assumption on the dimension of the Euclidean space — it could be finite-dimensional, or it could be infinite-dimensional. The same is true of the subspace At this point, we restrict to the case that is an -dimensional vector space, and keep this restriction in place for the rest of the lecture.

Let be an -dimensional subspace of the -dimensional space If then as proved on Assignment 1. Suppose and let be an orthonormal basis of Since there is a vector which is not in In particular, the vector

is not the zero vector. This vector is orthogonal to each of the vectors and hence two things are true: first, and second, is an orthogonal set of nonzero vectors. Thus, if the set is an orthogonal basis of If then there is a vector which is not in the span of . We set

to obtain a nonzero vector orthogonal to all vectors in the set . In particular, If then is an orthogonal basis of If we repeat the same process. After iterations of this process, we have generated an orthogonal basis

such that is an orthonormal basis of and is an orthogonal basis of which can be normalized to get an orthonormal basis of

We now come to orthogonal projections in general. Let be a subspace of and let be its orthogonal complement. Invoking the above construction, let be an orthonormal basis of such that is an orthonormal basis of and is an orthonormal basis of The function defined by

is called the **orthogonal projector** of on For any vector the vector is called the **orthogonal projection** of onto Observe that if
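A minimal numeric sketch of the orthogonal projector (hypothetical subspace and vectors): here W is the plane in R^3 spanned by an orthonormal pair, and the projection of v is the sum of the scalar products of v with the basis vectors, times those basis vectors.

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])

def project(v, onb):
    """Orthogonal projection of v onto the span of an orthonormal list."""
    return sum(np.dot(v, u) * u for u in onb)

v = np.array([3.0, -2.0, 7.0])
p = project(v, [u1, u2])

assert np.allclose(project(p, [u1, u2]), p)   # projecting twice changes nothing
assert np.isclose(np.dot(v - p, p), 0.0)      # v - Pv is orthogonal to Pv

# Pv is at least as close to v as any other vector in W:
w = 2.0 * u1 - 3.0 * u2
assert np.linalg.norm(v - p) <= np.linalg.norm(v - w)
```

The three assertions mirror Propositions 2, 4, and 5 below.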

**Proposition 1:** The function is a linear transformation.

*Proof:* First, let us check that sends the zero vector of to the zero vector of Note that, since is a subspace of they have the same zero vector, and we denote it by a single symbol rather than using two different symbols for the same vector. We have

Now we check that respects linear combinations. Let be two vectors, and let be two scalars. We then have

— Q.E.D.

**Proposition 2:** The linear transformation satisfies

*Proof:* The claim is that for all Let us check this. First, observe that for any vector in the orthogonal basis of we have

Note also that since is a basis of the above calculation together with Proposition 1 tells us that for all which is to be expected: the projection of a vector already in onto should just be Now to finish the proof, we apply this calculation:

— Q.E.D.

**Proposition 3:** The linear transformation has the property that for any

*Proof:* For any two vectors we have

— Q.E.D.

**Proposition 4:** For any and we have

*Proof:* Before reading the proof, draw yourself a diagram to make sure you can visualize what this proposition is saying. The proof itself follows easily from Proposition 3: we have

— Q.E.D.

**Proposition 5:** For any we have

*Proof:* Let us write

Now observe that the vector lies in since it is the difference of two vectors in this subspace. Consequently, and are orthogonal vectors, by Proposition 4. We may thus apply the Pythagorean theorem (Assignment 2) to obtain

where

— Q.E.D.

Proposition 5 says that is the vector in which is closest to which matches our geometric intuition concerning projections. Equivalently, we can say that $P_\mathbf{W}\mathbf{v}$ is the vector in which best approximates and this perspective makes orthogonal projections very important in applications of linear algebra to statistics, data science, physics, engineering, and more. However, Proposition 5 also has purely mathematical importance. Namely, we have constructed the linear transformation using an arbitrarily chosen orthonormal basis in If we had used a different orthonormal basis the same formula gives us a possibly different linear transformation

defined by

Propositions 1-5 above all apply to as well, and in fact this forces so that it really is correct to speak of *the* orthogonal projection of onto To see why these two transformations must be the same, let us suppose they are not. This means that there is a vector such that Thus by Proposition 5 we have

while also by Proposition 5 we have

a contradiction. So, in the construction of the transformation it does not matter which orthonormal basis of we use.

## Math 31AH: Lecture 6

## Math 31AH: Lecture 5

In this lecture we continue the study of Euclidean spaces. Let be a vector space, and let be a scalar product on as defined in Lecture 4. The following definition generalizes the concept of perpendicularity to the setting of an arbitrary Euclidean space.

**Definition 1:** Vectors are said to be **orthogonal** if More generally, we say that is an **orthogonal set** if for all

Observe that the zero vector is orthogonal to every vector by the third scalar product axiom. Let us check that orthogonality of nonzero abstract vectors does indeed generalize perpendicularity of geometric vectors.

**Proposition 1:** Two nonzero vectors are orthogonal if and only if the angle between them is

*Proof:* By definition, the angle between nonzero vectors and is the unique number which solves the equation

If the angle between and is then

Conversely, if then

Since are nonzero, we have and and we can divide through by to obtain

The unique solution of this equation in the interval is — Q.E.D.

In Lecture 4, we proved that any two nonzero vectors separated by a nonzero angle are linearly independent. This is not true for three or more vectors: for example, if are the vectors respectively, then

but So, separation by a positive angle is generally not enough to guarantee the linear independence of a given set of vectors. However, orthogonality is.

**Proposition 2:** If is an orthogonal set of nonzero vectors, then is linearly independent.

*Proof:* Let be scalars such that

Let us take the scalar product with on both sides of this equation, to get

Using the scalar product axioms, we thus have

Now, since is an orthogonal set, all terms on the left hand side are zero except for the first term, which is We thus have

Now, since we have and thus we can divide through by in the above equation to get

Repeating the above argument with in place of yields In general, using the same argument for each we get for all Thus is a linearly independent set. — Q.E.D.

One consequence of Proposition 2 is that, if is an -dimensional vector space, and is an orthogonal set of nonzero vectors in then is a basis of In general, a basis of a vector space which is also an orthogonal set is called an **orthogonal basis.** In many ways, orthogonal bases are better than bases which are not orthogonal sets. One manifestation of this is the very useful fact that coordinates relative to an orthogonal basis are easily expressed as scalar products.

**Proposition 2:** Let be an orthogonal basis in For any the unique representation of as a linear combination of vectors in is

Equivalently, we have

where, for each is the angle between and

*Proof:* Let be any vector, and let

be its unique representation as a linear combination of vectors from Taking the inner product with the basis vector on both sides of this decomposition, we get

Using the scalar product axioms, we can expand the right hand side as

where is the Kronecker delta, which equals if and equals if We thus have

Now, since is a linearly independent set, and hence Solving for the coordinate we thus have

Since where is the angle between and the basis vector this may equivalently be written

which completes the proof. — Q.E.D.

The formulas in Proposition 2 become even simpler if is an orthogonal basis in which every vector has length i.e.

Such a basis is called an **orthonormal basis**. According to Proposition 2, if is an orthonormal basis in then for any we have

or equivalently

The first of these formulas is important in that it gives an algebraically efficient way to calculate coordinates relative to an orthonormal basis: to calculate the coordinates of a vector just compute its scalar product with each of the basis vectors. The second formula is important because it provides geometric intuition: it says that the coordinates of relative to an orthonormal basis are the lengths of the *orthogonal projections* of onto the lines (i.e. one-dimensional subspaces) spanned by each of the basis vectors. Indeed, thinking of the case where and are geometric vectors, the quantity is the length of the orthogonal projection of the vector onto the line spanned by as in the figure below.
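A minimal sketch of the coordinate formula (hypothetical vectors): relative to an orthonormal basis of R^2, the coordinates of a vector are exactly its scalar products with the basis vectors, and scalar products reduce to dot products of coordinate vectors.

```python
import numpy as np

# An orthonormal basis of R^2 (the standard basis rotated by 45 degrees).
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([2.0, 3.0])
c1, c2 = np.dot(v, u1), np.dot(v, u2)   # coordinates of v via scalar products

# The scalar products recover v as a linear combination of the basis:
assert np.allclose(c1 * u1 + c2 * u2, v)

# The scalar product of two vectors equals the dot product of their
# coordinate vectors relative to the orthonormal basis:
w = np.array([1.0, -1.0])
d1, d2 = np.dot(w, u1), np.dot(w, u2)
assert np.isclose(np.dot(v, w), c1 * d1 + c2 * d2)
```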

An added benefit of orthonormal bases is that they reduce abstract scalar products to the familiar dot product of geometric vectors. More precisely, suppose that is an orthonormal basis of Let be vectors in and let

be their representations relative to Then, we may evaluate the scalar product of and as

In words, the scalar product equals the dot product of the coordinate vectors of and relative to an orthonormal basis of .

This suggests the following definition.

**Definition 2:** Euclidean spaces and are said to be **isomorphic** if there exists an isomorphism which has the additional feature that

Our calculation above makes it seem likely that any two -dimensional Euclidean spaces and are isomorphic, just as any two -dimensional vector spaces and are. Indeed, we can prove this immediately if we can claim that both and contain orthonormal bases. In this case, let be an orthonormal basis in let be an orthonormal basis in and define to be the unique linear transformation that transforms into for each Then is an isomorphism of vector spaces by the same argument as in Lecture 2, and it also satisfies (make sure you understand why).

But, how can we be sure that every -dimensional Euclidean space actually does contain an orthonormal basis? Certainly, we know that contains a basis , but this basis might not be orthonormal. Luckily, there is a fairly simple algorithm which takes as input a finite linearly independent set of vectors, and outputs a linearly independent orthogonal set of the same size, which we can then “normalize” by dividing each vector in the output set by its norm. This algorithm is called the Gram-Schmidt algorithm, and you are encouraged to familiarize yourself with it — it’s not too complicated, and is based entirely on material covered in this lecture. In this course, we only need to know that the Gram-Schmidt algorithm exists, so that we can claim any finite-dimensional Euclidean space has an orthonormal basis. We won’t bother analyzing the internal workings of the Gram-Schmidt algorithm, and will treat it as a black box to facilitate geometric thinking in abstract Euclidean spaces. More on this in Lecture 6.
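For the curious, here is a minimal sketch of the classical Gram-Schmidt procedure just described (hypothetical input vectors): it takes a linearly independent list and returns an orthonormal list with the same span.

```python
import numpy as np

def gram_schmidt(vectors):
    onb = []
    for v in vectors:
        # Subtract the components of v along the vectors already built...
        w = v - sum(np.dot(v, u) * u for u in onb)
        # ...and normalize what remains to unit length.
        onb.append(w / np.linalg.norm(w))
    return onb

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
onb = gram_schmidt(basis)

# The output is an orthonormal set: unit lengths, pairwise orthogonal.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(np.dot(onb[i], onb[j]), expected)
```

As the lecture says, the internal workings are not needed for this course; the point is that such an algorithm exists.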

## Math 31AH: Lecture 4

Let be a vector space. In Lecture 2, we proved that is -dimensional if and only if every basis in consists of vectors. Suppose that is a basis of Then, every vector can be represented as a linear combination of vectors in

and this representation is unique. A natural question is then the following: if is a second basis of and

is the representation of as a linear combination of vectors in what is the relationship between the numbers and the numbers ? Since these two lists of numbers are the coordinates of the same vector but with respect to (possibly) different bases, it is reasonable to expect that they should be related to one another in a structured way. We begin this lecture by working out this relationship precisely.

We follow a strategy which would be acceptable to Marie Kondo: out with the old, in with the new. Let us call the “old” basis, and the “new” basis. Let us do away with the old basis vectors by expressing them in terms of the new basis, writing

where, for each ,

is the coordinate vector of the old basis vector relative to the new basis

We now return to the first equation above, which expresses our chosen vector in terms of the old basis. Replacing the vectors of the old basis with their representations relative to the new basis, we have

which we can compress even more if we use Sigma notation twice:

Now, since the representation

of relative to is unique, we find that

This list of formulas answers our original question: it expresses the “new” coordinates in terms of the “old” coordinates A good way to remember these formulas is to rewrite them using the familiar dot product of geometric vectors in In terms of the dot product, the above formulas become

Usually, this collection of formulas is packaged as a single matrix equation:

In fact, this process of changing from the old coordinates of a vector relative to the old basis to the new coordinates of this same vector relative to the new basis explains why the product of an $n \times n$ matrix and an $n \times 1$ matrix is defined in the way that it is: the definition is made so that we can write

with the matrix whose -entry is

Let us summarize the result of the above calculation. We have a vector belonging to a finite-dimensional vector space and we have two bases and of Let be the matrix whose entries are the coordinates of relative to the old basis and let denote the matrix whose entries are the coordinates of this same vector relative to the new basis We want to write down an equation which relates the matrices and The equation is

where

is the “transition matrix” whose th column is the matrix consisting of the coordinates of the old basis vector relative to the new basis

Let’s look at a two-dimensional example. In the standard basis is where and Suppose now that we wish to get creative and write the vectors of in terms of the alternative basis where but This corresponds to using coordinate axes which, instead of being a pair of perpendicular lines, are a pair of lines at a angle to one another — pretty wild. What are the coordinates of a given vector in when we use these tilted axes? Let us answer this question using the above recipe. We need to express the vectors of the old basis in terms of the vectors of the new basis This is easy: by inspection, we have

This means that our transition matrix is the matrix

We conclude that the coordinates of in the new basis are given by
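The recipe can be carried out numerically. Assuming, for illustration, that the new basis consists of the hypothetical vectors f1 = (1, 0) and f2 = (1, 1) (so one coordinate axis is tilted 45 degrees), we have e1 = f1 and e2 = -f1 + f2 by inspection, and the transition matrix acts on coordinates as follows:

```python
import numpy as np

# Columns of S are the coordinates of the old basis vectors e1, e2
# relative to the assumed new basis f1 = (1, 0), f2 = (1, 1).
S = np.array([[1.0, -1.0],
              [0.0,  1.0]])

x = np.array([3.0, 2.0])   # coordinates of a vector v in the standard basis
y = S @ x                  # coordinates of the same v in the tilted basis

# Check: rebuilding v from the new coordinates recovers the original vector.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # columns are f1 and f2
assert np.allclose(F @ y, x)
```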

In the course of the above discussion, we have seen that the familiar dot product of geometric vectors is useful in the context of general vector spaces. This raises the question of whether the dot product itself can be generalized. The answer is yes, and the concept which generalizes the dot product by capturing its basic features is the following.

**Definition 1:** Let be a vector space. A **scalar product** on is a function

which satisfies:

- For any and we have
- For any we have
- For any we have with equality if and only if

Let us consider why the operation introduced in Definition 1 is called a “scalar product.” First, it’s called a “product” because it takes two vectors and produces from them the new entity Second, this new entity is not a vector, but a scalar — hence, is the “scalar product” of and What about the axioms? These are obtained by extracting the basic features of the dot product of geometric vectors: it is “bilinear,” which means that one has the usual FOIL identity

for expanding brackets; it is “symmetric,” in the sense that

and it is “positive definite,” meaning that

with equality if and only if is the zero vector. Definition 1 takes these properties and lifts them to the setting of an abstract vector space to form the scalar product concept, of which the dot product becomes a special case.

**Definition 2:** A pair consisting of a vector space together with a scalar product is called a **Euclidean space**.

Why is a vector space equipped with a scalar product called a Euclidean space? In the familiar vector space the basic notions of Euclidean geometry — length and angle — can be expressed algebraically, in terms of the dot product. More precisely, the length of a vector is given by

where denotes the nonnegative square root of a nonnegative real number, and the angle between two vectors and is related to the dot product via

We can mimic these algebraic formulas to define the concepts of length and angle in an abstract Euclidean space — we define the length of a vector by the formula

and we define the angle between two vectors to be the number determined by the formula

Let us examine these definitions more carefully. First, the quantity which generalizes the length of a geometric vector is usually called the “norm” of in order to distinguish it from the original notion of length, which it generalizes. If the vector norm is a good generalization of geometric length, then it should have some of the main properties of the original concept; in particular, it should be nonnegative, and the only vector of length zero should be the zero vector. In order for these properties to hold in every possible Euclidean space, we must be able to deduce them solely from the axioms defining the scalar product.

**Proposition 1:** Let be a Euclidean space. For any vector , we have and equality holds if and only if

*Proof:* From the definition of vector norm and the third scalar product axiom (positive definiteness), we have that

is the square root of a nonnegative number, and hence is itself nonnegative. Moreover, in order for to hold for a nonnegative real number it must be the case that , and from the third scalar product axiom we have if and only if — Q.E.D.

Now we consider the algebraic definition of the angle between two vectors in a Euclidean space As you are aware, for any number we have Thus, for our definition of angle to be valid, we need the following proposition — which is known as the Cauchy-Schwarz inequality — to follow from the scalar product axioms.

**Proposition 2:** Let be a Euclidean space. For any we have

*Proof:* We begin by noting that the claimed double inequality is equivalent to the single inequality

which is in turn equivalent to

We will prove that this third form of the claimed inequality is true.

Let be any two vectors in If either of or is the zero vector, then by the third scalar product axiom (positive definiteness) both sides of the above inequality are zero, and we get the true expression It remains to prove the inequality in the case that neither nor is the zero vector.

Consider the function of a variable defined by

We can expand this using the first scalar product axiom (bilinearity), and we get

Using the second scalar product axiom (symmetry), this simplifies to

We see that the function is a polynomial of degree two, i.e. it has the form

with

Note that we can be sure because Thus the graph of the function is an upward-opening parabola. Moreover, since

this parabola either lies strictly above the horizontal axis, or is tangent to it. Equivalently, the quadratic equation

has either no real roots (parabola strictly above the horizontal axis), or two identical real roots (parabola tangent to the horizontal axis). We can differentiate between the two cases using the discriminant of this quadratic equation, i.e. the number

which is the square root part of the familiar quadratic formula

More precisely, if the discriminant is negative, the corresponding quadratic equation has no real solutions, and if it is zero, then the equation has a unique solution. In the case we get

which gives us the inequality

which verifies the inequality we’re trying to prove in this case. In the case, we get instead

So, in all cases the claimed inequality

holds true. — Q.E.D.
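A numeric spot-check of the Cauchy-Schwarz inequality for the usual dot product (random hypothetical vectors):

```python
import numpy as np

# |<v, w>| <= ||v|| ||w|| for the dot product on R^5, checked at random.
rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.standard_normal(5)
    w = rng.standard_normal(5)
    assert abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12

# Equality holds when the two vectors are linearly dependent:
v = rng.standard_normal(5)
assert np.isclose(abs(np.dot(v, 3.0 * v)),
                  np.linalg.norm(v) * np.linalg.norm(3.0 * v))
```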

The upshot of the above discussion is that the concepts of length and angle are now well-defined in the setting of a general Euclidean space So, even though the vectors in such a space need not be geometric vectors, we can use geometric intuition and analogies when thinking about them. A simple example is the following natural proposition, which generalizes the fact that a pair of nonzero geometric vectors are linearly dependent if and only if they point in the same direction or opposite directions.

**Proposition 3:** Let be a Euclidean space, and let be nonzero vectors in The set is linearly dependent if and only if the angle between and is or

*Proof:* You will prove on Assignment 2 that equality holds in the Cauchy-Schwarz inequality if and only if the vectors involved are linearly dependent. Thus, Proposition 3 is equivalent to the statement that any two nonzero vectors satisfy the equation

if and only if the angle between them is or Let us prove this statement.

By definition of the angle between two vectors in a Euclidean space, the above equation is equivalent to

and dividing both sides by the nonzero number this becomes

which holds for if and only if is or — Q.E.D.

In Lecture 5, we will consider further ramifications of geometrical thinking in vector spaces.

## Math 31AH: Lecture 3

## Math 31AH: Lecture 2

Let be a vector space. Recall from Lecture 1 (together with Assignment 1, Problem 2) that to say is -dimensional means that contains a linearly independent set of size but does not contain a linearly independent set of size Also recall from Lecture 1 that a basis of is a subset of which is linearly independent and spans The main goal of this lecture is to prove the following theorem.

**Theorem 1:** A vector space is -dimensional if and only if it contains a basis of size

Before going on to the proof of this theorem, let us pause to consider its ramifications. Importantly, Theorem 1 gives a method to calculate the dimension of a given vector space: all we have to do is find a basis of and then count the number of elements in that basis. As an example, let us compute the dimension of $\mathbb{R}^4$. This vector space is typically used to model the world in which we live, which consists of four physical dimensions: three for space and one for time. Let us verify that our mathematical definition of vector space dimension matches our physical understanding of dimension.

Let be a vector in Then, we have

where

This shows that the set spans Let us check that is linearly independent. This amounts to performing the above manipulation in reverse. If are numbers such that

then we have

which means that Thus is a linearly independent set which spans i.e. it is a basis of Since has size we conclude from Theorem 1 that

Now let us prove Theorem 1. First, observe that we have already proved that if is -dimensional then it contains a basis of size — this is Theorem 1 from Lecture 1. It remains to prove the converse, which we now state as a standalone result for emphasis and ease of reference.

**Theorem 2:** If contains a basis of size then is -dimensional.

The proof of Theorem 2 is quite subtle. In order to make it easier to understand, it is helpful to first prove the following lemma, in which the main difficulty is concentrated.

**Lemma 1:** If is a linearly independent set in a vector space and is a basis of then

*Proof:* Suppose this were false, i.e. that there exists in the vector space $V$ a linearly independent set $A = \{a_1, \dots, a_k\}$ and a basis $B = \{b_1, \dots, b_n\}$ such that $k > n$. We will see that this leads to a contradiction. The strategy is the following: we wish to demonstrate that the assumption $k > n$ implies that we can replace $n$ of the vectors in $A$ with the $n$ vectors in $B$ in such a way that the resulting set

$$C = \{a_{i_1}, \dots, a_{i_{k-n}}\} \cup \{b_1, \dots, b_n\}$$

is linearly independent. If we can show this, we will have obtained the desired contradiction: $C$ cannot possibly be independent, because $B$ is a basis. In particular, each of the remaining vectors $a_{i_1}, \dots, a_{i_{k-n}}$ is a linear combination of the vectors in $B$.

We will pursue the following strategy to show that the existence of the set $C$ as above follows from the hypothesis $k > n$. For each $j \in \{0, 1, \dots, n\}$, consider the following proposition: there exists a linearly independent set in $V$ of the form

$$C_j = A_j \cup \{b_1, \dots, b_j\},$$

where $A_j$ is a subset of $A$ of size $k - j$ and $\{b_1, \dots, b_j\} \subseteq B$. Let us call this proposition $P(j)$. Now, if we can prove that $P(0)$ is true, and that

$$P(j) \text{ true} \implies P(j+1) \text{ true}, \quad 0 \leq j \leq n-1,$$

then we can conclude that $P(j)$ is true for all $0 \leq j \leq n$. Indeed, we would then have

$$P(0) \implies P(1) \implies P(2) \implies \dots \implies P(n).$$

This is one version of a proof technique known as mathematical induction. But the statement $P(n)$ true is exactly what we want, since it gives us a linearly independent set $C_n$ consisting of some number of vectors from $A$ together with all vectors in $B$, which results in the contradiction explained above.

Let us now implement the above strategy. The first step is to prove that $P(0)$ is true. To see that it is, consider the set $C_0 = A_0 \cup \emptyset = A_0$, where $A_0 = A$ and $\emptyset$ is the empty set. This set is of the required form, and since $A$ is linearly independent, so is $C_0$.

It remains to prove that if $0 \leq j \leq n-1$ and $P(j)$ is true, then $P(j+1)$ is true. Given that $P(j)$ is true, there exists a linearly independent set in $V$ of the form $C_j = A_j \cup \{b_1, \dots, b_j\}$, with

$$A_j = \{a_{i_1}, \dots, a_{i_{k-j}}\}$$

a $(k-j)$-element subset of $A$, and $\{b_1, \dots, b_j\} \subseteq B$.

Consider the set

$$C_j \cup \{b_{j+1}\} = \{a_{i_1}, \dots, a_{i_{k-j}}, b_1, \dots, b_j, b_{j+1}\}.$$

If $C_j \cup \{b_{j+1}\}$ is linearly independent, then so is any subset of it, so in particular the subset

$$C_{j+1} = \{a_{i_1}, \dots, a_{i_{k-j-1}}, b_1, \dots, b_j, b_{j+1}\}$$

is linearly independent and $P(j+1)$ is true. Now suppose that $C_j \cup \{b_{j+1}\}$ is linearly dependent. Then, because $C_j$ is linearly independent, $b_{j+1}$ must be a linear combination of the vectors in $C_j$, i.e.

$$b_{j+1} = x_1 a_{i_1} + \dots + x_{k-j} a_{i_{k-j}} + y_1 b_1 + \dots + y_j b_j$$

for some $x_1, \dots, x_{k-j}, y_1, \dots, y_j \in \mathbb{R}$. Moreover, there exists a number $1 \leq t \leq k-j$ such that $x_t \neq 0$, else $b_{j+1}$ would be a linear combination of $b_1, \dots, b_j$, which is impossible because $B$ is a linearly independent set. We now claim that the set

$$C_{j+1} = \left(A_j \setminus \{a_{i_t}\}\right) \cup \{b_1, \dots, b_j, b_{j+1}\}$$

is linearly independent. Indeed, if $C_{j+1}$ were linearly dependent, then we would have

$$\sum_{s \neq t} c_s a_{i_s} + d_1 b_1 + \dots + d_j b_j + d_{j+1} b_{j+1} = 0,$$

where not all the numbers $c_s, d_1, \dots, d_{j+1}$ are zero. This means that the number $d_{j+1} \neq 0$, since otherwise the above would say that a subset of $C_j$ is linearly dependent, which is false because $C_j$ is linearly independent. Now, if we substitute the representation of $b_{j+1}$ as a linear combination of elements of $C_j$ given above, this becomes a vanishing linear combination of the vectors in $C_j$ in which the coefficient of $a_{i_t}$ is $d_{j+1} x_t \neq 0$, which contradicts the linear independence of $C_j$. So, $C_{j+1}$ must be linearly independent. — Q.E.D.

Let us note the following corollary of Lemma 1.

**Corollary 1:** If $A$ and $B$ are two bases of a vector space $V$, then $|A| = |B|$.

*Proof:* Since $A$ is linearly independent and $B$ is a basis, we have $|A| \leq |B|$ by Lemma 1. On the other hand, since $B$ is linearly independent and $A$ is a basis, we also have $|B| \leq |A|$ by Lemma 1. Thus $|A| = |B|$. — Q.E.D.
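Corollary 1 can be illustrated concretely in $\mathbb{R}^2$. In the Python sketch below, the `rank` helper is a bare-bones Gaussian elimination of our own devising; it certifies that two quite different bases of the plane are both independent and have the same size:

```python
def rank(rows):
    """Row-reduce a list of row vectors and count the nonzero rows."""
    rows = [list(map(float, r)) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > 1e-12:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

standard = [[1, 0], [0, 1]]   # the standard basis of R^2
tilted = [[1, 1], [1, -1]]    # another basis of R^2

# Both sets are linearly independent (full rank) and have the same size.
assert rank(standard) == 2 and rank(tilted) == 2
assert len(standard) == len(tilted)
```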

We now have everything we need to prove Theorem 2.

*Proof of Theorem 2:* Let $V$ be a finite-dimensional vector space which contains a basis $B$ of size $n$. We will prove that $V$ is $n$-dimensional using the definition of vector space dimension (Definition 5 in Lecture 1) and Lemma 1. First, since $B$ is a linearly independent set of size $n$, we can be sure that $V$ contains a linearly independent set of size $n$. It remains to show that $V$ does not contain a linearly independent set of size $n+1$. This follows from Lemma 1: since $B$ is a basis of size $n$, every linearly independent set in $V$ must have size less than or equal to $n$. — Q.E.D.

Another very important consequence of Theorem 2 is the following: it reveals that any two vector spaces of the same dimension can more or less be considered the same. Note that this is fundamentally new territory for us; so far, we have only considered one vector space at a time, but now we are going to compare two vector spaces.

To make the above precise, we need to consider functions between vector spaces. In fact, it is sufficient to limit ourselves to the consideration of functions which are compatible with the operations of vector addition and scalar multiplication. This leads to the definition of a linear transformation from a vector space $V$ to another vector space $W$. If vector spaces are the foundation of linear algebra, then linear transformations are the structures we want to build on these foundations.

**Definition 1:** A function $T \colon V \to W$ is said to be a **linear transformation** if it has the following properties:

- $T(0_V) = 0_W$, where $0_V$ is the zero vector in $V$ and $0_W$ is the zero vector in $W$.
- For any vectors $v_1, v_2 \in V$ and any scalars $x_1, x_2 \in \mathbb{R}$, we have $T(x_1 v_1 + x_2 v_2) = x_1 T(v_1) + x_2 T(v_2)$.
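The two defining properties can be checked numerically for any candidate map. A short Python sketch for a hypothetical shear map of our own choosing, with vectors in $\mathbb{R}^2$ stored as lists:

```python
def T(v):
    # T(v1, v2) = (v1 + 2*v2, v2): a shear of the plane, which is linear.
    return [v[0] + 2 * v[1], v[1]]

# Property 1: T sends the zero vector of R^2 to the zero vector of R^2.
assert T([0, 0]) == [0, 0]

# Property 2: T(x1*v1 + x2*v2) = x1*T(v1) + x2*T(v2).
v1, v2 = [1.0, 3.0], [-2.0, 5.0]
x1, x2 = 4.0, -1.5
lhs = T([x1 * a + x2 * b for a, b in zip(v1, v2)])
rhs = [x1 * a + x2 * b for a, b in zip(T(v1), T(v2))]
assert lhs == rhs
```

A map that fails either check (for example, translation by a nonzero vector) is not a linear transformation.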

So, a linear transformation is a special kind of function from one vector space to another. The name “linear transformation” comes from the fact that these special functions are generalizations of lines in $\mathbb{R}^2$ which pass through the origin $(0,0)$. More precisely, every such line is the graph of a linear transformation $T \colon \mathbb{R} \to \mathbb{R}$. Indeed, a line through the origin in $\mathbb{R}^2$ of slope $m$ is the graph of the linear transformation defined by

$$T(x) = mx.$$

The one exception to this is the vertical line in $\mathbb{R}^2$ passing through $(0,0)$, which has undefined slope. This line is not the graph of any function from $\mathbb{R}$ to $\mathbb{R}$, which should be clear if you remember the vertical line test.

To reiterate, a linear transformation from a vector space $V$ to a vector space $W$ is a special kind of function $T \colon V \to W$. For linear transformations, it is common to write $Tv$ instead of $T(v)$, and this shorthand is often used to implicitly indicate that $T$ is a linear transformation. In the second half of the course, we will discuss linear transformations in great detail. At present, however, we are only concerned with a special type of linear transformation called an “isomorphism.”

**Definition 2:** A linear transformation $T \colon V \to W$ is said to be an **isomorphism** if there exists a linear transformation $S \colon W \to V$ such that

$$S(T(v)) = v \text{ for all } v \in V$$

and

$$T(S(w)) = w \text{ for all } w \in W.$$

The word “isomorphism” comes from the Greek “iso,” which means “same,” and “morph,” which means “shape.” It is not always the case that there exists an isomorphism from $V$ to $W$; when an isomorphism does exist, one says that $V$ and $W$ are **isomorphic**. Isomorphic vector spaces $V$ and $W$ have the “same shape” in the sense that there is both a linear transformation $T$ which transforms every vector $v \in V$ into a vector $w \in W$, and an inverse transformation $S$ which “undoes” $T$ by transforming $w$ back into $v$. To understand this, it may be helpful to think of isomorphic vector spaces $V$ and $W$ as two different languages. Any two human languages, no matter how different they may seem, are “isomorphic” in the sense that they describe exactly the same thing, namely the totality of human experience. The isomorphism $T$ translates every word in the language $V$ to the corresponding word in $W$ which means the same thing. The inverse isomorphism $S$ translates back from language $W$ to language $V$. On the other hand, if $V$ is a human language and $W$ is the language of an alien civilization, then $V$ and $W$ are not isomorphic, since the experience of membership in human society is fundamentally different from the experience of membership in a non-human society, and this difference is not merely a matter of language.

**Theorem 2**: Any two $n$-dimensional vector spaces $V$ and $W$ are isomorphic.

*Proof:* Since $V$ is $n$-dimensional, it contains a basis $B = \{b_1, \dots, b_n\}$ by Theorem 1. Likewise, since $W$ is $n$-dimensional, it contains a basis $C = \{c_1, \dots, c_n\}$. Now, since $B$ is a basis in $V$, for every $v \in V$ there exist unique scalars $x_1, \dots, x_n$ such that

$$v = x_1 b_1 + \dots + x_n b_n.$$

Likewise, since $C$ is a basis in $W$, for every $w \in W$ there exist unique scalars $y_1, \dots, y_n$ such that

$$w = y_1 c_1 + \dots + y_n c_n.$$

We may thus define functions

$$T \colon V \to W \quad \text{and} \quad S \colon W \to V$$

by

$$T(v) = x_1 c_1 + \dots + x_n c_n$$

and

$$S(w) = y_1 b_1 + \dots + y_n b_n.$$

We claim that $T$ is a linear transformation from $V$ to $W$. To verify this, we must demonstrate that $T$ has the two properties stipulated by Definition 1. First, we have

$$T(0_V) = 0 c_1 + \dots + 0 c_n = 0_W,$$

which verifies the first property. Next, let $v_1, v_2 \in V$ be any two vectors, and let $t_1, t_2 \in \mathbb{R}$ be any two scalars. Let

$$v_1 = x_1 b_1 + \dots + x_n b_n \quad \text{and} \quad v_2 = y_1 b_1 + \dots + y_n b_n$$

be the unique representations of $v_1$ and $v_2$ as linear combinations of the vectors in the basis $B$. We then have

$$T(t_1 v_1 + t_2 v_2) = (t_1 x_1 + t_2 y_1) c_1 + \dots + (t_1 x_n + t_2 y_n) c_n = t_1 T(v_1) + t_2 T(v_2),$$

which verifies the second property. In the same way, one checks that $S$ is a linear transformation from $W$ to $V$.

To prove that the linear transformation $T$ is an isomorphism, it remains only to prove that the linear transformation $S$ undoes $T$. To see this, let

$$v = x_1 b_1 + \dots + x_n b_n$$

be an arbitrary vector in $V$, expressed as a linear combination of the vectors in the basis $B$. We then have

$$S(T(v)) = S(x_1 c_1 + \dots + x_n c_n) = x_1 b_1 + \dots + x_n b_n = v.$$

This completes the proof that $V$ and $W$ are isomorphic vector spaces. — Q.E.D.
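The construction in this proof can be carried out concretely. In the Python sketch below (all names and the choice of spaces are ours), $V$ is the space of polynomials $a + bx$ stored as coefficient pairs, $W$ is $\mathbb{R}^2$, and we use the bases $B = \{1, x\}$ and $C = \{(1,1), (1,-1)\}$:

```python
def T(poly):
    a, b = poly                  # poly = a*1 + b*x, coordinates (a, b) in B
    c1, c2 = (1, 1), (1, -1)     # the chosen basis C of W = R^2
    # Send the polynomial to a*c1 + b*c2 in W.
    return (a * c1[0] + b * c2[0], a * c1[1] + b * c2[1])

def S(w):
    # Invert: express w in the basis C, then reuse the coordinates in B.
    w1, w2 = w
    a = (w1 + w2) / 2            # coordinates of w relative to C
    b = (w1 - w2) / 2
    return (a, b)

poly = (3.0, -2.0)               # the polynomial 3 - 2x
assert S(T(poly)) == poly        # S undoes T
```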

To continue with our linguistic analogy, Theorem 2 says that any two -dimensional vector spaces are just different languages expressing the same set of concepts. From this perspective, it is desirable to choose a “standard language” into which everything should be translated. In linguistics, such a language is called a lingua franca, a term which reflects the fact that this standard language was once the historical predecessor of modern French (these days the lingua franca is English, but this too may eventually change).

The lingua franca for $n$-dimensional vector spaces is $\mathbb{R}^n$, and it is unlikely that this will ever change. Let $V$ be an $n$-dimensional vector space, and let $B = \{b_1, \dots, b_n\}$ be a basis in $V$. Consider the basis $E = \{e_1, \dots, e_n\}$ of $\mathbb{R}^n$ in which

$$e_i = (0, \dots, 0, 1, 0, \dots, 0).$$

To be perfectly clear, for each $1 \leq i \leq n$, the vector $e_i$ has the number $1$ in position $i$ and a zero in every other position. The basis $E$ is called the **standard basis** of $\mathbb{R}^n$, and you should immediately stop reading and check for yourself that it really is a basis. Assuming you have done so, we proceed to define an isomorphism

$$T \colon V \to \mathbb{R}^n$$

as follows. Given a vector $v \in V$, let

$$v = x_1 b_1 + \dots + x_n b_n$$

be the unique representation of $v$ as a linear combination of vectors in $B$. Now set

$$T(v) := x_1 e_1 + \dots + x_n e_n,$$

where the symbol “$:=$” means “equal by definition.” The fact that this really is an isomorphism follows from the proof of Theorem 2 above. The isomorphism $T$ is called the **coordinate isomorphism** relative to the basis $B$, and the geometric vector $T(v) \in \mathbb{R}^n$ is called the **coordinate vector** of $v$ relative to the basis $B$. Because of the special form of the standard basis of $\mathbb{R}^n$, we may write the coordinate vector of $v$ more concisely as

$$T(v) = (x_1, \dots, x_n).$$
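In practice, computing a coordinate vector means solving a linear system. A small Python sketch for an assumed basis $B = \{(2,1), (1,1)\}$ of $\mathbb{R}^2$, using Cramer's rule for the $2 \times 2$ system $x_1 b_1 + x_2 b_2 = v$:

```python
def coordinates(b1, b2, v):
    # Solve x1*b1 + x2*b2 = v by Cramer's rule.
    det = b1[0] * b2[1] - b2[0] * b1[1]   # nonzero, since {b1, b2} is a basis
    x1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    x2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (x1, x2)

b1, b2 = (2.0, 1.0), (1.0, 1.0)
v = (5.0, 3.0)
x1, x2 = coordinates(b1, b2, v)

# Recombining the coordinates with the basis recovers v.
assert (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1]) == v
assert (x1, x2) == (2.0, 1.0)   # the coordinate vector of v relative to B
```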

When working with a given $n$-dimensional vector space $V$, it is often convenient to choose a basis $B$ in $V$ and use the corresponding coordinate isomorphism to work in the standard $n$-dimensional vector space $\mathbb{R}^n$ rather than working in the original vector space $V$. For example, someone could come to you with a strange $4$-dimensional vector space $V$ and claim that this vector space is the best model for spacetime. If you find that you disagree, you can choose a convenient basis $B$ and use the corresponding coordinate isomorphism to transform their model into the standard model of spacetime, $\mathbb{R}^4$.

## Math 31AH: Lecture 1

Assuming familiarity with the geometric concept of a vector, we introduce the notion of a vector space: a set containing objects which can be added to one another and scaled by numbers in a manner which is formally consistent with the addition and scaling of geometric vectors. The vector space concept is the bedrock of linear algebra.

**Definition 1:** A vector space is a triple $(V, a, s)$ consisting of a set $V$ together with two functions

$$a \colon V \times V \to V$$

and

$$s \colon \mathbb{R} \times V \to V,$$

where $\times$ denotes the Cartesian product of sets and $\mathbb{R}$ denotes the set of real numbers. The functions $a$ and $s$ are required to have the following properties:

- For all $v_1, v_2 \in V$, we have $a(v_1, v_2) = a(v_2, v_1)$.
- For all $v_1, v_2, v_3 \in V$, we have $a(v_1, a(v_2, v_3)) = a(a(v_1, v_2), v_3)$.
- There exists $0 \in V$ such that $a(v, 0) = v$ for all $v \in V$.
- For every $v \in V$, there exists $w \in V$ such that $a(v, w) = 0$.
- For every $v \in V$, we have $s(1, v) = v$.
- For every $v \in V$ and every $t_1, t_2 \in \mathbb{R}$, we have $s(t_1, s(t_2, v)) = s(t_1 t_2, v)$.
- For every $v \in V$ and every $t_1, t_2 \in \mathbb{R}$, we have $s(t_1 + t_2, v) = a(s(t_1, v), s(t_2, v))$.
- For every $v_1, v_2 \in V$ and every $t \in \mathbb{R}$, we have $s(t, a(v_1, v_2)) = a(s(t, v_1), s(t, v_2))$.

This is the definition of a vector space. It is logically precise and totally unambiguous, or in other words mathematically rigorous. However, it is difficult to have any intuitive feeling for this construction. In order to make the definition of a vector space more relatable, we will rewrite it using words and symbols that are more familiar and evocative.

First, let us call the elements of the set $V$ “vectors.” When we use this word, we are reminded of geometric vectors, which are familiar objects. However, this is only an analogy — we make no assumption on the nature of the elements of $V$. Indeed, we shall soon see that many rather different mathematical objects — including numbers, polynomials, functions, and of course geometric vectors — can all be viewed as vectors.

Second, let us call the function $a$ in Definition 1 “vector addition,” and write $v_1 + v_2$ in place of $a(v_1, v_2)$. If we use this alternative notation, then the axioms governing the function $a$ in Definition 1 become

- For all $v_1, v_2 \in V$, we have $v_1 + v_2 = v_2 + v_1$.
- For all $v_1, v_2, v_3 \in V$, we have $v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3$.
- There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
- For every $v \in V$, there exists $w \in V$ such that $v + w = 0$.

This makes things intuitively clear: the axioms above say that the operation of vector addition behaves the way addition does in other contexts we are familiar with, such as the addition of numbers or the addition of geometric vectors. Although conceptually helpful, this comes at a cost, and that cost is ambiguity: we are now using the symbol $+$ in two different ways, since it can mean either addition of numbers in $\mathbb{R}$ or addition of vectors in $V$, and these operations are not the same. However, writing $v_1 + v_2$ is so much more natural than writing $a(v_1, v_2)$ that we decide to do this going forward despite the ambiguity it introduces. This is called abuse of notation.

Third and last, we are going to do the same thing with the function $s$ — give it a name, and write it in a more intuitive but more ambiguous way. The usual name given to $s$ is “scalar multiplication,” which indicates that $s$ is an abstraction of the familiar operation of scaling a geometric vector by a number. The usual notation for scalar multiplication is simply juxtaposition: we write $t v$ in place of $s(t, v)$. Adding on the axioms prescribing the behavior of the function $s$, we now have

- For all $v_1, v_2 \in V$, we have $v_1 + v_2 = v_2 + v_1$.
- For all $v_1, v_2, v_3 \in V$, we have $v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3$.
- There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
- For every $v \in V$, there exists $w \in V$ such that $v + w = 0$.
- For every $v \in V$, we have $1 v = v$.
- For every $v \in V$ and every $t_1, t_2 \in \mathbb{R}$, we have $t_1 (t_2 v) = (t_1 t_2) v$.

Again, this makes things much more intuitive: for example, Axiom 5 now says that scaling any vector $v$ by the number $1$ does nothing, and Axiom 6 says that scaling a vector $v$ by a number $t_2$ and then scaling the result by $t_1$ produces the same result as scaling $v$ by the number $t_1 t_2$. Written in this way, the axioms governing scalar multiplication become clear and natural, and are compatible with our experience of scaling geometric vectors. However, we are again abusing notation, since two different operations, multiplication of real numbers and scaling of vectors, are being denoted in exactly the same way. Finally, if we incorporate the axioms dictating the way in which vector addition and scalar multiplication are required to interact with one another, we arrive at the following reformulation of Definition 1.

**Definition 2:** A **vector space** is a set $V$ whose elements can be added together and scaled by real numbers in such a way that the following axioms hold:

- For all $v_1, v_2 \in V$, we have $v_1 + v_2 = v_2 + v_1$.
- For all $v_1, v_2, v_3 \in V$, we have $v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3$.
- There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
- For every $v \in V$, there exists $w \in V$ such that $v + w = 0$.
- For every $v \in V$, we have $1 v = v$.
- For every $v \in V$ and every $t_1, t_2 \in \mathbb{R}$, we have $t_1 (t_2 v) = (t_1 t_2) v$.
- For every $v \in V$ and every $t_1, t_2 \in \mathbb{R}$, we have $(t_1 + t_2) v = t_1 v + t_2 v$.
- For every $v_1, v_2 \in V$ and every $t \in \mathbb{R}$, we have $t(v_1 + v_2) = t v_1 + t v_2$.

We also make the following definition, which formalizes the notion that a vector space may contain a smaller vector space.

**Definition 3:** A subset $W$ of a vector space $V$ is said to be a subspace of $V$ if $W$ is itself a vector space when equipped with the operations of vector addition and scalar multiplication inherited from $V$.

From now on, we are going to use Definition 2 as our definition of a vector space, since it is much more convenient and understandable to write things in this way. However, it is important to comprehend that Definition 2 is not completely precise, and to be aware that the pristine and unassailable definition of a vector space given by Definition 1 is what is actually happening under the hood.

**Exercise 1:** Write down as many concrete examples of vector spaces as you can. You should be able to exhibit quite a few specific vector spaces which are significantly different from one another.

This brings us to an important question: why are we doing this? We understand familiar vector spaces like $\mathbb{R}^2$ and $\mathbb{R}^3$ pretty well, so why not just analyze these as standalone objects instead of viewing them as particular instances of the general notion of a vector space? There are many answers to this question, some quite philosophical. Here is a practical answer: if we are able to prove theorems about an abstract vector space, then these theorems will be universal: they will apply to all specific instances of vector spaces which we encounter in the wild.

We now begin to develop this program: we seek to identify properties that every object which satisfies the axioms laid out in Definition 2 must have. What should these properties be? In addressing this question, it is helpful to rely on the intuition gained from experience working with geometric vectors. For example, vectors in $\mathbb{R}^2$ are just pairs of real numbers, and we have concrete and specific formulas for vector addition and scalar multiplication in $\mathbb{R}^2$: if $v = (v_1, v_2)$ and $w = (w_1, w_2)$, then

$$v + w = (v_1 + w_1, v_2 + w_2)$$

and

$$t v = (t v_1, t v_2).$$

Thus for example we can see that, in $\mathbb{R}^2$, Axiom 3 in Definition 2 is fulfilled by the vector

$$0 = (0, 0),$$

since

$$v + (0, 0) = (v_1 + 0, v_2 + 0) = (v_1, v_2) = v.$$

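These coordinate formulas make the axioms easy to spot-check numerically. A brief Python sketch with arbitrary test vectors of our own choosing, representing vectors in $\mathbb{R}^2$ as pairs:

```python
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(t, v):
    return (t * v[0], t * v[1])

u, v, w = (1.0, 2.0), (-3.0, 0.5), (4.0, -4.0)

assert add(u, v) == add(v, u)                    # Axiom 1: commutativity
assert add(u, add(v, w)) == add(add(u, v), w)    # Axiom 2: associativity
assert add(v, (0.0, 0.0)) == v                   # Axiom 3: (0, 0) is a zero vector
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # Axiom 8
```

Of course, a finite spot-check is not a proof; the formulas themselves are what verify the axioms for all vectors at once.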
In fact, we can say more: $(0, 0)$ is the *only* vector in $\mathbb{R}^2$ which has the property required by Axiom 3, because $0$ is the only number such that $x + 0 = x$ for every number $x$. We can similarly argue that, in $\mathbb{R}^3$, the only vector which fulfills Axiom 3 is $(0, 0, 0)$. So, we might suspect that in *any* vector space $V$, the vector whose existence is required by Axiom 3 is actually unique. This claim is formulated as follows.

**Proposition 1:** There is a unique vector $0 \in V$ such that $v + 0 = v$ for all $v \in V$.

In order to prove that the claim made by Proposition 1 is true, we must deduce it using nothing more than the vector space axioms given in Definition 2. This is Problem 1 on the first homework assignment. The propositions below give more properties which hold true for every vector space $V$. In every case, proving such a proposition means deducing its truth using no information apart from the axioms in Definition 2, and propositions which have already been proved using these axioms.

**Proposition 2:** For every $v \in V$, there exists a unique $w \in V$ such that $v + w = 0$.

*Proof:* Let $v \in V$ be any vector. By Axiom 4 in Definition 2, we know that there exists a vector $w$ such that $v + w = 0$. It remains to prove that this vector is unique. Suppose that $w'$ is another vector such that $v + w' = 0$. We then have

$$v + w = v + w'.$$

Adding the vector $w$ to both sides of this equation, we get

$$(w + v) + w = (w + v) + w'.$$

Since $w + v = v + w$ by Axiom 1, and since $v + w = 0$ by hypothesis, the above equation implies

$$0 + w = 0 + w'.$$

By Axiom 1 this is equivalent to

$$w + 0 = w' + 0,$$

and by Axiom 3 this implies

$$w = w',$$

as required. — Q.E.D.

Now that we have proved that the vector which cancels out $v$ is unique, it is appropriate to denote it by $-v$. Thus Axiom 4 becomes

$$v + (-v) = 0,$$

which we agree to write more simply as

$$v - v = 0.$$

More generally, for any two vectors $v, w \in V$, we write $v - w$ as shorthand for $v + (-w)$.

**Proposition 2:** For any $t \in \mathbb{R}$, we have $t 0 = 0$. That is, scaling the zero vector by any number produces the zero vector.

*Proof:* By Axiom 3, we have

$$0 + 0 = 0.$$

Let $t \in \mathbb{R}$ be arbitrary. Multiplying both sides of the above equation by $t$, we get

$$t(0 + 0) = t 0.$$

Using Axiom 8 on the left hand side of this equation, we get

$$t 0 + t 0 = t 0.$$

Now, subtracting $t 0$ from both sides of the above equation, we get

$$t 0 + t 0 - t 0 = t 0 - t 0,$$

which simplifies to

$$t 0 = 0,$$

as required. — Q.E.D.

**Proposition 3:** For any $v \in V$, we have $0 v = 0$. That is, scaling any vector by the number zero produces the zero vector.

*Proof:* Let $v \in V$ be any vector. We have

$$(0 + 1) v = 0 v + 1 v = 0 v + v,$$

where we used Axiom 7 to obtain the first equality and Axiom 5 to obtain the second equality. On the other hand, the left hand side of the above equation is

$$(0 + 1) v = 1 v = v,$$

where the first equality is the fact that adding the number $0$ and the number $1$ produces the number $1$, and the second equality is Axiom 5 again. So, we have that

$$v = 0 v + v.$$

Since the vector $v$ was chosen arbitrarily, we have shown that the vector $0 v$ has the property that

$$0 v + v = v$$

for any $v \in V$. We thus have

$$0 v = 0$$

by Proposition 1. — Q.E.D.

**Proposition 4:** If $t \neq 0$ and $v \neq 0$, then $t v \neq 0$. That is, scaling a nonzero vector by a nonzero number produces a nonzero vector.

*Proof:* Suppose there exists a nonzero number $t$ and a nonzero vector $v$ such that

$$t v = 0.$$

Since $t \neq 0$, the real number $\frac{1}{t}$ is well-defined. Multiplying both sides of the above equation by $\frac{1}{t}$, we obtain

$$\frac{1}{t}(t v) = \frac{1}{t} 0.$$

This gives

$$1 v = \frac{1}{t} 0.$$

Using Axiom 5 on the left hand side and Proposition 2 on the right hand side, this becomes

$$v = 0.$$

However, this is false, since $v \neq 0$. Since the statement $t v = 0$ leads to the false statement $v = 0$, it must itself be false, and we conclude that $t v \neq 0$. — Q.E.D.

**Proposition 5:** If $v \neq 0$ and $t_1 v = t_2 v$, then $t_1 = t_2$. That is, if two scalar multiples of the same nonzero vector are the same, then the scaling factors are the same.

*Proof:* Subtracting $t_2 v$ from both sides of the equation $t_1 v = t_2 v$ yields

$$(t_1 - t_2) v = 0,$$

where we used Axiom 7 on the left hand side. If $t_1 - t_2 \neq 0$, this contradicts Proposition 4, so it must be the case that $t_1 = t_2$. — Q.E.D.

**Proposition 6:** Every vector space contains either one vector, or infinitely many vectors.

*Proof:* Let $V$ be a vector space. Then, by Axiom 3, $V$ contains at least one vector, namely $0$. It is possible that this is the only vector in $V$, i.e. that we have $V = \{0\}$. However, if $V$ contains another vector $v \neq 0$, then it also contains the vector $t v$ for every $t \in \mathbb{R}$. By Proposition 4, each of these vectors with $t \neq 0$ is different from $0$, and by Proposition 5 they are all different from one another. — Q.E.D.

**Exercise 2:** Try to prove more propositions about vector spaces suggested by your familiarity with $\mathbb{R}^2$ and $\mathbb{R}^3$. If you discover something interesting, consider posting about your findings on Piazza.

We now embark on an ambitious project: using nothing more than Definition 2 and the Propositions we have already deduced from it, we want to define a meaningful notion of dimension for vector spaces. The first step on this road is the following definition.

**Definition 4:** Let $V$ be a vector space, and let $S = \{v_1, \dots, v_k\}$ be a finite subset of $V$. We say that $S$ is **linearly dependent** if there exist numbers $t_1, \dots, t_k$, not all equal to zero, such that

$$t_1 v_1 + \dots + t_k v_k = 0.$$

If no such numbers exist, then $S$ is said to be **linearly independent**.

It will be convenient to extend Definition 4 to the case where $S$ is a set of size zero. There is only one such set, namely the empty set $\emptyset$. By fiat, we declare the empty set to be a linearly independent set.

The fundamental feature of a linearly dependent set $S$ in a vector space $V$ is that at least one vector in $S$ is a linear combination of the other vectors in $S$, meaning that it can be represented as a sum of scalar multiples of these other vectors. For example, suppose that $S = \{v_1, v_2, v_3\}$ is a linearly dependent set. Then, by Definition 4, there exist numbers $t_1, t_2, t_3$, not all equal to zero, such that

$$t_1 v_1 + t_2 v_2 + t_3 v_3 = 0.$$

We thus have

$$t_1 v_1 = -t_2 v_2 - t_3 v_3.$$

If $t_1 \neq 0$, we can divide both sides of this equation by $t_1$, obtaining

$$v_1 = -\frac{t_2}{t_1} v_2 - \frac{t_3}{t_1} v_3.$$

This expresses the vector $v_1$ as a linear combination of $v_2$ and $v_3$. However, if $t_1 = 0$, we cannot divide through by $t_1$ as we did above. Instead, we use the fact that one of $t_2, t_3$ is nonzero. If $t_2 \neq 0$, then we have

$$v_2 = -\frac{t_1}{t_2} v_1 - \frac{t_3}{t_2} v_3,$$

while if $t_3 \neq 0$, we have

$$v_3 = -\frac{t_1}{t_3} v_1 - \frac{t_2}{t_3} v_2.$$

So, no matter what, the fact that $S$ is a linearly dependent set implies that at least one vector in this set is a linear combination of the other two. Conversely, if it were the case that $S$ was a linearly *independent* set, then *no* vector in $S$ would be a linear combination of the other two.
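The manipulation above can be replayed numerically. In the Python sketch below (the vectors and coefficients are our own choices), we build a dependence relation with $t_1 \neq 0$ and recover $v_1$ as a combination of $v_2$ and $v_3$:

```python
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(t, v):
    return (t * v[0], t * v[1])

v2, v3 = (1.0, 0.0), (0.0, 1.0)
t1, t2, t3 = 2.0, -4.0, 6.0
# Build v1 so that t1*v1 + t2*v2 + t3*v3 = 0 holds by construction.
v1 = scale(-1.0 / t1, add(scale(t2, v2), scale(t3, v3)))

# Since t1 != 0, dividing through gives v1 = (-t2/t1)*v2 + (-t3/t1)*v3.
recovered = add(scale(-t2 / t1, v2), scale(-t3 / t1, v3))
assert recovered == v1
```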

**Definition 5:** Let $V$ be a vector space and let $n$ be a nonnegative integer. We say that $V$ is **$n$-dimensional** if it contains a linearly independent set of size $d$ for each integer $0 \leq d \leq n$, and does not contain a linearly independent set of size $d$ for any integer $d > n$. If no such number $n$ exists, then $V$ is said to be **infinite-dimensional.**

**Proposition 7:** Suppose that $V$ is a vector space which is both $m$-dimensional and $n$-dimensional. Then $m = n$.

*Proof:* Let us suppose without loss of generality that $m \leq n$. Then either $m = n$ or $m < n$. Suppose it were the case that $m < n$. Then, since $V$ is $m$-dimensional, any subset of $V$ of size $n > m$ is linearly dependent. But this is false, since the fact that $V$ is $n$-dimensional means that $V$ contains a linearly independent set of size $n$. Consequently, it must be the case that $m = n$. — Q.E.D.

In view of Proposition 7, the concept of vector space dimension introduced by Definition 5 is well-defined: if $V$ is $n$-dimensional for some nonnegative integer $n$, then $n$ is the unique number with this property. We may therefore refer to $n$ as the dimension of $V$, and write $\dim V = n$. If $V$ is a vector space which is not $n$-dimensional for any nonnegative integer $n$, then it is infinite-dimensional, as per Definition 5. In this case it is customary to write $\dim V = \infty$.

One way to make the concept of vector space dimension more relatable is to think of it as the critical value at which a phase transition between possible linear independence and certain linear dependence occurs. That is, if one samples a set $S$ of vectors of size less than or equal to $\dim V$ from $V$, it is possible that $S$ is linearly independent; however, if one samples a set $S$ of more than $\dim V$ vectors from $V$, then $S$ is necessarily linearly dependent. To say that $\dim V = \infty$ is to say that this transition never occurs.

Let us use Definition 5 to calculate the dimension of $V = \{0\}$, a vector space containing only one vector. Observe that the only two subsets of $V$ are $\emptyset$ and $\{0\}$, and these sets have sizes $0$ and $1$, respectively. Now, $\emptyset$ is linearly independent by definition (see the paragraph immediately following Definition 4), so $V$ contains a linearly independent set of size zero. Moreover, $\{0\}$ is linearly dependent, since $t 0 = 0$ for any choice of $t \neq 0$ by Proposition 2, and thus any subset of $V$ of size bigger than zero is linearly dependent. We have thus shown that $\dim \{0\} = 0$.

As another example, let us calculate the dimension of the number line $\mathbb{R}$. A linearly independent subset of $\mathbb{R}$ of size zero is given by the empty set $\emptyset$. A linearly independent set of size one is given by $\{1\}$, or any other set containing a single non-zero real number. Consider now an arbitrary subset $\{x_1, x_2\}$ of $\mathbb{R}$ of size two. Since $x_1 \neq x_2$, at least one of $x_1, x_2$ is not equal to zero. Suppose without loss of generality that $x_1 \neq 0$. If $x_2 = 0$, then we have $t_1 x_1 + t_2 x_2 = 0$ with $t_1 = 0$ and $t_2 = 1$, so that $\{x_1, x_2\}$ is linearly dependent in this case. If $x_2 \neq 0$, then we have $t_1 x_1 + t_2 x_2 = 0$ with $t_1 = \frac{1}{x_1}$ and $t_2 = -\frac{1}{x_2}$. So, we have shown that any set of two real numbers is linearly dependent, and by one of the problems on Assignment 1 this implies that any set of more than two real numbers is linearly dependent. We thus conclude that $\dim \mathbb{R} = 1$.
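The two-element computation can be checked directly; in the Python sketch below we pick powers of two so the floating-point arithmetic is exact (the specific numbers are our own):

```python
x1, x2 = 4.0, -8.0               # two distinct nonzero reals
t1, t2 = 1.0 / x1, -1.0 / x2     # the coefficients from the argument above

assert (t1, t2) != (0.0, 0.0)    # the linear combination is nontrivial
assert t1 * x1 + t2 * x2 == 0.0  # and it vanishes: {x1, x2} is dependent
```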

At this point, vector spaces may seem like hopelessly complicated objects which cannot be analyzed in general. However, it turns out that for many purposes understanding a given vector space $V$ can be reduced to understanding a well-chosen finite subset of $V$. The first step in this direction is the following theorem.

**Theorem 1:** Let $V$ be an $n$-dimensional vector space, and let $S = \{v_1, \dots, v_n\}$ be a linearly independent set of $n$ vectors in $V$. Then, every vector in $V$ can be uniquely represented as a linear combination of the vectors in $S$.

*Proof:* Let $v \in V$ be any vector. Consider the set $S \cup \{v\} = \{v_1, \dots, v_n, v\}$. Since the dimension of $V$ is $n$, the set $S \cup \{v\}$ must be linearly dependent. Thus there exist numbers $t_1, \dots, t_n, t$, not all of which are equal to zero, such that

$$t_1 v_1 + \dots + t_n v_n + t v = 0.$$

We claim that $t \neq 0$. Indeed, if it were the case that $t = 0$, then the above would read

$$t_1 v_1 + \dots + t_n v_n = 0,$$

where $t_1, \dots, t_n$ are not all equal to zero. But this is impossible, since $S$ is a linearly independent set, and thus it cannot be the case that $t = 0$. Now, since $t \neq 0$, we can write

$$v = -\frac{t_1}{t} v_1 - \dots - \frac{t_n}{t} v_n,$$

which shows that $v$ is a linear combination of the vectors $v_1, \dots, v_n$. Since $v$ was arbitrary, we have shown that every vector in $V$ can be represented as a linear combination of vectors from $S$.

Now let us prove uniqueness. Let $v \in V$ be a vector, and suppose that

$$v = x_1 v_1 + \dots + x_n v_n \quad \text{and} \quad v = y_1 v_1 + \dots + y_n v_n$$

are two representations of $v$ as a linear combination of the vectors in $S$. Subtracting the second of these equations from the first, we obtain the equation

$$(x_1 - y_1) v_1 + \dots + (x_n - y_n) v_n = 0.$$

Since $S$ is a linearly independent set, we have that $x_i - y_i = 0$ for all $1 \leq i \leq n$, which means that $x_i = y_i$ for all $1 \leq i \leq n$. We thus conclude that any two representations of any vector $v$ as a linear combination of the vectors in $S$ in fact coincide. — Q.E.D.

A subset $S$ of a vector space $V$ which has the property that every $v \in V$ can be written as a linear combination of vectors in $S$ is said to **span** the vector space $V$. If moreover $S$ is a linearly independent set, then $S$ is called a **basis** of $V$, and in this case the above argument shows that every vector in $V$ can be written as a *unique* linear combination of the vectors in $S$. In Theorem 1, we have proven that, in an $n$-dimensional vector space $V$, any linearly independent set of size $n$ is a basis. We will continue to study the relationship between the dimension of a vector space and its bases in Lecture 2.

## Math 31AH: Lecture 0

Welcome to Math 31AH, the first quarter of a three-quarter honors integrated linear algebra/multivariable calculus sequence for well-prepared students. Math 31AH focuses on linear algebra, meaning the study of vectors and linear transformations.

Before saying anything else, I want to draw your attention to the following remark which accompanies the registrar’s listing for this course:

The honors calculus courses 31AH-BH-CH are unusually demanding and are intended for students with strong aptitude and deep interest in Mathematics.

– UCSD Registrar’s Office

If you’re enrolled in this class, I want you to be aware of this: Math 31AH is an unusually demanding course. If the prospect of an intellectual challenge appeals to you, you’ve come to the right place; otherwise, this is not the course for you, and you should instead consider Math 18 and the Math 20 course sequence. It is not the case that the Math 31 course sequence is “better” than the Math 18/20 course sequence — but it is different. The Math 31 sequence is more theoretical and rigorous, meaning that there is a strong emphasis on precise definitions and clear logical reasoning, as opposed to simply learning how to use formulas. This does not mean that you are not expected to master formulas in Math 31AH, rather it means that the process of translating theoretical knowledge into the ability to do concrete calculations is largely left to the student — there is a large “make your own examples” component.

Now let us discuss the basic parameters of the course. First of all, we are presently in a highly unusual situation, and will not be able to meet for standard in-person lectures. The course will instead be organized as follows.

This blog will serve as the textbook for the course: the entire course content will be made available here, in the form of lecture posts. These posts will be of a quality unmatched by any textbook, and will only be available to students enrolled in Math 31AH. There will be two posts per week, each of which will contain content corresponding to what would be covered in a rather ambitious in-person lecture. In addition to written text and links to additional material, each of the two weekly lecture posts will be accompanied by a video coda consisting of a discussion of the post together with illuminating examples, worked problems, and sometimes additional content. These videos will be posted to the media gallery section of the Math 31AH Canvas page. Originally I had planned to embed each such video in the corresponding lecture post, but this proved problematic (an example of an embedded video is included at the bottom of this post). Each video coda will be presented in a way which assumes familiarity with the lecture post containing it. The two weekly lecture posts will be made available prior to our designated Monday and Wednesday 15:00-15:50 lecture slots.

That leaves the Friday lecture slot, which will consist of two parts. The first part will occupy the 15:00-15:50 lecture slot, and will be conducted in the style of a flipped classroom in order to cement your understanding of material you have already absorbed. This may take the form of an interactive problem solving session, a recap of course material presented in the lecture posts, or a more free-form discussion of the course material intended to deepen and broaden your understanding. The flipped classroom session will segue into an additional 70 minutes corresponding to “office hours,” which will be driven primarily by student questions (many students find it worthwhile to attend office hours even if they don’t plan to ask questions). The combined flipped classroom and office hours endeavor makes for a two hour live Zoom session every Friday, 15:00-17:00. For those of you not already aware, Zoom is a freely available videoconferencing platform. These Friday Zoom sessions will be recorded and made available on Canvas.

In addition to the course content delivered by Professor Novak in the manner described above, the Math 31AH teaching assistant, Finley McGlade, will be running live discussion sections via Zoom every Thursday, from 11:00-11:50 for students in section A01, and from 12:00-12:50 for students in section A02. Mr McGlade will post further details concerning the discussion sections on Piazza. There will be no discussion sections on October 1.

Piazza is an online platform facilitating online discussion of all aspects of the course, from logistics to course content. If you are enrolled in Math 31AH, you should have already received an email containing Piazza signup instructions. Links to the lecture posts will be made available under the “Resources” section, where you will also find a syllabus link leading back to this post. Both Professor Novak and Mr McGlade will be active on Piazza. We also expect that Piazza will serve as a forum for students to discuss the course content with each other, and to endeavor to answer each other's questions. Please use Piazza as the default mechanism for asking questions about the course, and refrain from using email unless absolutely necessary.

Now that we have discussed how the course content will be disseminated to students, let us discuss the content to be created by students and submitted for evaluation. Students in Math 31AH will generate two types of content: solutions to weekly problem sets, and solutions to exams.

We will aim for a total of nine weekly problem sets in this course. Problem sets will be posted to Piazza on Sundays before 24:00, and the corresponding solutions will be due the following Sunday before 24:00. You will submit your solutions, and receive your graded work back, via Gradescope. This process will be managed by Mr McGlade, and any questions or concerns related to Gradescope should be directed to him. The problem sets are a very important part of the course, and accordingly make up 50% of the total grade. While you may submit handwritten solutions, it is recommended that you typeset your solutions using LaTeX, the professional standard for the preparation of scientific documents. Learning how to prepare scientific documents of professional quality is a valuable skill that will serve you well in your university career, and beyond. In order to help with this, the problem sets will be typeset in LaTeX, and the source files will be posted along with their PDF output. Problem set solutions which have been typeset using LaTeX will receive an automatic 5% bonus.
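If you have never used LaTeX before, a minimal solution file looks something like the sketch below (the title, problem statement, and package choices are illustrative, not a required format; the posted problem set source files are the authoritative starting point):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm} % standard AMS packages for math symbols and proof environments

\title{Math 31AH: Problem Set 1 Solutions}
\author{Your Name}
\date{}

\begin{document}
\maketitle

\section*{Problem 1}
% State the claim, then give your argument inside a proof environment.
\textbf{Claim.} For all vectors $u,v$ in a Euclidean space,
\[
  |\langle u,v \rangle| \leq \|u\| \, \|v\|.
\]

\begin{proof}
% Your argument goes here.
\end{proof}

\end{document}
```

Compiling this file (e.g. with `pdflatex`, or on an online service such as Overleaf) produces a PDF suitable for Gradescope submission.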

There will be no problem set due on November 13, because this is the date of the midterm exam. The midterm exam will count for 20% of your course grade. The details of how the midterm exam will be written and submitted are not yet available, but will be soon, and I will keep you updated.

The final exam for the course is set for December 18, with a scheduled time of 15:00-17:59. This date and time slot is set by the university registrar. The final exam will count for 30% of your course grade. The details of how the final exam will be written and submitted are not yet available, but will be soon, and I will keep you updated.

Our first live Zoom session will take place on October 2, 15:00-17:00. I expect you will have many questions, so this meeting will be purely logistical (no math). The schedule below indicates the expected mathematical content of each subsequent lecture. This schedule is subject to change, and may be updated during the quarter.

| Date | Lecture | Topic |
|-------|------------|-------|
| 10/02 | Lecture 0 | Course logistics |
| 10/05 | Lecture 1 | Vector spaces, basis and dimension |
| 10/07 | Lecture 2 | Linear transformations, isomorphism, coordinates |
| 10/09 | Lecture 3 | Flipped classroom, office hour |
| 10/12 | Lecture 4 | Change of basis, Euclidean spaces, Cauchy-Schwarz inequality |
| 10/14 | Lecture 5 | Orthogonal bases; Gram-Schmidt algorithm |
| 10/16 | Lecture 6 | Flipped classroom, office hour |
| 10/19 | Lecture 7 | Orthogonal projection, approximation |
| 10/21 | Lecture 8 | Linear forms, bilinear forms |
| 10/23 | Lecture 9 | Flipped classroom, office hour |
| 10/26 | Lecture 10 | Change of basis, quadratic forms, polarization |
| 10/28 | Lecture 11 | Reduction of a quadratic form to a sum of squares |
| 10/30 | Lecture 12 | Flipped classroom, office hour |
| 11/02 | Lecture 13 | Law of inertia for quadratic forms |
| 11/04 | Lecture 14 | Linear transformations and matrices, rank and nullity |
| 11/06 | Lecture 15 | Flipped classroom, office hour |
| 11/09 | Lecture 16 | Adjoint, symmetric and orthogonal transformations |
| 11/13 | Lecture 17 | MIDTERM EXAM |
| 11/16 | Lecture 18 | Invariant subspaces, eigenvalues and eigenvectors |
| 11/18 | Lecture 19 | Diagonalization of symmetric transformations |
| 11/20 | Lecture 20 | Flipped classroom, office hour |
| 11/23 | Lecture 21 | Tensor product |
| 11/25 | Lecture 22 | Symmetric and antisymmetric tensors |
| 11/30 | Lecture 23 | Wedge product and oriented volume |
| 12/02 | Lecture 24 | Wedge product and determinants |
| 12/04 | Lecture 25 | Flipped classroom, office hour |
| 12/07 | Lecture 26 | Characteristic polynomial |
| 12/09 | Lecture 27 | Complex linear algebra |
| 12/11 | Lecture 28 | Flipped classroom, office hour |