This post is a little shorter than the previous ones because I went back and re-worked parts of Lecture 3 substantially to add more detail and clarity, and I also added some additional material which upon reflection is best housed in Lecture 3. So, as part of reading this post, you should go back and make a second pass through Lecture 3.

What we have succeeded in doing so far is defining limits and continuity for functions which map between Euclidean spaces, so we have the first two core notions of calculus squared away. (We saw a fairly involved example, the eigenvalue mapping on symmetric operators, which showed that continuity can be difficult to establish for functions that are hard to describe explicitly.)

The remaining core notions of calculus are the big ones: differentiation and integration. In Math 31BH we develop differential calculus for vector-valued functions of vectors, and in Math 31CH you will concentrate on the development of integral calculus for such functions.

Let us begin with the familiar setting of functions $f \colon \mathbb{R} \to \mathbb{R}$. First of all, we want to consider functions which may not be defined on all of $\mathbb{R}$, but only on some subset $D \subseteq \mathbb{R}$, the domain of $f$. For example, the square root function $f(x) = \sqrt{x}$ has domain $D = \{x \in \mathbb{R} \colon x \geq 0\}$, the set of nonnegative numbers, while the logarithm function $f(x) = \log x$ has domain $D = \{x \in \mathbb{R} \colon x > 0\}$, the set of positive numbers. So in our general setup we are considering functions $f \colon D \to \mathbf{W}$, where $D$ is a subset of a Euclidean space $\mathbf{V}$ which does not necessarily exhaust that space. This extension is completely non-problematic, as is the extended notion of the image of a function.

**Definition 1:** The **image** of a function $f \colon D \to \mathbf{W}$ is the set of outputs of $f$ in $\mathbf{W}$,

$$f(D) = \{f(\mathbf{v}) \colon \mathbf{v} \in D\} \subseteq \mathbf{W}.$$

Now let us talk about graphs of functions, the precise definition of which involves the direct product of Euclidean spaces, a concept introduced in Lecture 3.

**Proposition 1:** If $\dim \mathbf{V} = n$ and $\dim \mathbf{W} = m$, then $\dim (\mathbf{V} \times \mathbf{W}) = n + m$.

*Proof:* If $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is an orthonormal basis of $\mathbf{V}$ and $\{\mathbf{f}_1, \dots, \mathbf{f}_m\}$ is an orthonormal basis of $\mathbf{W}$, then the set

$$G = \{(\mathbf{e}_i, \mathbf{0}_\mathbf{W}) \colon 1 \leq i \leq n\} \cup \{(\mathbf{0}_\mathbf{V}, \mathbf{f}_j) \colon 1 \leq j \leq m\}$$

spans $\mathbf{V} \times \mathbf{W}$. This follows readily from the fact that $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ spans $\mathbf{V}$ and $\{\mathbf{f}_1, \dots, \mathbf{f}_m\}$ spans $\mathbf{W}$; make sure you understand why. Moreover, for any two elements $(\mathbf{v}_1, \mathbf{w}_1)$ and $(\mathbf{v}_2, \mathbf{w}_2)$ of $G$ we have that

$$\langle (\mathbf{v}_1, \mathbf{w}_1), (\mathbf{v}_2, \mathbf{w}_2) \rangle = \langle \mathbf{v}_1, \mathbf{v}_2 \rangle + \langle \mathbf{w}_1, \mathbf{w}_2 \rangle,$$

which vanishes unless $\mathbf{v}_1 = \mathbf{v}_2$ and $\mathbf{w}_1 = \mathbf{w}_2$. Thus $G$ is an orthogonal set, and in particular it is a linearly independent set in $\mathbf{V} \times \mathbf{W}$. Hence $G$ is a basis of $\mathbf{V} \times \mathbf{W}$, which therefore has dimension $n + m$.

Q.E.D.

**Definition 2:** The **graph** of a function $f \colon D \to \mathbf{W}$ is the set of all input-output pairs for $f$, i.e. the subset of $\mathbf{V} \times \mathbf{W}$ defined by

$$\Gamma(f) = \{(\mathbf{v}, f(\mathbf{v})) \colon \mathbf{v} \in D\}.$$

This agrees with the informal definition of a graph you have known for a long time as a drawing of $f$ on a piece of paper: for functions $f \colon D \to \mathbb{R}$ with $D \subseteq \mathbb{R}$, we have

$$\Gamma(f) = \{(x, f(x)) \colon x \in D\} \subseteq \mathbb{R}^2.$$

In the general case, the graph of $f$ is a harder object to understand, and this is not just because $\mathbf{V}$ and $\mathbf{W}$ are abstract Euclidean spaces. Indeed, even if we work in coordinates as described in Lecture 3, meaning that we consider the associated function $F \colon \mathbb{R}^n \to \mathbb{R}^m$, this function

has graph

$$\Gamma(F) = \{(x_1, \dots, x_n, F_1(x_1, \dots, x_n), \dots, F_m(x_1, \dots, x_n)) \colon (x_1, \dots, x_n) \in \mathbb{R}^n\} \subseteq \mathbb{R}^{n+m},$$

which may be difficult to visualize if $n + m > 3$.
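To make the coordinate description concrete, here is a small numerical sketch (not from the lecture; the function $F(x_1, x_2) = x_1^2 + x_2^2$ is chosen arbitrarily) that samples points of the graph of a function $F \colon \mathbb{R}^2 \to \mathbb{R}$, which lives in $\mathbb{R}^{2+1} = \mathbb{R}^3$.

```python
import itertools

def F(x1, x2):
    # An arbitrary example function R^2 -> R, used only for illustration.
    return x1 ** 2 + x2 ** 2

# Sample a 5 x 5 grid of inputs in the plane.
grid = [i / 2 for i in range(-2, 3)]  # [-1.0, -0.5, 0.0, 0.5, 1.0]

# Each graph point is the input-output triple (x1, x2, F(x1, x2)) in R^3.
graph_points = [(x1, x2, F(x1, x2))
                for x1, x2 in itertools.product(grid, grid)]

print(len(graph_points))  # 25 points of the graph, a surface in R^3
```

Plotting these triples would trace out a piece of a paraboloid; for a function $F \colon \mathbb{R}^n \to \mathbb{R}^m$ with $n + m > 3$, no such drawing is available.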

Now we come to the real sticking point, the Newton quotient. If $D \subseteq \mathbb{R}$ is an open set and $f \colon D \to \mathbb{R}$ is a function, then for any $x \in D$ the ratio

$$\frac{f(x+h) - f(x)}{h}$$

is well-defined for any sufficiently small number $h \neq 0$. Moreover, this number has an immediate intuitive meaning in terms of a secant line for the graph $\Gamma(f)$, i.e. it is the slope of the line in $\mathbb{R}^2$ passing through the points $(x, f(x))$ and $(x+h, f(x+h))$. We then say that $f$ is differentiable at the point $x$ if the limit

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$

exists, in which case the number $f'(x)$ is called the derivative of $f$ at $x$; it is the slope of the tangent line to $\Gamma(f)$ at the point $(x, f(x))$.
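The convergence of secant slopes to the tangent slope is easy to watch numerically. A minimal sketch, assuming the example function $f(x) = x^2$, whose derivative at $x = 3$ is exactly $6$:

```python
def newton_quotient(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2

# For this f the quotient equals 6 + h exactly, so shrinking h
# drives it toward the derivative f'(3) = 6.
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, newton_quotient(f, 3.0, h))
```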

Generalizing the definition of the derivative to functions which map vectors to vectors is problematic from the outset. Let $D$ be an open set in a Euclidean space $\mathbf{V}$, and let $f \colon D \to \mathbf{W}$ be a function defined on $D$. For any $\mathbf{v} \in D$, we have $\mathbf{v} + \mathbf{h} \in D$ for sufficiently small $\|\mathbf{h}\|$, so that the difference

$$f(\mathbf{v} + \mathbf{h}) - f(\mathbf{v})$$

makes sense for any $\mathbf{h} \in \mathbf{V}$ with $\|\mathbf{h}\|$ sufficiently small. However, when we attempt to form the corresponding difference quotient, we get the fraction

$$\frac{f(\mathbf{v} + \mathbf{h}) - f(\mathbf{v})}{\mathbf{h}},$$

which is problematic since at no time in Math 31AH up until now have we defined what it means to **divide** two vectors in a vector space $\mathbf{V}$. As we discussed in Lecture 2, a notion of vector division in some sense only exists for $\dim \mathbf{V} = 1$, in which case vectors are real numbers, and $\dim \mathbf{V} = 2$, in which case $\mathbf{V}$ can be identified with the complex numbers $\mathbb{C}$, for which division is meaningful. The former case gives us back the usual calculus derivative, and the latter gives us a notion of derivative for functions $f \colon \mathbb{C} \to \mathbb{C}$, which is the starting point of the subject known as complex analysis. Complex analysis is a beautiful and useful subject, but our world is not two-dimensional, and we would like to have access to calculus in dimensions higher than two. Moreover, we want to consider functions $f \colon D \to \mathbf{W}$ where $\mathbf{W}$ is distinct from the Euclidean space $\mathbf{V}$ containing the domain $D$ of $f$. In such a setting the Newton quotient becomes even more heretical, since it involves division of a vector in $\mathbf{W}$ by a vector in $\mathbf{V}$.
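The two-dimensional case can be illustrated concretely: because complex division is defined, the Newton quotient makes sense for a function of a complex variable, and for a differentiable one its limit does not depend on the direction from which $h$ approaches $0$. A small sketch, assuming the example function $f(z) = z^2$, for which the quotient works out to $2z + h$:

```python
def quotient(f, z, h):
    # The Newton quotient makes sense in C because we can divide by h.
    return (f(z + h) - f(z)) / h

f = lambda z: z * z
z = 1 + 2j

# Approach 0 along the real axis, the imaginary axis, and a diagonal:
# each quotient is close to the same value 2z = 2 + 4j.
for h in [1e-6, 1e-6j, 1e-6 + 1e-6j]:
    print(quotient(f, z, h))
```

This direction-independence is exactly what makes the complex derivative such a strong condition, and it is the starting point of complex analysis.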

We will have to work hard to resolve the philosophical impediments to differentiation of vector-valued functions of vectors. However, there is a natural starting point for this quest, namely the differentiation of vector-valued functions of scalars. Indeed, if $D \subseteq \mathbb{R}$ is an open set of real numbers and $f \colon D \to \mathbf{W}$ is a function from $D$ into a Euclidean space $\mathbf{W}$, then for $t \in D$ and sufficiently small $h \neq 0$ the Newton quotient

$$\frac{f(t+h) - f(t)}{h}$$

makes perfectly good sense: it is the vector $f(t+h) - f(t) \in \mathbf{W}$ scaled by the number $\frac{1}{h} \in \mathbb{R}$. So vector-valued functions of scalars are a good place to start.

We will work in the setting where $f \colon [a, b] \to \mathbf{W}$ is a function whose domain is a closed interval $[a, b] \subseteq \mathbb{R}$. In this case, the image $f([a, b])$ of $f$ is said to be a curve in $\mathbf{W}$, and by abuse of language we may refer to $f$ itself as a curve in $\mathbf{W}$; it may be thought of as the path described by a particle located at the point $f(a)$ at time $t = a$, and located at the point $f(b)$ at time $t = b$.

**Definition 3:** A function $f \colon [a, b] \to \mathbf{W}$ is said to be **differentiable** at a point $t \in [a, b]$ if the limit

$$f'(t) = \lim_{h \to 0} \frac{f(t+h) - f(t)}{h}$$

exists. In this case, the vector $f'(t) \in \mathbf{W}$ is said to be the **derivative** of $f$ at $t$. In full detail, this means that $f'(t)$ is a vector with the following property: for any $\varepsilon > 0$, there exists a corresponding $\delta > 0$ such that

$$0 < |h| < \delta \implies \left\| \frac{f(t+h) - f(t)}{h} - f'(t) \right\| < \varepsilon,$$

where $\| \cdot \|$ is the Euclidean norm in $\mathbf{W}$.

Note that the component functions of a curve $f \colon [a, b] \to \mathbf{W}$ relative to an orthonormal basis $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ of $\mathbf{W}$ are scalar-valued functions of the scalar "time variable" $t$, i.e.

$$f_i(t) = \langle f(t), \mathbf{e}_i \rangle, \quad 1 \leq i \leq n.$$

This is part of what makes curves easier to study than general vector-valued functions: they are just $n$-tuples of functions $f_i \colon [a, b] \to \mathbb{R}$, for which we already have a well-developed calculus at our disposal.

**Theorem 1:** Let $f \colon [a, b] \to \mathbf{W}$ be a curve, and let $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ be an orthonormal basis of $\mathbf{W}$. Then $f$ is differentiable at time $t$ if and only if its component functions $f_1, \dots, f_n$ relative to $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ are differentiable at time $t$, and in this case we have

$$f'(t) = f_1'(t)\mathbf{e}_1 + \dots + f_n'(t)\mathbf{e}_n.$$

*Proof:* The components of the vector-valued Newton quotient

$$\frac{f(t+h) - f(t)}{h}$$

relative to the basis $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ are the scalar-valued Newton quotients

$$\frac{f_i(t+h) - f_i(t)}{h}, \quad 1 \leq i \leq n.$$

The statement now follows from Proposition 1 in Lecture 3.

Q.E.D.
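Theorem 1 is easy to sanity-check numerically: the Newton quotient of a curve, written in coordinates, is computed componentwise, so it converges to the vector of componentwise derivatives. A sketch, assuming the arbitrarily chosen example curve $f(t) = (t^2, t^3)$ in coordinates:

```python
def curve(t):
    # Example curve in R^2, chosen arbitrarily for illustration.
    return (t ** 2, t ** 3)

def newton_quotient(f, t, h):
    # The vector-valued Newton quotient, computed component by component.
    ft, fth = f(t), f(t + h)
    return tuple((b - a) / h for a, b in zip(ft, fth))

t, h = 2.0, 1e-6
q = newton_quotient(curve, t, h)
# Componentwise derivatives at t = 2: f1'(t) = 2t = 4, f2'(t) = 3t^2 = 12,
# and q is close to (4, 12).
print(q)
```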

Example 1: Let $\mathbf{W}$ be a $2$-dimensional Euclidean space with orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2\}$. Consider the function $f \colon [0, 2\pi] \to \mathbf{W}$ defined by

$$f(t) = (\cos t)\mathbf{e}_1 + (\sin t)\mathbf{e}_2.$$

It is hopefully immediately apparent to you that the image of $f$ in $\mathbf{W}$ is the unit circle in this Euclidean space,

$$f([0, 2\pi]) = \{\mathbf{w} \in \mathbf{W} \colon \|\mathbf{w}\| = 1\}.$$

The graph

$$\Gamma(f) = \{(t, f(t)) \colon t \in [0, 2\pi]\} \subseteq \mathbb{R} \times \mathbf{W}$$

is a helix. The component functions of $f$ in the basis $\{\mathbf{e}_1, \mathbf{e}_2\}$ of $\mathbf{W}$ are

$$f_1(t) = \cos t, \quad f_2(t) = \sin t,$$

and as you know from elementary calculus these are differentiable functions with derivatives

$$f_1'(t) = -\sin t, \quad f_2'(t) = \cos t.$$

Thus, the curve $f$ is differentiable, and its derivative is

$$f'(t) = (-\sin t)\mathbf{e}_1 + (\cos t)\mathbf{e}_2.$$

Equivalently, the coordinate vector of the vector $f'(t) \in \mathbf{W}$ in the basis $\{\mathbf{e}_1, \mathbf{e}_2\}$ is

$$\begin{bmatrix} -\sin t \\ \cos t \end{bmatrix}.$$
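The computation in Example 1 can be checked numerically: in coordinates, the Newton quotient of the circle curve $(\cos t, \sin t)$ should approach $(-\sin t, \cos t)$. A minimal sketch:

```python
import math

def f(t):
    # The circle curve of Example 1, written in coordinates.
    return (math.cos(t), math.sin(t))

def newton_quotient(t, h):
    return tuple((b - a) / h for a, b in zip(f(t), f(t + h)))

t, h = 1.0, 1e-6
approx = newton_quotient(t, h)
exact = (-math.sin(t), math.cos(t))
# The two pairs agree to roughly six decimal places.
print(approx, exact)
```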