## Math 31BH: Lecture 6

We are in the process of developing differential calculus for vector-valued functions. Presently, we are focusing on the special case of scalar inputs, where the derivative is easy to define. Let us review the basic setup.

Let $[a,b] \subseteq \mathbb{R}$ be an interval, and let $\mathbf{W}$ be a Euclidean space. We shall consider continuous functions

$f \colon [a,b] \longrightarrow \mathbf{W}.$

The image of $[a,b]$ under the mapping $f$, i.e. the subset of $\mathbf{W}$ defined by

$\mathrm{Im}(f) = \{ f(t) \colon t \in [a,b]\}$

is called a curve in $\mathbf{W}$. It is commonplace to refer to the function $f$ itself as a curve, though strictly speaking this is abuse of language because different functions may have the same image. For example, consider the functions

$f,g \colon [0,2\pi] \longrightarrow \mathbb{R}^2$

defined by

$f(t) = (\cos t,\sin t) \quad\text{ and }\quad g(t) = (\cos 2t, \sin 2t).$

These functions are different, $f \neq g,$ because they do not produce the same outputs for all inputs $t \in [0,2\pi],$ but their images are the same curve in $\mathbb{R}^2,$ and this curve is the unit circle

$\{\mathbf{v} \in \mathbb{R}^2 \colon \|\mathbf{v}\|=1\}.$

Definition 1: For any fixed $t\in [a,b]$, the Newton quotient of $f$ at $t$ is the function

$\Delta_t \colon [a-t,b-t]\backslash \{0\} \longrightarrow \mathbf{W}$

defined by

$\Delta_t(h) = \frac{1}{h}(f(t+h) - f(t)).$

Here are a few remarks on the Newton quotient of the function $f$ at the time $t$.

First, the domain of $\Delta_t$ consists of all nonzero numbers $h$ such that $t+h$ is contained in the domain $[a,b]$ of $f$. For example, the Newton quotient $\Delta_a$ of $f$ at the left endpoint $a$ of the domain $[a,b]$ has domain $(0,b-a],$ and the Newton quotient $\Delta_b$ of $f$ at the right endpoint $b$ has domain $[a-b,0).$

Second, $\Delta_t(h)$ is a vector in $\mathbf{W}$: this vector may be visualized as the directed line segment from the point $f(t)$ to the point $f(t+h)$, scaled by the factor $\frac{1}{h}$, as in the schematic figures below.

Third, if we view $f(t)$ as the position of a particle moving in $\mathbf{W}$ at time $t$, the Newton quotient $\Delta_t(h)$ has a physical interpretation as the average velocity of the particle over the time period $[t,t+h]$ if $h>0$, or over $[t+h,t]$ if $h<0.$ The orientation of the vector $\Delta_t(h)$ is compatible with the flow of time. The idea of the derivative is that, if the curve $f$ is sufficiently smooth, then for $|h|$ small the vectors $\Delta_t(-h)$ and $\Delta_t(h)$ should be nearly identical, and as we shrink $|h|$ they become an exact match: the instantaneous velocity of the particle at time $t.$

Definition 2: A function $f \colon [a,b] \to \mathbf{W}$ is said to be differentiable at $t \in [a,b]$ if there exists a vector $f'(t) \in \mathbf{W}$ such that

$\lim\limits_{h \to 0} \Delta_t(h)=f'(t),$

and when this is the case $f'(t)$ is called the derivative of $f$ at $t$, or the tangent vector to $f$ at $t$. The curve $f$ is said to be smooth if it is differentiable for all $t \in [a,b]$ and the derivative $f' \colon [a,b] \to \mathbf{W}$ is continuous.
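The convergence in Definition 2 can be observed numerically. The following sketch (hypothetical Python/NumPy code, not part of the course materials) approximates the tangent vector of the unit-circle curve $f(t)=(\cos t,\sin t)$ by Newton quotients with shrinking $h$, and compares against the known tangent vector $(-\sin t,\cos t)$:

```python
import numpy as np

def f(t):
    # the unit-circle curve f(t) = (cos t, sin t)
    return np.array([np.cos(t), np.sin(t)])

def newton_quotient(f, t, h):
    # Delta_t(h) = (f(t+h) - f(t)) / h
    return (f(t + h) - f(t)) / h

t = np.pi / 4
exact = np.array([-np.sin(t), np.cos(t)])  # known tangent vector at time t

for h in [1e-1, 1e-3, 1e-6]:
    approx = newton_quotient(f, t, h)
    print(h, np.linalg.norm(approx - exact))  # error shrinks with |h|
```

The printed errors decrease roughly linearly in $|h|$, which is the behavior one expects from a first-order difference quotient of a smooth curve.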

The main technical tool for checking differentiability and calculating derivatives is the method of component functions. Recall that if $\mathcal{C}=\{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ is a basis of $\mathbf{W}$, then the corresponding component functions

$f_1,\dots,f_m \colon [a,b] \longrightarrow \mathbb{R}$

are determined by the condition

$f(t) = f_1(t)\mathbf{c}_1 + \dots + f_m(t)\mathbf{c}_m \quad\forall t \in [a,b].$

We have the following useful theorem at our disposal.

Theorem 1: The curve $f \colon [a,b] \to \mathbf{W}$ is differentiable at time $t \in [a,b]$ if and only if each of its component functions $f_1,\dots,f_m \colon [a,b] \to \mathbb{R}$ is differentiable at $t$, and in this case the tangent vector $f'(t)$ is given by

$f'(t) = f_1'(t) \mathbf{c}_1 + \dots + f_m'(t) \mathbf{c}_m.$

Theorem 1 is useful for proving theorems about vector-valued functions, since it reduces the differentiation of a vector-valued function to the differentiation of its component functions, which map scalars to scalars, $f_i \colon [a,b] \to \mathbb{R}$. This is exactly what makes curves easier to handle than general functions $f \colon D \subseteq \mathbf{V} \to \mathbf{W}$, whose component functions are instead mappings $f_i \colon D \to \mathbb{R}$ (as yet we do not even know how to define the derivative for functions whose domain is not one-dimensional, since this involves the ill-defined concept of dividing by a vector). But curves are relatively non-problematic because we can use Theorem 1 to reduce many statements to single variable calculus. For example, we have the following.

Proposition 1: A differentiable curve $f \colon [a,b] \to \mathbf{W}$ is continuous.

Proof: Since $f$ is differentiable, its component functions $f_1,\dots,f_m$ relative to any basis of $\mathbf{W}$ are differentiable functions $[a,b] \to \mathbb{R}.$ But we know from single variable calculus that a differentiable function is continuous, so each of the component functions $f_1,\dots,f_m$ is continuous. Now, since $f$ is continuous if and only if all of its component functions are continuous, we may conclude that $f$ is continuous.

Q.E.D.

Another one.

Proposition 2: A differentiable function $f \colon [a,b] \to \mathbf{W}$ is constant if and only if $f'(t)=\mathbf{0}_\mathbf{W}$ for all $t \in [a,b].$

Proof: Suppose first that $f$ is constant, i.e. there exists a vector $\mathbf{c} \in \mathbf{W}$ such that $f(t) = \mathbf{c}$ for all $t \in [a,b].$ In this case it is clear that the Newton quotient of $f$ at any $t \in [a,b]$ is identically zero on its domain, so $f'(t)$ exists and equals the zero vector in $\mathbf{W}.$

Next, suppose that $f$ is differentiable on $[a,b]$ with $f'(t)=\mathbf{0}_\mathbf{W}$ for all $t \in [a,b]$. Then, by Theorem 1, for each $t \in [a,b]$ we have that

$f'(t) =f_1'(t)\mathbf{c}_1 + \dots + f_m'(t)\mathbf{c}_m = \mathbf{0}_\mathbf{W},$

and since $\mathcal{C}=\{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ is a linearly independent set this forces

$f_1'(t) = \dots = f_m'(t) = 0.$

According to single-variable calculus, a function whose derivative is identically zero is a constant function. Therefore, there are constants $c_1,\dots,c_m \in \mathbb{R}$ such that $f_i(t) = c_i$ for all $t \in [a,b]$, and consequently

$f(t) = c_1\mathbf{c}_1 + \dots + c_m\mathbf{c}_m$

for all $t \in [a,b].$

Q.E.D.

Using exactly the same reasoning, one easily establishes the following properties of differentiable curves.

Proposition 3: Let $f,g \colon [a,b] \to \mathbf{W}$ be curves, and suppose both are differentiable at time $t \in [a,b].$ Then the sum $f+g$ and the scalar product $\langle f,g \rangle$ are differentiable at $t,$ and their derivatives at $t$ are given by $f'(t) + g'(t)$ and $\langle f'(t),g(t)\rangle+\langle f(t),g'(t)\rangle,$ respectively.
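As a numerical sanity check on the scalar product rule in Proposition 3, one can compare a finite-difference derivative of $t \mapsto \langle f(t),g(t)\rangle$ against $\langle f'(t),g(t)\rangle+\langle f(t),g'(t)\rangle$. This is a hypothetical sketch using NumPy, with the two circle curves from earlier in the lecture:

```python
import numpy as np

def f(t):
    return np.array([np.cos(t), np.sin(t)])

def fp(t):  # f'(t), computed by hand
    return np.array([-np.sin(t), np.cos(t)])

def g(t):
    return np.array([np.cos(2*t), np.sin(2*t)])

def gp(t):  # g'(t), computed by hand via the chain rule
    return 2 * np.array([-np.sin(2*t), np.cos(2*t)])

t, h = 0.7, 1e-6
# finite-difference derivative of the scalar function t -> <f(t), g(t)>
num = (np.dot(f(t + h), g(t + h)) - np.dot(f(t), g(t))) / h
# scalar product rule: <f'(t), g(t)> + <f(t), g'(t)>
sym = np.dot(fp(t), g(t)) + np.dot(f(t), gp(t))
print(abs(num - sym))  # small
```

The discrepancy is on the order of the step size $h$, consistent with the product rule holding exactly in the limit.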

Theorem 1 is also useful for doing concrete computations. For example, let us apply it to the functions $f,g \colon [0,2\pi] \rightarrow \mathbb{R}^2$ defined by

$f(t) = (\cos t,\sin t) \quad\text{ and }\quad g(t)=(\cos 2t, \sin 2t).$

Relative to the standard basis of $\mathbb{R}^2,$ the component functions of $f$ are

$f_1(t) = \cos t \quad\text{ and }\quad f_2(t) = \sin t.$

From single variable calculus, we know that

$f_1'(t) = -\sin t \quad\text{ and }\quad f_2'(t) = \cos t,$

and therefore Theorem 1 allows us to conclude that the curve $f$ is differentiable for all times $t \in [0,2\pi],$ with tangent vector at time $t$ given by

$f'(t) = (-\sin t,\cos t).$

Observe that

$\|f'(t)\|=\sqrt{f'(t) \cdot f'(t)} = \sqrt{\sin^2t+\cos^2t}=1$

and

$f(t) \cdot f'(t) = -\cos t \sin t + \sin t \cos t =0;$

these equations say that the tangent vector $f'(t)$ to the unit circle is a unit vector, and that it is orthogonal to the vector joining the origin to the point $f(t)$. This orthogonality is the basic ingredient in, for example, ancient weapons like the sling which David used to slay Goliath, or the mangonel developed in ancient China during the Warring States Period, which evolved into the medieval catapult that used counterweights rather than direct human strength.

Let us repeat the above computations for the curve $g.$ The component functions of $g$ relative to the standard basis of $\mathbb{R}^2$ are

$g_1(t) = \cos 2t \quad\text{ and }\quad g_2(t)=\sin 2t,$

so that by the chain rule from scalar calculus we have

$g_1'(t) = -2 \sin 2t \quad\text{ and }\quad g_2'(t) = 2 \cos 2t$

for all $t \in [0,2\pi].$ This means that $g$ is a differentiable curve with tangent vector at time $t$ given by

$g'(t) = 2(-\sin 2t, \cos 2t).$

Observe that we still have orthogonality,

$g(t) \cdot g'(t) = 2(-\cos 2t \sin 2t + \sin 2t \cos 2t) = 0,$

but that $g'(t)$ is not a unit vector,

$\|g'(t)\| = \sqrt{4(\sin^2 2t+\cos^2 2t)}= 2.$

Physically, this is because $g(t)$ describes the motion of a particle which is traveling twice as fast as the particle whose motion is described by $f(t)$: it traverses the unit circle counterclockwise twice as time flows from $t=0$ to $t=2\pi$, rather than just once. Indeed, in physics the tangent vector to a curve is known as the instantaneous velocity, and its norm is the instantaneous speed. The above calculations show that a sling swung twice as fast produces a projectile which travels twice as fast in exactly the same direction.
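These computations are easy to verify numerically. Here is a hedged sketch (hypothetical Python/NumPy code) checking the unit speed of $f$, the doubled speed of $g$, and the orthogonality of each position vector to its tangent vector, at many sample times:

```python
import numpy as np

ts = np.linspace(0, 2 * np.pi, 100)
for t in ts:
    f  = np.array([np.cos(t), np.sin(t)])
    fp = np.array([-np.sin(t), np.cos(t)])          # f'(t)
    g  = np.array([np.cos(2*t), np.sin(2*t)])
    gp = 2 * np.array([-np.sin(2*t), np.cos(2*t)])  # g'(t)
    assert abs(np.linalg.norm(fp) - 1) < 1e-12      # unit speed for f
    assert abs(np.linalg.norm(gp) - 2) < 1e-12      # double speed for g
    assert abs(np.dot(f, fp)) < 1e-12               # tangent orthogonal to radius
    assert abs(np.dot(g, gp)) < 1e-12
print("all checks passed")
```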

## Math 31BH: Lecture 4

This post is a little shorter than the previous ones because I went back and re-worked parts of Lecture 3 substantially to add more detail and clarity, and I also added some additional material which upon reflection is best housed in Lecture 3. So, as part of reading this post, you should go back and make a second pass through Lecture 3.

What we have succeeded in doing so far is defining limits and continuity for functions $f \colon \mathbf{V} \to \mathbf{W}$ which map between Euclidean spaces, so we have the first two core notions of calculus squared away (we saw a fairly involved example, the eigenvalue mapping on symmetric operators, which showed that understanding continuity for functions which are hard to describe explicitly can be difficult).

The remaining core notions of calculus are the big ones: differentiation and integration. In Math 31BH we develop differential calculus for vector-valued functions of vectors, and in Math 31CH you will concentrate on the development of integral calculus for such functions.

Let us begin with the familiar setting of functions $f \colon \mathbb{R} \to \mathbb{R}$. First of all, we want to consider functions which may not be defined on all of $\mathbb{R}$, but only on some subset $D \subseteq \mathbb{R}$, the domain of $f$. For example, the square root function $f(x)=\sqrt{x}$ has domain $D$ being the set of nonnegative numbers, while the logarithm function $f(x)=\log x$ has domain $D$ being the set of positive numbers. So in our general setup we are considering functions $f \colon D \to \mathbf{W}$ where $D$ is a subset of a Euclidean space $\mathbf{V}$ which does not necessarily exhaust that space. This extension is completely non-problematic, as is the extended notion of image of a function.

Definition 1: The image of a function $f \colon D \to \mathbf{W}$ is the set of outputs of $f$ in $\mathbf{W}$,

$\mathrm{Im}(f) = \{\mathbf{w} \in \mathbf{W} \colon f(\mathbf{v})=\mathbf{w} \text{ for some } \mathbf{v} \in D\}.$

Now let us talk about graphs of functions, the precise definition of which involves the direct product of Euclidean spaces, a concept introduced in Lecture 3.

Proposition 1: If $\dim \mathbf{V} = n$ and $\dim \mathbf{W}=m$, then $\dim(\mathbf{V} \times \mathbf{W})=n+m$.

Proof: Let $\mathcal{B} =\{\mathbf{b}_1,\dots,\mathbf{b}_n\}$ be an orthonormal basis of $\mathbf{V}$ and let $\mathcal{C}= \{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ be an orthonormal basis of $\mathbf{W}$. Consider the set

$\mathcal{D} = \{(\mathbf{b}_i,\mathbf{0}_\mathbf{W}) \colon 1 \leq i \leq n\} \cup \{(\mathbf{0}_\mathbf{V},\mathbf{c}_j) \colon 1 \leq j \leq m\},$

which contains $n+m$ vectors. Since any $(\mathbf{v},\mathbf{w}) \in \mathbf{V} \times \mathbf{W}$ can be written

$(\mathbf{v},\mathbf{w}) = \sum_{i=1}^n \langle \mathbf{b}_i,\mathbf{v}\rangle (\mathbf{b}_i,\mathbf{0}_\mathbf{W}) + \sum_{j=1}^m \langle \mathbf{c}_j,\mathbf{w}\rangle (\mathbf{0}_\mathbf{V},\mathbf{c}_j),$

the set $\mathcal{D}$ spans $\mathbf{V} \times \mathbf{W}$; this follows readily from the fact that $\mathcal{B}$ spans $\mathbf{V}$ and $\mathcal{C}$ spans $\mathbf{W}$, and you should make sure you understand why. Moreover, computing scalar products in $\mathbf{V} \times \mathbf{W}$, we have

$\langle (\mathbf{b}_i,\mathbf{0}_\mathbf{W}),(\mathbf{b}_k,\mathbf{0}_\mathbf{W})\rangle = \langle \mathbf{b}_i,\mathbf{b}_k\rangle, \quad \langle (\mathbf{0}_\mathbf{V},\mathbf{c}_j),(\mathbf{0}_\mathbf{V},\mathbf{c}_l)\rangle = \langle \mathbf{c}_j,\mathbf{c}_l\rangle, \quad \langle (\mathbf{b}_i,\mathbf{0}_\mathbf{W}),(\mathbf{0}_\mathbf{V},\mathbf{c}_j)\rangle = 0.$

Thus $\mathcal{D}$ is an orthonormal set, and in particular a linearly independent set in $\mathbf{V} \times \mathbf{W}$, so it is a basis and $\dim(\mathbf{V} \times \mathbf{W}) = n+m$.

Q.E.D.
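One convenient orthonormal basis of $\mathbf{V} \times \mathbf{W}$ consists of the pairs $(\mathbf{b}_i,\mathbf{0}_\mathbf{W})$ and $(\mathbf{0}_\mathbf{V},\mathbf{c}_j)$. Identifying $\mathbb{R}^n \times \mathbb{R}^m$ with $\mathbb{R}^{n+m}$ by concatenation, a quick NumPy check (an illustrative sketch, not part of the course materials) confirms that this gives $n+m$ orthonormal vectors:

```python
import numpy as np

n, m = 3, 2
# model V = R^n, W = R^m with their standard orthonormal bases;
# a pair (v, w) is identified with the concatenation of v and w in R^(n+m)
B = np.eye(n)   # orthonormal basis of V
C = np.eye(m)   # orthonormal basis of W

# basis of V x W: (b_i, 0_W) for each i, and (0_V, c_j) for each j
basis = [np.concatenate([b, np.zeros(m)]) for b in B] + \
        [np.concatenate([np.zeros(n), c]) for c in C]

G = np.array([[u @ v for v in basis] for u in basis])  # Gram matrix
print(len(basis))                      # n + m vectors
print(np.allclose(G, np.eye(n + m)))   # Gram matrix is the identity: orthonormal
```

Note that the full Cartesian product of the two bases would give $nm$ pairs, far too many to be linearly independent in an $(n+m)$-dimensional space; the zero-padded pairs above are the right choice.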

Definition 2: The graph of a function $f \colon D \to \mathbf{W}$, where $D \subseteq \mathbf{V}$, is the set of all input-output pairs for $f$, i.e. the subset of $\mathbf{V} \times \mathbf{W}$ defined by

$\Gamma(f) = \{(\mathbf{v},f(\mathbf{v})) \colon \mathbf{v} \in D\}.$

This agrees with the informal definition of a graph you have known for a long time as a drawing of $f$ on a piece of paper: for functions $f \colon \mathbb{R} \to \mathbb{R}$, we have $\Gamma(f) \subset \mathbb{R} \times \mathbb{R} = \mathbb{R}^2.$ In the general case, the graph of $f \colon \mathbf{V} \to \mathbf{W}$ is a harder object to understand, and this is not just because $\mathbf{V}$ and $\mathbf{W}$ are abstract Euclidean spaces. Indeed, even if we work in coordinates as described in Lecture 3, meaning that we consider the associated function

$f_{\mathcal{BC}} \colon \mathbb{R}^n \longrightarrow \mathbb{R}^m,$

its graph

$\Gamma(f_{\mathcal{BC}}) \subset \mathbb{R}^n \times \mathbb{R}^m = \mathbb{R}^{n+m}$

may be difficult to visualize if $\max(n,m)>1.$

Now we come to the real sticking point, the Newton quotient. If $D \subseteq \mathbb{R}$ is an open set and $f \colon D \to \mathbb{R}$ is a function, then for any $x \in D$ the ratio

$\Delta_hf(x) = \frac{f(x+h)-f(x)}{x+h-x} = \frac{f(x+h)-f(x)}{h}$

is well-defined for any sufficiently small nonzero number $h$. Moreover, this number has an immediate intuitive meaning as the slope of a secant line for the graph $\Gamma(f) \subset \mathbb{R}^2$, i.e. it is the slope of the line in $\mathbb{R}^2$ passing through the points $(x,f(x)),(x+h,f(x+h)) \in \Gamma(f).$ We then say that $f$ is differentiable at the point $x \in D$ if the limit

$f'(x) = \lim\limits_{h \to 0} \Delta_h f(x)$

exists, in which case the number $f'(x)$ is called the derivative of $f$ at $x$; it is the slope of the tangent line to $\Gamma(f)$ at the point $(x,f(x)) \in \Gamma(f).$

Generalizing the definition of the derivative to functions which map vectors to vectors is problematic from the outset. Let $D$ be an open set in a Euclidean space $\mathbf{V}$, and let $f \colon D \to \mathbf{V}$ be a function defined on $D$. For any $\mathbf{v} \in D$, we have $B_\delta(\mathbf{v}) \subseteq D$ for sufficiently small $\delta>0$, so that the difference

$f(\mathbf{v}+\mathbf{h}) - f(\mathbf{v})$

makes sense for any $\mathbf{h} \in \mathbf{V}$ with $\|\mathbf{h}\|< \delta$. However, when we attempt to form the corresponding difference quotient, we get the fraction

$\Delta_\mathbf{h}f(\mathbf{v})= \frac{f(\mathbf{v}+\mathbf{h})-f(\mathbf{v})}{\mathbf{h}},$

which is problematic since at no time in Math 31AH up til now have we defined what it means to divide two vectors in a vector space $\mathbf{V}$. As we discussed in Lecture 2, a notion of vector division in some sense only exists for $\mathbf{V}=\mathbb{R},$ in which case vectors are real numbers, and $\mathbf{V} = \mathbb{R}^2$, in which case $\mathbb{R}^2$ can be identified with the complex numbers $\mathbb{C}$, for which division is meaningful. The former case gives us back the usual calculus derivative, and the latter gives us a notion of derivative for functions $f \colon \mathbb{C} \to \mathbb{C}$, which is the starting point of the subject known as complex analysis. Complex analysis is a beautiful and useful subject, but our world is not two-dimensional, and we would like to have access to calculus in dimensions higher than two. Moreover, we want to consider functions $f \colon D \to \mathbf{W},$ where $\mathbf{W}$ is distinct from the Euclidean space $\mathbf{V}$ containing the domain $D$ of $f$. In such a setting the Newton quotient becomes even more heretical, since it involves division of a vector in $\mathbf{W}$ by a vector in $\mathbf{V}.$

We will have to work hard to resolve the philosophical impediments to differentiation of vector-valued functions of vectors. However, there is a natural starting point for this quest, namely the differentiation of vector-valued functions of scalars. Indeed, if $D \subseteq \mathbb{R}$ is an open set of real numbers and $f \colon D \to \mathbf{W}$ is a function from $D$ into a Euclidean space $\mathbf{W},$ then for $t \in D$ and $h \in \mathbb{R}$ sufficiently small the Newton quotient

$\Delta_h f(t) = \frac{f(t+h)-f(t)}{h}$

makes perfectly good sense: it is the vector $f(t+h)-f(t) \in \mathbf{W}$ scaled by the number $\frac{1}{h}.$ So vector-valued functions of scalars are a good place to start.

We will work in the setting where $f \colon [a,b] \to \mathbf{W}$ is a function whose domain is a closed interval in $\mathbb{R}$. In this case, the image of $f$ is said to be a curve in $\mathbf{W}$, and by abuse of language we may refer to $f$ itself as a curve in $\mathbf{W}$; it may be thought of as the path described by a particle located at the point $f(a) \in \mathbf{W}$ at time $t=a$, and located at the point $f(b) \in \mathbf{W}$ at time $t=b$.

Definition 3: A function $f \colon [a,b] \to \mathbf{W}$ is said to be differentiable at a point $t \in (a,b)$ if the limit

$f'(t) = \lim\limits_{h \to 0} \frac{f(t+h)-f(t)}{h}$

exists. In this case, the vector $f'(t) \in \mathbf{W}$ is said to be the derivative of $f$ at $t$. In full detail, this means that $f'(t) \in \mathbf{W}$ is a vector with the following property: for any $\varepsilon > 0$, there exists a corresponding $\delta > 0$ such that

$0 < |h| < \delta \implies \left\|f'(t)-\frac{1}{h}(f(t+h)-f(t))\right\|<\varepsilon$

where $\|\mathbf{w}\|=\sqrt{\langle \mathbf{w},\mathbf{w}\rangle}$ is the Euclidean norm in $\mathbf{W}$.

Note that the component functions $f_{\mathcal{C}1},\dots,f_{\mathcal{C}m}$ of a curve $f \colon [a,b] \to \mathbf{W}$ relative to an orthonormal basis $\mathcal{C}=\{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ of $\mathbf{W}$ are scalar-valued functions of the scalar “time variable” $t,$ i.e.

$f_{\mathcal{C}i} \colon [a,b] \to \mathbb{R}, \quad i=1,\dots,m.$

This is part of what makes curves easier to study than general vector-valued functions: they are just $m$-tuples of functions $\mathbb{R} \to \mathbb{R}$, for which we already have a well-developed calculus at our disposal.

Theorem 1: Let $f \colon [a,b] \to \mathbf{W}$ be a curve, and let $\mathcal{C}=\{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ be an orthonormal basis of $\mathbf{W}$. Then $f$ is differentiable at time $t \in (a,b)$ if and only if its component functions $f_1,\dots,f_m$ relative to $\mathcal{C}$ are differentiable at time $t$, and in this case we have

$f'(t) = f_1'(t)\mathbf{c}_1 +\dots+ f_m'(t)\mathbf{c}_m$.

Proof: The components of the vector-valued Newton quotient

$\Delta_hf(t)=\frac{1}{h}(f(t+h)-f(t))$

relative to the basis $\mathcal{C}$ are the scalar-valued Newton quotients

$\Delta_hf_i(t)=\frac{1}{h}(f_i(t+h)-f_i(t)), \quad i=1,\dots,m.$

The statement now follows from Proposition 1 in Lecture 3.

Q.E.D.

Example 1: Let $\mathbf{W}$ be a $2$-dimensional Euclidean space with orthonormal basis $\mathcal{C}=\{\mathbf{c}_1,\mathbf{c}_2\}.$ Consider the function $f \colon [0,2\pi] \to \mathbf{W}$ defined by

$f(t) = \cos t\mathbf{c}_1+ \sin t\mathbf{c}_2.$

It is hopefully immediately apparent to you that the image of $f$ in $\mathbf{W}$ is the unit circle in this Euclidean space,

$\mathrm{Im}(f) =\{\mathbf{w} \in \mathbf{W} \colon \|\mathbf{w}\|=1\}.$

The graph

$\Gamma(f) = \{(t,f(t)) \colon t \in [0,2\pi]\} \subset \mathbb{R} \times \mathbf{W}$

is a helix. The component functions of $f$ in the basis $\mathcal{C}=\{\mathbf{c}_1,\mathbf{c}_2\}$ of $\mathbf{W}$ are

$f_1(t) = \cos t \quad\text{ and }\quad f_2(t) = \sin t,$

and as you know from elementary calculus these are differentiable functions with derivatives

$f_1'(t) = -\sin t \quad\text{ and }\quad f_2'(t)= \cos t.$

Thus, the curve $f(t)=f_1(t)\mathbf{c}_1+f_2(t)\mathbf{c}_2$ is differentiable, and its derivative is

$f'(t) = -\sin t\mathbf{c}_1 + \cos t \mathbf{c}_2.$

Equivalently, the coordinate vector of the vector $f'(t)\in \mathbf{W}$ in the basis $\mathcal{C}$ is

$[f'(t)]_{\mathcal{C}} = \begin{bmatrix} -\sin t \\ \cos t \end{bmatrix}.$

## Math 31BH: Lecture 3

Let $f \colon \mathbf{V} \to \mathbf{W}$ be a function from one Euclidean space to another. In making this declaration, we did not make the scalar products on the source and target spaces explicit. This omission is commonly made for the sake of convenience, and by abuse of notation we just denote all scalar products $\langle \cdot,\cdot \rangle$, since the vector space on which a given scalar product is defined can always be deduced from context. The resulting notational ambiguity is ultimately less confusing than requiring distinct symbols for all scalar products in play at any given time.

As discussed in Lecture 2, our goal is to do calculus with functions which take vector inputs and produce vector outputs. Since $f(\mathbf{v})$ is a function of only one variable, the vector $\mathbf{v}$, let us explain why this subject is also known as multivariable calculus. Let $\mathcal{B}=\{\mathbf{b}_1,\dots,\mathbf{b}_n\}$ be an orthonormal basis of $\mathbf{V}$, so that every $\mathbf{v} \in \mathbf{V}$ is given by

$\mathbf{v} = \langle \mathbf{b}_1,\mathbf{v}\rangle \mathbf{b}_1 + \dots + \langle \mathbf{b}_n,\mathbf{v}\rangle \mathbf{b}_n.$

If $\mathbf{v}$ is viewed as a variable (rather than a particular vector), then its coordinates

$x_1:=\langle \mathbf{b}_1,\mathbf{v}\rangle,\dots,x_n:=\langle \mathbf{b}_n,\mathbf{v}\rangle$

relative to the orthonormal basis $\mathcal{B}$ also become variables. In other words, associated to the function ${f} \colon \mathbf{V} \to \mathbf{W}$ is another function $f_\mathcal{B} \colon \mathbb{R}^n \to \mathbf{W}$ defined by

$f_\mathcal{B}(x_1,\dots,x_n) := f(x_1\mathbf{b}_1 + \dots + x_n\mathbf{b}_n),$

which is a function of the $n$ scalar variables $x_1,\dots,x_n$, where $n =\dim \mathbf{V}$. The objects $f(\mathbf{v})$ and $f_\mathcal{B}(x_1,\dots,x_n)$ contain exactly the same information, even though the former is a function of a single vector variable $\mathbf{v}$ and the latter is a function of $n$ scalar variables $x_1,\dots,x_n$. Notice that the construction of $f_\mathcal{B}$ from $f$ makes no mention of the scalar product on $\mathbf{W}$, nor does it reference a basis in $\mathbf{W}$.
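The passage from $f$ to $f_\mathcal{B}$ can be made concrete in code. The following sketch (hypothetical NumPy code; the basis and the function $f$ are illustrative choices, not from the course) builds $f_\mathcal{B}$ from $f$ for a rotated orthonormal basis of $\mathbb{R}^2$:

```python
import numpy as np

# an illustrative orthonormal basis B = {b1, b2} of V = R^2, rotated 45 degrees
b1 = np.array([1.0, 1.0]) / np.sqrt(2)
b2 = np.array([-1.0, 1.0]) / np.sqrt(2)

def f(v):
    # an illustrative nonlinear function f : V -> W = R^2
    return np.array([v @ v, v[0] * v[1]])

def f_B(x1, x2):
    # f_B(x1, ..., xn) := f(x1*b1 + ... + xn*bn)
    return f(x1 * b1 + x2 * b2)

v = 2.0 * b1 + 3.0 * b2  # the vector with coordinates (2, 3) in the basis B
print(np.allclose(f_B(2.0, 3.0), f(v)))  # True: same information, two packagings
```

The point of the check at the end is exactly the statement in the text: $f(\mathbf{v})$ and $f_\mathcal{B}(x_1,\dots,x_n)$ contain the same information once the basis is fixed.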

Now suppose we choose an orthonormal basis $\mathcal{C} = \{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ of $\mathbf{W}$. Then, for any $\mathbf{v} \in \mathbf{V}$, we have

$f(\mathbf{v}) = \langle \mathbf{c}_1,f(\mathbf{v})\rangle \mathbf{c}_1 + \dots + \langle \mathbf{c}_m,f(\mathbf{v})\rangle \mathbf{c}_m,$

and since we are thinking of $\mathbf{v} \in \mathbf{V}$ as a variable, it is natural to think of the coordinates of $f(\mathbf{v})$ in the basis $\mathcal{C}$ as functions of $\mathbf{v}$,

$f_{\mathcal{C}1}(\mathbf{v}):=\langle \mathbf{c}_1,f(\mathbf{v})\rangle,\ \dots,\ f_{\mathcal{C}m}(\mathbf{v}):=\langle \mathbf{c}_m,f(\mathbf{v})\rangle$.

The functions $f_{\mathcal{C}1},\dots,f_{\mathcal{C}m} \colon \mathbf{V} \to \mathbb{R}$ so defined are called the component functions of $f$ relative to the basis $\mathcal{C}$, and they contain exactly the same information as the function $f$ itself. In particular, we have the following.

Proposition 1: Given a function $f \colon \mathbf{V} \to \mathbf{W}$ and vectors $\mathbf{p} \in \mathbf{V}$ and $\mathbf{q} \in \mathbf{W},$ we have

$\lim\limits_{\mathbf{v} \to \mathbf{p}} f(\mathbf{v}) = \mathbf{q}$

if and only if

$\lim\limits_{\mathbf{v} \to \mathbf{p}} f_{\mathcal{C}1}(\mathbf{v}) = q_1, \dots, \lim\limits_{\mathbf{v} \to \mathbf{p}} f_{\mathcal{C}m}(\mathbf{v})=q_m,$

where $q_1 = \langle \mathbf{c}_1,\mathbf{q}\rangle, \dots,q_m = \langle \mathbf{c}_m,\mathbf{q}\rangle$ are the components of the vector $\mathbf{q} \in \mathbf{W}$ relative to the orthonormal basis $\mathcal{C}=\{\mathbf{c}_1,\dots,\mathbf{c}_m\}$ of $\mathbf{W}$. In particular, the vector-valued function $f$ is continuous at $\mathbf{p} \in \mathbf{V}$ if and only if the scalar-valued functions $f_{\mathcal{C}1},\dots,f_{\mathcal{C}m}$ are continuous at $\mathbf{p}$.

Proof: Since the basis $\mathcal{C}$ is fixed, let us write $f_1,\dots,f_m$ as an abbreviation for $f_{\mathcal{C}1},\dots,f_{\mathcal{C}m}.$ We have

$f(\mathbf{v})-\mathbf{q} = (f_1(\mathbf{v})-q_1)\mathbf{c}_1+\dots+(f_m(\mathbf{v})-q_m)\mathbf{c}_m,$

hence

$\|f(\mathbf{v})-\mathbf{q}\|^2= (f_1(\mathbf{v})-q_1)^2+\dots+(f_m(\mathbf{v})-q_m)^2.$

Suppose first that $f(\mathbf{v}) \to \mathbf{q}$ as $\mathbf{v}\to \mathbf{p}$. Then, for any $0<\varepsilon<1$, there exists $\delta >0$ such that $\|\mathbf{v}-\mathbf{p}\| < \delta$ implies $\|f(\mathbf{v})-\mathbf{q}\| < \varepsilon$, and hence

$\|f(\mathbf{v})-\mathbf{q}\|^2=\sum_{i=1}^m (f_i(\mathbf{v})-q_i)^2 < \varepsilon^2 < \varepsilon,$

where we have used the assumption $\varepsilon <1$ to get $\varepsilon^2<\varepsilon.$ Since each term in the sum of squares is bounded by the total sum, this gives

$(f_i(\mathbf{v})-q_i)^2 < \varepsilon$

for each $i=1,\dots,m,$ which shows that $f_i(\mathbf{v})\to q_i$ as $\mathbf{v}\to\mathbf{p}.$

Conversely, suppose that for each $1 \leq i \leq m$ we have $f_i(\mathbf{v})\to q_i$ as $\mathbf{v}\to\mathbf{p},$ and let $\varepsilon > 0$ be given. Then, for each $1 \leq i \leq m$ there is a $\delta_i>0$ such that $\|\mathbf{v}-\mathbf{p}\|<\delta_i$ implies

$(f_i(\mathbf{v})-q_i)^2 < \frac{\varepsilon}{m}.$

Setting $\delta=\min(\delta_1,\dots,\delta_m)$, we thus have that $\|\mathbf{v}-\mathbf{p}\|<\delta$ implies

$\|f(\mathbf{v})-\mathbf{q}\|^2 = \sum_{i=1}^m (f_i(\mathbf{v})-q_i)^2 < m \cdot \frac{\varepsilon}{m} = \varepsilon,$

which shows that $f(\mathbf{v})\to \mathbf{q}$ as $\mathbf{v} \to \mathbf{p}.$

Q.E.D.

Technicalities aside, the structure of the above proof is very simple: the argument is that if a sum of nonnegative numbers is small, then each term in the sum must be small, and conversely the sum of a finite number of small numbers is still small.

Now let us consider the above paragraphs simultaneously, meaning that we have chosen orthonormal bases $\mathcal{B} \subset \mathbf{V}$ and $\mathcal{C} \subset \mathbf{W}$. Then each component function $f_{\mathcal{C}i}$ gives rise to an associated function $f_{\mathcal{BC}i} \colon \mathbb{R}^n \to \mathbb{R}$ defined by

$f_{\mathcal{BC}i}(x_1,\dots,x_n) :=\langle \mathbf{c}_i,f_\mathcal{B}(x_1,\dots,x_n)\rangle.$

In particular, upon choosing orthonormal bases $\mathcal{B} \subset \mathbf{V}$ and $\mathcal{C} \subset \mathbf{W}$, every function $f \colon \mathbf{V} \to \mathbf{W}$ gives rise to an associated function $f_{\mathcal{BC}} \colon \mathbb{R}^n \to \mathbb{R}^m$ defined by

$f_{\mathcal{BC}}(x_1,\dots,x_n) = (f_{\mathcal{BC}1}(x_1,\dots,x_n),\dots,f_{\mathcal{BC}m}(x_1,\dots,x_n)).$

Example 1: Let $f \colon \mathbf{V} \to \mathbf{V}$ be the function defined by $f(\mathbf{v})=\mathbf{p}+\mathbf{v}$, where $\mathbf{p} \in \mathbf{V}$ is a specified vector. This function is called “translation by $\mathbf{p}.$” Choose an orthonormal basis $\mathcal{B} \subset \mathbf{V}$, and suppose that the coordinate vector of $\mathbf{p}$ relative to this basis is $(p_1,\dots,p_n) \in \mathbb{R}^n$. Then, the function $f_{\mathcal{BB}} \colon \mathbb{R}^n \to \mathbb{R}^n$ is given by

$f_{\mathcal{BB}}(x_1,\dots,x_n) = (p_1,\dots,p_n) + (x_1,\dots,x_n) = (p_1+x_1,\dots,p_n+x_n),$

and

$f_{\mathcal{BB}i}(x_1,\dots,x_n) = p_i+x_i, \quad i =1,\dots,n.$

The above discussion shows that the perspective of vector calculus, in which we consider functions of a single vector variable $\mathbf{v}$, is equivalent to the perspective of multivariable calculus, in which we consider functions of multiple scalar variables $x_1,\dots,x_n$. From this perspective, one might wonder about the prospect of a “multivector” calculus in which we consider functions $f(\mathbf{v}_1,\dots,\mathbf{v}_n)$ of multiple vector variables, where it may even be that each vector variable $\mathbf{v}_i$ ranges over its own Euclidean space $\mathbf{V}_i$. In fact, this is already included in vector calculus, because such $n$-tuples of vectors are themselves single vectors in an enlarged Euclidean space.

Definition 1: Given Euclidean spaces $(\mathbf{V}_1, \langle \cdot,\cdot \rangle_1),\dots,(\mathbf{V}_n,\langle \cdot,\cdot\rangle_n),$ their direct product is the Euclidean space consisting of the Cartesian product

$\mathbf{V}_1 \times \dots \times \mathbf{V}_n = \{(\mathbf{v}_1,\dots,\mathbf{v}_n) \colon \mathbf{v}_i \in \mathbf{V}_i\}$

with vector addition and scalar multiplication defined component-wise, i.e.

$(\mathbf{v}_1,\dots,\mathbf{v}_n) + (\mathbf{w}_1,\dots,\mathbf{w}_n) := (\mathbf{v}_1+\mathbf{w}_1,\dots,\mathbf{v}_n+\mathbf{w}_n)$

and

$t(\mathbf{v}_1,\dots,\mathbf{v}_n) := (t\mathbf{v}_1,\dots,t\mathbf{v}_n),$

and scalar product defined by

$\langle (\mathbf{v}_1,\dots,\mathbf{v}_n),(\mathbf{w}_1,\dots,\mathbf{w}_n)\rangle = \langle \mathbf{v}_1,\mathbf{w}_1 \rangle_1 + \dots + \langle \mathbf{v}_n,\mathbf{w}_n \rangle_n.$

It is good to be comfortable with both perspectives; the single-vector-variable perspective is better for conceptual understanding, while the multiple-scalar-variables perspective is useful for visualization and calculation.
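The direct product scalar product is straightforward to implement. Here is a minimal sketch (hypothetical Python code) for pairs of vectors in $\mathbb{R}^2 \times \mathbb{R}^3$, where the scalar product of pairs is the sum of the component scalar products:

```python
import numpy as np

def dp_inner(pair_u, pair_v):
    # scalar product on V1 x ... x Vn: the sum of the component scalar products
    return sum(np.dot(u, v) for u, v in zip(pair_u, pair_v))

# an element of R^2 x R^3 is a pair (v1, v2)
u = (np.array([1.0, 2.0]), np.array([0.0, 1.0, -1.0]))
v = (np.array([3.0, -1.0]), np.array([2.0, 2.0, 2.0]))

print(dp_inner(u, v))  # <u1,v1> + <u2,v2> = (3 - 2) + (0 + 2 - 2) = 1.0
```

Concatenating each pair into a single vector of $\mathbb{R}^5$ and taking the ordinary dot product gives the same answer, which is the identification described in the text.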

Thus the calculus of functions of two vector variables, $f(\mathbf{v}_1,\mathbf{v}_2)$, is just the calculus of functions on the direct product Euclidean space $\mathbf{V}_1 \times \mathbf{V}_2$. Equivalently, the calculus of functions $f((x_1,\dots,x_{n_1}),(y_1,\dots,y_{n_2}))$ is the same thing as the calculus of functions $f(x_1,\dots,x_{n_1},y_1,\dots,y_{n_2}).$ There is in fact a further generalization of vector calculus called tensor calculus, which is very useful in physics and engineering (particularly in the theory of relativity and in materials science), but that is beyond the scope of this course.

Example 2: It may be tempting to throw away the more abstract perspective entirely, and in the previous lectures I have been arguing against doing this by holding up the example of the function

$f \colon \mathrm{Sym} \mathbf{V} \longrightarrow \mathbb{R}^n$

which sends each symmetric operator $S$ on $\mathbf{V}$ to the list $(s_1,\dots,s_n)$ of its eigenvalues arranged in weakly decreasing order. Conceptually, the function which sends a symmetric operator to its eigenvalues is very natural, and something you can hold in your mind quite easily. However, it is not easy to work concretely with this function by choosing coordinates. On Problem Set 1, we showed how the choice of a basis $\mathcal{B} \subset \mathbf{V}$ leads to a corresponding basis $\mathcal{S} \subset \mathrm{Sym}\mathbf{V}$, and in particular that if $\dim \mathbf{V}=n$ then $\dim \mathrm{Sym}\mathbf{V} = \binom{n+1}{2}$. So, according to our discussion above we have an associated function

$f_\mathcal{S} \colon \mathbb{R}^{\binom{n+1}{2}} \longrightarrow \mathbb{R}^n.$

Moreover, if we choose the standard basis $\mathcal{E} \subset \mathbb{R}^n$, then we have component functions

$f_{\mathcal{SE}i} \colon \mathbb{R}^{\binom{n+1}{2}} \longrightarrow \mathbb{R}$

which send a symmetric operator $S$ to its $i$th largest eigenvalue,

$f_{\mathcal{SE}i}(S)=s_i$,

and writing down a formula for these functions in terms of the coordinates of $S$ relative to $\mathcal{S}$ amounts to writing down a formula for the eigenvalues of a symmetric matrix in terms of its entries, and doing this for $n \geq 5$ is in a sense impossible. You’ll work out the $n=2$ case on PSet 2. Again, this all points to the need to be able to do approximations à la calculus, a question which we return to now.
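Although no closed-form coordinate formula exists for $n \geq 5$, the eigenvalue mapping is easy to evaluate numerically. A minimal sketch (assuming NumPy; `numpy.linalg.eigvalsh` returns the eigenvalues of a symmetric matrix in ascending order, which we reverse to match the weakly decreasing convention):

```python
import numpy as np

def eigenvalue_map(S):
    # the eigenvalue mapping S -> (s_1, ..., s_n) in weakly decreasing order;
    # eigvalsh is for symmetric (Hermitian) matrices and returns ascending order
    return np.sort(np.linalg.eigvalsh(S))[::-1]

# a symmetric 2x2 example; in the n = 2 case the eigenvalues of [[a, b], [b, c]]
# are (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2)
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(eigenvalue_map(S))  # eigenvalues 3 and 1
```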

We now come back to our general discussion of functions $f \colon \mathbf{V} \to \mathbf{W}$ from one Euclidean space to another. In linear algebra, we consider only the case where $f$ is linear, and in that context what matters are associated vector spaces like the kernel and image of $f$. However, to study more general (i.e. non-linear) functions between Euclidean spaces we need a more general vocabulary that includes a larger variety of special subsets of Euclidean space.

Definition 1: Given a vector $\mathbf{v} \in \mathbf{V}$ and a number $\delta \in \mathbb{R}$, the open ball of radius $\delta$ centered at $\mathbf{v}$ is the subset of $\mathbf{V}$ defined by

$B_\delta(\mathbf{v}) =\{\mathbf{h} \in \mathbf{V} \colon \|\mathbf{v}-\mathbf{h}\| < \delta\}.$

This is the set of vectors $\mathbf{h}$ whose distance to $\mathbf{v}$ is strictly less than $\delta$. Observe that $B_\delta(\mathbf{v})$ is the empty set unless $\delta>0$.

In terms of open balls, the continuity of a function $f \colon \mathbf{V} \to \mathbf{W}$ at a point $\mathbf{v} \in \mathbf{V}$ may be formulated as follows: $f$ is continuous at $\mathbf{v}$ if and only if it has the property that, for any given $\varepsilon > 0$, there is a corresponding $\delta > 0$ such that the image of $B_\delta(\mathbf{v})$ under $f$ is contained in $B_\varepsilon(f(\mathbf{v}))$.

Definition 2: A subset $S$ of a Euclidean space $\mathbf{V}$ is said to be open if for any $\mathbf{v} \in S$ there exists $\delta>0$ such that $B_\delta(\mathbf{v}) \subseteq S$. A subset $T$ of $\mathbf{V}$ is said to be closed if its complement $\mathbf{V}\backslash T$ is open.

There is a characterization of continuous functions in terms of open and closed sets.

Theorem 1: A function $f \colon \mathbf{V} \to \mathbf{W}$ is continuous if and only if the preimage $f^{-1}(S)$ of any open set $S \subseteq \mathbf{W}$ is open. Equivalently, $f$ is continuous if and only if the preimage $f^{-1}(T)$ of any closed set $T \subseteq \mathbf{W}$ is closed.

We won’t use Theorem 1 much, so we shall skip the proof – you will see this result again in a real analysis course.

We can also characterize continuity of a function as continuity of its components.

Theorem 2: A function $f \colon \mathbf{V} \to \mathbf{W}$ is continuous at $\mathbf{v} \in \mathbf{V}$ if and only if its component functions $f_{\mathcal{C}1},\dots,f_{\mathcal{C}m} \colon \mathbf{V} \to \mathbb{R}$ relative to an arbitrary orthonormal basis $\mathcal{C} \subset \mathbf{W}$ are continuous.

Definition 3: A set $B \subseteq \mathbf{V}$ is said to be bounded if there exists $\delta > 0$ such that $B \subseteq B_\delta(\mathbf{0}_\mathbf{V}),$ i.e. if $\|\mathbf{v}\| < \delta$ for all $\mathbf{v} \in B$.

Definition 4: A set $K \subseteq \mathbf{V}$ is said to be compact if it is closed and bounded.

Theorem (Extreme Value Theorem): If $K \subset \mathbf{V}$ is compact, every continuous function $f \colon K \to \mathbb{R}$ attains a maximum and a minimum: there are points $\mathbf{p},\mathbf{q} \in K$ such that

$f(\mathbf{p}) \geq f(\mathbf{v}) \text{ and } f(\mathbf{q}) \leq f(\mathbf{v}) \quad \forall \mathbf{v} \in K$.

The point $\mathbf{p}$ is said to be a maximizer of $f$, and $\mathbf{q}$ is said to be a minimizer of $f$.

There is a particular situation in which we can say more about maximizers and minimizers. Recall that a linear combination of vectors $\mathbf{v}_1,\dots,\mathbf{v}_r \in \mathbf{V}$ is an expression of the form

$t_1\mathbf{v}_1 + \dots + t_r\mathbf{v}_r,$

where $t_1,\dots,t_r \in \mathbb{R}$ are arbitrary scalars, and that the linear span of $\mathbf{v}_1,\dots,\mathbf{v}_r$ is the subset of $\mathbf{V}$ consisting of all linear combinations of these vectors,

$\mathrm{Span}(\mathbf{v}_1,\dots,\mathbf{v}_r) = \{\sum_{i=1}^r t_i\mathbf{v}_i \colon t_i \in \mathbb{R}\}.$

There is a constrained version of this in which we consider only linear combinations whose scalar coefficients are nonnegative and sum to $1$. These special linear combinations are called convex combinations, and the set

$\mathrm{Conv}(\mathbf{v}_1,\dots,\mathbf{v}_r) = \{\sum_{i=1}^r t_i\mathbf{v}_i \colon t_i \geq 0, \sum_{i=1}^r t_i = 1\}$

of all convex combinations of $\mathbf{v}_1,\dots,\mathbf{v}_r$ is called the convex hull of these vectors. For example, the convex hull of two vectors may be visualized as the line segment whose endpoints are $\mathbf{v}_1$ and $\mathbf{v}_2$, while the convex hull of three vectors may be visualized as the triangular region whose vertices are $\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3$.

Theorem (Convex Optimization Theorem): For any finite set of vectors $S=\{\mathbf{v}_1,\dots,\mathbf{v}_r\} \subset \mathbf{V}$, every linear function $f \colon \mathrm{Conv}(S) \to \mathbb{R}$ has a maximizer and a minimizer in $S$.
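You can spot-check this theorem numerically. The sketch below (in Python with NumPy; the vectors and the linear function are made up for the example) samples random convex combinations of three vectors in $\mathbb{R}^2$ and verifies that a linear function never exceeds its value at the best generating vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three generating vectors in R^2 (as rows) and a linear f(v) = <c, v>.
V = np.array([[0.0, 0.0],
              [2.0, 0.0],
              [0.0, 1.0]])
c = np.array([1.0, 3.0])

# 1000 random convex combinations t_1 v_1 + t_2 v_2 + t_3 v_3.
t = rng.random((1000, 3))
t /= t.sum(axis=1, keepdims=True)   # coefficients are >= 0 and sum to 1
samples = t @ V                     # each row is a point of the convex hull

# The linear function attains its max over the hull at a generator.
best_vertex = (V @ c).max()
assert np.all(samples @ c <= best_vertex + 1e-12)
```

The reason is simple: a linear function applied to a convex combination of vectors is the same convex combination of its values at those vectors, and a weighted average never exceeds the largest value being averaged.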

Here is a very interesting example of a convex hull. Let $\mathbf{V}$ be a Euclidean space, and let $\mathcal{B}=\{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ be an orthonormal basis in $\mathbf{V}$. Recall that a permutation is a bijective function

$\pi \colon \{1,\dots,n\} \longrightarrow \{1,\dots,n\}.$

If we write the table of values of such a function as a $2 \times n$ matrix,

$\pi = \begin{pmatrix} 1 & 2 & \dots & n \\ \pi(1) & \pi(2) & \dots & \pi(n) \end{pmatrix},$

then the bottom row of the matrix consists of the numbers $1,2,\dots,n$ arranged in some order. For example, in the case $n=3$, the permutations consist of the following $6$ matrices:

$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix}$

$\begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$

For each permutation $\pi$, the associated permutation operator on $\mathbf{V}$ is defined by its action on the basis $\mathcal{B}$, which is given by

$P_\pi(\mathbf{e}_j) = \mathbf{e}_{\pi(j)}, \quad 1 \leq j \leq n.$

For example, in the case $n=3$, the matrices of these operators relative to the basis $\mathcal{B}$ are, in the same order, as follows:

$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$

$\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

$\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$

$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$

$\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$

Since these matrices have exactly one $1$ in every row and column, with all other entries $0$, they obviously have the property that each row and column sums to $1$. There are many more such matrices, however, and the following theorem characterizes them as the convex hull of the permutation matrices.

Theorem (Birkhoff-von Neumann theorem): The convex hull of the permutation operators consists of all operators on $\mathbf{V}$ whose matrices relative to the basis $\mathcal{B}$ have nonnegative entries, and whose rows and columns sum to $1$.
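The easy direction of the Birkhoff-von Neumann theorem can be checked numerically: any convex combination of permutation matrices has nonnegative entries whose rows and columns each sum to $1$. A sketch in Python with NumPy (the `permutation_matrix` helper is our own construction, using $0$-based indexing):

```python
import numpy as np
from itertools import permutations

def permutation_matrix(pi):
    # Matrix of P_pi in the convention P_pi(e_j) = e_{pi(j)}:
    # column j is the basis vector e_{pi(j)} (0-indexed here).
    n = len(pi)
    P = np.zeros((n, n))
    for j, i in enumerate(pi):
        P[i, j] = 1.0
    return P

rng = np.random.default_rng(1)
perms = list(permutations(range(4)))   # all 4! = 24 permutations

# Random convex coefficients: nonnegative, summing to 1.
t = rng.random(len(perms))
t /= t.sum()

D = sum(w * permutation_matrix(p) for w, p in zip(t, perms))

# D lies in the convex hull of the permutation matrices, so it is
# doubly stochastic: nonnegative, with all row and column sums 1.
assert np.all(D >= 0)
assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)
```

The hard direction — that every such doubly stochastic matrix arises this way — is the substance of the theorem.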

We now have everything we need to prove that the eigenvalue map is continuous. In fact, we prove the following stronger result.

Theorem (Hoffman-Wielandt inequality): Let $f \colon \mathrm{Sym}\mathbf{V} \to \mathbb{R}^n$ be the function which sends each symmetric operator on $\mathbf{V}$ to its eigenvalues listed in weakly decreasing order. Then, we have

$\|f(S)-f(T)\| \leq \|S-T\| \quad \forall S,T \in \mathrm{Sym}\mathbf{V}.$
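Before diving into the proof, the inequality is easy to test numerically. A sketch in Python with NumPy, using random symmetric matrices (`eigvalsh` returns eigenvalues in ascending order, so we reverse to get the weakly decreasing convention):

```python
import numpy as np

rng = np.random.default_rng(2)

def eigenvalue_vector(S):
    # f(S): eigenvalues of a symmetric matrix, weakly decreasing.
    return np.linalg.eigvalsh(S)[::-1]

for _ in range(100):
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    S, T = (A + A.T) / 2, (B + B.T) / 2    # random symmetric matrices

    lhs = np.linalg.norm(eigenvalue_vector(S) - eigenvalue_vector(T))
    rhs = np.linalg.norm(S - T, "fro")     # Frobenius norm sqrt(Tr (S-T)^2)
    assert lhs <= rhs + 1e-10
```

No random trial will ever violate the bound; the proof below explains why.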

Proof: Let us make sure we understand which norms we are using. On the LHS of the inequality, we have the norm on $\mathbb{R}^n$ corresponding to the standard inner product, so that if

$f(S)=(s_1,\dots,s_n) \quad\text{ and }\quad f(T)=(t_1,\dots,t_n),$

then

$\|f(S)-f(T)\|^2 = (s_1-t_1)^2 + \dots + (s_n-t_n)^2,$

or equivalently

$\|f(S)-f(T)\|^2 = (s_1^2+\dots+s_n^2) - 2(s_1t_1 + \dots + s_nt_n) + (t_1^2 + \dots + t_n^2).$

On the right hand side, we have the Frobenius norm for operators,

$\|S-T\|^2 = \mathrm{Tr}(S-T)^2 = \mathrm{Tr} S^2 - 2\mathrm{Tr} ST + \mathrm{Tr} T^2.$

Observing that

$\mathrm{Tr}S^2 = s_1^2+\dots+s_n^2 \quad\text{ and }\quad \mathrm{Tr}T^2 = t_1^2+\dots+t_n^2,$

we thus have that proving $\|S-T\|^2 - \|f(S)-f(T)\|^2 \geq 0$, which is our objective, is equivalent to proving

$\mathrm{Tr}ST \leq s_1t_1 + \dots + s_nt_n,$

which we do now.

Since $S$ is a symmetric operator on $\mathbf{V}$, by the Spectral Theorem there exists an orthonormal basis $\mathcal{S}=\{\mathbf{s}_1,\dots,\mathbf{s}_n\}$ of $\mathbf{V}$ such that

$S\mathbf{s}_j = s_j \mathbf{s}_j, \quad j = 1,\dots,n.$

From Lecture 1, we have the formula

$\mathrm{Tr} ST = \sum\limits_{i,j=1}^n \langle \mathbf{s}_i,S\mathbf{s}_j\rangle \langle \mathbf{s}_j,T\mathbf{s}_i \rangle,$

so that

$\mathrm{Tr} ST = \sum\limits_{j=1}^n s_j \langle \mathbf{s}_j,T\mathbf{s}_j \rangle.$

Invoking the Spectral Theorem again, there is an orthonormal basis $\mathcal{T}=\{\mathbf{t}_1,\dots,\mathbf{t}_n\}$ such that

$T\mathbf{t}_j = t_j\mathbf{t}_j, \quad j=1,\dots,n.$

We then have that

$\langle \mathbf{s}_j,T\mathbf{s}_j \rangle = \left\langle \sum_{i=1}^n \langle \mathbf{t}_i,\mathbf{s}_j\rangle \mathbf{t}_i ,\sum_{k=1}^n \langle \mathbf{t}_k,\mathbf{s}_j\rangle T\mathbf{t}_k\right\rangle = \sum\limits_{i=1}^n t_i\langle \mathbf{t}_i,\mathbf{s}_j\rangle^2,$

and plugging this into the above we finally have

$\mathrm{Tr} ST = \sum\limits_{i,j=1}^n s_jt_i \langle \mathbf{t}_i,\mathbf{s}_j\rangle^2.$

Now observe that the matrix

$D = \begin{bmatrix} {} & \vdots & {} \\ \dots & \langle \mathbf{t}_i,\mathbf{s}_j\rangle^2 & \dots \\ {} & \vdots & {} \end{bmatrix}$

has nonnegative entries, and also each row and column of $D$ sums to $1$ (why?). Thus, if we define a function on the convex hull $K$ of the $n \times n$ permutation matrices by

$g(M) = \sum\limits_{i,j=1}^n s_jt_i m_{ij},$

then this function is linear and we have $\mathrm{Tr} ST=g(D)$. It now follows from the convex optimization theorem together with the Birkhoff-von Neumann theorem that

$\mathrm{Tr} ST = g(D) \leq \max_\pi g(P_\pi),$

where the maximum is over all permutations $\pi$. Evaluating the right hand side, we get that

$\mathrm{Tr} ST \leq \max_\pi\sum_{i=1}^n s_it_{\pi(i)} = \sum\limits_{i=1}^n s_it_i,$

where the final equality follows from the fact that $s_1 \geq \dots \geq s_n$ and $t_1 \geq \dots \geq t_n.$

Q.E.D.
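The final equality in the proof is an instance of the rearrangement inequality: when both lists are sorted in weakly decreasing order, the identity pairing maximizes $\sum_i s_it_{\pi(i)}$. A brute-force numerical check over all permutations, sketched in Python with NumPy:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
s = np.sort(rng.standard_normal(5))[::-1]   # s_1 >= ... >= s_5
t = np.sort(rng.standard_normal(5))[::-1]   # t_1 >= ... >= t_5

identity_pairing = float(s @ t)
# Maximize sum_i s_i t_{pi(i)} over all 5! permutations pi.
best = max(float(s @ t[list(p)]) for p in permutations(range(5)))

assert abs(best - identity_pairing) < 1e-12
```

Intuitively, pairing the largest $s$ with the largest $t$ wastes nothing; any transposition of a sorted pairing can only decrease the sum.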

Corollary 1: The eigenvalue function $f \colon \mathrm{Sym} \mathbf{V} \to \mathbb{R}^n$ is continuous.

Proof: A function $h \colon X \to Y$ from one metric space to another is said to be a “contraction” if

$\mathrm{d}_Y(h(x_1),h(x_2)) \leq \mathrm{d}_X(x_1,x_2) \quad \forall x_1,x_2 \in X.$

Thus, a contraction is a function which brings points closer together, or more precisely doesn’t spread them farther apart. It is immediate to check that the definition of continuity holds for any contraction (in fact, we can choose $\delta = \varepsilon$ when checking continuity). The Hoffman-Wielandt inequality says that the eigenvalue mapping is a contraction: the distance between the eigenvalue vectors $f(S),f(T)$ of symmetric operators $S,T$ is at most the Frobenius distance between $S$ and $T.$

Q.E.D.

## Math 31BH: Lecture 1

Let us begin Math 31BH by reviewing some aspects of Math 31AH. Last quarter, you encountered the following definition.

Definition 1: A Euclidean space is a pair $(\mathbf{V},\langle \cdot,\cdot \rangle)$ consisting of a finite-dimensional real vector space together with a scalar product on that space.

Equivalently, a Euclidean space is a finite-dimensional real inner product space.

Given a Euclidean space $(\mathbf{V},\langle \cdot,\cdot \rangle)$, Math 31AH is largely about the associated vector space $\mathrm{End} \mathbf{V}$ whose elements are linear operators

$A \colon \mathbf{V} \to \mathbf{V}$

(The “End” here comes from the word endomorphism, which is a more general term used to refer to a structure-preserving function from some object to itself). Every linear operator on $\mathbf{V}$ is completely determined by its action on any basis $\mathcal{B}$ of $\mathbf{V}$, which gives us a concrete description of $A$ as a matrix $[A]_\mathcal{B}$. The form of this matrix depends heavily on the basis $\mathcal{B}$, and one of the main goals of Math 31AH was to treat the following problem: given an operator $A \in \mathrm{End}\mathbf{V}$, find a basis $\mathcal{B}$ of $\mathbf{V}$ such that $[A]_\mathcal{B}$ is as simple as possible.

It turns out that this problem has a lot to do with how the operator $A$ interacts with the scalar product $\langle \cdot,\cdot \rangle$ on $\mathbf{V}$. You will recall that associated to every $A \in \mathrm{End} \mathbf{V}$ is another operator $A^*$, called the adjoint of $A$, defined by the condition that

$\langle \mathbf{v},A\mathbf{w} \rangle = \langle A^*\mathbf{v},\mathbf{w} \rangle \quad \forall \mathbf{v},\mathbf{w} \in \mathbf{V}.$

Another way to describe the relationship between $A$ and $A^*$ is by saying that, for any orthonormal basis $\mathcal{B}$ of $\mathbf{V},$ the matrix $[A^*]_\mathcal{B}$ is the transpose of the matrix $[A]_\mathcal{B}$, and vice versa, and indeed sometimes $A^*$ is referred to as the transpose of $A$.

Definition 2: An operator $A \in \mathrm{End} \mathbf{V}$ is said to be self-adjoint (or symmetric) if $A^*=A$.

We denote by $\mathrm{Sym} \mathbf{V}$ the subset of $\mathrm{End}\mathbf{V}$ consisting of symmetric operators.

Proposition 1: $\mathrm{Sym}\mathbf{V}$ is a subspace of $\mathrm{End}\mathbf{V}$.

Proof: The scalar product is bilinear, i.e. we have

$\langle a_1\mathbf{v}_1 + a_2\mathbf{v}_2,b_1\mathbf{w}_1 + b_2\mathbf{w}_2 \rangle = \\a_1b_1 \langle \mathbf{v}_1,\mathbf{w}_1 \rangle + a_1b_2 \langle \mathbf{v}_1,\mathbf{w}_2 \rangle + a_2b_1\langle \mathbf{v}_2,\mathbf{w}_1 \rangle + a_2b_2\langle \mathbf{v}_2,\mathbf{w}_2 \rangle,$

and from this it follows (why?) that the adjoint is a linear operator on linear operators: we have

$(a_1A_1+a_2A_2)^*= a_1A_1^*+a_2A_2^*.$

In particular, if $A_1,A_2$ are symmetric operators, we have that

$(a_1A_1+a_2A_2)^*= a_1A_1+a_2A_2,$

which says that $\mathrm{Sym}\mathbf{V}$ is closed under taking linear combinations, i.e. it is a subspace of $\mathrm{End}\mathbf{V}$.

Q.E.D.

We now come to what is arguably the central result of Math 31AH, the Spectral Theorem.

Theorem 1: Given a symmetric operator $S \in \mathrm{Sym}\mathbf{V}$, there exists an orthonormal basis $\mathcal{B}=\{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ of $\mathbf{V}$ such that

$S\mathbf{e}_1 = s_1 \mathbf{e}_1,\ \dots,\ S\mathbf{e}_n=s_n\mathbf{e}_n$,

where $s_1\geq \dots \geq s_n$ are real numbers.

The orthonormal vectors $\mathbf{e}_1,\dots,\mathbf{e}_n$ are eigenvectors of the symmetric operator $S$, and the numbers $s_1 \geq \dots \geq s_n$ are its eigenvalues.
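Numerically, this decomposition is exactly what an eigensolver for symmetric matrices produces. A sketch in Python with NumPy (`eigh` returns ascending eigenvalues and an orthonormal matrix of eigenvector columns, so we reverse both to match our weakly decreasing convention):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eigh(S)
vals, vecs = vals[::-1], vecs[:, ::-1]   # s_1 >= ... >= s_n

assert np.allclose(S @ vecs, vecs * vals)        # S e_i = s_i e_i
assert np.allclose(vecs.T @ vecs, np.eye(2))     # orthonormal basis
assert np.allclose((vecs * vals) @ vecs.T, S)    # S = sum_i s_i e_i e_i^T
```

The last assertion is the Spectral Theorem rewritten as a sum of rank-one projections, a form we will find useful later.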

Once the Spectral Theorem is known, Math 31AH tends to devolve into a veritable orgy of diagonalization in which one is compelled to find the eigenvalues of all manner of particular symmetric operators. Diagonalization is an important skill which has many applications in science and engineering, from quantum mechanics to data science, but it is not the perspective taken in Math 31BH.

To elaborate, the Spectral Theorem gives us a function

$f \colon \mathrm{Sym} \mathbf{V} \longrightarrow \mathbb{R}^n$

which sends a symmetric operator on $\mathbf{V}$ to the list of its eigenvalues arranged in weakly decreasing order, which is a vector in $\mathbb{R}^n$,

$f(S) = (s_1,\dots,s_n).$

In Math 31AH, a basic problem is: given $S$, compute $f(S)$. That is, one wants to calculate the output of the function $f$ for a given input. In Math 31BH, we would like to analyze the function $f$ itself and see what kind of a function it is — what can we say about the function which sends a symmetric matrix to its eigenvalues?

First of all, it is important to understand that the function $f$ is non-linear: given $S,T \in \mathrm{Sym} \mathbf{V}$, it is typically not the case that $f(S+T) = f(S)+f(T)$, i.e. the eigenvalue vector of $S+T$ is usually not simply the sum of the eigenvalue vector of $S$ and the eigenvalue vector of $T$. This does occur if $S$ and $T$ happen to have a common set of eigenvectors, a situation which is equivalent to saying that they commute, $ST=TS$, and that is atypical behavior. So $f$ is not a linear transformation, and even though this function arises from the context of linear algebra, we need to go beyond linear algebra to understand it.
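Here is a concrete instance of this non-linearity, sketched in Python with NumPy (the two matrices are chosen to be symmetric but non-commuting):

```python
import numpy as np

def f(S):
    # Eigenvalues in weakly decreasing order.
    return np.linalg.eigvalsh(S)[::-1]

# Two symmetric matrices that do not commute.
S = np.array([[1.0, 0.0], [0.0, -1.0]])
T = np.array([[0.0, 1.0], [1.0, 0.0]])

# f(S) = f(T) = (1, -1), but S + T has eigenvalues (sqrt 2, -sqrt 2),
# whereas f(S) + f(T) = (2, -2).
assert np.allclose(f(S), [1, -1]) and np.allclose(f(T), [1, -1])
assert not np.allclose(f(S + T), f(S) + f(T))
```

Note that $ST \neq TS$ here, consistent with the remark that additivity of eigenvalues can only occur for commuting operators.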

Probably the most natural question to be addressed is that of continuity: if two symmetric operators $S,T$ are close together, are their eigenvalues also close together? Before we can answer this question, we need to formulate it precisely, and to do this we will briefly leave the world of vector spaces in order to axiomatize the notion of distance.

Definition 2: A metric space is a pair $(X,\mathrm{d})$ consisting of a set $X$ together with a function

$\mathrm{d} \colon X \times X \longrightarrow \mathbb{R}$

which has the following properties:

1. $\mathrm{d}(x_1,x_2) \geq 0$ for all $x_1,x_2 \in X$, with equality if and only if $x_1=x_2$;
2. $\mathrm{d}(x_1,x_2) = \mathrm{d}(x_2,x_1)$ for all $x_1,x_2 \in X$;
3. $\mathrm{d}(x_1,x_2) \leq \mathrm{d}(x_1,y) + \mathrm{d}(y,x_2)$ for all $x_1,x_2,y \in X$.

The function $\mathrm{d}$ is referred to as a distance function or metric on $X$, and indeed the three axioms above are chosen so that $\mathrm{d}$ has all the features we intuitively associate to the notion of distance: the first axiom says that the distance between two points cannot be negative, and is zero if and only if the two points are the same point; the second axiom says that the distance from home to work is the same as the distance from work to home; the third says that the shortest distance between two points is a straight line. The notion of distance is all we need to define the concepts of limit and continuity.

Definition 2: Let $(X,\mathrm{d}_X)$ and $(Y,\mathrm{d}_Y)$ be metric spaces, and let $f \colon X \to Y$ be a function. Given points $p \in X$ and $q \in Y$, we say that the limit of $f(x)$ as $x$ approaches $p$ is $q$, written

$\lim\limits_{x \rightarrow p} f(x) = q,$

if the following holds: for any $\varepsilon > 0$, there exists a corresponding $\delta > 0$ such that

$\mathrm{d}_X(x,p) < \delta \implies \mathrm{d}_Y(f(x),q) < \varepsilon.$

We say that $f$ is continuous at $p$ if the above holds with $q=f(p)$. We say that $f$ is continuous on a set $P \subseteq X$ if it is continuous at every point $p \in P$.

In order to get a feel for the above definition, you should try to prove a familiar result from calculus in this general context: the composition of two continuous functions is continuous. Here is the precise formulation of this result.

Proposition 1: Let $(X,\mathrm{d}_X)$, $(Y,\mathrm{d}_Y),$ and $(Z,\mathrm{d}_Z)$ be metric spaces, and let

$f \colon X \to Y \quad\text{ and }\quad g \colon Y \to Z$

be functions such that $f$ is continuous at $p$ and $g$ is continuous at $f(p).$ Then, the composite function $h= g \circ f$ is continuous at $p$.

Proof: Try it! If you aren’t able to write down a proof of this theorem, go back to Definition 2 and read it again. Iterate this procedure as many times as necessary, and don’t feel bad if a few iterations are required.

Q.E.D.

We are now quite close to having the conceptual tools needed to meaningfully ask whether the function which sends a symmetric operator to its eigenvalues is continuous. The remaining conceptual step is to understand that the scalar product on a Euclidean space gives us a very natural metric. To see this, let us take an intermediate step.

Definition 3: Given a vector space $\mathbf{V}$, a norm on $\mathbf{V}$ is a function

$\|\cdot\| \colon \mathbf{V} \longrightarrow \mathbb{R}$

which has the following properties:

1. We have $\|\mathbf{v}\|\geq 0$ for all $\mathbf{v} \in \mathbf{V}$, with equality if and only if $\mathbf{v}=\mathbf{0}_\mathbf{V}$ is the zero vector;
2. We have $\|t\mathbf{v}\| = |t| \|\mathbf{v}\|$ for all scalars $t \in \mathbb{R}$ and vectors $\mathbf{v} \in \mathbf{V}$, where $|\cdot|$ is the usual absolute value function on $\mathbb{R}$;
3. We have $\|\mathbf{v}_1+\mathbf{v}_2\| \leq \|\mathbf{v}_1\|+\|\mathbf{v}_2\|$ for all $\mathbf{v}_1,\mathbf{v}_2 \in \mathbf{V}$.

A normed vector space is a pair $(\mathbf{V},\|\cdot\|)$ consisting of a vector space together with a norm on that space.

Reading through the norm axioms, it is apparent that they are chosen to abstract the features of the familiar absolute value function $|\cdot|$ on the real numbers $\mathbb{R}$, which has all of these features. This is relevant because the distance between $x_1,x_2 \in \mathbb{R}$ is the absolute value of their difference,

$\mathrm{d}(x_1,x_2) = |x_1-x_2|.$

Proposition 2: If $\|\cdot\|$ is a norm on $\mathbf{V},$ then $\mathrm{d}(\mathbf{v}_1,\mathbf{v}_2)= \|\mathbf{v}_1-\mathbf{v}_2\|$ defines a metric on $\mathbf{V}$.

Proof: We have to prove that $\mathrm{d}$ satisfies the metric axioms. The first distance axiom follows immediately from the first norm axiom. For the second, we have

$\mathrm{d}(\mathbf{v}_1,\mathbf{v}_2) = \|\mathbf{v}_1-\mathbf{v}_2\| = \|(-1)(\mathbf{v}_2-\mathbf{v}_1)\| = |-1| \|\mathbf{v}_2-\mathbf{v}_1\|= \mathrm{d}(\mathbf{v}_2,\mathbf{v}_1),$

where we made use of the second norm axiom. Finally, for any three vectors $\mathbf{v}_1,\mathbf{v}_2,\mathbf{w} \in \mathbf{V}$, we have

$\mathrm{d}(\mathbf{v}_1,\mathbf{v}_2) = \|\mathbf{v}_1-\mathbf{v}_2\| = \|\mathbf{v}_1-\mathbf{w} + \mathbf{w}-\mathbf{v}_2\| \leq \|\mathbf{v}_1-\mathbf{w}\| + \|\mathbf{w}-\mathbf{v}_2\| = \mathrm{d}(\mathbf{v}_1,\mathbf{w}) + \mathrm{d}(\mathbf{w},\mathbf{v}_2),$

where we made use of the third norm axiom.

Q.E.D.

In view of Proposition 2, we will have found a metric on $(\mathbf{V},\langle \cdot,\cdot \rangle)$ as soon as we’ve found a norm. The recipe to get a norm out of the scalar product is inspired by the familiar relation between the absolute value and the dot product in the most familiar of Euclidean spaces, $\mathbb{R}^n$:

$|(x_1,\dots,x_n)| = \sqrt{(x_1,\dots,x_n) \cdot (x_1,\dots,x_n)}.$

First we need the following important lemma.

Lemma 1 (Cauchy-Schwarz inequality): In any Euclidean space, the following inequality holds for every two vectors: we have

$\langle \mathbf{v}_1,\mathbf{v}_2 \rangle \langle \mathbf{v}_1,\mathbf{v}_2 \rangle \leq \langle \mathbf{v}_1,\mathbf{v}_1 \rangle \langle \mathbf{v}_2,\mathbf{v}_2 \rangle,$

with equality if and only if $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly dependent.

Proof: The proof is not difficult, and actually it’s a beautiful argument involving the quadratic formula, which you’ve known for many years. However it’s a bit long, and I don’t want to break up our flow — so look it up! I’ve provided you with a convenient link above.

Q.E.D.

Proposition 3: If $\langle \cdot, \cdot \rangle$ is a scalar product on $\mathbf{V},$ then $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v},\mathbf{v} \rangle}$ defines a norm on $\mathbf{V}$.

Proof: We have to check that the norm axioms are satisfied. The first norm axiom follows immediately from the first scalar product axiom, namely that $\langle \mathbf{v},\mathbf{v} \rangle \geq 0$ with equality holding only for the zero vector. For the second norm axiom, we have

$\| t \mathbf{v}\| = \sqrt{\langle t\mathbf{v},t\mathbf{v} \rangle} = \sqrt{t^2}\sqrt{\langle \mathbf{v},\mathbf{v} \rangle} = |t| \|\mathbf{v}\|.$

Last is the triangle inequality, and to verify this property we use the Cauchy-Schwarz inequality. We have

$\|\mathbf{v}_1+\mathbf{v}_2\|^2 = \langle \mathbf{v}_1+\mathbf{v}_2,\mathbf{v}_1+\mathbf{v}_2\rangle = \langle \mathbf{v}_1,\mathbf{v}_1 \rangle +2\langle \mathbf{v}_1,\mathbf{v}_2 \rangle+ \langle \mathbf{v}_2,\mathbf{v}_2 \rangle,$

which gets us to

$\|\mathbf{v}_1+\mathbf{v}_2\|^2 = \|\mathbf{v}_1\|^2 + 2\langle \mathbf{v}_1,\mathbf{v}_2 \rangle+\|\mathbf{v}_2\|^2 \leq \|\mathbf{v}_1\|^2 +2\|\mathbf{v}_1\| \|\mathbf{v}_2\|+\|\mathbf{v}_2\|^2 = \left( \|\mathbf{v}_1\|+\|\mathbf{v}_2\|\right)^2,$

so that taking the square root on both sides we obtain the triangle inequality,

$\|\mathbf{v}_1+\mathbf{v}_2\| \leq \|\mathbf{v}_1\|+\|\mathbf{v}_2\|.$

Q.E.D.
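Both the Cauchy-Schwarz inequality and the triangle inequality it implies are easy to spot-check numerically, for example in $\mathbb{R}^3$ with the dot product. A sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(1000):
    v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

    # Cauchy-Schwarz: <v1,v2>^2 <= <v1,v1> <v2,v2> ...
    assert (v1 @ v2) ** 2 <= (v1 @ v1) * (v2 @ v2) + 1e-12

    # ... and the triangle inequality for the induced norm.
    assert np.linalg.norm(v1 + v2) <= np.linalg.norm(v1) + np.linalg.norm(v2) + 1e-12
```

Of course, no amount of random testing replaces the proof, but it is a good habit to confirm that what you proved agrees with what the computer sees.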

So, is the function which sends a symmetric matrix to its eigenvalues continuous? From the above developments, we are almost at the point where we can meaningfully ask this question: all that remains is to find a metric on the vector space $\mathrm{Sym}\mathbf{V}$, which reduces to finding a norm on $\mathrm{Sym}\mathbf{V}$, which in turn reduces to finding a scalar product on $\mathrm{Sym}\mathbf{V}$. In fact, we will define a scalar product on all of $\mathrm{End} \mathbf{V}$, and the subspace $\mathrm{Sym}\mathbf{V}$ will inherit this scalar product.

The construction of a scalar product on $\mathrm{End} \mathbf{V}$ uses the concept of trace, which you encountered in Math 31AH and which we now review. Let $\mathcal{B} = \{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ be an orthonormal basis of $\mathbf{V}$, and define the trace relative to $\mathcal{B}$ to be the function

$\mathrm{Tr}_\mathcal{B} \colon \mathrm{End}\mathbf{V} \longrightarrow \mathbb{R}$

given by

$\mathrm{Tr}_\mathcal{B}A = \sum\limits_{i=1}^n \langle \mathbf{e}_i,A\mathbf{e}_i\rangle,$

which is simply the sum of the diagonal elements of the matrix $[A]_\mathcal{B}.$
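For the standard basis of $\mathbb{R}^n$, this definition can be checked directly against the sum of the diagonal entries. A small sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
E = np.eye(4)   # columns are the standard orthonormal basis of R^4

# Tr_B A = sum_i <e_i, A e_i>: each term picks out a diagonal entry.
trace_via_basis = sum(E[:, i] @ A @ E[:, i] for i in range(4))
assert np.isclose(trace_via_basis, np.trace(A))
```

Each inner product $\langle \mathbf{e}_i, A\mathbf{e}_i\rangle$ extracts the $(i,i)$ entry of the matrix, so the sum is the diagonal sum, as claimed.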

Proposition 3: For any operator $A \in \mathrm{End}\mathbf{V}$, we have

$\mathrm{Tr}_\mathcal{B}A^* = \mathrm{Tr}_\mathcal{B}A,$

and for any pair of operators $A,B \in \mathrm{End}\mathbf{V}$, we have

$\mathrm{Tr}_\mathcal{B}(aA+bB) = a\mathrm{Tr}_\mathcal{B} A+b\mathrm{Tr}_\mathcal{B}B$

for all $a,b \in \mathbb{R}$, and

$\mathrm{Tr}_\mathcal{B}(AB)=\mathrm{Tr}_\mathcal{B}(BA).$

Proof: By definition of the adjoint, we have

$\mathrm{Tr}_\mathcal{B}A^* = \sum\limits_{i=1}^n \langle \mathbf{e}_i,A^*\mathbf{e}_i\rangle = \sum\limits_{i=1}^n \langle A\mathbf{e}_i,\mathbf{e}_i\rangle = \sum\limits_{i=1}^n \langle \mathbf{e}_i,A\mathbf{e}_i\rangle = \mathrm{Tr}_\mathcal{B}A,$

where symmetry of the scalar product on $\mathbf{V}$ was used in the second to last equality.

The second identity says that $\mathrm{Tr}_\mathcal{B}$ is a linear function from $\mathrm{End}\mathbf{V}$ to $\mathbb{R}$, and this follows immediately from the definition of $\mathrm{Tr}_\mathcal{B}$ together with the bilinearity of the scalar product on $\mathbf{V}$ (work this out to make sure you understand why!).

For the third identity, using Problem 1 in Assignment 1 we have

$\mathrm{Tr}_\mathcal{B}AB = \sum\limits_{i=1}^n \langle \mathbf{e}_i,AB\mathbf{e}_i \rangle = \sum\limits_{i=1}^n \left\langle \mathbf{e}_i,A\sum\limits_{j=1}^n \langle \mathbf{e}_j,B\mathbf{e}_i\rangle \mathbf{e}_j \right\rangle = \sum\limits_{i=1}^n\sum\limits_{j=1}^n \langle \mathbf{e}_i,A\mathbf{e}_j\rangle \langle \mathbf{e}_j,B\mathbf{e}_i\rangle,$

and likewise

$\mathrm{Tr}_\mathcal{B}BA =\sum\limits_{i=1}^n\sum\limits_{j=1}^n \langle \mathbf{e}_i,B\mathbf{e}_j\rangle \langle \mathbf{e}_j,A\mathbf{e}_i\rangle.$

Thus, the $(i,j)$-term in the expansion of $\mathrm{Tr}_\mathcal{B}AB$ coincides with the $(j,i)$-term in the expansion of $\mathrm{Tr}_\mathcal{B}BA$, and since these expansions run over all pairs $(i,j) \in \{1,\dots,n\} \times \{1,\dots,n\}$, they are in fact the same sum.

Q.E.D.
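The identities of Proposition 3 are easy to verify numerically for matrices. Note in particular that $\mathrm{Tr}\,AB = \mathrm{Tr}\,BA$ holds even though $AB \neq BA$ in general. A sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

assert np.isclose(np.trace(A.T), np.trace(A))        # Tr A* = Tr A
assert np.isclose(np.trace(A @ B), np.trace(B @ A))  # Tr AB = Tr BA
assert not np.allclose(A @ B, B @ A)                 # even though AB != BA
```

(Random matrices almost surely fail to commute, which makes the last assertion a safe bet.)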

Theorem 2: The function $\langle A,B \rangle = \mathrm{Tr}_\mathcal{B}A^*B$ is a scalar product on $\mathrm{End}\mathbf{V}$.

Proof: We have to check the scalar product axioms; they all follow directly from the properties of the trace established above. For nonnegativity, we have

$\langle A,A \rangle = \mathrm{Tr}_\mathcal{B} A^*A = \sum\limits_{i,j=1}^n \langle \mathbf{e}_i,A^*\mathbf{e}_j \rangle \langle \mathbf{e}_j,A\mathbf{e}_i \rangle = \sum_{i,j=1}^n \langle A\mathbf{e}_i,\mathbf{e}_j \rangle\langle A\mathbf{e}_i,\mathbf{e}_j \rangle.$

This is a sum of squares, hence it is nonnegative, and equal to zero if and only if all of its terms are equal to zero. But if $\langle A\mathbf{e}_i,\mathbf{e}_j \rangle=0$ for all $1 \leq i,j \leq n,$ then $A$ is the zero operator (make sure you understand why).

For symmetry, using the identities $\mathrm{Tr}_\mathcal{B}X^* = \mathrm{Tr}_\mathcal{B}X$ and $\mathrm{Tr}_\mathcal{B}XY = \mathrm{Tr}_\mathcal{B}YX$ established above, we have

$\langle A,B \rangle = \mathrm{Tr}_\mathcal{B}(A^*B) = \mathrm{Tr}_\mathcal{B}(BA^*) = \mathrm{Tr}_\mathcal{B}(B^*A) = \langle B,A \rangle$.

Finally, for (bi)linearity we have

$\langle a_1A_1+a_2A_2,B \rangle = \mathrm{Tr}_\mathcal{B} (a_1A_1^*B + a_2A_2^*B) = a_1\mathrm{Tr}_\mathcal{B} A_1^*B + a_2\mathrm{Tr}_\mathcal{B}A_2^*B = a_1 \langle A_1,B\rangle + a_2\langle A_2,B \rangle.$

Q.E.D.

In view of Proposition 3, Theorem 2 immediately gives us the following.

Corollary 1: The function $\|A\|=\sqrt{\mathrm{Tr}_\mathcal{B} A^*A}$ defines a norm on $\mathrm{End}\mathbf{V}$.

The above norm on operators is called the Frobenius norm, and it is very important in matrix analysis and data science. The Frobenius norm is defined using the trace, and since our definition of the trace depends on a choice of orthonormal basis $\mathcal{B}$, so does our definition of the Frobenius norm. In fact, this dependence is illusory.

Proposition 4: For any orthonormal basis $\mathcal{C}=\{\mathbf{f}_1,\dots,\mathbf{f}_n\}$ of $\mathbf{V}$, the function $\mathrm{Tr}_\mathcal{C} \colon \mathrm{End}\mathbf{V} \to \mathbb{R}$ defined by

$\mathrm{Tr}_\mathcal{C}A = \sum\limits_{i=1}^n \langle \mathbf{f}_i,A\mathbf{f}_i\rangle$

coincides with $\mathrm{Tr}_\mathcal{B}$.

Proof: The linear transformation $U\colon \mathbf{V} \to \mathbf{V}$ defined by $U\mathbf{e}_i=\mathbf{f}_i$ for $i=1,\dots,n$ is invertible (its inverse transforms the basis $\mathcal{C}$ back into the basis $\mathcal{B}$), and moreover $U^{-1}=U^*$, i.e. $U$ is an orthogonal transformation of $\mathbf{V}$. For any operator $A \in \mathrm{End}\mathbf{V}$, we have

$\mathrm{Tr}_\mathcal{C}A = \sum\limits_{i=1}^n \langle \mathbf{f}_i,A\mathbf{f}_i \rangle = \sum\limits_{i=1}^n \langle U\mathbf{e}_i,AU\mathbf{e}_i \rangle = \sum\limits_{i=1}^n \langle \mathbf{e}_i,U^*AU\mathbf{e}_i \rangle = \mathrm{Tr}_\mathcal{B}U^*AU=\mathrm{Tr}_\mathcal{B}(AUU^{-1})=\mathrm{Tr}_\mathcal{B}A.$

Q.E.D.
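Numerically, changing to another orthonormal basis amounts to conjugating by an orthogonal matrix, and the trace is unchanged. A sketch in Python with NumPy (the orthogonal matrix comes from a QR factorization of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(7)

# Orthogonalizing a random matrix gives an orthogonal U; its columns
# f_i = U e_i form another orthonormal basis of R^4.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = rng.standard_normal((4, 4))

assert np.allclose(U.T @ U, np.eye(4))                  # U is orthogonal
assert np.isclose(np.trace(U.T @ A @ U), np.trace(A))   # Tr_C A = Tr_B A
```

This is exactly the computation in the proof: $\mathrm{Tr}(U^*AU) = \mathrm{Tr}(AUU^{-1}) = \mathrm{Tr}\,A$.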

In view of the above, we may simply refer to “the trace,” and write $\mathrm{Tr}$ for this function without specifying a basis.

Now that $\mathrm{End}\mathbf{V}$ has been promoted from a vector space to a Euclidean space, the function which sends a symmetric matrix to its ordered list of eigenvalues is a function between Euclidean spaces, and we can legitimately ask if this function is continuous. Next lecture, we will answer this question.

## Math 31BH: Lecture 0

Welcome to Math 31BH, the second quarter of a three-quarter honors integrated linear algebra/multivariable calculus sequence for well-prepared students. Math 31BH combines vector algebra (which you learned in Math 31AH) with calculus (which you presumably already have substantial familiarity with): the two react to yield a new subject called vector calculus, which in addition to its inherent mathematical utility and appeal has many applications, especially in physics and engineering.

Vector calculus is a difficult subject, and this is especially so for the honors treatment of it offered in this course. Before saying anything else, I want to draw your attention to the following remark which accompanies the registrar’s listing for this course:

The honors calculus courses 31AH-BH-CH are unusually demanding and are intended for students with strong aptitude and deep interest in Mathematics.

– UCSD Registrar’s Office

If you’re enrolled in this class, I want you to be aware immediately: Math 31BH is an unusually demanding course. If the prospect of an intellectual challenge appeals to you, you’ve come to the right place; otherwise, this is not the course for you, and you should instead consider the Math 20 course sequence. It is not the case that the Math 31 course sequence is “better” than the Math 18/20 course sequence — but it is different. The Math 31 sequence is more theoretical and rigorous, meaning that there is a strong emphasis on precise definitions and clear logical reasoning, as opposed to an emphasis on the memorization and application of formulas. This does not mean that you are not expected to master formulas in Math 31BH — you are — rather it means that the process of translating theoretical knowledge into the ability to do concrete calculations is largely left to the student. Effectively, this means that there is a large “make your own examples” component in which you will have to unpack the course content outside of lecture.

Now let us discuss the basic parameters of the course. First of all, we will not be meeting for standard in-person lectures this quarter. The course will instead be organized as follows.

This blog will serve as the textbook for the course: the entire course content will be made available here, in the form of lecture posts. These posts will be of a quality unmatched by any textbook, and the goal is that they will be written in a way which imparts certain elements of the live lecture experience. Out of respect for your hard-earned tuition dollars, some of this content will only be available to students enrolled in Math 31BH. There will be two posts per week, each of which will contain content corresponding to what would be covered in a rather ambitious in-person lecture. Also, the posts will contain embedded links to additional material, typically background material that you should read up on. The two weekly lecture posts will be made available prior to our designated Monday and Wednesday 15:00-15:50 lecture slots.

That leaves the Friday lecture slot, which will consist of two parts. The first part will occupy the 15:00-15:50 lecture slot, and will be conducted in the style of a flipped classroom in order to cement your understanding of material you have already absorbed. This will often take the form of an interactive problem solving session: typically we will discuss the weekly problem sets and I will try to guide you through these problems so as to start you down the right path. At other times we will engage in a recap of essential material presented in the lecture posts, or a more free-form discussion of the course content intended to deepen and broaden your understanding.

The flipped classroom session will segue into an additional 70 minutes corresponding to “office hours,” which will be driven primarily by student questions (many students find it worthwhile to attend office hours even if they don’t plan to ask questions). The combined flipped classroom and office hours endeavor makes for a two hour live Zoom session every Friday, 15:00-17:00. For those of you not already aware (probably a null set), Zoom is a freely available videoconferencing platform. These Friday Zoom sessions will be recorded and made available on Canvas. I will communicate the Friday lecture Zoom link to you in a class-wide email.

In addition to the course content delivered by Professor Novak in the manner described above, the Math 31BH teaching assistant, Zhiyuan Jiang, will be running discussion sections every Tuesday, from 17:00-17:50 for students in section A01, and from 18:00-18:50 for students in section A02. Zhiyuan will post further details concerning the discussion sections on Piazza.

Now that we have discussed how the course content will be disseminated to students, let us discuss the content to be created by students and submitted for evaluation. Students in Math 31BH will generate two types of content: solutions to weekly problem sets, and solutions to exams.

We will aim for a total of nine weekly problem sets in this course. Problem sets will be posted to Piazza on Sundays before 24:00 (with the exception of Week 1, where the problem set will be posted today), and the corresponding solutions will be due the following Sunday before 24:00. Your solutions will be both submitted by you and returned to you using Gradescope (entry code P5ZY44). This process will be managed by the teaching assistant, and any questions or concerns related to Gradescope should be directed to Zhiyuan. The problem sets are a very important part of the course, and accordingly make up 50% of the total grade. While you may submit handwritten solutions, it is recommended that you typeset your solutions using LaTeX, the professional standard for the preparation of scientific documents. Learning how to prepare scientific documents of professional quality is a valuable skill that will serve you well in your university career, and beyond. In order to help with this, the problem sets themselves will be typeset in LaTeX, and the source files will be posted along with their PDF output. Problem set solutions which have been typeset using LaTeX will receive an automatic 5% bonus.

There will be no problem set due on February 6, nor will there be a new lecture post on February 7, because this is the date of the midterm exam. The midterm exam will count for 20% of your course grade. The midterm exam will be an at-home test, submitted via Gradescope. Further details will be released as the date approaches.

The final exam for the course is set for March 16, with a scheduled time of 15:00-17:59. This date and time slot is set by the university registrar and cannot be changed; if you are unable to write the final exam at this specified day and time, you should not enroll in the course. The final exam will count for 30% of your course grade. The details of how the final exam will be written and submitted are not yet available, but will be soon, and I will keep you updated.

Our first live Zoom session will take place on January 7, 15:00-17:00. I expect you will have many questions, so this meeting will be mostly logistical, but we should still have some time for math. The schedule below indicates the expected mathematical content of future lectures. It will be filled in one week at a time (so refer back to it weekly), and is constantly subject to change.


## Math 202C: Lecture 21 -The sensitivity conjecture

##### Preliminary definitions and Notations

First up, a list of preliminary notations and definitions of terms that recur throughout this blog post.

• $Q^n$ denotes the $n$-dimensional hypercube graph whose vertex set consists of all strings in $\{-1,1\}^n$, with two vertices adjacent iff the corresponding strings differ in exactly one coordinate.
• For an undirected graph $G$, the maximum degree of a vertex in $G$ is denoted by $\Delta(G)$ and $\lambda_1(G)$ denotes the largest eigenvalue of the adjacency matrix of $G$.
• For $x \in \{-1,1\}^n$ and a subset $S$ of indices from $[n] = \{1, 2, \cdots, n\}$, the string obtained from $x$ by flipping all entries of $x$ corresponding to the indices in $S$ is denoted by $x^S$.
• Any function $f : \{-1,1\}^n \rightarrow \{-1,1\}$ is called a boolean function.

Definition (Local sensitivity and sensitivity). Given a boolean function $f : \{-1,1\}^n \rightarrow \{-1,1\}$, the local sensitivity on the input $x$, denoted by $s(f,x)$, is defined as the number of indices $i$, such that $f(x) \neq f(x^{\{i\}})$, and the sensitivity $s(f)$ of $f$ is $\max_x s(f,x)$.

Definition (Local block sensitivity and block sensitivity). Given a boolean function $f : \{-1,1\}^n \rightarrow \{-1,1\}$, the local block sensitivity on the input $x$, denoted by $bs(f,x)$, is defined as the maximum number of disjoint blocks $B_1, \cdots, B_k$ of $[n]$ such that $f(x) \neq f(x^{B_i})$ for each $B_i$, and the block sensitivity $bs(f)$ of $f$ is $\max_x bs(f,x)$.
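To make these definitions concrete, here is a brute-force computation of $s(f)$ for two standard examples (an illustrative sketch added here, not part of the original post; the encoding of bits as $\pm 1$ follows the conventions above):

```python
from itertools import product

def flip(x, i):
    """Return the string x with coordinate i flipped (x^{i} in the notation above)."""
    return x[:i] + (-x[i],) + x[i + 1:]

def sensitivity(f, n):
    """s(f): maximize over inputs x the number of single-bit flips that change f."""
    return max(
        sum(1 for i in range(n) if f(x) != f(flip(x, i)))
        for x in product([-1, 1], repeat=n)
    )

# The parity function x_1 * ... * x_n changes value under every single-bit
# flip, so its sensitivity is n; a "dictator" f(x) = x_1 has sensitivity 1.
parity = lambda x: 1 if x.count(-1) % 2 == 0 else -1
print(sensitivity(parity, 4))           # 4
print(sensitivity(lambda x: x[0], 4))   # 1
```

Since $s(f) \leq bs(f)$ always (single-index blocks are blocks), the parity function also shows $bs(f)$ can attain $n$.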

#### The sensitivity conjecture and the tale of three theorems

(Sensitivity conjecture) There exists an absolute constant $C >0$, such that for every boolean function $f$, $bs(f) \leq s(f)^C$.

Sensitivity and block sensitivity are examples of complexity measures for boolean functions. Two complexity measures $A$ and $B$ are said to be polynomially related to each other if there exist polynomials $p(t), q(t)$ such that for any boolean function $f$, we have $A(f) \leq p(B(f))$ and $B(f) \leq q(A(f))$. The block sensitivity is known to be polynomially related to other complexity measures of boolean functions such as the degree (to be defined later). However, the question (a.k.a. the sensitivity conjecture) of whether the block sensitivity is polynomially related to the sensitivity was posed as an open problem by Nisan and Szegedy in 1989. (Note that $bs(f) \geq s(f)$ is a direct consequence of the definitions; the open problem was to show that $bs(f) \leq s(f)^C$ for some $C > 0$.) A few years later, in 1992, Gotsman and Linial [3] showed the equivalence of the sensitivity conjecture to another open problem on the hypercube. Finally, in 2019, Hao Huang resolved the sensitivity conjecture by leveraging the equivalence shown by Gotsman and Linial.

#### Cue harmonic analysis on finite abelian groups

In lecture post 7 by Prof. Novak, we saw that $\mathrm{B}(n)$ is isomorphic to the group $\mathbb{Z}_2^n$, i.e. bitstrings of length $n$ (recall that $\mathrm{B}(n)$ denotes the subgroup of $\mathrm{S}(2n)$ generated by the $n$ disjoint transpositions $\tau_1=(1\ 2),\ \tau_2=(3\ 4),\ \dots,\ \tau_n = (2n-1\ 2n)$). Thus, a boolean function $f: \{-1,1\}^n \rightarrow \{-1,1\}$ can be identified with a member of the Hilbert space $\mathbb{C}^{\mathrm{B}(n)}$, and the rich tools of harmonic analysis on $\mathrm{B}(n)$ discussed in lecture posts 7-9 are applicable to boolean functions. Note that the hypercube $Q^n$ is the subgraph of the Cayley graph of $\mathrm{S}(2n)$ induced by the subgroup $\mathrm{B}(n)$. Thus, the sensitivity conjecture can be rephrased in the language of induced subgraphs of the hypercube $Q^n$, and this is exactly the essence of the work done by Gotsman and Linial in 1992.

Before we dive into the details, a few (post)-preliminary notations and remarks.

• Given an induced subgraph $H$ of $Q^n$, let $Q^n - H$ denote the subgraph of $Q^n$ induced on the vertex set $V(Q^n) \backslash V(H)$.
• For $S \subseteq [n] := \{1,2, \cdots, n\}$, the function $\chi_S : \{-1,1\}^n \rightarrow \{+1,-1\}$ is defined as $\chi_S(x) = \Pi_{i \in S}x_i$, with the convention that for all $x, \: \: \chi_{\phi}(x) = 1$ where $\phi$ denotes the empty set.
• In lecture post 7 (and also in the qual exam) we saw that the functions $\{\chi_S\}_{S \subseteq [n]}$ (which exactly correspond to the characters of $B(n)$ since $\mathbb{Z}_2^n$ is isomorphic to $\mathrm{B}(n)$) form an orthogonal basis for $\mathbb{C}^{ \mathbb{Z}_2^n }$ under the inner product $\langle f,g \rangle = \sum_{x \in \{-1,1\}^n} f(x)g(x)$.
• Thus, given a boolean function $f$, by Fourier expansion, $f$ can be written as $f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\chi_S(x)$, where $\hat{f}$ denotes the Fourier coefficients.
• The average of a boolean function $f$ on $Q^n$ is denoted as $\mathbf{E}(f) : = \frac{1}{2^n}\sum_{x \in Q^n}f(x)$. Note that since $\chi_{\phi}(x) = 1$ for all $x$, we have the identity $\hat{f}(\phi) = \mathbf{E}(f)$ which is quite useful when proving theorem 1 (see below).
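As a numerical companion to these bullet points (an illustrative sketch, not part of the lecture), the orthogonality relation $\langle \chi_S, \chi_T \rangle = 2^n\,\delta_{S,T}$ can be verified directly for small $n$:

```python
from itertools import product, combinations
from math import prod

n = 3
cube = list(product([-1, 1], repeat=n))

def chi(S):
    """The character chi_S(x) = prod_{i in S} x_i, tabulated on the cube."""
    return [prod(x[i] for i in S) for x in cube]  # empty product = 1

subsets = [S for r in range(n + 1) for S in combinations(range(n), r)]
for S in subsets:
    for T in subsets:
        inner = sum(a * b for a, b in zip(chi(S), chi(T)))
        # <chi_S, chi_T> = 2^n if S = T, and 0 otherwise
        assert inner == (2 ** n if S == T else 0)
print("the", len(subsets), "characters are pairwise orthogonal")
```

The orthogonality is exactly why the Fourier expansion $f = \sum_S \hat{f}(S)\chi_S$ exists and is unique.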

Definition. Given a boolean function $f$, the degree of $f$ is defined as $deg(f) := \max\{|S| \: : \: S \subseteq [n], \: \hat{f}(S) \neq 0\}$.

Theorem 1 [Gotsman and Linial]. The following two statements are equivalent for any monotone strictly increasing function $h: \mathbb{N} \rightarrow \mathbb{R}$.
a. For any induced subgraph $H$ of $Q^n$ with $|V(H)| \neq 2^{n-1}$, we have $\max \{ \Delta(H), \Delta(Q^n - H)\} \geq h(n)$.
b. For any boolean function $f$, we have that $s(f) \geq h(deg(f))$.

Gotsman and Linial show that proving the equivalence of statements a and b above is equivalent to proving the equivalence of yet another two statements and then proceed to prove this second equivalence.

Proof of Theorem 1. First, we show the equivalence of statement a to following statement a’:

a’. For any boolean function $g$, $\mathbf{E(g)} \neq 0$ implies there exists an input $x$ such that $n - s(g,x) \geq h(n)$.

a’ implies a : Given an induced subgraph $H$ of $Q^n$, it can be associated with a corresponding boolean function $g$ defined such that $g(x) = 1$ iff $x \in V(H)$. By a direct consequence of the definitions of adjacency in $Q^n$ and the local sensitivity of $g$, we have $\text{deg}_H(x) = n - s(g,x)$ for $x \in V(H)$ and $\text{deg}_{Q^n -H}(x) = n - s(g,x)$ for $x \in V(Q^n) \backslash V(H)$ (note that here $\text{deg}_H(x)$ denotes the degree of the vertex $x$ in the subgraph $H$, and is not to be confused with the degree of a boolean function that was defined earlier). Thus, $\max \{ \Delta(H), \Delta(Q^n - H)\} = \max_x \{n - s(g,x)\}$. Let $\mathbf{E}(g)$ denote the average value of $g$ on $Q^n$, i.e. $\mathbf{E}(g) = \frac{1}{2^n}\left[ \sum_{x \in V(H)}g(x) + \sum_{x \in V(Q^n)\backslash V(H)}g(x) \right]$. Note that $|V(H)| \neq 2^{n-1}$ implies $\mathbf{E}(g) \neq 0$. Thus, statement a’ implies $\max \{ \Delta(H), \Delta(Q^n - H)\} = \max_x \{n - s(g,x)\} \geq h(n)$.

a implies a’ : Given a boolean function $g$ with $\mathbf{E}(g) \neq 0$, define $H$ to be the subgraph induced by the set of vertices $\{x \in Q^n \: | \: g(x) = 1\}$. By an identical argument as in the previous paragraph, $\max \{ \Delta(H), \Delta(Q^n - H)\} = \max_x \{n - s(g,x)\}$. Also, $\mathbf{E}(g) \neq 0$ implies $|V(H)| \neq 2^{n-1}$. Thus, statement a implies $\max \{ \Delta(H), \Delta(Q^n - H)\} = \max_x \{n - s(g,x)\} \geq h(n)$, thereby showing the equivalence of statements a and a’.

Now, we show the equivalence of statement b to statement b’:

b’. For any boolean function $f$, $s(f) < h(n)$ implies $deg(f) < n$.

Assuming b and that the premise of b’ holds, we have $h(deg(f)) \leq s(f) < h(n)$, and since $h$ is assumed to be strictly increasing, this shows that $deg(f) < n$; thus b implies b’. Now, towards showing the other implication, let $f$ denote a boolean function with degree $d \leq n$. Assume without loss of generality that for $S =\{1, \cdots, d\}$ we have $\hat{f}(S) \neq 0$. Now, consider the function $g : \{-1,1\}^d \rightarrow \{-1,1\}$ defined as $g(x_1, x_2, \cdots, x_d) := f(x_1, x_2, \cdots, x_d, -1, -1, \cdots, -1)$. By construction $s(f) \geq s(g)$ and $deg(g) = d$, i.e. $g$ has full degree as a boolean function on $Q^d$. Now, the contrapositive of b’ says that $deg(g) = d$ implies $s(g) \geq h(d)$. Thus, $s(f) \geq s(g) \geq h(d) = h(deg(f))$, and this finishes showing the equivalence of b and b’.

Thus, proving theorem 1 has been reduced to the task of proving the equivalence of statements a’ and b’. Now, given a boolean function $f$, define another boolean function $g(x) = f(x)\chi_{[n]}(x)$, where $\chi_{[n]}(x) = \Pi_{i=1}^n x_i$ is the parity function of $x$. Note that $\hat{g}(S) = \frac{1}{2^n}\sum_{x \in Q^n}f(x) \chi_{[n]}(x) \chi_S(x)$. Thus,

$\hat{g}(S) = \frac{1}{2^n}\sum_{x \in Q^n}f(x) (\Pi_{i=1}^nx_i) (\Pi_{i \in S}x_i) = \frac{1}{2^n}\sum_{x \in Q^n}f(x) \Pi_{i \notin S}x_i = \hat{f}([n] \backslash S)$

Thus, for all $S \subseteq [n]$, $\hat{g}(S) = \hat{f}([n]\backslash S)$. Also note that for all $x \in Q^n$, $s(g,x) = n- s(f,x)$ because every bit flip which causes a change in value of $f(x)$ does not cause a change in function value of $g(x)$ and vice-versa. Thus, $\mathbf{E}(g) = \hat{g}(\phi) = \hat{f}([n])$ where $\phi$ denotes the empty set.
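Both identities are easy to confirm numerically; the following sketch (my addition, using the $\frac{1}{2^n}$-normalized Fourier coefficients as above) checks them for a random boolean function with $n = 3$:

```python
import random
from itertools import product, combinations
from math import prod

n = 3
cube = list(product([-1, 1], repeat=n))
random.seed(0)
f = {x: random.choice([-1, 1]) for x in cube}   # a random boolean function
g = {x: f[x] * prod(x) for x in cube}           # g = f * chi_[n]

def fhat(vals, S):
    """Fourier coefficient (1/2^n) sum_x vals(x) chi_S(x)."""
    return sum(vals[x] * prod(x[i] for i in S) for x in cube) / 2 ** n

def s_at(vals, x):
    """Local sensitivity of vals at the input x."""
    return sum(1 for i in range(n)
               if vals[x] != vals[x[:i] + (-x[i],) + x[i + 1:]])

subsets = [S for r in range(n + 1) for S in combinations(range(n), r)]
for S in subsets:
    comp = tuple(i for i in range(n) if i not in S)
    assert fhat(g, S) == fhat(f, comp)          # ghat(S) = fhat([n] \ S)
assert all(s_at(g, x) == n - s_at(f, x) for x in cube)
print("both identities hold")
```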

a’ implies b’: Towards a contradiction, assume that $s(f) < h(n)$ and $deg(f) = n$. This by definition implies $\hat{f}([n]) \neq 0$ and thus $\mathbf{E}(g) \neq 0$. By a’, there exists an $x$ such that $s(g,x) \leq n - h(n)$, and since $s(g,x) = n - s(f,x)$, this implies there exists an $x$ such that $s(f,x) \geq h(n)$, which contradicts the premise that $s(f) < h(n)$.

b’ implies a’: Again towards a contradiction, assume that there is a $g$ with $\mathbf{E}(g) \neq 0$ such that for all $x$, $s(g,x) > n - h(n)$. Since $s(g,x) = n - s(f,x)$, this implies that $s(f) < h(n)$, which by b’ implies $deg(f) < n$. Note that $deg(f) < n$ is equivalent to $\hat{f}([n]) = 0$, and thus $\hat{f}([n]) = \hat{g}(\phi) = \mathbf{E}(g) = 0$, which contradicts the premise that $\mathbf{E}(g) \neq 0$. This completes the proof of theorem 1.

Theorem 2 [Huang]. For every integer $n \geq 1$, let $H$ be an arbitrary $(2^{n-1}+1)$-vertex induced subgraph of $Q^n$; then

$\Delta(H) \geq \sqrt{n}$

The proof of theorem 2 is going to be the main focus for the remainder of this blog post, but before that we discuss two corollaries which show why theorem 2 resolves the sensitivity conjecture.

Corollary 1. For every boolean function $f$, $s(f) \geq \sqrt{deg(f)}$.

Proof of Corollary 1. For any induced subgraph $H$ of $Q^n$ with $|V(H)| \neq 2^{n-1}$, since the total number of vertices in $Q^n$ is $2^n$, either $H$ contains at least $2^{n-1}+1$ vertices or $Q^n - H$ does. So without loss of generality, assume $H$ contains at least $2^{n-1}+1$ vertices (if not, just replace $H$ with $Q^n - H$ in the remainder of the proof). Now, theorem 2 (along with the fact that the maximum degree $\Delta(H)$ is non-decreasing in the number of vertices of $H$) implies $\max \{ \Delta(H), \Delta(Q^n - H)\} \geq \sqrt{n}$. Let $h(n) := \sqrt{n}$ and note that it is a monotone strictly increasing function on $\mathbb{N}$. Now, theorem 1 implies $s(f) \geq \sqrt{deg(f)}$, and this completes the proof.

Theorem 3 [Tal]. For every boolean function $f$, $bs(f) \leq deg(f)^2$.

It should be noted that theorem 3 is an improvement on the work by Nisan and Szegedy in 1992, wherein they showed that $bs(f) \leq 2deg(f)^2$.

Corollary 2. For every boolean function $f$, $bs(f) \leq s(f)^4$.

Note that corollary 2 follows immediately from theorem 3 and corollary 1. Thus, corollary 2 confirms the sensitivity conjecture by showing that the block sensitivity and the sensitivity of a boolean function are indeed polynomially related.

#### Cue matrix analysis – “A proof that came straight from THE Book”

In this section, we will focus on Hao Huang's proof of theorem 2. The core of the proof involves a creative construction of an iterative sequence of matrices whose $n$th instance is closely related to the adjacency matrix of $Q^n$, together with an application of Cauchy's interlace theorem, which is stated as follows.

(Cauchy’s interlace theorem) Let $A$ be a symmetric $n \times n$ matrix, and let $B$ be an $m \times m$ principal submatrix of $A$ for some $m < n$. If the eigenvalues of $A$ are $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, and the eigenvalues of $B$ are $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_m$, then for all $1 \leq i \leq m$,

$\lambda_i \geq \mu_i \geq \lambda_{i+n-m}$

Recall that we saw in Math 202A that Cauchy’s interlace theorem can be proven via a direct application of the Courant-Fischer theorem.

Now, we move on to the iterative matrix sequence that turns out to be a bite-sized kryptonite for the sensitivity conjecture. Consider the sequence of symmetric square matrices defined iteratively as follows,

$B_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad B_n :=\begin{pmatrix} B_{n-1} & I \\ I & -B_{n-1} \end{pmatrix}$

The first exercise is to study the spectrum of $B_n$ and the second exercise is to compute an upper bound for the largest eigenvalue $\lambda_1$ of a specific type of matrix. These two exercises then set us up nicely for a clever application of Cauchy’s interlace theorem to prove theorem 2.

Exercise 1.
a) Show that $B_n^2 = nI$ where $I$ denotes the $2^n \times 2^n$ identity matrix.
b) Show that the trace of $B_n$ is $0$.
c) Conclude that the eigenvalues of $B_n$ are $\sqrt{n}$ with multiplicity $2^{n-1}$ and $-\sqrt{n}$ with multiplicity $2^{n-1}$ .
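Exercise 1 can also be sanity-checked numerically before attempting a proof; here is a short sketch (my addition, not a substitute for the exercise) that builds $B_n$ recursively and confirms all three claims for $n = 4$:

```python
import numpy as np

def B(n):
    """The recursively defined matrices B_n from the text."""
    if n == 1:
        return np.array([[0, 1], [1, 0]])
    Bp = B(n - 1)
    I = np.eye(2 ** (n - 1), dtype=int)
    return np.block([[Bp, I], [I, -Bp]])

n = 4
Bn = B(n)
assert np.array_equal(Bn @ Bn, n * np.eye(2 ** n, dtype=int))  # B_n^2 = nI
assert np.trace(Bn) == 0                                       # tr(B_n) = 0
eig = np.linalg.eigvalsh(Bn)            # ascending eigenvalues of symmetric B_n
assert np.allclose(eig[: 2 ** (n - 1)], -np.sqrt(n))
assert np.allclose(eig[2 ** (n - 1):], np.sqrt(n))
print("Exercise 1 verified for n =", n)
```

Of course, a) and b) immediately imply c): the eigenvalues square to $n$, and a zero trace forces the two multiplicities to be equal.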

Exercise 2. Suppose $H$ is an $m$-vertex undirected graph and $B$ is a symmetric matrix whose entries are in $\{-1,0,1\}$, whose rows and columns are indexed by the vertex set of $H$, and such that $B_{u,v} = 0$ whenever $u$ and $v$ are non-adjacent vertices in $H$. Then show that

$\Delta(H) \geq \lambda_1 := \lambda_1(B)$

The definition of this iterative sequence of matrices $\{B_j\}_1^n$ is not without precedent: the adjacency matrix of $Q^n$ can be defined in a similarly iterative manner. Recall that a vertex of $Q^n$ is a string in $\{-1,1\}^n$. A given string in $\{-1,1\}^{n-1}$ extends to exactly two strings in $\{-1,1\}^n$, and these two $n$-strings correspond to the only two vertices of $Q^n$ whose first $n-1$ entries agree with the original string (note that these two vertices of $Q^n$ are adjacent). Expanding on this observation, the hypercube $Q^n$ can be thought of as two $Q^{n-1}$ sub-hypercubes with a perfect matching connecting these two subcubes. In other words, consider the following sequence of iteratively defined matrices,

$A_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A_n :=\begin{pmatrix} A_{n-1} & I \\ I & A_{n-1} \end{pmatrix}$

Then $A_n$ represents the adjacency matrix of $Q^n$ and in particular, note that $A_n$ is equal to the matrix obtained by changing every $(-1)$ entry of $B_n$ to $1$.
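This observation can be checked directly; the sketch below (my addition) orders the vertices of $Q^n$ with the first coordinate as the slowest-varying bit, so the ordering matches the two-block recursion, and compares $A_n$ with the adjacency matrix built from the strings and with $|B_n|$:

```python
import numpy as np
from itertools import product

def B(n):
    if n == 1:
        return np.array([[0, 1], [1, 0]])
    Bp, I = B(n - 1), np.eye(2 ** (n - 1), dtype=int)
    return np.block([[Bp, I], [I, -Bp]])

def A(n):
    if n == 1:
        return np.array([[0, 1], [1, 0]])
    Ap, I = A(n - 1), np.eye(2 ** (n - 1), dtype=int)
    return np.block([[Ap, I], [I, Ap]])

n = 3
verts = list(product([-1, 1], repeat=n))  # first coordinate varies slowest
adj = np.array([[int(sum(a != b for a, b in zip(u, v)) == 1) for v in verts]
                for u in verts])
assert np.array_equal(A(n), adj)            # A_n is the adjacency matrix of Q^n
assert np.array_equal(A(n), np.abs(B(n)))   # A_n = |B_n| entrywise
print("recursive A_n matches Q^n for n =", n)
```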

Now that we have all the necessary ingredients, let’s discuss the proof of Theorem 2.

(Restated) Theorem 2 [Huang]. For every integer $n \geq 1$, let $H$ be an arbitrary $(2^{n-1}+1)$-vertex induced subgraph of $Q^n$; then

$\Delta(H) \geq \sqrt{n}$

Proof of Theorem 2 [Huang].

Let $B_n$ be the sequence of matrices that we defined above, and let $H$ be a $(2^{n-1} + 1)$-vertex induced subgraph of $Q^n$. Note that $H$ naturally induces a principal submatrix $B_H$ of $B_n$ (written with respect to an appropriate permutation of the standard basis). As discussed in the preceding paragraph, by changing every $(-1)$ entry of $B_n$ to $1$, we get the adjacency matrix of $Q^n$. So, in particular, $B_H$ and $H$ satisfy the conditions of the statement in Exercise 2 and thus,

$\Delta(H) \geq \lambda_1(B_H)$

Also, since $B_H$ is a $(2^{n-1} + 1) \times (2^{n-1} + 1)$ principal submatrix of the $2^n \times 2^n$ matrix $B_n$, by Cauchy’s interlace theorem,

$\lambda_1(B_H) \geq \lambda_{2^{n-1}}(B_n)$

From Exercise 1, we know that the eigenvalues of $B_n$ are $\sqrt{n}$ with multiplicity $2^{n-1}$ and $-\sqrt{n}$ with multiplicity $2^{n-1}$. Hence $\lambda_{2^{n-1}}(B_n) = \sqrt{n}$, and therefore

$\Delta(H) \geq \lambda_1(B_H) \geq \lambda_{2^{n-1}}(B_n) = \sqrt{n}$

and this completes the proof.
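The bound is also easy to watch in action (an empirical illustration I am adding, not part of the argument): sample random $(2^{n-1}+1)$-vertex induced subgraphs of $Q^n$ and compute $\Delta(H)$.

```python
import math
import random
from itertools import product

n = 4
verts = list(product([-1, 1], repeat=n))
random.seed(1)
min_delta = n
for _ in range(200):
    # a random induced subgraph on 2^(n-1) + 1 vertices
    V = set(random.sample(verts, 2 ** (n - 1) + 1))
    # Delta(H): max number of neighbors (one-bit flips) staying inside V
    delta = max(sum(1 for i in range(n)
                    if u[:i] + (-u[i],) + u[i + 1:] in V)
                for u in V)
    min_delta = min(min_delta, delta)
assert min_delta >= math.sqrt(n)
print("smallest Delta(H) seen:", min_delta, ">= sqrt(n) =", math.sqrt(n))
```

Huang's theorem guarantees the assertion can never fail, for any choice of the $2^{n-1}+1$ vertices; the sampling merely illustrates it.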

#### Closing Remarks

I will end this blog post for our Math 202C class with a quote from Scott Aaronson’s blog post about Hao Huang’s proof of the sensitivity conjecture.

Paul Erdös famously spoke of a book, maintained by God, in which was written the simplest, most beautiful proof of each theorem. The highest compliment Erdös could give a proof was that it “came straight from the book.” In this case, I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.

-Scott Aaronson

The funny thing is, a few days after Prof. Aaronson's blog post, his former student Shalev Ben-David posted in the comments section a simplification of Huang's proof which does not use Cauchy's interlacing theorem.

Anyways, talking about the sensitivity conjecture is indeed quite a fitting end to the Math 202 series, since both Gotsman & Linial's work and Hao Huang's work on this 30-year-old ex-open problem rely almost exclusively on concepts which we have studied over the past three quarters.

Thanks for reading my post and have a great summer everyone!

##### References
• [1] Huang, H. (2019). Induced subgraphs of hypercubes and a proof of the sensitivity conjecture. Annals of Mathematics, 190(3), 949-955.
• [2] N. Nisan, M. Szegedy, On the degree of Boolean functions as real polynomials, Comput. Complexity, 4 (1992), 462–467.
• [3] C. Gotsman, N. Linial, The equivalence of two problems on the cube, J. Combin.Theory Ser. A, 61 (1) (1992), 142–146.
• [4] F. Chung, Z. Füredi, R. Graham, P. Seymour, On induced subgraphs of the cube, J. Comb. Theory, Ser. A, 49 (1) (1988), 180–187.
• [5] A. Tal, Properties and applications of boolean function composition, Proceedings of the 4th conference on Innovations in Theoretical Computer Science, ITCS ’13, 441–454.
• [6] Karthikeyan, Rohan & Sinha, Siddharth & Patil, Vallabh (2020). On the resolution of the sensitivity conjecture. Bulletin of the American Mathematical Society, 57(1). 10.1090/bull/1697.
• [7] https://www.scottaaronson.com/blog/?p=4229
• [8] Lecture notes for Math 202A by Prof. Philip Gill.
• [9] D. Kroes, J. Naranjo, J. Nie, J. O’ Neill, N. Seiger, S. Spiro, E. Zhu, Linear Algebra methods in Combinatorics, ABACUS notes Fall 2019 UCSD.

## Math 202C: Lecture 20 – Uncertainty Principle

Author: Zhaolong Han

In this lecture, we will introduce the Fourier transform on finite Abelian groups and a discrete version of the “uncertainty principle”, based on Chapter 2 of “Discrete Harmonic Analysis” by Ceccherini-Silberstein [1]. In quantum mechanics, there is a well-known uncertainty principle saying that the more precisely the position of some particle is determined, the less precisely its momentum can be predicted from initial conditions, and vice versa.
For finite Abelian groups, there is a theorem that resembles this uncertainty principle, so people refer to this group-theoretic version as an “uncertainty principle” too. However, our focus today will be Tao's uncertainty principle for cyclic groups of prime order [2], which is a refinement of the discrete uncertainty principle made possible by the special structure of the group.

First, let us introduce some notations and redefine the Discrete Fourier transform as Andrew did in Lecture 18. In fact, these materials already appeared in Lecture 8, where we studied the concrete example of the rank $k$ hypercube group $B(k)$.

Let $X$ be a finite set and denote by $L(X)=\{f:X\to \mathbb{C}\}$ the vector space of all complex-valued functions defined on $X$. Then $\mathrm{dim}\ L(X)=|X|$, where $|\cdot|$ denotes cardinality.

For $x\in X$ we denote by $\delta_x$ the Dirac function centered at $x$, that is, the element $\delta_x\in L(X)$ defined by $\delta_x(y)=1$ if $y=x$ and $\delta_x(y)=0$ if $y\neq x$, for all $y\in X$.

The set $\{\delta_x:x\in X\}$ is a natural basis for $L(X)$ and if $f\in L(X)$ then $f=\sum_{x\in X}f(x)\delta_x$.

The space $L(X)$ is endowed with the scalar product defined by setting

$\langle f_1,f_2\rangle=\sum_{x\in X}f_1(x)\overline{f_2(x)}$

for $f_1,f_2\in L(X)$, and we denote by $||f||=\sqrt{\langle f,f\rangle}$ the norm of $f\in L(X)$. Note that $\{\delta_x:x\in X\}$ is an orthonormal basis of $L(X)$ with respect to $\langle\cdot,\cdot\rangle$.

Let $A$ be a finite Abelian group. A character of $A$ is a map $\chi:A\to\mathbb{T}$, where $\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}$ is the unit circle, such that

$\chi(x+y)=\chi(x)\chi(y)$

for all $x,y\in A$. The set $\widehat{A}$ of all characters of $A$ is an Abelian group with respect to the product $(\chi,\psi)\mapsto \chi\cdot\psi$ defined by $(\chi\cdot\psi)(x)=\chi(x)\psi(x)$ for all $x\in A$. $\widehat{A}$ is called the dual of $A$.

Let $A=\mathbb{Z}_n$, where $\mathbb{Z}_n$ is the cyclic group of order $n$, with $n\ge 2$, $n\in \mathbb{N}$. Let $\omega:=e^{\frac{2\pi i}{n}}$. For $x\in \mathbb{Z}_n$, define $\chi_x\in L(\mathbb{Z}_n)$ to be the function $z\mapsto \omega^{zx}$.

Exercise 1: Check that for $x\in \mathbb{Z}_n$, $\chi_x$ is indeed a character of $\mathbb{Z}_n$, and $\widehat{\mathbb{Z}_n}=\{\chi_x:x\in \mathbb{Z}_n\}$ is isomorphic to $\mathbb{Z}_n$.

Now we define the Discrete Fourier transform. Let $A$ be a finite Abelian group. The Fourier transform of a function $f\in L(A)$ is the function $\hat{f}\in L(\widehat{A})$ defined by

$\widehat{f}(\chi)=\sum_{y\in A}f(y)\overline{\chi(y)}$

for all $\chi\in \widehat{A}$. Then $\widehat{f}(\chi)$ is called the Fourier coefficient of $f$ with respect to $\chi$. When $A=\mathbb{Z}_n$ and $f\in L(\mathbb{Z}_n)$, we call $\frac{1}{n}\hat{f}$ the Discrete Fourier transform (DFT) of $f$. Specifically, for $x\in\mathbb{Z}_n$ and $\omega=e^{\frac{2\pi i}{n}}$, the DFT of $f$ is

$\hat{f}(x)=\frac{1}{n}\sum_{y\in\mathbb{Z}_n}f(y)\omega^{-xy}.$
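To connect this definition with standard FFT conventions, here is a small numerical check (added for illustration): the DFT above agrees with numpy's FFT up to the $1/n$ normalization.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(n)  # an arbitrary f in L(Z_n)

# the DFT as defined above: fhat(x) = (1/n) sum_y f(y) omega^{-xy}
omega = np.exp(2j * np.pi / n)
dft = np.array([sum(f[y] * omega ** (-x * y) for y in range(n))
                for x in range(n)]) / n

# numpy's unnormalized FFT computes sum_y f(y) e^{-2 pi i x y / n}
print(np.allclose(dft, np.fft.fft(f) / n))  # True
```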

The uncertainty principle is the following (see Theorem 2.5.4 of [1]):
Theorem (Uncertainty principle): Let $f\in L(A)$ and suppose that $f\neq 0$. Then

$|\mathrm{supp}(f)|\cdot|\mathrm{supp}(\hat{f})|\ge |A|.$

Now we come to the main part of this lecture: Tao's uncertainty principle for cyclic groups of prime order. This improves the inequality in the above discrete uncertainty principle when the finite Abelian group $A$ is cyclic of prime order $p$.

The main theorem in [2] is the following:
Theorem (Tao’s uncertainty principle): Let $p$ be a prime number and $f\in L(\mathbb{Z}_p)$ non-zero. Then

$|\mathrm{supp}(f)|+|\mathrm{supp}(\hat{f})|\ge p+1.$

Conversely, if $\emptyset\neq A,A'\subset \mathbb{Z}_p$ are two subsets such that $|A|+|A'|=p+1$, then there exists $f\in L(\mathbb{Z}_p)$ such that $\mathrm{supp}(f)=A$ and $\mathrm{supp}(\hat{f})=A'$.
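Before turning to the proof machinery, here is a small brute-force illustration of Tao's inequality for $p = 5$ (my addition, restricted to $0/1$-valued $f$ so that the search space is finite; the theorem of course covers all nonzero $f$):

```python
import numpy as np
from itertools import product

p = 5
omega = np.exp(2j * np.pi / p)
F = omega ** (-np.outer(np.arange(p), np.arange(p)))  # unnormalized DFT matrix

min_sum = 2 * p
for bits in product([0, 1], repeat=p):
    if not any(bits):
        continue  # the theorem requires f nonzero
    f = np.array(bits, dtype=complex)
    fhat = F @ f
    supp = int(np.count_nonzero(np.abs(f) > 1e-9))
    supp_hat = int(np.count_nonzero(np.abs(fhat) > 1e-9))
    min_sum = min(min_sum, supp + supp_hat)
assert min_sum >= p + 1
print("min |supp(f)| + |supp(fhat)| over 0/1 vectors:", min_sum)  # 6 = p + 1
```

The all-ones vector attains equality: its transform is supported only at $0$, giving $p + 1$ exactly.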

To prove the theorem, we present some specific tools developed in Tao’s paper [2].

Recall that $\mathbb{Z}[x]$ denotes the ring of polynomials with integer coefficients.

Proposition 1:
Let $P(x_1,x_2,\cdots,x_n)$ be a polynomial in the variables $x_1,x_2,\cdots,x_n$ with integer coefficients. Suppose that for some $i\neq j$,

$P(x_1,x_2,\cdots,x_n)|_{x_i=x_j}\equiv 0.$

Then there exists a polynomial $Q(x_1,x_2,\cdots,x_n)$ with integer coefficients such that

$P(x_1,x_2,\cdots,x_n)=(x_i-x_j)Q(x_1,x_2,\cdots,x_n)$.

Remark: The proof of this proposition can be omitted on a first reading. Instead, considering $P(x_1,x_2)=x_1^2+x_1x_2-2x_2^2$ and trying to find the corresponding $Q(x_1,x_2)$ given by this proposition would be helpful for understanding.
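Taking up the remark's example (a quick numerical check rather than a symbolic one): $P(x_1,x_2) = x_1^2 + x_1x_2 - 2x_2^2$ vanishes on $x_1 = x_2$, and one can verify on an integer grid that $Q(x_1,x_2) = x_1 + 2x_2$ works.

```python
# P vanishes identically on the diagonal x1 = x2, so Proposition 1 promises a
# factor (x1 - x2); here Q = x1 + 2*x2, checked exactly on an integer grid.
P = lambda x1, x2: x1 ** 2 + x1 * x2 - 2 * x2 ** 2
Q = lambda x1, x2: x1 + 2 * x2

assert all(P(t, t) == 0 for t in range(-5, 6))
assert all(P(a, b) == (a - b) * Q(a, b)
           for a in range(-5, 6) for b in range(-5, 6))
print("P = (x1 - x2) * Q verified on a grid")
```

Since two polynomials of degree 2 agreeing on an $11 \times 11$ grid must coincide, this grid check actually certifies the identity.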

Proof:
For the sake of simplicity, suppose that $i = 1$ and $j = 2$, so that $P(x_1,x_1,x_3,\cdots,x_n) \equiv 0$. Let us denote by $P_1(x_1,x_2,\cdots,x_n)$ (respectively $P_2(x_1,x_2,\cdots,x_n)$) the sum of the monomials of $P(x_1,x_2,\cdots,x_n)$ with positive (respectively negative) coefficients, so that

$P(x_1,x_2,\cdots,x_n) = P_1(x_1,x_2,\cdots,x_n) + P_2(x_1,x_2,\cdots,x_n).$

Note that

$P_1(x_1,x_1,x_3,\cdots,x_n) =- P_2(x_1,x_1,x_3,\cdots,x_n),$

since $P(x_1 , x_1 ,x_3, \cdots , x_n ) \equiv 0$. This implies that there exists a bijection between the monomials in $P_1(x_1,x_1,x_3,\cdots,x_n)$ and those in $P_2(x_1,x_1,x_3,\cdots,x_n)$. More precisely, let us fix $m > 0$ and $k, k_3 , \cdots, k_n \ge 0$; then the monomial $mx_1^k x_3^{k_3}\cdots x_n^{k_n}$ appears in $P_1(x_1,x_1,x_3,\cdots,x_n)$ if and only if $-mx_1^k x_3^{k_3}\cdots x_n^{k_n}$ appears in $P_2(x_1,x_1,x_3,\cdots,x_n)$. Suppose this is the case. Then we can find non-negative integers $m_0 , m_1 , \cdots , m_k$ and $n_0 , n_1 , \cdots, n_k$ such that the sum of the monomials of $P(x_1,x_2,x_3,\cdots,x_n)$ whose variable $x_i$ has degree $k_i$ for $i = 3,\cdots , n$ and whose degrees $k_1, k_2$ in $x_1, x_2$ satisfy $k_1 + k_2 = k$ is

$\sum_{l=0}^k m_lx_1^{k-l}x_2^lx_3^{k_3}\cdots x_n^{k_n}-\sum_{l=0}^k n_lx_1^{k-l}x_2^lx_3^{k_3}\cdots x_n^{k_n}$

and

$m_0+m_1+\cdots+m_k=n_0+n_1+\cdots+n_k=m$

but also such that

$m_l\neq 0\implies n_l=0 \qquad\text{and}\qquad n_l\neq 0\implies m_l=0$

(because otherwise there would be a cancellation). Thus, with every monomial $x_1^{k-l}x_2^lx_3^{k_3}\cdots x_n^{k_n}$ such that $m_l\neq 0$ we can (arbitrarily but bijectively) associate a monomial $x_1^{k-h}x_2^hx_3^{k_3}\cdots x_n^{k_n}$ with $n_h\neq 0$ and $h\neq l$. Now for $h>l$ we have the identity

$x_1^{k-l}x_2^l-x_1^{k-h}x_2^h=x_2^lx_1^{k-h}(x_1^{h-l}-x_2^{h-l})$

$=x_2^lx_1^{k-h}(x_1-x_2)(x_1^{h-l-1}+x_1^{h-l-2}x_2+\cdots+x_2^{h-l-1}).$

Exchanging $h$ with $l$ we get the analogous identity for $h < l$. This shows that

$\sum_{l=0}^k m_lx_1^{k-l}x_2^lx_3^{k_3}\cdots x_n^{k_n}-\sum_{l=0}^k n_lx_1^{k-l}x_2^lx_3^{k_3}\cdots x_n^{k_n}$

is divisible by $x_1-x_2$.

Repeating the argument for each monomial $mx_1^kx_3^{k_3}\cdots x_n^{k_n}$ with $m>0$ and $k,k_3,\cdots,k_n\ge 0$ appearing in $P_1(x_1,\cdots,x_n)$, we deduce that $P(x_1,\cdots,x_n)$ is divisible by $x_1-x_2$.

Lemma 1: Let $p$ be a prime, $n$ a positive integer, and $P(x_1,x_2,\cdots,x_n)$ a polynomial with integer coefficients. Suppose that $\omega_1,\cdots,\omega_n$ are (not necessarily distinct) $p$th roots of unity such that $P(\omega_1,\cdots,\omega_n)=0$. Then $P(1,\cdots,1)$ is divisible by $p$.
Proof:
Letting $\omega=e^{\frac{2\pi i}{p}}$ we can find integers $0\le k_j\le p-1$ such that $\omega_j=\omega^{k_j}$ for $j=1,2,\cdots,n$.

Define the polynomials $q(x),r(x)\in\mathbb{Z}[x]$ by setting

$P(x^{k_1},\cdots,x^{k_n})=(x^p-1)q(x)+r(x)$

where $\deg r<p$. Then $r(\omega)=0$, and since $\deg r<p$ we deduce that $r(x)$ is a multiple of the minimal polynomial of $\omega$, that is, $r(x)=m(1+x+\cdots+x^{p-1})$ for some $m\in\mathbb{Z}$; here we used the fact from undergraduate algebra that the minimal polynomial of $\omega$ over $\mathbb{Q}$ is $1+x+\cdots+x^{p-1}$. It follows that $P(1,\cdots,1)=r(1)=mp$.
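Lemma 1 can be spot-checked on a hypothetical example with $p=5$: the polynomial $P$ below (our own choice) vanishes at the pair of $5$th roots of unity $(\omega,1)$, and $P(1,1)=5$ is indeed divisible by $p$.

```python
# Sketch of Lemma 1 for p = 5 on a hypothetical polynomial P.
import sympy as sp

p = 5
x1, x2 = sp.symbols('x1 x2')
omega = sp.exp(2 * sp.pi * sp.I / p)

# P(omega, 1) = 0 since 1 + omega + ... + omega**4 = 0.
P = 1 + x1 + x1**2 + x1**3 + x1**4 + (x2 - 1)

assert abs(complex(P.subs({x1: omega, x2: 1}))) < 1e-9
assert P.subs({x1: 1, x2: 1}) % p == 0
```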

Theorem 3 (Chebotarev):
Let $p$ be a prime and $1\le n\le p$. Let $\eta_1,\cdots,\eta_n$ (respectively $\xi_1,\cdots,\xi_n$) be distinct elements of $\{0,1,\cdots,p-1\}$. Then the matrix

$A=\left(e^{\frac{2\pi i\eta_h\xi_k}{p}}\right)_{h,k=1}^n$

is nonsingular.
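Before turning to the proof, the statement can be spot-checked numerically. The sketch below (for the hypothetical choice $p=7$, $n=3$) verifies that every $3\times 3$ submatrix of the $p\times p$ matrix of $p$th roots of unity with distinct row and column indices has nonzero determinant; this is an illustration, not a proof.

```python
# Numerical spot check of Theorem 3 for p = 7, n = 3.
import itertools
import numpy as np

p, n = 7, 3
# F[h, k] = exp(2*pi*i*h*k/p), the p x p matrix of p-th roots of unity.
F = np.exp(2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p)

for rows in itertools.combinations(range(p), n):
    for cols in itertools.combinations(range(p), n):
        A = F[np.ix_(rows, cols)]
        # Chebotarev: every such minor is nonsingular.
        assert abs(np.linalg.det(A)) > 1e-8
```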

Proof: Set $\omega_h=e^{\frac{2\pi i\eta_h}{p}}$ for $h=1,2,\cdots,n$. Note that the $\omega_h$s are distinct $p$th roots of unity and $A=\left(\omega_h^{\xi_k}\right)_{h,k=1}^n$. Define the polynomial $D(x_1,\cdots,x_n)$ with integer coefficients by setting

$D(x_1,\cdots,x_n)=\mathrm{det}\left(x_h^{\xi_k}\right)_{h,k=1}^n.$

As the determinant is an alternating form, we have $D(x_1,\cdots,x_n)\big|_{x_h=x_k}\equiv 0$ whenever $1\le h\neq k\le n$, so that, by recursively applying Proposition 1, we can find a polynomial $Q(x_1,\cdots,x_n)$ with integer coefficients such that

$D(x_1,\cdots,x_n)=Q(x_1,\cdots,x_n)\prod_{1\le h<k\le n}(x_k-x_h).$

To prove the theorem it is equivalent to show that $Q(\omega_1,\cdots,\omega_n)\neq 0$ (because the $\omega_h$s are all distinct, the product $\prod_{1\le h<k\le n}(\omega_k-\omega_h)$ is nonzero), so by Lemma 1 it suffices to show that $p$ does not divide $Q(1,\cdots,1)$. We need three more lemmas to complete the proof. Let us first introduce some notation.

Given an $n$-tuple $\mathbf{k}=(k_1,\cdots,k_n)$ of non-negative integers, we say that the (monomial) differential operator

$L=\mathbf{L_k}=\left(x_1\frac{\partial}{\partial x_1}\right)^{k_1}\cdots\left(x_n\frac{\partial}{\partial x_n}\right)^{k_n}$

is of type $\mathbf{k}$ and order $o(\mathbf{k})=k_1+\cdots+k_n$.
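Since $x\frac{d}{dx}$ sends $x^m$ to $mx^m$, the operator $\mathbf{L_k}$ acts diagonally on monomials, scaling $x_1^{m_1}\cdots x_n^{m_n}$ by $m_1^{k_1}\cdots m_n^{k_n}$. A minimal sympy sketch of this behaviour (the helper `euler` is our own, not from the text):

```python
# Sketch: the Euler operator x*d/dx fixes each monomial up to the
# scalar m**k, so an operator of type k acts diagonally on monomials.
import sympy as sp

x, y = sp.symbols('x y')

def euler(expr, var, k):
    """Apply (var * d/dvar) to expr, k times."""
    for _ in range(k):
        expr = sp.expand(var * sp.diff(expr, var))
    return expr

# Operator of type (2, 3) applied to x**4 * y**5 scales it by 4**2 * 5**3.
result = euler(euler(x**4 * y**5, x, 2), y, 3)
assert result == 4**2 * 5**3 * x**4 * y**5
```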

Lemma 2:
Let $L$ be a differential operator of type $\mathbf{k}$ and $F(x_1,\cdots,x_n)$ and $G(x_1,\cdots,x_n)$ two polynomials. Then

$L(FG)=\sum_{(\mathbf{i},\mathbf{j})}L_{\mathbf{i}}(F)\cdot L_{\mathbf{j}}(G)$

where the sum runs over all pairs $(\mathbf{i},\mathbf{j})$ such that (componentwise) $\mathbf{i}+\mathbf{j}=\mathbf{k}$ (and therefore $o(\mathbf{i})+o(\mathbf{j})=o(\mathbf{k})$), each pair counted with multiplicity $\binom{k_1}{i_1}\cdots\binom{k_n}{i_n}$, exactly as in the Leibniz rule for ordinary derivatives.
Proof:
Left as an exercise.

Exercise 2: Prove Lemma 2. (Hint: use induction on the order of $L$.)
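Lemma 2 is the Leibniz rule for the commuting derivations $x_i\frac{\partial}{\partial x_i}$; reading the sum with the usual product-of-binomials multiplicity $\binom{k_1}{i_1}\cdots\binom{k_n}{i_n}$ for each pair, it can be spot-checked in sympy. The polynomials $F$, $G$ and the type $\mathbf{k}=(2,1)$ below are arbitrary choices of ours:

```python
# Sketch checking the Leibniz-type rule of Lemma 2 in two variables.
import math
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def L(expr, k):
    """Apply the operator (x1 d/dx1)^k[0] (x2 d/dx2)^k[1]."""
    for var, ki in zip((x1, x2), k):
        for _ in range(ki):
            expr = sp.expand(var * sp.diff(expr, var))
    return expr

F = x1**2 + x2            # arbitrary test polynomials
G = x1 * x2 + 1
k = (2, 1)

lhs = L(F * G, k)
rhs = sum(math.comb(k[0], i0) * math.comb(k[1], i1)
          * L(F, (i0, i1)) * L(G, (k[0] - i0, k[1] - i1))
          for i0 in range(k[0] + 1) for i1 in range(k[1] + 1))
assert sp.expand(lhs - rhs) == 0
```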

Lemma 3:
For $1\le j\le n$ and $1\le h\le j-1$ we have

$\left(x_j\frac{\partial}{\partial x_j}\right)^{h}(x_j-x_1)(x_j-x_2)\cdots (x_j-x_{j-1})=\sum_{t=1}^ha_{h,t}x_j^t\sum_{\mathbf{i}_t}\prod_{1\le i\le j-1,i\neq i_1,\cdots,i_t}(x_j-x_i)$

where $\sum_{\mathbf{i}_t}$ runs over all $\mathbf{i}_t=(i_1,\cdots,i_t)$ with $1\le i_1<\cdots<i_t\le j-1$, and the $a_{h,t}=a_{h,t}(j)$s are non-negative integers such that $a_{h,h}=h!$. In particular,

$\left(x_j\frac{\partial}{\partial x_j}\right)^{j-1}(x_j-x_1)(x_j-x_2)\cdots (x_j-x_{j-1})=(j-1)!x_j^{j-1}+\text{terms containing at least one factor } (x_j-x_i)$

with $1\le i\le j-1$.
Proof:
The proof is by induction on $h=1,\cdots,j-1$; for the details see Lemma 2.6.15 in Section 2.6 of [1].
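The "in particular" statement of Lemma 3 can be spot-checked for the smallest interesting case $j=3$:

```python
# Sketch: (x3 d/dx3)^2 (x3 - x1)(x3 - x2) = 2! x3^2 + terms carrying
# a factor (x3 - x1) or (x3 - x2).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

expr = (x3 - x1) * (x3 - x2)
for _ in range(2):
    expr = sp.expand(x3 * sp.diff(expr, x3))

# The remainder after removing 2! x3^2 vanishes at x1 = x2 = x3,
# consistent with each term carrying a factor (x3 - x1) or (x3 - x2).
remainder = sp.expand(expr - 2 * x3**2)
assert remainder.subs({x1: x3, x2: x3}) == 0
```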

Lemma 4:
Let $L=L_{(0,1,\cdots,n-1)}$, that is,

$L=\left(x_1\frac{\partial}{\partial x_1}\right)^{0}\cdots\left(x_n\frac{\partial}{\partial x_n}\right)^{n-1}.$

Then if $D(x_1,\cdots,x_n)$ and $Q(x_1,\cdots,x_n)$ are given by

$D(x_1,\cdots,x_n)=Q(x_1,\cdots,x_n)\prod_{1\le h<k\le n}(x_k-x_h),$

we have

$[LD](1,\cdots,1)=\left(\prod_{j=1}^n (j-1)!\right)Q(1,\cdots,1).$

Proof:
By Lemmas 2 and 3, we have

$[LD](x_1,\cdots,x_n)=\left(\prod_{j=1}^n(j-1)!\,x_j^{j-1}\right)Q(x_1,\cdots,x_n)+\text{terms containing at least one factor }(x_j-x_i)$

with $1\le i<j\le n$. Here the first term is obtained by letting the whole of $L$ act on the factor $\prod_{1\le h<k\le n}(x_k-x_h)$ of $D$. In particular, setting $x_i=1$ for $i=1,\cdots,n$ kills every leftover term and completes the proof.
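Lemma 4 can also be spot-checked symbolically for a small hypothetical instance, say $n=3$ with exponents $\xi=(0,1,3)$ (any distinct values would do): the sketch computes $D$, extracts $Q$ by exact division by the Vandermonde product, applies $L=L_{(0,1,2)}$, and compares both sides at $(1,1,1)$.

```python
# Sketch verifying Lemma 4 for the hypothetical choice n = 3, xi = (0, 1, 3).
import math
import sympy as sp

x = sp.symbols('x1:4')   # (x1, x2, x3)
xi = (0, 1, 3)           # distinct exponents, arbitrary choice
n = 3

# D = det(x_h^{xi_k}) and the Vandermonde product V = prod (x_k - x_h).
D = sp.Matrix(n, n, lambda h, k: x[h]**xi[k]).det()
V = sp.Integer(1)
for h in range(n):
    for k in range(h + 1, n):
        V *= x[k] - x[h]
Q = sp.cancel(D / V)     # exact division: V divides D

# Apply L = (x1 d/dx1)^0 (x2 d/dx2)^1 (x3 d/dx3)^2 to D.
expr = D
for j in range(n):
    for _ in range(j):
        expr = sp.expand(x[j] * sp.diff(expr, x[j]))

ones = {v: 1 for v in x}
lhs = expr.subs(ones)
rhs = math.prod(math.factorial(j) for j in range(n)) * Q.subs(ones)
assert lhs == rhs
```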

Now we can finish the proof of Theorem 3 with these lemmas.

Proof of Theorem 3 (continued):
For $L$ as in Lemma 4, we have (where $S_n$ denotes the symmetric group of degree