Math 31AH: Lecture 10

Let \mathbf{V} be a vector space, and let \langle \cdot,\cdot \rangle be a bilinear form on \mathbf{V}. Note that we are not assuming this bilinear form is a scalar product — it might not satisfy the second and third scalar product axioms (symmetry and positive definiteness).

Definition 1: The quadratic form associated to \langle \cdot,\cdot \rangle is the function Q \colon \mathbf{V} \to \mathbb{R} defined by

Q(\mathbf{v}) := \langle \mathbf{v},\mathbf{v} \rangle.

Note that if \langle \cdot,\cdot \rangle is a scalar product, then Q(\mathbf{v}) = \langle \mathbf{v},\mathbf{v} \rangle = \|\mathbf{v}\|^2 is the square of the norm of \mathbf{v}. However, since we do not assume that \langle \cdot,\cdot \rangle is a scalar product, Q might take negative values on some vectors, and it might be zero on some vectors which are not the zero vector.

For an example, let us consider the Lorentz bilinear form, which you may recall from Lecture 8 is the bilinear form on \mathbb{R}^4 defined by

\langle (x_1,x_2,x_3,x_4), (y_1,y_2,y_3,y_4) \rangle = x_1y_1 + x_2y_2 + x_3y_3 -x_4y_4.

The associated quadratic form is

Q(x_1,x_2,x_3,x_4) = x_1^2 + x_2^2 + x_3^2 - x_4^2,

and for example we have

Q(1,0,0,1) = 0 \quad\text{ and }\quad Q(0,0,0,1) = -1.
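If you want to play with this numerically, here is a minimal NumPy sketch (the names lorentz_form and Q below are just for illustration, not notation from the course):

import numpy as np

# Gram matrix of the Lorentz bilinear form in the standard basis of R^4
G = np.diag([1.0, 1.0, 1.0, -1.0])

def lorentz_form(x, y):
    # <x, y> = x1*y1 + x2*y2 + x3*y3 - x4*y4
    return x @ G @ y

def Q(x):
    # associated quadratic form Q(x) = <x, x>
    return lorentz_form(x, x)

print(Q(np.array([1.0, 0.0, 0.0, 1.0])))   # 0.0
print(Q(np.array([0.0, 0.0, 0.0, 1.0])))   # -1.0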

Although the Lorentz bilinear form violates positive definiteness, it is symmetric: we do in fact have

\langle (x_1,x_2,x_3,x_4), (y_1,y_2,y_3,y_4) \rangle = \langle (y_1,y_2,y_3,y_4), (x_1,x_2,x_3,x_4) \rangle.

Symmetric bilinear forms are quite well-behaved, even if they do sometimes violate positive definiteness. In particular, symmetric bilinear forms are uniquely determined by their associated quadratic form.

Theorem 1: Suppose that \langle \cdot,\cdot \rangle_1 and \langle \cdot,\cdot \rangle_2 are symmetric bilinear forms on the same vector space \mathbf{V}. Then

\langle \mathbf{v},\mathbf{w} \rangle_1 = \langle \mathbf{v},\mathbf{w} \rangle_2 \quad \forall \mathbf{v},\mathbf{w} \in \mathbf{V}

if and only if

\langle \mathbf{v},\mathbf{v} \rangle_1 = \langle \mathbf{v},\mathbf{v} \rangle_2 \quad \forall \mathbf{v} \in \mathbf{V}.

Proof: One direction of the equivalence is obvious: if the two bilinear forms are the same, then they induce the same quadratic form. However, the other direction is non-obvious: for any bilinear form, the table

\langle \mathbf{v},\mathbf{w} \rangle, \quad \mathbf{v},\mathbf{w} \in \mathbf{V}

contains much more information than the list

\langle \mathbf{v},\mathbf{v} \rangle, \quad \mathbf{v}\in \mathbf{V}.

But a little algebra shows that the latter (smaller) data set in fact determines the former (bigger) data set. Indeed, for any two vectors \mathbf{v},\mathbf{w} \in \mathbf{V}, thanks to bilinearity we have

\langle \mathbf{v}+\mathbf{w},\mathbf{v}+\mathbf{w} \rangle = \langle \mathbf{v},\mathbf{v}\rangle + \langle \mathbf{v},\mathbf{w} \rangle + \langle \mathbf{w},\mathbf{v} \rangle + \langle \mathbf{w},\mathbf{w} \rangle,

or equivalently

\langle \mathbf{v},\mathbf{w} \rangle + \langle \mathbf{w},\mathbf{v} \rangle = \langle \mathbf{v}+\mathbf{w},\mathbf{v}+\mathbf{w} \rangle - \langle \mathbf{v},\mathbf{v}\rangle - \langle \mathbf{w},\mathbf{w} \rangle.

Notice that the right hand side only involves pairings of vectors with themselves. Moreover, in the presence of symmetry, the two quantities on the left hand side are the same. We thus obtain

\langle \mathbf{v},\mathbf{w} \rangle = \frac{1}{2}\left( Q(\mathbf{v}+\mathbf{w}) - Q(\mathbf{v}) - Q(\mathbf{w}) \right),

where Q(\cdot) is the quadratic form associated to the symmetric bilinear form \langle \cdot,\cdot \rangle. In particular, if two symmetric bilinear forms induce the same quadratic form, applying this identity to each of them shows that they agree on every pair of vectors \mathbf{v},\mathbf{w}.

— Q.E.D.
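The identity at the heart of this proof is often called the polarization identity. Here is a quick numerical sanity check, a sketch assuming we represent a symmetric bilinear form on \mathbb{R}^4 by a symmetric matrix A via \langle \mathbf{v},\mathbf{w} \rangle = \mathbf{v}^T A \mathbf{w}:

import numpy as np

rng = np.random.default_rng(0)

# a random symmetric matrix A defines a symmetric bilinear form <v, w> = v^T A w on R^4
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2

def form(v, w):
    return v @ A @ w

def Q(v):
    return form(v, v)

v = rng.standard_normal(4)
w = rng.standard_normal(4)

# the identity from the proof: <v, w> = (Q(v + w) - Q(v) - Q(w)) / 2
print(np.isclose(form(v, w), (Q(v + w) - Q(v) - Q(w)) / 2))   # True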

In view of Theorem 1, the study of any symmetric bilinear form reduces to the study of its associated quadratic form. For the remainder of the lecture, let us restrict to the setting where \mathbf{V} is an n-dimensional vector space. Let \langle \cdot,\cdot \rangle be a symmetric bilinear form on \mathbf{V}, and let Q(\cdot) be the corresponding quadratic form. Let A=\{\mathbf{a}_1,\dots,\mathbf{a}_n\} be any basis of \mathbf{V}. To evaluate Q(\mathbf{v}) for any vector \mathbf{v} \in \mathbf{V}, we first write \mathbf{v} as a linear combination of vectors in A,

\mathbf{v} = x_1\mathbf{a}_1 + \dots + x_n\mathbf{a}_n,

and then use bilinearity and symmetry to write Q(\mathbf{v}) as

Q(\mathbf{v}) \\ = \left\langle \sum_{i=1}^n x_i \mathbf{a}_i, \sum_{j=1}^n x_j \mathbf{a}_j \right\rangle \\ = \sum_{i,j=1}^n x_ix_j \langle \mathbf{a}_i,\mathbf{a}_j \rangle \\ = \sum_{i=1}^n x_i^2\langle \mathbf{a}_i,\mathbf{a}_i \rangle + 2 \sum_{1 \leq i<j\leq n} x_ix_j\langle \mathbf{a}_i,\mathbf{a}_j \rangle.

This looks cleaner if we write a_{ij} = \langle \mathbf{a}_i,\mathbf{a}_j \rangle, as it becomes

Q(\mathbf{v}) = \sum_{i=1}^n a_{ii}x_i^2 + 2 \sum_{1 \leq i<j\leq n} a_{ij}x_ix_j.
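As a sanity check on this bookkeeping, here is a short NumPy sketch: the symmetric matrix G below stands in for the numbers a_{ij} = \langle \mathbf{a}_i,\mathbf{a}_j \rangle, and we compare the double sum \sum_{i,j} a_{ij}x_ix_j with the grouping into squares and cross terms.

import numpy as np

rng = np.random.default_rng(1)
n = 3

# symmetric matrix standing in for the numbers a_ij = <a_i, a_j>
G = rng.standard_normal((n, n))
G = (G + G.T) / 2

# coordinates x_1, ..., x_n of v in the basis a_1, ..., a_n
x = rng.standard_normal(n)

# double sum over all pairs (i, j)
double_sum = sum(G[i, j] * x[i] * x[j] for i in range(n) for j in range(n))

# diagonal squares plus twice the terms with i < j
grouped = (
    sum(G[i, i] * x[i] ** 2 for i in range(n))
    + 2 * sum(G[i, j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
)

print(np.isclose(double_sum, grouped))   # True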

In particular, now we can see where the term “quadratic form” comes from: the right hand side of the above equation is a polynomial in the variables x_1,\dots,x_n, and each monomial in this polynomial is quadratic in the sense that it involves a product of exactly two of these variables. For example, when n=2, a quadratic form looks like

Q(\mathbf{v}) = a_{11}x_1^2 + a_{22}x_2^2 + 2a_{12}x_1x_2,

which is a homogeneous degree 2 polynomial in two variables, and when n=3 a quadratic form looks like

Q(\mathbf{v}) = a_{11}x_1^2 + a_{22}x_2^2 +a_{33}x_3^2+ 2a_{12}x_1x_2 +2a_{13}x_1x_3 + 2a_{23}x_2x_3,

which is a homogeneous degree 2 polynomial in three variables (exercise: infer from context what the word “homogeneous” means here).
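As a hint for that exercise: one property you should be able to verify, either by hand or numerically as in the sketch below (the coefficient values are arbitrary, chosen just for illustration), is that Q(t\mathbf{v}) = t^2 Q(\mathbf{v}) for every scalar t.

import numpy as np

# an n = 2 quadratic form Q(v) = a11*x1^2 + a22*x2^2 + 2*a12*x1*x2
a11, a22, a12 = 2.0, -1.0, 0.5

def Q(x1, x2):
    return a11 * x1 ** 2 + a22 * x2 ** 2 + 2 * a12 * x1 * x2

x1, x2, t = 0.7, -1.3, 3.0

# scaling the input by t scales the output by t^2
print(np.isclose(Q(t * x1, t * x2), t ** 2 * Q(x1, x2)))   # True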

From this point of view, we can see what is special about quadratic forms that actually do come from scalar products. If \langle \cdot,\cdot \rangle is a scalar product on \mathbf{V}, then we know that there exists a basis E=\{\mathbf{e}_1,\dots,\mathbf{e}_n\} which is an orthonormal set relative to this scalar product. In this case, for every vector \mathbf{v} \in \mathbf{V}, we have

Q(\mathbf{v}) = x_1^2 + x_2^2 + \dots + x_n^2,

where \mathbf{v} = x_1\mathbf{e}_1 + \dots + x_n\mathbf{e}_n. The fact that Q(\mathbf{v}) is a sum of squares is a reflection of, and equivalent to, the fact that the symmetric bilinear form \langle \cdot,\cdot \rangle from which it comes is positive definite. On the other hand, we know that the Lorentz bilinear form is not a scalar product. Nevertheless, as we saw above, its quadratic form is a sum of squares, albeit with one negative coefficient. This raises the question: which quadratic forms can be represented as a sum of squares? The answer is: those quadratic forms which come from a symmetric bilinear form \langle \cdot,\cdot \rangle for which we can find a “pseudo-orthonormal” basis, i.e. a basis E=\{\mathbf{e}_1,\dots,\mathbf{e}_n\} which interacts with \langle \cdot,\cdot \rangle in such a way that \langle \mathbf{e}_i,\mathbf{e}_i \rangle = \pm 1 and \langle \mathbf{e}_i,\mathbf{e}_j \rangle = 0 whenever i \neq j. Remarkably enough, such a basis always exists.

Theorem 2: Given any symmetric bilinear form \langle \cdot,\cdot \rangle on an n-dimensional vector space \mathbf{V}, the associated quadratic form Q(\cdot) can be represented as a sum of p positive squares and q negative squares. That is, there exists a basis E=\{\mathbf{e}_1,\dots,\mathbf{e}_n\} such that the associated quadratic form satisfies

Q(\mathbf{v}) = x_1^2 + \dots + x_p^2 - x_{p+1}^2 - \dots - x_{p+q}^2

for some p,q \in \{0,1,\dots,n\} such that p+q \leq n, where \mathbf{v} = x_1\mathbf{e}_1 + \dots + x_n\mathbf{e}_n.

The proof of Theorem 2 is a long and tedious exercise in the high school algebra technique of completing the square, which involves checking a lot of cases. You might want to try to figure this out for yourself in the case n=2, where it literally does come down to completing the square, and see if you can extrapolate from there. We will see a different proof when we talk about diagonalizing symmetric matrices. As for right now, a conceptually much more interesting phenomenon is that, for any two bases which represent Q as a sum of positive squares and negative squares, the number of positive squares and negative squares is the same in both representations.
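To give the flavour, here is a sketch of the main case when n=2 and a_{11} \neq 0 (the cases with a_{11} = 0 are among the ones that make the general proof tedious). Completing the square in x_1 gives

Q(\mathbf{v}) = a_{11}x_1^2 + 2a_{12}x_1x_2 + a_{22}x_2^2 = a_{11}\left(x_1 + \frac{a_{12}}{a_{11}}x_2\right)^2 + \left(a_{22} - \frac{a_{12}^2}{a_{11}}\right)x_2^2,

so that, after introducing the rescaled coordinates x_1' = \sqrt{|a_{11}|}\left(x_1 + \frac{a_{12}}{a_{11}}x_2\right) and x_2' = \sqrt{\left|a_{22} - \frac{a_{12}^2}{a_{11}}\right|}\,x_2 (dropping the second term entirely if its coefficient is zero), Q(\mathbf{v}) becomes \pm (x_1')^2 \pm (x_2')^2.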

Theorem 3: Let \langle \cdot,\cdot \rangle be a symmetric bilinear form on an n-dimensional vector space \mathbf{V}, and let Q(\cdot) be the corresponding quadratic form. Suppose that

Q(\mathbf{v})= x_1^2+\dots+x_{p_1}^2-x_{p_1+1}^2 - \dots - x_{p_1+q_1}^2= y_1^2+\dots+y_{p_2}^2-y_{p_2+1}^2 - \dots - y_{p_2+q_2}^2,

where E=\{\mathbf{e}_1,\dots,\mathbf{e}_n\} and F=\{\mathbf{f}_1,\dots,\mathbf{f}_n\} are two bases of \mathbf{V} and

\mathbf{v}=x_1\mathbf{e}_1 + \dots + x_n\mathbf{e}_n = y_1\mathbf{f}_1 + \dots + y_n\mathbf{f}_n.

Then (p_1,q_1)=(p_2,q_2).

Proof: Suppose that (p_1,q_1) \neq (p_2,q_2). Then either p_1 \neq p_2 or q_1 \neq q_2, and without loss of generality we may assume p_1>p_2 (if instead p_1 = p_2 and q_1 \neq q_2, the same argument works with the roles of the positive and negative squares interchanged). Let \mathbf{W}_1 be the subspace of \mathbf{V} spanned by the vectors \mathbf{e}_1,\dots,\mathbf{e}_{p_1}, and let \mathbf{W}_2 be the subspace of \mathbf{V} spanned by \mathbf{f}_{p_2+1},\dots,\mathbf{f}_n. Then

\dim \mathbf{W}_1 + \dim \mathbf{W}_2 = p_1 + n-p_2 > p_1+n-p_1 = n ,

and hence \mathbf{W}_1 \cap \mathbf{W}_2 contains a nonzero vector \mathbf{w}, as proved in Assignment 1. Since \mathbf{w} belongs to both \mathbf{W}_1 and \mathbf{W}_2, it can be written both as

\mathbf{w}=x_1\mathbf{e}_1 + \dots + x_{p_1}\mathbf{e}_{p_1}

and as

\mathbf{w}=y_{p_2+1}\mathbf{f}_{p_2+1} + \dots + y_n\mathbf{f}_n.

We then have

Q(\mathbf{w}) = x_1^2 + \dots + x_{p_1}^2 >0,

since at least one of x_1,\dots,x_{p_1} is nonzero, but also

Q(\mathbf{w}) = -y_{p_2+1}^2 - \dots - y_{p_2+q_2}^2 \leq 0.

But this says that the number Q(\mathbf{w}) is both positive and non-positive, which is a contradiction. Hence our assumption that (p_1,q_1) \neq (p_2,q_2) is false, and it must be the case that (p_1,q_1) = (p_2,q_2).

— Q.E.D.

In view of Theorem 3, we may define the signature of a quadratic form Q to be the pair (p,q) giving the number of positive squares and the number of negative squares in any representation of Q as a sum of squares.
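Looking ahead to the diagonalization of symmetric matrices mentioned above: if a symmetric bilinear form is represented by a symmetric Gram matrix in some basis, one way to compute the signature is to count the signs of the eigenvalues of that matrix. Here is a small NumPy sketch of that computation (the function name signature is just for illustration); for the Lorentz form it returns (3, 1).

import numpy as np

def signature(G, tol=1e-10):
    # G: symmetric Gram matrix of a bilinear form in some basis.
    # By Theorem 3, the counts of positive and negative squares do not
    # depend on the basis; counting eigenvalue signs is one way to get them.
    eigenvalues = np.linalg.eigvalsh(G)
    p = int(np.sum(eigenvalues > tol))
    q = int(np.sum(eigenvalues < -tol))
    return p, q

# Gram matrix of the Lorentz bilinear form in the standard basis of R^4
G = np.diag([1.0, 1.0, 1.0, -1.0])
print(signature(G))   # (3, 1)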
