Let $V$ be a vector space, and let $\langle \cdot, \cdot \rangle$ be a bilinear form on $V$. Note that we are not assuming this bilinear form is a scalar product — it might not satisfy the second and third scalar product axioms (symmetry and positive definiteness).
Definition 1: The quadratic form associated to $\langle \cdot, \cdot \rangle$ is the function $Q \colon V \to \mathbb{R}$ defined by
$$Q(v) = \langle v, v \rangle.$$
Note that if $\langle \cdot, \cdot \rangle$ is a scalar product, then $Q(v) = \|v\|^2$ is the square of the norm of $v$. However, since we do not assume that $\langle \cdot, \cdot \rangle$ is a scalar product, $Q$ might take negative values on some vectors, and it might be zero for some vectors which are not the zero vector.
For an example, let us consider the Lorentz bilinear form, which you may recall from Lecture 8 is the bilinear form on $\mathbb{R}^2$ defined by
$$\langle x, y \rangle = x_1 y_1 - x_2 y_2, \qquad x = (x_1, x_2),\ y = (y_1, y_2) \in \mathbb{R}^2.$$
The associated quadratic form is
$$Q(x) = x_1^2 - x_2^2,$$
and for example we have
$$Q(0, 1) = -1, \qquad Q(1, 1) = 0.$$
Although the Lorentz bilinear form violates positive definiteness, it is symmetric: we do in fact have
$$\langle x, y \rangle = \langle y, x \rangle \quad \text{for all } x, y \in \mathbb{R}^2.$$
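To make the example concrete on a computer, here is a minimal numerical sketch (assuming the two-dimensional Lorentz form written above; the function names are ours, not from the lecture).

```python
import numpy as np

# Gram matrix of the Lorentz bilinear form on R^2 written above:
# <x, y> = x1*y1 - x2*y2
G = np.array([[1.0, 0.0],
              [0.0, -1.0]])

def lorentz_form(x, y):
    """Evaluate the Lorentz bilinear form <x, y> = x^T G y."""
    return x @ G @ y

def Q(x):
    """Quadratic form associated to the Lorentz bilinear form."""
    return lorentz_form(x, x)

print(Q(np.array([0.0, 1.0])))   # -1.0: Q can be negative
print(Q(np.array([1.0, 1.0])))   #  0.0: Q can vanish on a nonzero vector

# Symmetry check: <x, y> == <y, x>
x, y = np.array([2.0, 3.0]), np.array([-1.0, 5.0])
print(np.isclose(lorentz_form(x, y), lorentz_form(y, x)))  # True
```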
Symmetric bilinear forms are quite well-behaved, even if they do sometimes violate positive definiteness. In particular, symmetric bilinear forms are uniquely determined by their associated quadratic form.
Theorem 1: Suppose that $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$ are symmetric bilinear forms on the same vector space $V$, with associated quadratic forms $Q_1$ and $Q_2$. Then $\langle \cdot, \cdot \rangle_1 = \langle \cdot, \cdot \rangle_2$ if and only if $Q_1 = Q_2$.
Proof: One direction of the equivalence is obvious: if the two bilinear forms are the same, then they induce the same quadratic form. However, the other direction is non-obvious — for any bilinear form, the table of values
$$\langle v, w \rangle, \qquad v, w \in V,$$
contains much more information than the list of values
$$\langle v, v \rangle, \qquad v \in V.$$
But in fact, a little algebra shows that the latter (smaller) data set in fact determines the former (bigger) data set. Indeed, for any two vectors $v, w \in V$, thanks to bilinearity we have
$$\langle v+w, v+w \rangle = \langle v, v \rangle + \langle v, w \rangle + \langle w, v \rangle + \langle w, w \rangle,$$
or equivalently
$$\langle v, w \rangle + \langle w, v \rangle = \langle v+w, v+w \rangle - \langle v, v \rangle - \langle w, w \rangle.$$
Notice that the right hand side only involves pairings of two equal vectors. Moreover, in the presence of symmetry, the two quantities on the left hand side are the same. We thus obtain
$$\langle v, w \rangle = \frac{1}{2}\Big( Q(v+w) - Q(v) - Q(w) \Big),$$
where $Q$ is the quadratic form associated to the symmetric bilinear form $\langle \cdot, \cdot \rangle$. This formula shows that a symmetric bilinear form is completely determined by its associated quadratic form, which is exactly the non-obvious direction of the equivalence.
— Q.E.D.
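As a quick numerical sanity check of the polarization identity appearing in the proof, here is a small sketch; the Gram matrix and function names are illustrative choices, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
G = (A + A.T) / 2          # an arbitrary symmetric Gram matrix

def form(v, w):
    """A symmetric bilinear form <v, w> = v^T G w."""
    return v @ G @ w

def Q(v):
    """Its associated quadratic form Q(v) = <v, v>."""
    return form(v, v)

v, w = rng.standard_normal(3), rng.standard_normal(3)
# Polarization identity: <v, w> = (Q(v + w) - Q(v) - Q(w)) / 2
lhs = form(v, w)
rhs = (Q(v + w) - Q(v) - Q(w)) / 2
print(np.isclose(lhs, rhs))  # True
```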
In view of Theorem 1, the study of any symmetric bilinear form reduces to the study of its associated quadratic form. For the remainder of the lecture, let us restrict to the setting where $V$ is an $n$-dimensional vector space. Let $\langle \cdot, \cdot \rangle$ be a symmetric bilinear form on $V$, and let $Q$ be the corresponding quadratic form. Let
$$B = \{b_1, \dots, b_n\}$$
be any basis in $V$. To evaluate $Q(v)$ for any vector $v \in V$, we first write $v$ as a linear combination of vectors in $B$,
$$v = x_1 b_1 + x_2 b_2 + \dots + x_n b_n,$$
and then use bilinearity and symmetry to write $Q(v)$ as
$$Q(v) = \sum_{i=1}^n \langle b_i, b_i \rangle x_i^2 + 2 \sum_{1 \leq i < j \leq n} \langle b_i, b_j \rangle x_i x_j.$$
This looks cleaner if we write $q_{ij} = \langle b_i, b_j \rangle$, as it becomes
$$Q(v) = \sum_{i=1}^n q_{ii} x_i^2 + 2 \sum_{1 \leq i < j \leq n} q_{ij} x_i x_j.$$
In particular, now we can see where the term “quadratic form” comes from — the right hand side of the above equation is a polynomial in the variables $x_1, \dots, x_n$, and each monomial in this polynomial is quadratic in the sense that it involves a product of exactly two of these variables. For example, when $n = 2$ a quadratic form looks like
$$Q(v) = q_{11} x_1^2 + 2 q_{12} x_1 x_2 + q_{22} x_2^2,$$
which is a homogeneous degree $2$ polynomial in two variables, and when $n = 3$ a quadratic form looks like
$$Q(v) = q_{11} x_1^2 + q_{22} x_2^2 + q_{33} x_3^2 + 2 q_{12} x_1 x_2 + 2 q_{13} x_1 x_3 + 2 q_{23} x_2 x_3,$$
which is a homogeneous degree $2$ polynomial in three variables (exercise: infer from context what the word “homogeneous” means here).
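In matrix terms, the expansion above says $Q(v) = x^{\mathsf{T}} G x$, where $G = (q_{ij})$ is the symmetric matrix of the form in the basis $B$ and $x$ is the coordinate vector of $v$. Here is a minimal sketch of that computation for the $n = 3$ case; the particular matrix entries are made up for illustration.

```python
import numpy as np

# Symmetric matrix of the form in the basis B: G[i, j] = q_ij = <b_i, b_j>.
# These particular entries are made up for illustration.
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

def Q(x):
    """Quadratic form evaluated on the coordinate vector x: Q = x^T G x."""
    return x @ G @ x

def Q_expanded(x):
    """The same value, computed term by term as in the expansion above."""
    n = len(x)
    squares = sum(G[i, i] * x[i] ** 2 for i in range(n))
    cross = 2 * sum(G[i, j] * x[i] * x[j]
                    for i in range(n) for j in range(i + 1, n))
    return squares + cross

x = np.array([1.0, -2.0, 0.5])
print(np.isclose(Q(x), Q_expanded(x)))  # True
```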
From this point of view, we can see what is special about quadratic forms that actually do come from scalar products. If $\langle \cdot, \cdot \rangle$ is a scalar product on $V$, then we know that there exists a basis
$$E = \{e_1, \dots, e_n\}$$
which is an orthonormal set relative to this scalar product. In this case, for every vector $v \in V$ we have
$$Q(v) = x_1^2 + x_2^2 + \dots + x_n^2,$$
where $x_1, \dots, x_n$ are the coordinates of $v$ relative to $E$. The fact that $Q(v)$ is a sum of squares is a reflection of, and equivalent to, the fact that the symmetric bilinear form $\langle \cdot, \cdot \rangle$ from which it comes is positive definite. On the other hand, we know that the Lorentz bilinear form is not a scalar product. Nevertheless, as we saw above, its quadratic form is a sum of squares, albeit with one negative coefficient. This raises the question: which quadratic forms can be represented as a sum of squares? The answer is: those quadratic forms which come from a symmetric bilinear form $\langle \cdot, \cdot \rangle$ for which we can find a “pseudorthonormal” basis, i.e. a basis
$$E = \{e_1, \dots, e_n\}$$
which interacts with $\langle \cdot, \cdot \rangle$ in such a way that
$$\langle e_i, e_i \rangle \in \{1, -1, 0\} \quad \text{for all } i,$$
and
$$\langle e_i, e_j \rangle = 0$$
whenever $i \neq j$.
Remarkably enough, such a basis always exists.
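For instance, for the Lorentz bilinear form on $\mathbb{R}^2$ written above, the standard basis is already pseudorthonormal:
$$\langle e_1, e_1 \rangle = 1, \qquad \langle e_2, e_2 \rangle = -1, \qquad \langle e_1, e_2 \rangle = 0,$$
so its quadratic form $x_1^2 - x_2^2$ is a sum of squares with one positive and one negative coefficient.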
Theorem 2: Given any symmetric bilinear form $\langle \cdot, \cdot \rangle$ on an $n$-dimensional vector space $V$, the associated quadratic form $Q$ can be represented as a sum of $p$ positive squares and $q$ negative squares. That is, there exists a basis
$$E = \{e_1, \dots, e_n\}$$
such that the associated quadratic form satisfies
$$Q(v) = x_1^2 + \dots + x_p^2 - x_{p+1}^2 - \dots - x_{p+q}^2$$
for some nonnegative integers $p$ and $q$ such that $p + q \leq n$, where $x_1, \dots, x_n$ are the coordinates of $v$ relative to $E$.
The proof of Theorem 2 is a long and tedious exercise in the high school algebra technique of completing the square, which involves checking a lot of cases. You might want to try to figure this out for yourself in the case where it literally does come down to completing the square, and see if you can extrapolate from there. We will see a different proof when we talk about diagonalizing symmetric matrices. As for right now, a conceptually much more interesting phenomenon is that, for any two bases which represent $Q$ as a sum of positive squares and negative squares, the number of positive squares and the number of negative squares is the same in both representations.
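The diagonalization proof alluded to above also suggests a practical way to find such a representation: orthogonally diagonalize the symmetric matrix of the form and count positive and negative eigenvalues, rescaling the eigenvectors to produce the $\pm 1$ coefficients. A hedged numerical sketch (the matrix is made up; `numpy.linalg.eigh` handles symmetric matrices):

```python
import numpy as np

# Made-up symmetric Gram matrix of a bilinear form in some basis.
G = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

# Orthogonal diagonalization G = P diag(d) P^T.
d, P = np.linalg.eigh(G)

tol = 1e-12
p = int(np.sum(d > tol))    # number of positive squares
q = int(np.sum(d < -tol))   # number of negative squares
print(p, q)                 # here: 1 positive, 1 negative, one zero eigenvalue

# Check: in the rescaled eigenvector coordinates z, Q is a signed sum of squares.
x = np.array([1.0, -1.0, 2.0])
z = np.sqrt(np.abs(d)) * (P.T @ x)
signs = np.sign(d) * (np.abs(d) > tol)
print(np.isclose(x @ G @ x, np.sum(signs * z**2)))  # True
```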
Theorem 3: Let $\langle \cdot, \cdot \rangle$ be a symmetric bilinear form on an $n$-dimensional vector space $V$, and let $Q$ be the corresponding quadratic form. Suppose that
$$Q(v) = x_1^2 + \dots + x_p^2 - x_{p+1}^2 - \dots - x_{p+q}^2 \quad \text{and} \quad Q(v) = y_1^2 + \dots + y_{p'}^2 - y_{p'+1}^2 - \dots - y_{p'+q'}^2,$$
where
$$E = \{e_1, \dots, e_n\} \quad \text{and} \quad F = \{f_1, \dots, f_n\}$$
are two bases of $V$, and $x_1, \dots, x_n$ and $y_1, \dots, y_n$ are the coordinates of $v$ relative to $E$ and $F$, respectively. Then
$$p = p' \quad \text{and} \quad q = q'.$$
Proof: Suppose that $p \neq p'$. Then either $p < p'$ or $p > p'$, and without loss of generality we may assume $p < p'$. Let $V_1$ be the subspace of $V$ spanned by the vectors
$$e_{p+1}, \dots, e_n,$$
and let $V_2$ be the subspace of $V$ spanned by
$$f_1, \dots, f_{p'}.$$
Then
$$\dim V_1 + \dim V_2 = (n - p) + p' > n,$$
and hence $V_1 \cap V_2$ contains a nonzero vector $v$, as proved in Assignment 1. This vector can be represented relative to both bases, e.g. as
$$v = x_{p+1} e_{p+1} + \dots + x_n e_n$$
and as
$$v = y_1 f_1 + \dots + y_{p'} f_{p'}.$$
We then have
$$Q(v) = y_1^2 + \dots + y_{p'}^2 > 0,$$
since at least one of $y_1, \dots, y_{p'}$ is nonzero, but also
$$Q(v) = -x_{p+1}^2 - \dots - x_{p+q}^2 \leq 0.$$
But this says that the number $Q(v)$ is both positive and non-positive, which is a contradiction. Hence our assumption that $p \neq p'$ is false, and it must be the case that $p = p'$. Applying the same argument to the symmetric bilinear form $-\langle \cdot, \cdot \rangle$, whose associated quadratic form is $-Q$ and for which the roles of positive and negative squares are interchanged, shows that $q = q'$ as well.
— Q.E.D.
In view of Theorem 3, we may define the signature of a quadratic form $Q$ to be the pair $(p, q)$ giving the number of positive squares and the number of negative squares in any representation of $Q$ as a sum of squares.
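As a numerical illustration of Theorem 3: a change of basis replaces the matrix $G$ of the form by $S^{\mathsf{T}} G S$ for an invertible matrix $S$, and the counts of positive and negative eigenvalues do not change. A small sketch with a made-up matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

def signature(G, tol=1e-9):
    """Count positive and negative eigenvalues of a symmetric matrix G."""
    d = np.linalg.eigvalsh(G)
    return int(np.sum(d > tol)), int(np.sum(d < -tol))

# Made-up symmetric Gram matrix of some bilinear form.
G = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 1.0],
              [0.0, 1.0, 0.0]])

# A change of basis replaces G by S^T G S for an invertible S.
S = rng.standard_normal((3, 3))
while abs(np.linalg.det(S)) < 1e-6:   # make sure S is invertible
    S = rng.standard_normal((3, 3))
G_new = S.T @ G @ S

print(signature(G))       # (p, q) in the original basis
print(signature(G_new))   # the same (p, q): the signature is basis-independent
```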