First recall the definition of spanning.

Let V be a vector space over a field k. Vectors v1,...,vn are a spanning set for V if for every vector v in V there exist scalars a1,...,an in k (which depend on v) such that v can be written in the form

v=a1v1 + a2v2 +...+ anvn.
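For a concrete field this spanning condition can be checked mechanically. Below is a minimal sketch over the rationals, assuming vectors in Q^n are represented as Python lists of numbers; the helper names rank and spans are my own, not standard library functions:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors (as rows) via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        # find a row at or below position r with a nonzero entry in this column
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        # eliminate this column from every other row
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def spans(vectors, n):
    """Do the given vectors form a spanning set for Q^n?  True iff rank is n."""
    return rank(vectors) == n
```

For example, (1,0), (0,1), (1,1) span Q^2 (a spanning set need not be minimal), while (1,1), (2,2) do not.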

A vector space with a finite spanning set is called finite dimensional; otherwise it is infinite dimensional. As Debbie points out, the dimension of a finite dimensional vector space is the number of elements in a basis of the space.

This does leave us with two rather unpleasant possibilities: what if no such basis exists, and what if two different bases of the vector space have different numbers of elements? In either of these two cases, our definition wouldn't make a whole lot of sense.

In this writeup I will show that these worrying possibilities do not occur, and we can all return to our normal activities safe in the knowledge that things are exactly as they should be.

The main idea needed for this is the following lemma which says roughly that we can exchange elements from a spanning set for elements of a linearly independent set and still keep the spanning property.

Steinitz Exchange Lemma Let v1,...,vn be a linearly independent set of vectors in a vector space V and let S be a spanning set for V with m elements. Then n<=m and we can replace n suitable elements of S by the vi and still have a spanning set, i.e.

{v1,...,vn,wn+1,...,wm}
is a spanning set for some wn+1,...,wm in S.

Proof:

Consider the statement

I(r): There exist wr+1,...,wm in S so that v1,...,vr,wr+1,...,wm give a spanning set for V.
I claim that I(r) is true for all 0<=r<=n. Notice that when r=n this is exactly the assertion of the lemma.

We prove that I(r) is true for all 0<=r<=n by induction on r. Start with r=0: the statement is trivially true, since we may take the wi to be the elements of S itself, which spans V by assumption. Suppose then that I(r-1) holds for some r with 0<r<=n. Thus v1,...,vr-1,wr,...,wm span V for some wr,...,wm from S. In particular, vr must be a linear combination of these vectors, that is

vr = a1v1 + ... + ar-1vr-1 + brwr + ... + bmwm.    (*)
Now if br=...=bm=0 (or if no w's remain at all), then (*) expresses vr as a linear combination of v1,...,vr-1, contradicting the linear independence of the vi. In particular at least one w must still be available at each step, which is what forces n<=m. So some bj is nonzero and, after relabeling the w's, we may assume br is nonzero. Rearranging (*) and multiplying by br^-1, we can express wr as a linear combination of v1,...,vr,wr+1,...,wm. Hence every vector in the old spanning set lies in the span of v1,...,vr,wr+1,...,wm, so this set spans V, which establishes I(r).
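The key step of the proof, writing vr in terms of the current spanning set and reading off a nonzero coefficient, can be sketched computationally. The following is an illustrative sketch over Q (vectors as Python lists; the function name express is my own), which returns one coefficient list or None:

```python
from fractions import Fraction

def express(vecs, target):
    """Solve sum_i c_i * vecs[i] = target over Q.
    Returns one list of coefficients, or None if target is not in the span."""
    n, m = len(target), len(vecs)
    # Augmented matrix: columns are the vecs, last column is the target.
    A = [[Fraction(vecs[j][i]) for j in range(m)] + [Fraction(target[i])]
         for i in range(n)]
    pivots = {}  # column index -> pivot row index
    row = 0
    for col in range(m):
        piv = next((i for i in range(row, n) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        for i in range(n):
            if i != row and A[i][col] != 0:
                f = A[i][col] / A[row][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[row])]
        pivots[col] = row
        row += 1
    # An all-zero coefficient row with nonzero right-hand side means no solution.
    if any(all(A[i][j] == 0 for j in range(m)) and A[i][m] != 0
           for i in range(n)):
        return None
    c = [Fraction(0)] * m  # free variables are set to zero
    for col, r in pivots.items():
        c[col] = A[r][m] / A[r][col]
    return c
```

For example, with S = {(1,0),(0,1)} and v = (1,1), express returns the coefficients (1,1); since the first coefficient is nonzero, we may exchange (1,0) for v, and {(1,1),(0,1)} still spans Q^2.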

Corollary A finite dimensional vector space has at least one basis and any two bases have the same number of elements. Hence its dimension is well-defined.

Proof: Since the vector space is finite dimensional, it has a finite spanning set. If the elements of this set are linearly independent then by definition we have a basis. If not, we can express one of the vectors in our spanning set as a linear combination of the others. Thus we can safely remove this vector and still have a spanning set. Since the spanning set is finite, repeating this procedure as often as required must terminate, and it terminates in a basis.
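This pruning procedure can be sketched directly. A minimal sketch over Q (vectors as Python lists; the name extract_basis is mine): keep each vector only if it is not a linear combination of the vectors kept so far.

```python
from fractions import Fraction

def extract_basis(spanning):
    """Prune a finite spanning set of vectors in Q^n down to a basis.
    A vector is kept only if it is not a linear combination of the
    vectors kept so far, tested by reducing it against echelon rows."""
    basis = []    # the vectors we keep, unchanged
    reduced = []  # echelon rows spanning the same subspace as basis
    for v in spanning:
        row = [Fraction(x) for x in v]
        for p in reduced:
            col = next(j for j, a in enumerate(p) if a != 0)  # pivot column
            if row[col] != 0:
                f = row[col] / p[col]
                row = [a - f * b for a, b in zip(row, p)]
        if any(a != 0 for a in row):  # independent of everything kept so far
            basis.append(v)
            reduced.append(row)
    return basis
```

For instance, pruning the spanning set (1,0), (1,1), (2,1), (0,0) of Q^2 keeps only (1,0) and (1,1): the vector (2,1) is their sum and is discarded, as is the zero vector.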

Now suppose that we have two bases S and T. A fortiori, S is a spanning set and T is a linearly independent set, so by the exchange lemma |S| >= |T|. By symmetry we have the opposite inequality. Hence the result.

Some examples are long overdue.

• k^n, the collection of n-tuples of elements of k with its usual vector space structure, has a basis consisting of the coordinate vectors e1,...,en (where ei is the column vector with a zero in all positions except the ith row, where it has a 1). Thus it has dimension n.
• We can add polynomials and multiply them by scalars, so k[x] is a vector space over k. It is infinite dimensional, since a finite spanning set would put an upper bound on the degrees of the polynomials it could span, yet k[x] contains polynomials of every degree.
• The collection of n x n matrices over k forms a vector space with the obvious entrywise operations. This space has dimension n^2. A basis is given by the usual matrix units ei,j (where ei,j is the matrix with a 1 in the (i,j) position and zeroes elsewhere).
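Flattening each matrix unit row by row identifies the space of n x n matrices with k^(n^2). A quick illustrative sketch in Python for n = 2:

```python
# Matrix units e_{i,j} for n = 2, each flattened row by row into a
# vector of length n^2 = 4, identifying the matrix space with k^4.
n = 2
units = [[1 if (r, c) == (i, j) else 0
          for r in range(n) for c in range(n)]
         for i in range(n) for j in range(n)]
```

The four flattened matrix units are exactly the standard coordinate vectors of k^4, so the matrix space has dimension 4 = n^2, matching the example above.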
