Invented by William K. Clifford (1845-1879), drawing on the little-understood but advanced work of Hermann Grassmann, Clifford algebras are a way of doing arithmetic with multidimensional quantities which rivals (and some would say surpasses) in scope the more usual matrix representation for linear algebra.

Although they can be a very powerful mathematical tool, the basics of Clifford algebras are not complicated, and it's possible to explain the main idea fairly simply, which I attempt below.

In mathematics, an algebra is made up from a few standard parts. All you need is a way of multiplying things, and some things to multiply. The multiplication has to be *closed*, so that every time you multiply two of your things together, you get one of your things for the result (you never get a different sort of thing). You also need a special thing that's guaranteed to leave whatever you multiply by it unchanged (like multiplying by 1). And your multiplication has to work so that a times (b times c) is the same as (a times b) times c. (This is called associativity, by the way.)

Because it's possible to create different kinds of multiplications (different *products*) it's possible to have different algebras. You have probably seen "high school algebra", where our "things" are numbers, and the product is ordinary multiplication, and also Boolean algebra where the "things" are just the two numbers 0 and 1. ("But there is no multiplication in Boolean algebra!" - Oh yes there is! Take a look at the truth table for "and"...)
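That "and" claim is easy to check mechanically. Here's a tiny sketch in Python, confirming that logical "and" on the two Boolean values really does coincide with ordinary multiplication:

```python
# Check that logical AND on {0, 1} matches ordinary multiplication,
# so Boolean algebra really does have a product.
for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a and b), (a, b)
print("AND is multiplication on {0, 1}")
```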

Clifford algebras can be thought of as a way of doing multiplication on "numbers" (or rather quantities) which have an inbuilt dimensionality - planes, volumes, lines - and an "orientation" (positive and negative planes, volumes, lines... basically, just the latter with a sign added.)

Technically, these "things" are called multivectors, because they can be expressed as sums of vectors of different "rank" or *grade* (dimensionality), in much the same way that complex numbers are written as a sum of two terms, one of which is multiplied by i.

To show how we can build a Clifford algebra, here's a simple example. It's called Cl(2), which means it takes place in a 2D geometric space, a plane.

The first thing we need to do is define our "things" for this algebra. The way we do this is to say that all our things are multiples of "basis elements", and sums of these (just as *i* and 1 are the basis elements of the complex numbers).

All our basis elements have to fit in our geometric space, and so, of course, do their products. In 2D we can have points (0 dimensions, or "grade 0"; we'll call these "scalars"), lines (1 dimension, grade 1), and planes (2 dimensions, grade 2), and that's it.

There's only one kind of scalar^{*}. Easy enough: we don't need a special name for the unit scalar, our grade 0 basis element. We can just call it "1", and then when we multiply by it, it disappears. (This is like not writing a complex number as "a*1 + b*i" - we just write it as "a + bi" instead.)

On to our 1D basis elements: lines (vectors). There are (we'll assume) *two* "kinds" of line in our plane. Horizontal and vertical, if you like, or any other pair of lines at right-angles will do. We call the lines of length 1 in these directions e_{0} and e_{1}. These unit vectors will be our two grade-1 basis elements.

Now, how about the plane basis element? To get to that, we have to introduce a *part* of our way of multiplying, our product.

When we've got two lines, a and b, we'll say their *outer product*, written a /\ b, is just the area of the parallelogram formed by sweeping a along b, so if a is the line:

        /
       /
      /    = a
     /
    /

and b is

--------------- = b

then the outer product a /\ b is the parallelogram:

      __________________
     /                 /
    /      ____\      /
   /      /    /     /     = a /\ b
  /      /          /
 /_________________/

which (as indicated by the presence of an arrow) has an inbuilt orientation - this can be thought of as just a sign (+,-) possessed by the parallelogram.

Or, as hinted at by the shape of the arrow above, it may sometimes be taken to represent a direction of rotation, clockwise in this case. Notice how the path of the diagrammed arrow follows the order of the outer product operation - first it goes along the direction of a, then turns to run along the direction of b. If we'd diagrammed b /\ a instead, the arrow would have run counterclockwise - an interesting property of the outer product, known as anticommutativity: a /\ b = -(b /\ a). Reversing the order of an outer product's terms reverses the sign of the result.
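In 2D, the outer product of two vectors has a single component (the signed area just described), so anticommutativity is easy to demonstrate numerically. A minimal sketch, using the usual x-then-y sign convention for the area:

```python
def wedge(a, b):
    """Outer product of two 2D vectors: the signed area of the
    parallelogram swept out by a along b."""
    return a[0] * b[1] - a[1] * b[0]

a = (1.0, 2.0)
b = (3.0, 0.0)

# Swapping the order of the terms flips the sign of the result.
assert wedge(a, b) == -wedge(b, a)

# A vector swept along itself (or any parallel vector) encloses no area.
assert wedge(a, a) == 0.0
print("a /\\ b =", wedge(a, b), " b /\\ a =", wedge(b, a))
```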

Back to our 2D planar basis element. We use e_{0} /\ e_{1}, which is the "unit plane" (its area is one) - the square formed by sweeping e_{0} along e_{1}:

 _____________
|             |
|             |
|e_{0}        |   = e_{0} /\ e_{1}
|             |
|             |
|_____________|
      e_{1}

This is usually called the **unit bivector**, since it's made out of two vectors. By convention, we'll give this a positive sign. As you can see from the order of the terms, the appropriate arrow goes up, then to the right, so a positive sign is by convention given to a clockwise rotation.

So, to recap, we've got four basis elements: 1, e_{0}, e_{1} and e_{0}/\e_{1}. What we need to do now is define the rules for our full product so that we satisfy the conditions for an algebra, given above. We need to show how to multiply all these different grade elements with each other consistently, always ending up with just more basis elements.

The product we're going to use, sometimes called the Clifford product and sometimes the geometric product, can be defined as a way of multiplying two vectors. It uses the outer product "/\" defined above, and the normal inner product for vectors, written ".". We write it for two vectors, A and B, as:

AB = A . B + A /\ B

So our product for two vectors is just the sum of their inner and outer products. (In general, there's no need to use a special sign for the geometric product, and we'll take "`xy`" to mean the geometric product of `x` and `y`, whether `x` and `y` are vectors or other things.)
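For two vectors in 2D, this definition is easy to compute directly: the result has a scalar part (the inner product) and a bivector part (the coefficient of e_{0}e_{1} from the outer product). A minimal sketch:

```python
def dot(a, b):
    """Standard inner product of two 2D vectors."""
    return a[0] * b[0] + a[1] * b[1]

def wedge(a, b):
    """Outer product: signed area of the parallelogram on a and b."""
    return a[0] * b[1] - a[1] * b[0]

def geometric_product(a, b):
    """Geometric product ab = a . b + a /\ b of two vectors,
    returned as (scalar part, bivector part)."""
    return (dot(a, b), wedge(a, b))

e0 = (1.0, 0.0)
e1 = (0.0, 1.0)

print(geometric_product(e0, e1))  # pure bivector: (0.0, 1.0)
print(geometric_product(e0, e0))  # pure scalar:   (1.0, 0.0)
```

Notice that the product of two orthogonal unit vectors is a pure bivector, and the product of a unit vector with itself is a pure scalar - exactly the two special cases worked through below.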

Now we can start to look at what we get when we multiply our basis elements.

Multiplication by a scalar is just ordinary multiplication, so multiplication by our first basis element, the scalar value 1, just leaves the thing alone: 1a = a. Not much of interest there.

Multiplying our vector (grade 1) basis elements is more interesting. Since e_{0} and e_{1} are orthogonal, their inner product (e_{0} . e_{1}) is zero (this is just a property of the standard inner product operation on two vectors - the inner product is the projection of one vector onto the other, and the projection of one vector on an orthogonal one is zero, just as a needle has no shadow when the light is dead above it).

So in this case the inner product part disappears, and the Clifford product (e_{0} . e_{1} + e_{0} /\ e_{1}) is equal to the outer product, which in turn is just our planar basis element, e_{0} /\ e_{1}, the unit bivector. Since this is true, we can now forget about the '/\' and just write the unit bivector as e_{0}e_{1}.

Ok, we know how to multiply e_{0} by e_{1}. As we've already seen when considering the orientation arrows on a bivector, the *outer product* ('/\') anticommutes, so that:

e_{0} /\ e_{1} = -e_{1} /\ e_{0}

Swapping the order changes the sign.

This tells us straight away: e_{1}e_{0} = -e_{0}e_{1} (because e_{0} and e_{1} are orthogonal, the inner product disappears again.)

To complete the list, we have to consider e_{0}^{2}, or e_{0}e_{0}, that is e_{0} times itself. In this case, we notice that the area of a parallelogram swept by one line along a parallel line is zero, so in the case of multiplying two *parallel* vectors, the outer product drops out of consideration, and we just have the inner product. Since e_{0} is parallel to itself (like any vector) this means e_{0}^{2} = e_{0} . e_{0} = 1. Similarly, e_{1}^{2} = 1.

Now, we've dealt with everything except our planar basis element, the unit bivector, e_{0}e_{1} as we're now calling it. What do we get when we multiply that?

Scalars are as easy as usual, no need to worry about those. What about multiplying e_{0} times e_{0}e_{1}? Well, we can write it as e_{0}e_{0}e_{1}, but we know e_{0}e_{0} is 1 from the above, so we just get e_{1}. That is, e_{0} times e_{0}e_{1} is e_{1}.

How about swapping the order: (e_{0}e_{1})e_{0}? We learned above that e_{0}e_{1} = -e_{1}e_{0}. So we get -e_{1}e_{0}e_{0}. But e_{0}e_{0} is 1 (see above) so we're left with just -e_{1}, that is (e_{0}e_{1})e_{0} = -e_{1}.

Similarly, e_{1}(e_{0}e_{1}) = -e_{0} and (e_{0}e_{1})e_{1} = e_{0}.

Again, swapping the order gives us the same result with the sign changed. We could say this as "the unit bivector anticommutes with both unit basis vectors".

All that remains is to multiply the unit bivector by itself: (e_{0}e_{1})^{2}.

We can write this as e_{0}e_{1}e_{0}e_{1}. We know e_{0}e_{1} = -e_{1}e_{0}. And so it follows that e_{0}e_{1}e_{0}e_{1} = -e_{0}e_{1}e_{1}e_{0}. (We've just swapped the order of the last two terms, and changed the sign according to the rule.) But we know e_{1}e_{1} = 1 (see above) so we can replace the middle two terms with 1, which disappears because anything times 1 is itself, giving e_{0}e_{1}e_{0}e_{1} = -e_{0}e_{0}.

But we also know that e_{0}e_{0} = 1 (see above, just the same) giving us, finally,

(e_{0}e_{1})^{2} = -1

So, the square of the unit bivector is -1 (in this algebra.) Personally, I find this construction much more satisfying than the usual waffle produced when people try to explain why sqrt(-1) = *i*.
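All the rules derived above can be double-checked with a 2x2 matrix representation of Cl(2). The particular choice of matrices below is an assumption on my part (it's one standard representation, not anything from the text): two symmetric matrices that square to the identity and anticommute with each other.

```python
# A 2x2 real matrix representation of Cl(2), to check the rules above.
# The choice of e0 and e1 is an assumption: one standard representation.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I  = [[1, 0], [0, 1]]
e0 = [[1, 0], [0, -1]]
e1 = [[0, 1], [1, 0]]

assert matmul(e0, e0) == I                          # e0^2 = 1
assert matmul(e1, e1) == I                          # e1^2 = 1

e0e1 = matmul(e0, e1)
e1e0 = matmul(e1, e0)
assert e1e0 == [[-x for x in row] for row in e0e1]  # e1e0 = -e0e1
assert matmul(e0e1, e0e1) == [[-1, 0], [0, -1]]     # (e0e1)^2 = -1

print("all Cl(2) basis relations hold")
```

Here the unit bivector is represented by a matrix whose square is minus the identity - the matrix version of a square root of -1.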

Since we've gone through all the ways in which we can multiply our basis elements together, and we always end up with just more of the same basis elements when we do, we can claim that our algebra obeys the rule about "never ending up with a different kind of thing", and because we have a sensible identity element (1), we can see that we have indeed constructed a bona fide algebra as we set out to. Terms in this algebra (our "things", the multivectors) will be sums of products of our basis elements, like 5 + 8e_{0} + 4e_{1} + 11e_{0}e_{1} for example, and we will always get another such thing when we add or multiply two of them together. It's an "interesting exercise" to work out the rules for multiplying two multivectors, A and B, where

A = a + b(e_{0}) + c(e_{1}) + d(e_{0}e_{1})

and

B = m + n(e_{0}) + o(e_{1}) + p(e_{0}e_{1})

which when multiplied give

C = w + x(e_{0}) + y(e_{1}) + z(e_{0}e_{1})

Here, "working out the rules" means giving expressions for w, x, y and z in terms of a, b, c, d, m, n, o and p.
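If you'd rather not spoil the exercise, skip this bit - but here's one way of writing the answer as code, treating a multivector as its four coefficients. The component formulas follow from nothing but the basis relations derived above (e_{0}^{2} = e_{1}^{2} = 1 and e_{1}e_{0} = -e_{0}e_{1}):

```python
def cl2_mul(A, B):
    """Geometric product of two Cl(2) multivectors.

    A multivector is a 4-tuple of coefficients of the basis
    elements (1, e0, e1, e0e1). Each component below collects the
    basis products that land on that element, with signs from
    e0^2 = e1^2 = 1 and e1e0 = -e0e1.
    """
    a, b, c, d = A
    m, n, o, p = B
    return (a*m + b*n + c*o - d*p,   # w: scalar part
            a*n + b*m - c*p + d*o,   # x: e0 part
            a*o + b*p + c*m - d*n,   # y: e1 part
            a*p + b*o - c*n + d*m)   # z: e0e1 part

one  = (1, 0, 0, 0)
e0   = (0, 1, 0, 0)
e1   = (0, 0, 1, 0)
e0e1 = (0, 0, 0, 1)

assert cl2_mul(e0, e1) == e0e1                 # e0 e1 = unit bivector
assert cl2_mul(e1, e0) == (0, 0, 0, -1)        # anticommutes
assert cl2_mul(e0e1, e0e1) == (-1, 0, 0, 0)    # (e0e1)^2 = -1
print(cl2_mul((5, 8, 4, 11), (1, 2, 3, 4)))
```

Note that closure is visible in the code itself: whatever two 4-tuples go in, a 4-tuple comes out.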

That's about it for defining Cl(2), or the Clifford algebra of the plane.

But what about this sqrt(-1) business? In a sense, Clifford algebras are a generalization of the idea of complex numbers, which works in any dimension. However, it's not a *straightforward* generalization, because, though complex numbers *are* a Clifford algebra, they are a special and restricted type of Clifford algebra, with particular properties - they aren't the algebra we've described above, but rather its "even subalgebra" - that is to say the subalgebra which has only the basis elements of even grade (dimension), in this case, the grade 0 (scalar) and grade 2 (bivector) elements.

In general, for a Clifford algebra corresponding to *n* geometric dimensions, there's guaranteed to be only one grade-*n* basis element, often called the "unit pseudoscalar". Its square is +1 or -1 depending on *n*: it's -1 for our Cl(2), and also for Cl(3), but not in every dimension (for the algebras with all-positive signature, the square works out to (-1)^{n(n-1)/2}).

Really, it's the geometric usefulness of complex numbers that's being generalised, not their particular properties (to do with *i*). Though as we've seen, Clifford algebras do have their own, arguably superior, explanation of the status, role and origin of *i*.

There's much more to be said about Clifford algebras. They're a very general idea, and a lot of existing mathematics can be rewritten in terms of them, but there's no room for that here. Let's just mention a couple of interesting facts:

- They work in any (finite) number of dimensions.

- For a Clifford algebra corresponding to geometric dimension
*n*, there will be 2^{n} basis elements. They will be grouped by grade (dimension) as indicated by the (*n* + 1)th row of Pascal's Triangle (the row corresponding to our Cl(2) is the third: 1,2,1 corresponding to one scalar (grade 0), two vector (grade 1), and one bivector (grade 2) basis elements.)

- They are widely used in modern physics: the Pauli spin matrices form a representation of the Clifford algebra Cl(3), and the Dirac equation can be written very succinctly in the Clifford algebra of Minkowski spacetime, known as Cl(3,1) (or is it Cl(1,3)? I forget.)

- When used to study rotation, there are very natural ways of constructing the property of electron spin (spin 1/2), that a 720 degree rotation is required to return to the original state instead of the usual 360 degrees, and Clifford algebras are intimately related with spinors - objects which transform under rotations in this double-valued way.
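The basis-counting fact in the list above is just binomial coefficients, and can be checked in a couple of lines (a small sketch, nothing more):

```python
from math import comb

# The number of grade-k basis elements in an n-dimensional Clifford
# algebra is "n choose k" - the (n+1)th row of Pascal's Triangle -
# and these sum to 2^n basis elements in total.
for n in range(1, 5):
    grades = [comb(n, k) for k in range(n + 1)]
    assert sum(grades) == 2 ** n
    print(n, grades)   # n = 2 gives [1, 2, 1], matching our Cl(2)
```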

* - Well, there's only one kind of scalar in *this* algebra, Cl(2). In other Clifford algebras, one can use different types of things for the scalar field - complex-valued ones, for example. This is the main difference between Clifford algebra and Geometric algebra, a development of Clifford algebras by David Hestenes. Hestenes' Geometric Algebra restricts itself to using the reals as elements of the scalar field.

Sourced largely from *Imaginary Numbers are not Real - the Geometric Algebra of Spacetime*, available at *http://www.mrao.cam.ac.uk/~clifford/introduction/intro/intro.html*, while stocks last.

The Geometric Algebra Research Group homepage, at http://www.mrao.cam.ac.uk/~clifford/ is a good place to start exploring further.

This is a long complicated node about something which I don't know that much about. There are likely errors, so if you spot a howler, please /msg me so I can fix it! Thanks!