Quite possibly the most beautiful marriage of concepts in all of mathematics, a Lie group is an object endowed with rich structure and great symmetry. It is also of great use to physicists in all fields, due to its intimate connection with physical symmetries. Understanding the full power and beauty of Lie groups requires some mathematical rigor, but the basic idea is not too difficult to grasp.

There are two core concepts that must be understood before tackling Lie groups: group theory and manifolds. A basic overview of each follows:

**Group Theory**

A group G is a collection of objects together with a composition law, or "multiplication" law, •, that satisfies:

- Closure: a,b ∈ G → (a • b) ∈ G.
- Associativity: (a • b) • c = a • (b • c)
- An identity element exists: 1 • a = a • 1 = a, for all a ∈ G
- Inverses exist: a^{ -1} • a = a • a^{ -1} = 1, for all a ∈ G

G is said to be commutative or abelian if a • b = b • a for all a,b ∈ G.

A subgroup H of G is simply a subset of G which forms a group in its own right under the inherited multiplication law.

**A Few Examples of Groups**

- The integers, Z, under addition
- R under addition, and R^{×} = R - {0} under multiplication
- The quaternion group {±1, ±i, ±j, ±k} (the unit quaternions in H), given the following multiplication rules:

i^{2} = j^{2} = k^{2} = -1

ij = -ji = k

ki = -ik = j

jk = -kj = i

You might think of this as something like the set of unit vectors with the cross-product multiplication law (the rules ij = k, ki = j, jk = i mirror the cross products of unit vectors in R^{3}, although here i^{2} = -1 rather than 0). This is an example of a non-commutative group.
- Any vector space is a group under addition.
- Finally, there are matrix groups, which are subsets of GL_{n}R (the n × n invertible matrices with real-valued entries) or GL_{n}C (the same thing, but with complex-valued entries). GL stands for the "General Linear" group.
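The quaternion multiplication rules above can be checked concretely. The sketch below (Python with numpy assumed available) uses one common 2 × 2 complex-matrix realization of the quaternion units; the particular matrices are our own choice for illustration, not something fixed by the group itself:

```python
import numpy as np

# One common 2x2 complex-matrix realization of the quaternion units.
one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

# i^2 = j^2 = k^2 = -1
for q in (qi, qj, qk):
    assert np.allclose(q @ q, -one)

# ij = -ji = k,  ki = -ik = j,  jk = -kj = i
assert np.allclose(qi @ qj, qk) and np.allclose(qj @ qi, -qk)
assert np.allclose(qk @ qi, qj) and np.allclose(qi @ qk, -qj)
assert np.allclose(qj @ qk, qi) and np.allclose(qk @ qj, -qi)
```

Since qi @ qj ≠ qj @ qi, the asserts make the non-commutativity explicit.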

There is a great deal more to be said about groups, which is why you should read the group theory node. We, however, will now shift gears to briefly explain manifolds:

**Manifolds**

Very briefly, a manifold is an n-dimensional space: a space which locally looks like R^{n}, in that it can be parameterized, at least patch by patch, by n continuous coordinates.

**A Few Examples of Manifolds:** the real line R, the plane R^{2} (and R^{n} generally), the circle S^{1}, and the sphere S^{2}.

Manifolds, like groups, are deep mathematical objects, which is why they have their own node.

**Lie Groups**

We now meld these two important mathematical concepts together.

A Lie group is a smooth manifold endowed with a group structure. Call this manifold G. Points on G can be multiplied, and the group operations are required to be smooth.

There exists a smooth multiplication map

M: G × G → G

(g_{1}, g_{2}) → g_{1} • g_{2} ∈ G

There also exists a smooth inversion map

I: G → G

g → g^{ -1}

In other words, if g_{1} is near g_{2} on the manifold, we expect h • g_{1} to be near h • g_{2}. Similarly, we expect g_{1}^{-1} to be near g_{2}^{-1}.

**Examples**

Here are a few examples where we can add a group structure to a given manifold:

R and R^{×} provide immediate examples.

The circle, S^{1}, where each point p is given by an angle θ(p). Points can be "multiplied" in an intuitive fashion: θ(p • q) = θ(p) + θ(q). We could also think of this group as the set of complex phases, p ↔ e^{i θ(p)}, q ↔ e^{i θ(q)}. Literally multiplying these points together gives us the same group composition law. Note that this is an abelian group. It is called U_{1}, for reasons that will be explained soon.

We cannot endow a general manifold with a group structure. For example, the sphere S^{2} cannot be made into a Lie group: every Lie group carries a nowhere-vanishing vector field (push any nonzero vector around by left multiplication), while the hairy ball theorem forbids such a field on S^{2}.

The most potent examples of Lie groups are matrix groups, i.e. subgroups of GL_{n}R or GL_{n}C. We can readily view GL_{n}R as a manifold by choosing our coordinates to be the n^{2} matrix entries. The only requirement is that the determinant be nonzero, so we remove the locus det = 0. We could go through a mathematical proof that what remains is still an n^{2}-dimensional manifold, but our time would be better spent exploring the various subgroups of GL_{n}R and GL_{n}C.

**Examples of Subgroups of GL**_{n}R and GL_{n}C

Subgroups of GL_{n}R:

- The Special Linear group, SL_{n}R: the group of n × n matrices whose determinant = 1. This is closed, since determinants multiply (Det[AB] = Det[A] × Det[B]). "S" stands for "Special," by which we mean "unit determinant". SL_{n}R is (n^{2} - 1)-dimensional, since the determinant condition places a one-dimensional restriction on the coordinates.
- The group of rotations and reflections in n dimensions can be represented by O_{n}, the group of orthogonal matrices: A^{T} = A^{ -1}, or A^{T}A = 1. This is easily shown to be a closed subgroup, because (AB)^{T} = B^{T}A^{T} = B^{ -1}A^{ -1} = (AB)^{ -1}. The orthogonality condition A^{T}A = 1 says that the columns of A form a set of orthonormal vectors in R^{n}. This imposes n(n+1)/2 conditions, which means that O_{n} is an n(n-1)/2-dimensional space.
- The group of rotations only (no reflections) can be represented by SO_{n}, the Special Orthogonal group, which is the group of orthogonal matrices restricted with the unit-determinant condition. This is not a very strong restriction, as orthogonal matrices can only have determinant ±1, since

1 = Det[1] = Det[A^{T}A] = Det[A] Det[A]

Det[A]^{2} = 1

SO_{n} has the same dimensionality as O_{n}, dim[ SO_{n} ] = n(n-1)/2. Thus, the group of rotations in three dimensions has dimensionality 3.
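We can sanity-check these orthogonal-group facts numerically. A quick sketch (numpy assumed): the Q factor of a QR factorization is orthogonal, its determinant is ±1, and products of orthogonal matrices are again orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_orthogonal(n):
    # The Q factor of a QR factorization of a random matrix is orthogonal.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

A, B = random_orthogonal(n), random_orthogonal(n)

assert np.allclose(A.T @ A, np.eye(n))              # A^T A = 1
assert np.isclose(abs(np.linalg.det(A)), 1.0)       # det = +/-1
assert np.allclose((A @ B).T @ (A @ B), np.eye(n))  # closure under products
```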

Subgroups of GL_{n}C:

- U_{n} = Unitary n × n matrices. A unitary matrix satisfies U^{†}U = 1, where U^{†} = (U*)^{T} (the complex conjugate of the transpose). As a special case, if we set n = 1, U_{1} is the set of 1 × 1 complex matrices (really just a fancy way of expressing complex numbers), subject to the condition (u*)u = |u|^{2} = 1. These are just complex phases, e^{iθ}, like those used earlier to describe the circle S^{1}.
- SU_{n} = Special Unitary matrices, i.e. unitary matrices with unit determinant.

A distinction should be made before we continue. A lie group like SO_{3} can act on a manifold like S^{2}, in that SO_{3} can be thought of as the group of rotations of the points on the sphere. However, the manifold SO_{3} is not the same as the manifold S^{2}. It has a very different shape, which we will explore eventually, but you might quickly note that SO_{3} has dimension 3, while S^{2} is only 2-dimensional. It is important to make a clear distinction between the group manifold itself and any manifold it may act on.

One of the reasons that the structure of a lie group is so useful is that we can get from any point on G to any other point via a smooth invertible map L_{g}: G → G. This map is simply left multiplication in G, i.e. L_{g}(h) = g • h. This provides us with a smoothly varying family of maps, and as we said before, we can use this map to get from any point h_{1} to any other point h_{2}, simply by setting g = h_{2} • h_{1}^{-1}.

**The Tangent Space of a Lie Group**

Any manifold M has a tangent space T_{p}M at each point p. This is a vector space, so we can always add elements of T_{p}M together. In the special case that the manifold is also a group, the tangent space inherits a multiplication operation from the multiplicative structure of the group, meaning we can multiply two vectors together to get a new vector. This multiplication of vectors will be made explicit shortly. Since T_{p}G is a group under addition, and also has a multiplication operation, it is what is known in mathematics as a ring or algebra. Specifically, T_{p}G (conventionally taken at the identity, p = e) is called the Lie Algebra of G, often denoted £[G]. We will find that the lie algebra of G encodes much of the information about G itself. This is very useful, because algebras are usually much simpler to work with.

Let's make things somewhat more explicit. First, we choose our coordinates on G to be the matrix entries x^{ij}. Recall that we can think of any vector as a directional derivative, **V** = V^{ij} ∂/∂x^{ij}. If you are confused by the notation, just think of "ij" as a single index, specified by two numbers. We could have written this out the usual way, like V^{k} ∂/∂x^{k}, where k runs from 1 to n^{2}, but this notation doesn't encode the multiplicative structure of matrices.

Intuitively, we will reconstruct G from its lie algebra by pushing our tangent space forward from the identity element to every other point g' ∈ G. This is accomplished by using the pushforward of the left-multiplication map L_{g}. One way to think of the pushforward is to note that L_{g} smoothly and invertibly maps points near h to points near gh, and therefore also maps curves through h to curves through gh. Since vectors are the velocities of curves, the map L_{g} must have a natural manifestation as a map on vectors. This map, called the pushforward map L_{g ∗}: T_{h}G → T_{gh}G, can be expressed in coordinate-dependent terms, as follows.

This is the coordinate-dependent form of L_{g}:

(L_{g}(h))^{ij} = x^{ik}(g) x^{kj}(h)

And here is the coordinate-dependent form of the pushforward map:

(L_{g ∗} **V**)^{ij}|_{h} = V^{αβ} ∂L_{g}^{ij}/∂x^{αβ}(h)

= V^{αβ} x^{ik}(g) δ^{k}_{α} δ^{j}_{β}

= V^{kj} x^{ik}(g)

= (x • V)^{ij}

In other words, the pushforward of the L_{g} map is just given by the matrix operation of g on the matrix-coordinate components of V. The fact that the pushforward map has such a simple form in this coordinate representation will be of great utility to us.

**Left-Invariant Fields**

Now that we've introduced the lie algebra as the tangent space of the lie group, let us introduce another useful manifestation of the lie algebra: the set of left-invariant vector fields of G.

Since we have a smoothly varying family of smooth maps L_{g}: G → G, we can push a vector forward from any point on the manifold to any other. Specifically, we can transport a vector from the identity to every point g ∈ G simply by acting with L_{g ∗}. This generates a vector field on G, known as a left-invariant vector field. It is called "left-invariant" because it is invariant under the pushforward map associated with left-multiplication. The vector field **X** being left-invariant just means that pushing the vector **X|**_{h} at h to **L**_{g ∗} X|_{h} at gh gives exactly the same result as simply evaluating vector field **X|**_{gh} at gh.

There is a one-to-one correspondence between vectors **V|**_{e} defined at the identity, and left-invariant vector fields. We can generate a unique left-invariant vector field from a vector **V|**_{e} by pushing **V|**_{e} forward via **V(g)** = **L**_{g ∗} V|_{e}, for all g ∈ G. Going back the other direction is even easier; we can generate a unique vector **V|**_{e} from a left-invariant vector field **V(g)** simply by evaluating **V(g)** at the identity, g = e. Thus, the left-invariant vector fields give us another manifestation of the vector space T_{e}G.

Additionally, the left-invariant fields defined on G give us something else. We can now define the multiplication law required for our lie algebra: **V** ∗ **W** = £_{V} [ **W** ] = [ **V, W** ], the lie derivative of **W** with respect to the vector field **V**. This is also called the lie bracket, or commutator of the two vector fields, as it is expressed computationally:

[ **V, W** ]^{ij} = V_{g}^{kl} ∂W_{g}^{ij}/∂x^{kl} - W_{g}^{kl} ∂V_{g}^{ij}/∂x^{kl}

We will now show that this algebra is closed, i.e. the lie bracket of two left-invariant vector fields indeed gives us another left-invariant vector field. We can express these vector fields explicitly, since they are just defined by the pushforward map:

[ **V**_{g}, W_{g} ]^{ij} = x^{in} V_{e}^{kl} W_{e}^{mj} δ_{kn} δ_{lm} - x^{in} W_{e}^{kl} V_{e}^{mj} δ_{kn} δ_{lm}

[ **V**_{g}, W_{g} ]^{ij} = x^{in} V_{e}^{nl} W_{e}^{lj} - x^{in} W_{e}^{nl} V_{e}^{lj}

[ **V**_{g}, W_{g} ]^{ij} = (x • V_{e} • W_{e} - x • W_{e} • V_{e})^{ij}

But x's action on a vector is just the pushforward map:

[ **V, W** ]_{g} = **L**_{g ∗} (**V**_{e} • W_{e} - W_{e} • V_{e})

And this is manifestly left-invariant. Thus, the lie derivative has provided us with a multiplication law between two left-invariant vector fields, which produces another left-invariant vector field, as seen explicitly above. However, something much more exciting is seen explicitly above. On the left side of the equation is the lie bracket, or vector commutator of the two vector fields, **V** and **W**. On the right side is the matrix commutator of the two sets of components, expressed in matrix form, V^{ij} and W^{ij}. So, on a lie group, the vector commutator of two left-invariant vector fields is exactly the matrix commutator of the corresponding matrix-coordinate components! Symbolically,

[ **V, W** ]_{vector} = [ **V, W** ]_{matrix}
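This equality is easy to verify numerically. The sketch below (numpy assumed; all names are our own) builds two left-invariant fields on GL_{3}R from matrices V_e and W_e, evaluates the coordinate formula for the lie bracket at a generic point g, and compares it with the pushforward of the matrix commutator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Ve, We = rng.normal(size=(n, n)), rng.normal(size=(n, n))
g = np.eye(n) + 0.3 * rng.normal(size=(n, n))  # a generic point of GL_3(R)

def bracket(g, Ve, We):
    # Coordinate formula  [V,W]^{ij} = V^{kl} dW^{ij}/dx^{kl} - (V <-> W),
    # for the left-invariant fields V(x) = x.Ve, W(x) = x.We, where
    # dW^{ij}/dx^{kl} = delta^i_k We^{lj}.  Contracting the deltas gives:
    Vg, Wg = g @ Ve, g @ We
    return Vg @ We - Wg @ Ve

# The vector bracket equals the pushforward of the matrix commutator:
assert np.allclose(bracket(g, Ve, We), g @ (Ve @ We - We @ Ve))
```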

**From the Group to the Algebra and Back Again**

It should come as no surprise that we are able to construct the lie algebra from our knowledge of the group structure. What may come as a surprise is that we can construct most of the structure of a group out of our knowledge of its lie algebra. That is, if we have a set of matrices {**X**^{a}} satisfying [ **X**^{a}, X^{b} ] = c_{abc} **X**^{c}, we can reconstruct a group manifold corresponding to the lie algebra given by the coefficients c_{abc}, which are known as structure constants. In order to get there from here, we must first study one-parameter subgroups and the exponential map.

A one-parameter subgroup of G is a connected curve in G whose elements form a subgroup. In the example of rotations SO_{3}, a one-parameter subgroup might be rotations about a single axis. This set of transformations forms a curve parameterized by the angle θ being rotated about the axis. There is a one-parameter subgroup in SO_{3} for each axis-direction. Each one-parameter subgroup must, of course, contain the identity, meaning these curves all intersect at this point on the manifold.

Now, general curves in a manifold M are often used to describe tangent vectors. Naturally, we can specialize to the case of one-parameter subgroups, now benefiting from the additional group structure. First, we search for a natural way of constructing all the one-parameter subgroups H(t) of a group G.

Since all the one-parameter subgroups intersect at the identity, we should be able to find them by looking in a neighborhood of the identity. Each one-parameter subgroup H(t) has an associated velocity vector **V**_{e} = **dH/dt** at the identity, where we will conveniently choose t = 0. We can generate the one-parameter subgroup H_{V}(t) from the velocity vector **V**_{e}, by making infinitesimal translations in the manifold. As ε tends to zero,

H_{V}(ε)^{ij} = x_{e}^{ij} + ε V_{e}^{ij}

= 1^{ij} + ε V_{e}^{ij}

Now, we've only moved an infinitesimal distance away from the identity, so we haven't yet constructed much of the subgroup at all. However, we know we can always get new group elements by multiplying old group elements together. In other words, we can translate a macroscopic distance by making a large number of microscopic translations in the subgroup, via matrix multiplication:

H_{V}(N × ε) = [ H_{V}(ε) ]^{N} = [ 1 + ε V_{e} ]^{N}

Now, if we take t = N × ε, we get:

H_{V}(t) = [ 1 + t/N V_{e} ]^{N}

Then take the limit as N → infinity (while ε → 0), and we find that our result is simply the matrix exponential:

H_{V}(t) = exp_{matrix}{t **V**_{e}}, evaluated via power series expansion:

H_{V}(t) = 1 + t **V**_{e} + ½ t^{2} **V**_{e} • **V**_{e} + ...
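The limiting procedure itself can be tested numerically. Below we compare [1 + (t/N)V]^{N} for large N against the matrix exponential (computed here with scipy.linalg.expm), for one sample choice of V:

```python
import numpy as np
from scipy.linalg import expm

V = np.array([[0.0, -1.0], [1.0, 0.0]])  # a sample tangent vector at e
t, N = 0.7, 200_000

# H_V(t) ~ (1 + (t/N) V)^N for large N
H = np.linalg.matrix_power(np.eye(2) + (t / N) * V, N)

assert np.allclose(H, expm(t * V), atol=1e-4)
```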

We have found one-parameter subgroups H_{V}(t) by just looking at the tangent space T_{e}G. We can be sure that this exhausts all connected one-parameter subgroups, because they must all pass through the identity, and hence be generated by this procedure. Since T_{e}G = £[ G ] is just the lie algebra of G, we see that there is a correspondence between one-parameter subgroups and elements of the lie algebra. The relationship becomes more apparent upon realizing that the velocity of a one-parameter subgroup is always left-invariant:

**L**_{H(t) ∗} V_{e} = **V**_{H(t)} = **dH/dt**.

This claim can be verified immediately, since we have an explicit formula for both H(t) and the pushforward map. The pushforward of a vector is just given by matrix multiplication on the matrix-coordinate components of **V**_{e}:

**L**_{H(t) ∗} V_{e} = H(t) • **V**_{e}

= (1 + t **V**_{e} + ½ t^{2} **V**_{e} • **V**_{e} + ...) • **V**_{e}

and dH/dt is easily computed:

dH/dt = **V**_{e} + t **V**_{e} • **V**_{e} + ½ t^{2} **V**_{e} • **V**_{e} • **V**_{e} + ...

= H(t) • **V**_{e}

We could have also simply noted that the equation **dH/dt** = H • **V**_{e} is the differential equation often used to define the matrix exponential.

So, the velocity of a one-parameter subgroup is left-invariant. We can also go the other direction. Given a left-invariant vector field **V**_{p}, we can produce a one-parameter subgroup H(t) whose tangent vector is **V**_{H(t)} at every point H(t) along the curve. Thus, we will show that this relationship between the lie algebra and one-parameter subgroups is one-to-one.

On a general manifold M, we have seen that there is a correspondence between curves and vectors; vectors are the velocities of curves. Therefore, if we have a smoothly varying nonintersecting family of curves filling M, we can produce a smooth vector field on M, by taking the velocities of these curves at each point. Now let's turn that around. Given a vector field **V(p)** on M, we can (at least locally) produce a unique family of curves, known as the set of integral curves of **V**. Each point p lies on exactly one curve, and the velocity of the curve at that point is exactly the vector given by V(p) at that point. The transition from vectors to curves in M is carried out via the *vector* exponential map (not to be confused with the matrix exponential used earlier, though we shall soon see that they conveniently produce the same curve in a lie group).

Computationally, exp_{vector}{t**V**} translating points along curves through g ∈ G can be expressed in the following manner:

exp_{vector}{t**V**} x^{ij}(g) = (1 + t V^{kl} ∂/∂x^{kl} + ½ t^{2} V^{kl} ∂/∂x^{kl} V^{mn} ∂/∂x^{mn} + ...) x^{ij}(g)

Now, ∂x^{αβ}/∂x^{γδ} = δ^{α}_{γ} δ^{β}_{δ}, so the derivatives that land directly on x^{ij} just produce Kronecker deltas, and we get:

exp_{vector}{t**V**} x^{ij}(g) = x^{ij} + t V^{ij} + ½ t^{2} V^{kl} ∂V^{ij}/∂x^{kl} + ...

Now, we see what this map looks like when V is a left-invariant vector field, and g is the identity, i.e. x^{ij}(e) = 1^{ij}, V_{t}^{ij} = x^{im}(t) V_{e}^{mj}:

exp_{vector}{t**V**} x^{ij}(e) = 1^{ij} + t V^{ij} + ½ t^{2} V^{kl} V^{mj} δ^{i}_{k} δ^{m}_{l} + ...

= 1 + t **V** + ½ t^{2} **V** • **V** + ...

In other words,

exp_{vector}{t**V**} = exp_{matrix}{t**V**}

When V is a left-invariant field, the abstract exponential map from tangent vectors T_{e}G to the manifold G is just the matrix exponential acting on the matrix-valued lie algebra element. We can intuitively interpret from this that the manifold structure of left-invariant fields mimics the group structure; the vector commutator is the same as the matrix commutator, and the vector exponential map is just given by matrix exponentiation.

Given a left-invariant vector field V(p), we look at the integral curve passing through the identity. Since the vector exponential map is the same as the matrix exponential map given above, the curve generated in this fashion is always a one-parameter subgroup. Thus, we have a natural map between the lie algebra and one-parameter subgroups. We are almost ready to use this to generate the group, G.

Every group element in the identity component of a compact Lie group G lies in some one-parameter subgroup, and can therefore be expressed as g = exp{**A**_{g}}.

We now appeal to a mathematical theorem known as the Baker-Campbell-Hausdorff formula. If A and B are matrices,

exp{A} • exp{B} = exp{A + B + ½[ A,B ] + (more commutators of A and B)}

If we can express an element of G as an exponential of an element of the lie algebra, we can derive the group multiplication structure from the commutation relations:

g_{1} • g_{2} = exp{**A**_{1}} • exp{**A**_{2}} = exp{**A**_{1} + A_{2} + commutation relations}.

Thus, we should always be able to derive the group structure of G from the algebraic structure of £ [G]. We can't quite get back the whole group, for a few reasons which will be explained in a moment, but we can come close.
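The Baker-Campbell-Hausdorff relation quoted above can be checked numerically for small matrices, truncating after the first few commutator terms (the truncation is only accurate when A and B are small, hence the small scale):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = 0.01 * rng.normal(size=(3, 3))
B = 0.01 * rng.normal(size=(3, 3))
comm = lambda X, Y: X @ Y - Y @ X

# Baker-Campbell-Hausdorff, truncated after the third-order commutators:
bch = A + B + 0.5 * comm(A, B) \
      + (comm(A, comm(A, B)) - comm(B, comm(A, B))) / 12.0

# exp(A) exp(B) = exp(A + B + (1/2)[A,B] + ...), up to truncation error
assert np.allclose(expm(A) @ expm(B), expm(bch), atol=1e-6)
```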

**Examples**

- Let's simply start with the group G = GL_{n}R. What would the lie algebra look like? We can represent an element close to the identity, g(ε), like so:

g(ε) = 1 + ε**A** + O(ε^{2}), where **A** ∈ £ [ G ].

For sufficiently small ε, g(ε) is invertible no matter what **A** is, so **A** can be any n × n matrix (including matrices with zero determinant). Thus, the lie algebra of GL_{n}R is the set of all n × n matrices, with no restrictions.

- G = SL_{n}R. We have one restriction this time: det(g) = 1. Let's see how this manifests itself in the algebra:

g(ε) = 1 + ε**A** + O(ε^{2})

Since the off-diagonal entries are all of order ε, their products only contribute at O(ε^{2}); to first order, the determinant is just the product along the diagonal:

det(g) = (1 + εA^{11})(1 + εA^{22})...(1 + εA^{nn}) + O(ε^{2})

= 1 + ε(A^{11} + A^{22} + ... + A^{nn}) + O(ε^{2}).

This second term is just the trace of **A**:

det(g) = 1 + ε tr(**A**) + O(ε^{2})

We see that the restriction det(g) = 1 is equivalent to the requirement that tr(**A**) = 0. Thus, the lie algebra of SL_{n}R is the set of all traceless n × n matrices.
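A quick numerical illustration (numpy and scipy assumed): project a random matrix onto the traceless matrices, exponentiate, and check that the result has unit determinant (this is the identity det(e^{A}) = e^{tr A} at work):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
A -= np.trace(A) / 3 * np.eye(3)   # project onto the traceless matrices

g = expm(A)                         # exponentiate an sl_3(R) element
assert np.isclose(np.linalg.det(g), 1.0)   # lands in SL_3(R)
```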

- G = SO_{n}. Now our group requirement is that g^{T} • g = 1.

(1 + ε**A**^{T}) • (1 + ε**A**) = 1

1 + ε(**A** + **A**^{T}) + O(ε^{2}) = 1

Thus, our requirement is **A**^{T} = -**A**. The lie algebra of SO_{n} is just the set of antisymmetric n × n matrices.
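Again this is easy to check numerically: exponentiating an antisymmetric matrix always produces an orthogonal matrix (in fact one of unit determinant, landing in SO_{4} in this sketch):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
M = rng.normal(size=(4, 4))
A = M - M.T          # an antisymmetric matrix, A^T = -A

R = expm(A)
assert np.allclose(R.T @ R, np.eye(4))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # and in fact in SO_4
```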

- G = O_{n}. It should come as no surprise that this group has the same lie algebra as SO_{n}, since they locally look the same; SO_{n} is just the connected component of O_{n} which is connected to the identity. Since the exponential map acts locally, in infinitesimal connected increments, it only maps to points on SO_{n}. This is one of the ways that two distinct lie groups can have the same lie algebra.

- G = GL_{n}C. It should be clear that £ [G] is the set of all n × n complex matrices, with no restrictions.

- G = U_{n}. By analogy with our treatment of SO_{n}, we can replace all transposes with hermitian conjugates. Our lie algebra is then just the set of all n × n anti-hermitian matrices, **A**^{†} = -**A**. Physicists often relabel these as **A** = i**B**, where **B** is a hermitian matrix, **B**^{†} = **B**.

- G = SU_{n}. There is an additional requirement, that det(g) = 1. This, we have seen, means that tr(**A**) = 0. In other words, the lie algebra of SU_{n} is the set of all traceless anti-hermitian matrices (or, if you're a physicist, it's the set of traceless hermitian matrices).

- Special Case: G = SU_{2}. The set of all 2 × 2 traceless, anti-hermitian matrices is a three-dimensional vector space, which can be spanned by the basis {**e**_{k}}:

        | 0   i/2 |         | 0  -1/2 |         | i/2   0  |
  e_1 = | i/2  0  |,  e_2 = | 1/2  0  |,  e_3 = | 0   -i/2 |

These are related to the pauli matrices σ_{1}, σ_{2}, σ_{3} by e_{1} = ½iσ_{1}, e_{2} = -½iσ_{2}, e_{3} = ½iσ_{3} (the pauli matrices are more commonly used by physicists). You may check for yourself that the lie algebra is summed up by the commutation relation:

[ **e**_{i}, e_{j} ] = ε_{ijk} **e**_{k}

where ε_{ijk} is the totally antisymmetric epsilon tensor. Thus, the structure constants of this lie algebra are given by c_{ijk} = ε_{ijk}.

Another useful relation you can quickly check: e_{1}^{2} = e_{2}^{2} = e_{3}^{2} = -¼

(Notice that the multiplicative structure of the lie algebra of SU_{2} is very similar to that of the quaternion group given in "A Few Examples of Groups" near the beginning of this writeup.)
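The commutation relations and the squares e_{k}^{2} = -¼ can be verified directly from the basis matrices displayed above (numpy assumed):

```python
import numpy as np

# The basis of 2x2 traceless anti-hermitian matrices given above
e1 = np.array([[0, 0.5j], [0.5j, 0]])
e2 = np.array([[0, -0.5], [0.5, 0]], dtype=complex)
e3 = np.array([[0.5j, 0], [0, -0.5j]])
comm = lambda X, Y: X @ Y - Y @ X

# [e_i, e_j] = eps_{ijk} e_k, checked cyclically
assert np.allclose(comm(e1, e2), e3)
assert np.allclose(comm(e2, e3), e1)
assert np.allclose(comm(e3, e1), e2)

for e in (e1, e2, e3):
    assert np.allclose(e @ e, -0.25 * np.eye(2))  # e_k^2 = -1/4
    assert np.allclose(e.conj().T, -e)            # anti-hermitian
    assert np.isclose(np.trace(e), 0)             # traceless
```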

We can use the {e_{k}} to discover the manifold structure of SU_{2}. Look at the set of matrices that can be expressed as a real linear combination of the {e_{k}}'s and the identity:

A = 2x e_{1} + 2y e_{2} + 2z e_{3} + w 1,

where the factors of two were added for later convenience. This set is a group if we omit matrices with determinant zero. This group is larger than SU_{2}, seen readily by noting that it is four-dimensional, rather than 3-dimensional, as SU_{2} should be. Near the identity (x,y,z small, w close to 1), we can remain in the group SU(2) by setting up a proper relationship between the coefficients (x,y,z,w). In fact, we can do this in the vicinity of any element of SU_{2}, since the group structure is the same all over the manifold. The relationship can be determined by requiring A • A^{†} = 1:

A • A^{†} = (2x e_{1} + 2y e_{2} + 2z e_{3} + w) • (-2x e_{1} - 2y e_{2} - 2z e_{3} + w)

The cross terms between w and the matrices cancel:

= -(2x e_{1} + 2y e_{2} + 2z e_{3})^{2} + w^{2}

And the cross terms between matrices cancel, because they anticommute:

= -4x^{2} e_{1}^{2} - 4y^{2} e_{2}^{2} - 4z^{2} e_{3}^{2} + w^{2}

Now, e_{k}^{2} = -¼, so

A • A^{†} = (x^{2} + y^{2} + z^{2} + w^{2}) 1 = 1

We restrict ourselves from what is essentially R^{4} with coordinates (x,y,z,w) to the three-dimensional subgroup SU_{2}, using the condition that x^{2} + y^{2} + z^{2} + w^{2} = 1. Therefore, the manifold SU_{2} is nothing other than the three-sphere, S^{3}.
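We can test this identification numerically: pick a random point (x, y, z, w) on the unit three-sphere, build A = 2x e_{1} + 2y e_{2} + 2z e_{3} + w 1 (written out entry-by-entry below, using the basis from earlier), and check that it lands in SU_{2}:

```python
import numpy as np

rng = np.random.default_rng(5)

# A random point on the unit three-sphere S^3 in (x, y, z, w) coordinates
v = rng.normal(size=4)
x, y, z, w = v / np.linalg.norm(v)

# A = 2x e1 + 2y e2 + 2z e3 + w 1, expanded entry-by-entry
A = np.array([[w + 1j * z, 1j * x - y],
              [1j * x + y, w - 1j * z]])

assert np.allclose(A.conj().T @ A, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(A), 1.0)        # determinant one
```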

**The Rest of the Lie Algebra Story**

We've seen that a lie algebra can be produced from a lie group. Can distinct lie groups give rise to the same lie algebra? The answer is yes, for a couple of reasons.

First, there is the case that G might be disconnected, as in the case with O_{n} (example 4). The lie algebra for SO_{n} is identical to that of O_{n} just because there is some additional global structure to O_{n}, which is unreachable through the exponential map. There are more interesting examples, summed up with the following claim:

If lie groups G_{1}, G_{2}, ..., G_{i} all have the same lie algebra, then amongst all the {G_{k}} there is one that is simply connected. Call it G. Then all the other {G_{k}} can be written in terms of G as G_{k} = G / D_{k}, where D_{k} is a discrete normal subgroup of G. The quotient map G → G_{k} = G / D_{k} tells us how to construct G_{k}: take G and create an equivalence relation by identifying g ≅ hg, for all h ∈ D_{k} and all g ∈ G. In particular, all h ∈ D_{k} get mapped to the identity in G_{k} = G / D_{k}. An example of this follows.

**The Manifold Structure of SO**_{3}

Recall that the lie algebra of SO_{3} is the set of 3 × 3 antisymmetric matrices (which are automatically traceless). This set has dimension three (it had better, since that's the dimension of the group). We can write down a basis for £ [ SO_{3} ]:

       | 0  0  0 |        |  0  0  1 |        | 0 -1  0 |
L1 =   | 0  0 -1 |,  L2 = |  0  0  0 |,  L3 = | 1  0  0 |
       | 0  1  0 |        | -1  0  0 |        | 0  0  0 |

You can check that these three basis elements satisfy the commutation relation:

[ **L**_{i}, L_{j} ] = ε_{ijk} **L**_{k}

Exactly the same commutation relations we found for the lie algebra of SU_{2}! In other words, the group structure of SO_{3} is locally identical to the group structure of SU_{2}. To put it yet another way, there is an isomorphism between a neighborhood of the identity in SO_{3} and a neighborhood of the identity in SU_{2}. Let's see if we can find a relationship between their global structures.

We can use the exponential map to find group elements in SO_{3} corresponding to group elements in SU_{2}. Let's look at the one-parameter subgroups generated by **e**_{1} and **L**_{1}. In the case of SU_{2},

exp{t **e**_{1}} = Σ (1/k!) t^{k} e_{1}^{k} = 1 + t e_{1} + ½ t^{2} e_{1}^{2} + ...

Remember, e_{1}^{2} = -¼, meaning if we look at powers of (2 e_{1}), they follow the same cyclic multiplication rules as i = √-1. In other words,

exp{t **e**_{1}} = exp{½t (2e_{1})} = cos(½t) + 2e_{1}sin(½t).

If you are not convinced by this argument, just expand the power series for the exponential, and you will find this to be true by nearly the same proof as that which shows e^{iθ} = cosθ + isinθ.

We can now write this in matrix form:

exp{t **e**_{1}} = cos(½t) + 2e_{1}sin(½t)

    | 1  0 |             | 0  i |
  = | 0  1 | cos(t/2) +  | i  0 | sin(t/2)

    | cos(t/2)   isin(t/2) |
  = | isin(t/2)  cos(t/2)  |

Now, we find the same one-parameter subgroup in SO_{3}:

exp{t **L**_{1}} = Σ (1/k!) t^{k} L_{1}^{k}

= 1 + t L_{1} + ½ t^{2} L_{1}^{2} + ...

Now, L_{1}^{2} is the following matrix:

| 0 0 0 |
| 0 -1 0 |
| 0 0 -1 |

which is just -1 with the first row and column omitted. This means a couple of things: (1) the first row and column will not appear in this Taylor series, except in the first term, which is just the identity element; (2) the 2 × 2 minor that remains follows the same chain of reasoning as did 2e_{1} in the expansion for SU_{2} (again, expand it out for yourself if you need convincing). Thus, we will find that the exponential becomes:

exp{t **L**_{1}} = (1 in the first row and column) + (1_{2×2}) cos(t) + L_{1} sin(t)

| 1 0 0 |
= | 0 cos(t) -sin(t) |
| 0 sin(t) cos(t) |

We recognize this as a rotation by the angle t about the x-axis.

Now, we know that the exponential map gives us the same group structure when it acts on the same lie algebra. Thus, we can write down a homomorphism between these groups by equating the two exponential maps. Symbolically,

| 1    0       0    |
| 0  cos(t) -sin(t) |   ↔   | cos(t/2)   isin(t/2) |
| 0  sin(t)  cos(t) |       | isin(t/2)  cos(t/2)  |

Now, we can finally compare the global structures of these two groups, when viewed as manifolds. Notice what happens when we send t → t + 2π. For the group element R ∈ SO_{3}, R → R. However, for the group element U ∈ SU_{2}, U → -U. In other words, this mapping is two-to-one: you need to travel twice the parameter distance in SU_{2} to get back to the same point. Therefore, this equivalence is a two-to-one quotient map. To put it another way,

SO_{3} ≅ SU_{2} / Z_{2}.

Since SU_{2} ≅ S^{3} is the three-sphere, SO_{3} is homeomorphic to the three-sphere after identifying antipodal points. You may recognize this space as RP^{3}, three-dimensional projective space. It can also be thought of as the space of lines passing through the origin in R^{4}.

SO_{3} ≅ RP^{3}

In summary, we start with a given lie algebra, and from this, we can produce two separate groups: SU_{2} and SO_{3}. Since the entirety of each group can be expressed in terms of the exponential map, they are implicitly related by a two-to-one mapping which preserves the group structure: SU_{2} → SO_{3}. SU_{2} is the simply connected lie group with structure constants ε_{ijk}. SO_{3} is another lie group, with the same structure constants ε_{ijk}. SO_{3} ≅ SU_{2} / Z_{2}. This relationship can be viewed as both a group-quotient and a topology-quotient.
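The two-to-one nature of the covering map can be seen directly in the exponentials computed earlier: advancing the parameter by 2π returns the SO_{3} element to itself but flips the sign of the SU_{2} element. A numerical sketch (numpy and scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

e1 = np.array([[0, 0.5j], [0.5j, 0]])                   # su(2) generator
L1 = np.array([[0, 0, 0], [0, 0, -1.0], [0, 1.0, 0]])   # so(3) generator

t = 1.234
U, U2 = expm(t * e1), expm((t + 2 * np.pi) * e1)
R, R2 = expm(t * L1), expm((t + 2 * np.pi) * L1)

assert np.allclose(R2, R)      # SO_3: back to the same rotation
assert np.allclose(U2, -U)     # SU_2: picks up a sign, the 2-to-1 cover
```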

**Realizations and Representations**

Throughout this discussion, we've interchangeably viewed Lie Groups from two different perspectives:

- On the abstract level, where lie groups are manifolds with a given multiplication law.
- Via concrete parameterizations of points in a lie group, using matrices (for which the group multiplication law is turned into matrix multiplication).

We should briefly flesh out the second of these perspectives, as we've been using this description without really identifying it. To do so, we need a few definitions:

A realization of a lie group G is given by the *action* of G on a manifold M. An action of G on M is a differentiable map G × M → M, which can be symbolically written (with a slight abuse of notation) as g • p, where g ∈ G, p ∈ M. The following rules must hold:

- e • p = p (where e is the identity in G)
- (g_{1} • g_{2}) • p = g_{1} • (g_{2} • p)

Each element of G is "realized" as a particular transformation of M. For example, take G = SO_{3}, M = S^{2}. Each element in SO_{3} can be realized as a rotation, which maps points on the sphere to other points on the sphere.

There are a few ways that G can have an action on itself, setting M = G. For example, we have already seen the realization L_{g}, left multiplication by g. L_{g} always provides an action of G on itself. Another example is conjugation, C_{g}, where C_{g}(h) = g • h • g^{ -1}.

A realization of G is called faithful if distinct group elements act as distinct transformations. That is, if g_{1} ≠ g_{2}, then g_{1} • p ≠ g_{2} • p for some p ∈ M.

A realization of G on a vector space, M = V, is called a representation. If the vector space has dimension n, then we say that it is an n-dimensional representation.

There is a concrete, useful way of thinking about representations. We can always choose a basis for V, {**e**_{i}}. We use this to express any vector **X** = X^{i} **e**_{i}. Then the action of g ∈ G on **X** ∈ V is g • **X** = **Y** = Y^{j} **e**_{j}. Then the relationship between {X^{i}} and {Y^{j}} can be summarized by a matrix:

**Y** = g • **X**

Y^{i} = M^{ij}(g) X^{j}

In other words, g • **e**_{i} = M^{ji}(g) **e**_{j}

Thus, we have an explicit representation of every g ∈ G via an n × n matrix. More formally, when a lie group G acts on an n-dimensional vector space V, we get a homomorphism Φ: G → {collection of n × n matrices}.

Note: Although we defined O_{n}, SO_{n}, U_{n}, and SU_{n} in terms of n × n matrices, we can have m-dimensional representations of these groups.

**Examples**

For a trivial example, take G = SU_{4}. We can construct an 8-dimensional representation by mapping each 4 × 4 matrix to a block-diagonal 8 × 8 matrix:

| M 0 |
| 0 M |

where M is a 4 × 4 matrix ∈ SU_{4}. We could also add rows and columns by inserting the identity down the rest of the diagonal, e.g.

| M 0 0 |
| 0 1 0 |
| 0 0 1 |

but these are all rather silly examples.

For a more interesting example, take G = SU_{2}. It has a 3-dimensional representation, {e_{i}/2} → {L_{i}}, which we fleshed out earlier. There is a subtlety, though, as this is not a faithful representation. We've already shown that this is a two-to-one mapping: ±1 in SU_{2} gets mapped to the identity in SO_{3}. Thus, unit-determinant orthogonal matrices can be thought of as either an unfaithful representation of SU_{2} or the defining representation of SO_{3}.

**At the Level of Lie Algebras**

Representations can also be described in the lie algebra of a group. A d-dimensional representation of £ [ G ] is a map Γ from elements **A**_{i} of the lie algebra £ [ G ] to d × d matrices, Γ(**A**_{i}).

This map must preserve the vector structure of £ [ G ] by being a linear operator, but it must also preserve the algebraic structure, so that [ Γ(**A**_{i}), Γ(**A**_{j}) ] = Γ ( [ **A**_{i}, **A**_{j} ] ) = c_{ijk} Γ(**A**_{k}). The lie algebras we derived above for U_{n}, SU_{n}, SO_{n}, etc. can be considered the defining representations of these lie algebras, but the basic vector and algebraic structure is independent of representation.

**Equivalent Representations**

Let Φ(g) be a d-dimensional representation for G. Consider a new representation, Φ', given by conjugating every Φ(g) by a d × d matrix:

Φ'(g) = S • Φ(g) • S^{ -1}

You can check that Φ' has the same group structure, and since conjugation is one-to-one, Φ' is faithful whenever Φ is. Φ' is said to be "equivalent" to Φ.
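A quick numerical illustration (the choice of SO_{2} plane rotations for Φ and the particular matrix S are my own): conjugating every Φ(g) by a fixed invertible S preserves the homomorphism property, because the S^{ -1} • S pairs cancel in any product.

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 1.0]])  # any invertible matrix defines an equivalence
S_inv = np.linalg.inv(S)

def phi(t):
    """A sample 2-dimensional representation: rotation of the plane by angle t."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def phi_prime(t):
    """The equivalent representation Phi'(g) = S Phi(g) S^{-1}."""
    return S @ phi(t) @ S_inv

s, t = 0.3, 0.9
# Phi' is still a homomorphism: Phi'(g) Phi'(h) = Phi'(g h).
assert np.allclose(phi_prime(s) @ phi_prime(t), phi_prime(s + t))
```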

A representation Φ of G is said to be completely reducible if it is equivalent to a block-diagonal representation,

| Φ1 0 0 ... |
| 0 Φ2 0 ... |
| 0 0 Φ3 ... |
| ... Φn |

Where each of the Φ_{k} is a representation of G. In other words, Φ can just be built from simpler representations, using the direct sum operation (which we are about to describe).

**Building Representations from Other Representations**

There are two basic operations we can use to produce a representation Φ_{3} from two given representations Φ_{1} and Φ_{2}:

- Direct Sum Representations
Given two representations of a group, Φ_{1}(g) and Φ_{2}(g), we can create another representation by writing the two representations in block-diagonal form:

Φ1 ⊕ Φ2 = | Φ1 0 |
          | 0 Φ2 |

This representation contains both Φ_{1} and Φ_{2} intact, since we are really not changing any of the matrix multiplication. This gives us a d_{1} + d_{2} dimensional representation acting on the vector space V_{1} ⊕ V_{2} (this is a direct sum of vector spaces, a related but distinct concept).
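As a sketch (the two representations chosen here, plane rotations and the trivial representation, are illustrative assumptions), the block-diagonal construction can be checked to respect group multiplication:

```python
import numpy as np

def direct_sum(A, B):
    """Place A and B on the diagonal of a larger block matrix."""
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]),
                   dtype=np.result_type(A, B))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

def phi1(t):  # a 2-dimensional representation: rotations of the plane
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def phi2(t):  # a 1-dimensional (trivial) representation
    return np.array([[1.0]])

s, t = 0.7, 1.1
# (Phi1 (+) Phi2)(g.h) = (Phi1 (+) Phi2)(g) . (Phi1 (+) Phi2)(h):
lhs = direct_sum(phi1(s + t), phi2(s + t))
rhs = direct_sum(phi1(s), phi2(s)) @ direct_sum(phi1(t), phi2(t))
assert np.allclose(lhs, rhs)  # a (2 + 1)-dimensional representation
```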

- Direct Product Representations
We can also define an action on the product vector space, V_{1} ⊗ V_{2}. Let's first make sure we understand the direct product of two vector spaces. Concretely, we can formulate a basis from the bases {**e**_{i}}, {**f**_{j}} of V_{1} and V_{2}, respectively. Then, a given element of V_{1} ⊗ V_{2} can be written as a^{ij} **e**_{i} ⊗ **f**_{j}. Thus, the set {**e**_{i} ⊗ **f**_{j}} is a basis for the direct product space.

Now, if V_{1} has a representation Φ_{1}, and V_{2} has a representation Φ_{2}, we can act on the direct product space by acting on the first basis vector with Φ_{1} and the second with Φ_{2}:

Φ_{V1 ⊗ V2} [ W^{ij} **e**_{i} ⊗ f_{j} ] = W^{ij} Φ_{V1 ⊗ V2} [ **e**_{i} ⊗ f_{j} ] = W^{ij} Φ_{1} [ **e**_{i} ] ⊗ Φ_{2} [ **f**_{j} ]

We get a d_{1} × d_{1} matrix tensored with a d_{2} × d_{2} matrix. This can be thought of as a single large matrix, whose rows and columns are specified by two indices each: the (ij),(kl)'th entry is the product of the (ik)'th component of Φ_{1} with the (jl)'th component of Φ_{2}. This matrix representation will have dimensions (d_{1}d_{2}) × (d_{1}d_{2}).
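Concretely, this big matrix is the Kronecker product, and its mixed-product property is exactly the statement that the direct product of two representations is again a representation. A quick check with arbitrary stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for Phi_1(g), Phi_1(h) (d1 = 2) and Phi_2(g), Phi_2(h) (d2 = 3):
A, C = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
B, D = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

# Mixed-product property: kron(A, B) @ kron(C, D) == kron(A @ C, B @ D),
# so multiplying the (d1 d2) x (d1 d2) matrices agrees with multiplying
# in each factor separately.
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
assert np.kron(A, B).shape == (6, 6)
```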

**The Adjoint Representation**
As stated before, we can define an action of G on itself by conjugation. This specifies a realization, which we will call "Ad".

Ad_{g'}: G → G, all g' ∈ G

h → g' • h • g'^{ -1}

We can extract from this realization an action of G on its tangent space, given by the pushforward map:

Ad_{g' ∗}: T_{g}G → T_{g' • g • g'^{ -1}}G, all g' ∈ G

Now, this isn't an action unless it maps the same space to itself. Fortunately, conjugation maps the identity to itself, so if we take g = e, the result is a bona fide realization.

Ad_{g' ∗}: T_{e}G → T_{e}G, all g' ∈ G

In fact, since T_{e}G is a vector space, this is a representation of G. It is fairly easy to show that the pushforward map also manifests itself via conjugation, but now acting on lie algebra elements. We show this by thinking of the pushforward map as a map from curves to curves. Specifically, we can look at Ad_{g}'s action on the one-parameter subgroup corresponding to the lie algebra element, **A**:

Ad_{g} (e^{A}) = g • e^{A} • g^{ -1}

= g • (1 + **A** + ½ **A**^{2} + ...) • g^{ -1}

= 1 + g • **A** • g^{ -1} + ½ g • **A** • **A** • g^{ -1} + ...

Now we use a common trick, which is to insert the identity 1 = g^{ -1} • g between adjacent copies of **A**:

Ad_{g} (e^{A}) = 1 + **g • A • g**^{ -1} + ½ **g • A • g**^{ -1} • **g • A • g**^{ -1} + ...

Since we can do this in between every two copies of **A**, we can note that this is now an exponential expansion in the conjugate, **g • A • g**^{ -1}:

Ad_{g} (e^{A}) = e^{g • A • g^{ -1}}
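This identity, g • e^{A} • g^{ -1} = e^{g • A • g^{ -1}}, is easy to confirm numerically. The sketch below hand-rolls a truncated Taylor series for the matrix exponential (adequate for small matrices; `scipy.linalg.expm` would be the usual library choice):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series: sum of M^n / n!."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# A: an anti-Hermitian matrix (a lie algebra element of U(2));
# g: some invertible matrix to conjugate by.
A = np.array([[1j, 2], [-2, -1j]])
g = np.array([[1.0, 1.0], [0.0, 1.0]])
g_inv = np.linalg.inv(g)

# Conjugating the exponential equals exponentiating the conjugate:
assert np.allclose(g @ expm(A) @ g_inv, expm(g @ A @ g_inv))
```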

In a nutshell, the pushforward of conjugation is still conjugation, just acting on elements of the lie algebra. Moreover, conjugation in this sense is now a representation, not just a realization. It is possible to construct matrices g_{ij} which act on the lie algebra elements A_{k}, via:

g **A**_{i} g^{ -1} = g_{ik} **A**_{k}

This representation {g_{ij}} is known as the adjoint representation.

**The Adjoint Representation, at the Level of Lie Algebras**

The adjoint representation must have a manifestation in the lie algebra of G. What would this look like? We can find out by setting g = exp{**A**_{a}} and looking at the result of the adjoint map acting on **A**_{b}:

**Ad**_{g ∗} A_{b} = exp{**A**_{a}} • **A**_{b} • exp{-**A**_{a}}

= (1 + **A**_{a} + ½ **A**_{a}^{2} + ...) • **A**_{b} • (1 - **A**_{a} + ½ **A**_{a}^{2} - ...)

We can write this expression in terms of commutators, if we multiply it all out:

= **A**_{b} + [ **A**_{a}, A_{b} ] + ½ [ **A**_{a}, [ **A**_{a}, A_{b} ] ] + ...

Of course, the commutator is how we define multiplication in the lie algebra:

= **A**_{b} + **A**_{a} ∗ **A**_{b} + ½ **A**_{a} ∗ **A**_{a} ∗ **A**_{b} + ...

This is beginning to look a great deal like the exponential map, but in terms of the lie algebra multiplication. We can make this more explicit by invoking the structure constants:

**A**_{a} ∗ **A**_{b} = c_{abc} **A**_{c}

The expansion looks like:

**A**_{b} + c_{abc} **A**_{c} + ½ c_{abc} c_{acd} **A**_{d} + ...

Now, we express the structure constants in terms of a set of matrices {T^{a}}, where (T^{a})_{bc} = c_{acb} (note the index order: this is what makes T^{a} act correctly on the components of a lie algebra element, and for antisymmetric structure constants it equals -c_{abc})

**Ad**_{g ∗} A = **A** + T^{a} • **A** + ½ T^{a} • T^{a} • **A** + ...

And we can now think of this as a matrix exponential map. In other words,

(Ad_{g ∗})_{bc} = (exp{T^{a}})_{bc}

So, at the level of the lie algebra, the adjoint representation is given by n matrices {T^{a}}, n × n in dimension. The matrix entries are given explicitly by (T^{a})_{bc} = c_{acb}, where c_{abc} are the structure constants of the algebra. Since this is a representation of the algebra, the {T^{a}} must satisfy:

[ T^{a}, T^{b} ] = c_{abc} T^{c}
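For so(3), where c_{abc} = ε_{abc}, these adjoint matrices can be written down and checked directly. A sketch (note that index conventions differ by a transpose between references, which flips the sign of the commutation relations):

```python
import numpy as np

def levi_civita():
    """The totally antisymmetric symbol eps[a, b, c]."""
    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c] = 1.0   # even permutations of (0, 1, 2)
        eps[a, c, b] = -1.0  # odd permutations
    return eps

eps = levi_civita()
# Adjoint matrices of so(3): (T^a)_{bc} = eps_{acb} = -eps_{abc}.
T = [-eps[a] for a in range(3)]

# Verify [T^a, T^b] = eps_{abc} T^c for every pair (a, b).
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        expected = sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, expected)
```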

**Summary and Application**

The two mathematical concepts of a group and a manifold merge to form a lie group. Useful examples of lie groups can be formed as subgroups of GL_{n}R and GL_{n}C, in specific matrix representations. When we look at the tangent space of a lie group, we find that the group structure of the manifold endows a multiplication law onto its tangent space, in addition to its additive group structure as a vector space. Thus, the tangent space of a lie group forms an algebra. The multiplication law is given specifically by the lie bracket of left-invariant vector fields, which are in one-to-one correspondence with tangent vectors at the identity.

The lie group structure therefore fixes the lie algebra structure, given by the structure constants c_{ijk}. The lie algebra can be used to reconstruct the group, or at least part of it. The exponential map reconstructs the manifold at the level of points, and the multiplication law of the group can be derived from the commutation relations, whenever group elements can be expressed as exponentials of lie algebra elements: e^{A} • e^{B} = e^{A + B + [ A, B ] + ...}.

Instead of considering a group as an abstract mathematical object, it can literally be realized by its action on a set. In the case that this set is a vector space, the realization acts by linear transformations, and it is known as a representation. Representations are usually presented by mapping group elements to matrices, while preserving the group structure. One choice of representation that appears often in physics is the adjoint representation, where the lie group acts on its own lie algebra, by conjugation. At the level of the lie algebra, the adjoint representation can be embodied by viewing the structure constants c_{ijk} as a set of n matrices of dimension n × n.

In physics, lie groups manifest themselves as symmetries. For example, three-dimensional space exhibits the symmetry group SO_{3} as the set of rotations in space. Quantum mechanics tells us that these symmetries also manifest themselves at the level of particles. For example, electrons transform as 2-component spinors under rotations. In fact, it is the group SU_{2} under which electrons transform. It is possible (and, in fact, more mathematically natural) to think of the SO_{3} transformations that we are so familiar with as a mere representation of the fundamental transformation that is going on; that of SU_{2}. This is because the transformations of SO_{3} are an unfaithful representation of SU_{2}; SO_{3} only captures *half* of the transformation group. This is most apparent in the fact that the electron wavefunction picks up a minus sign when it undergoes a rotation of 2π -- something which appears quite peculiar, unless we can believe that physical rotations are fundamentally represented by SU_{2}.
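That 2π minus sign can be seen directly in the standard spin-½ rotation operator, U(θ) = exp{-iθσ_{z}/2}:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # the Pauli matrix sigma_z

# Spin-1/2 rotation about z: U(theta) = cos(theta/2) I - i sin(theta/2) sigma_z
theta = 2 * np.pi
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sz

# A full 2*pi rotation is -1 in SU(2), even though it is the identity
# rotation in SO(3): the spinor picks up a minus sign.
assert np.allclose(U, -np.eye(2))
```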

It is a subject somewhat beyond the scope of this writeup, but each of the fundamental forces of the universe can ultimately be described by lie groups. The electromagnetic force is given by the group U_{1}, the weak nuclear force by SU_{2}, and the strong nuclear force by SU_{3}. These groups act on physical particles in various representations (i.e. SU_{2} acts on electrons, showing that they respond to weak nuclear interactions), and the representations in which the particles transform can ultimately describe how the particles interact via the various forces. Certainly there are many details to be filled in, and perhaps they will be, elsewhere in everything2.

^{+}*I normally use boldface to describe vectors, so I continued this notation as a reminder that the lie algebra of G is equivalent to its tangent space. I probably lost some consistency, as there are situations where it's better to think of A as a vector, and some where it's better to just think of it as a matrix. I hope it doesn't cause more confusion than it's worth.*