Quite possibly the most beautiful marriage of concepts in all of mathematics, a lie group is an object endowed with rich structure and great symmetry. It is also of great use to physicists in all fields, due to its intimate connection with physical symmetries. Understanding the great power and beauty of lie groups requires some mathematical rigor. However, the basic idea is not too difficult to grasp.

There are two core concepts that must be understood before tackling lie groups: group theory and manifolds.

The basic overview of these two concepts follows:

Group Theory

A group G is a collection of objects together with a composition law, or "multiplication" law • that satisfies:

  • Closure: a,b ∈ G → (a • b) ∈ G.
  • Associativity: (a • b) • c = a • (b • c)
  • An identity element exists: 1 • a = a • 1 = a, for all a ∈ G
  • Inverses exist: a⁻¹ • a = 1, for all a ∈ G
G is said to be commutative or abelian if a • b = b • a for all a,b ∈ G.

A subgroup H of G is simply a subset of G which is a closed group by itself under the inherited multiplication law.

A Few Examples of Groups

  • The integers, Z, under addition
  • R under addition, R× = R - {0} under multiplication
  • The quaternion group, Q8 = {±1, ±i, ±j, ±k}, given the following multiplication rules:
         i² = j² = k² = -1
         ij = -ji = k
         ki = -ik = j
         jk = -kj = i
    You might loosely think of i, j, and k as the three unit vectors of R³, multiplying in a fashion reminiscent of the cross product. This is an example of a non-commutative group.
  • Any vector space is a group under addition.
  • Finally, there are matrix groups, which are subgroups of GLnR (the group of n × n invertible matrices with real-valued entries) or GLnC (the same thing, but with complex-valued entries). GL stands for the "General Linear" group.
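
These multiplication rules are easy to verify by computer. Below is a minimal numerical check (assuming numpy is available) using one standard 2 × 2 complex matrix representation of the quaternion units; the particular matrices chosen are an illustration of this sketch, not part of the writeup itself:

```python
import numpy as np

# A numerical check of the quaternion multiplication rules listed above,
# using one standard 2x2 complex matrix representation of the units
# (the specific matrices are an assumption of this sketch, not the text):
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])                # i -> i*sigma_3
j = np.array([[0, 1], [-1, 0]], dtype=complex)   # j -> i*sigma_2
k = np.array([[0, 1j], [1j, 0]])                 # k -> i*sigma_1

# i^2 = j^2 = k^2 = -1
for q in (i, j, k):
    assert np.allclose(q @ q, -one)

# ij = -ji = k, ki = -ik = j, jk = -kj = i
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)
assert np.allclose(k @ i, j) and np.allclose(i @ k, -j)
assert np.allclose(j @ k, i) and np.allclose(k @ j, -i)
```

This also illustrates the last bullet: the eight matrices ±1, ±i, ±j, ±k form a (non-commutative) matrix group inside GL2C.
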

There is a great deal more to be said about groups, which is why you should read the group theory node. We, however, will now shift gears to briefly explain manifolds:


Very briefly, a manifold is an n-dimensional space. That is, it is a space which can be locally parameterized by n continuous coordinates.

A Few Examples of Manifolds:

  • Rn itself, parameterized by the usual n coordinates
  • The circle, S1, parameterized by a single angle
  • The sphere, S2, parameterized by two angles (such as latitude and longitude)

Manifolds, like groups, are deep mathematical objects, which is why they have their own node.

Lie Groups

We now meld these two important mathematical concepts together.

A lie group is a smooth manifold endowed with a group structure. Call this manifold G. Points on G can be multiplied.

There exists a smooth multiplication map
     M: G × G → G
     (g1, g2) → g1 • g2 ∈ G

There also exists a smooth inversion map
     I: G → G
     g → g⁻¹

In other words, if g1 is near g2 on the manifold, we expect h • g1 to be near h • g2. Similarly, we expect g1⁻¹ to be near g2⁻¹.


Here are a few examples where we can add a group structure to a given manifold:

R and R× provide immediate examples.

The circle, S1, where each point p is given by an angle θ(p). Points can be "multiplied" in an intuitive fashion: θ(p • q) = θ(p) + θ(q) (mod 2π). We could also think of this group as the set of complex phases, p ↔ e^{iθ(p)}, q ↔ e^{iθ(q)}. Literally multiplying these phases together gives us the same group composition law. Note that this is an abelian group. It is called U1, for reasons that will be explained soon.
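
The composition law for U1 can be sketched in a few lines of stdlib Python (the helper name phase is ours):

```python
import cmath

# Points on the circle as complex phases e^{i theta}; multiplying phases
# adds the angles, exactly the composition law described above.
def phase(theta):
    return cmath.exp(1j * theta)

p, q = phase(1.0), phase(2.5)
assert abs(p * q - phase(1.0 + 2.5)) < 1e-12   # theta(p.q) = theta(p) + theta(q)
assert abs(abs(p * q) - 1.0) < 1e-12           # closure: still on the unit circle
assert abs(phase(0.0) - 1.0) < 1e-12           # identity: theta = 0
assert abs(p * phase(-1.0) - 1.0) < 1e-12      # inverse: theta -> -theta
```
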

We cannot endow a general manifold with a group structure. For example, the sphere S2 cannot be given a lie group structure: any lie group carries a nowhere-vanishing vector field (push any nonzero vector at the identity around by left multiplication), while every vector field on S2 must vanish somewhere, by the hairy ball theorem.

The most potent examples of lie groups are matrix groups, i.e. subgroups of GLnR or GLnC. We can readily view GLnR as a manifold by choosing our coordinates to be the n² matrix entries. The only requirement is that the determinant of the matrix be nonzero, so we remove the locus det = 0. One can prove that what remains is still an n²-dimensional manifold, but our time is better spent exploring the various subgroups of GLnR and GLnC.

Examples of Subgroups of GLnR and GLnC

    Subgroups of GLnR:

  1. The Special Linear Group, SLnR: the group of n × n matrices with determinant 1. This is closed, since determinants multiply (Det[AB] = Det[A] × Det[B]). S stands for "Special", by which we mean "unit determinant". SLnR is (n² - 1)-dimensional, since the determinant condition places a one-dimensional restriction on the coordinates.
  2. The group of rotations and reflections in n dimensions can be represented by On, the group of orthogonal matrices: Aᵀ = A⁻¹, or AᵀA = 1. This is easily shown to be a closed subgroup, because (AB)ᵀ = BᵀAᵀ = B⁻¹A⁻¹ = (AB)⁻¹. The orthogonality condition AᵀA = 1 is equivalent to the condition that the columns of A form a set of orthonormal vectors in Rⁿ. This imposes n(n+1)/2 conditions, which means that On is an n(n-1)/2-dimensional space.
  3. The group of rotations only (no reflections) can be represented by SOn, the Special Orthogonal group: the orthogonal matrices with the additional unit-determinant condition. This is not a very strong restriction, as orthogonal matrices can only have determinant ±1, since
    1 = Det[1] = Det[AᵀA] = Det[Aᵀ] Det[A] = Det[A]²
    SOn has the same dimensionality as On: dim[ SOn ] = n(n-1)/2. Thus, the group of rotations in three dimensions has dimensionality 3.

    Subgroups of GLnC:

  4. Un = Unitary n × n matrices. A unitary matrix satisfies U†U = 1, where U† = (U*)ᵀ (the complex conjugate of the transpose). As a special case, if we set n = 1, U1 = the 1 × 1 complex matrices, which is really just a fancy way of expressing complex numbers, subject to the condition u*u = |u|² = 1. These are just complex phases, e^{iθ}, like those used earlier to describe the circle S1.
  5. SUn = Special Unitary matrices, i.e. unitary matrices with unit determinant.
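
The defining conditions of these subgroups are easy to test numerically. The sketch below (assuming numpy; the QR trick for producing a random orthogonal matrix is our choice, not part of the text) verifies the orthogonality and unitarity conditions and the closure argument from item 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal matrix, via QR factorization of a generic matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose(Q.T @ Q, np.eye(4))           # Q^T Q = 1
assert np.isclose(abs(np.linalg.det(Q)), 1.0)    # det = +-1, as derived above

# Closure: the product of two orthogonal matrices is orthogonal.
R, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose((Q @ R).T @ (Q @ R), np.eye(4))

# A sample unitary matrix: a diagonal matrix of complex phases.
U = np.diag(np.exp(1j * rng.standard_normal(4)))
assert np.allclose(U.conj().T @ U, np.eye(4))    # U-dagger U = 1
```
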

A distinction should be made before we continue. A lie group like SO3 can act on a manifold like S2, in that SO3 can be thought of as the group of rotations of the points on the sphere. However, the manifold SO3 is not the same as the manifold S2. It has a very different shape, which we will explore eventually, but you might quickly note that SO3 has dimension 3, while S2 is only 2-dimensional. It is important to make a clear distinction between the group manifold itself and any manifold it may act on.

One of the reasons that the structure of a lie group is so useful is that we can get from any point on G to any other point via a smooth invertible map Lg: G → G. This map is simply left multiplication in G, i.e. Lg(h) = g • h. This provides us with a smoothly varying family of maps, and as we said before, we can use this map to get from any point h1 to any other point h2, simply by setting g = h2 • h1⁻¹.

The Tangent Space of a Lie Group

Any manifold M has a tangent space TpM at each point p. This is a vector space, so we can always add elements of TpM together. In the special case that the manifold is also a group, the tangent space inherits a multiplication operation from the multiplicative structure of the group, meaning we can multiply two vectors together to get a new vector. This multiplication of vectors will be made explicit shortly. Since the tangent space is a group under addition, and also has a multiplication operation, it is what is known in mathematics as an algebra. Specifically, the tangent space at the identity, TeG, is called the Lie Algebra of G, often denoted £[G]. We will find that the lie algebra of G encodes much of the information about G itself. This is very useful, because algebras are usually much simpler to work with.

Let's make things somewhat more explicit. First, we choose our coordinates on G to be the matrix entries xij. Recall that we can think of any vector as a directional derivative, V = Vij ∂/∂xij. If you are confused by the notation, just think of "ij" as a single index specified by two numbers. We could have written this out the usual way, like Vk ∂/∂xk, where k runs from 1 to n², but that notation doesn't encode the multiplicative structure of matrices.

Intuitively, we will reconstruct G from its lie algebra by pushing our tangent space forward from the identity element to every other point g' ∈ G. This is accomplished by using the pushforward of the Lg map of left-multiplication by group elements. One way to think of the pushforward map is to note that Lg smoothly and invertibly maps points near h to points near gh, and therefore also maps curves through h to curves through gh. Since vectors are the velocities of curves, the map Lg must have a natural manifestation as a map on vectors. This map, called the pushforward map Lg ∗: ThG → TghG, can be expressed in coordinate-dependent terms, as follows.

This is the coordinate-dependent form of Lg:

(Lg(h))ij = xik(g) xkj(h)

And here is the coordinate-dependent form of the pushforward map:

(Lg ∗ V)ij|h = Vαβ ∂Lgij/∂xαβ(h)

= Vαβ xik(g) δkα δjβ

= Vkj xik(g)

= (x • V)ij

In other words, the pushforward of the Lg map is just given by the matrix operation of g on the matrix-coordinate components of V. The fact that the pushforward map has such a simple form in this coordinate representation will be of great utility to us.
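
The claim that the pushforward of Lg is plain matrix multiplication can be checked by finite differences: take any curve through h with velocity V, map the curve by Lg, and differentiate. This sketch (numpy assumed; the specific matrices are arbitrary) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # a generic group element
h = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # the base point of the vector
V = rng.standard_normal((3, 3))                   # matrix components of a vector at h

c = lambda s: h + s * V        # a curve with c(0) = h and velocity V
pushed = lambda s: g @ c(s)    # the curve mapped by L_g

# The velocity of the pushed curve at s = 0 should be g @ V.
eps = 1e-6
velocity = (pushed(eps) - pushed(-eps)) / (2 * eps)
assert np.allclose(velocity, g @ V, atol=1e-6)
```
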

Left-Invariant Fields

Now that we've introduced the lie algebra as the tangent space of the lie group, let us introduce another useful manifestation of the lie algebra: the set of left-invariant vector fields of G.

Since we have a smoothly varying family of smooth maps Lg: G → G, we can push a vector forward from any point on the manifold to any other. Specifically, we can transport a vector from the identity to every point g ∈ G simply by acting with Lg ∗. This generates a vector field on G, known as a left-invariant vector field. It is called "left-invariant" because it is invariant under the pushforward map associated with left-multiplication. The vector field X being left-invariant just means that pushing the vector X|h at h to Lg ∗ X|h at gh gives exactly the same result as simply evaluating vector field X|gh at gh.

There is a one-to-one correspondence between vectors V|e defined at the identity, and left-invariant vector fields. We can generate a unique left-invariant vector field from a vector V|e by pushing V|e forward via V(g) = Lg ∗ V|e, for all g ∈ G. Going back the other direction is even easier; we can generate a unique vector V|e from a left-invariant vector field V(g) simply by evaluating V(g) at the identity, g = e. Thus, the left-invariant vector fields give us another manifestation of the vector space TeG.

Additionally, the left-invariant fields defined on G give us something else. We can now define the multiplication law required for our lie algebra: VW = £V [ W ] = [ V, W ], the lie derivative of W with respect to the vector field V. This is also called the lie bracket, or commutator of the two vector fields, as it is expressed computationally:

[ V, W ]ij = Vgkl ∂Wgij/∂xkl - Wgkl ∂Vgij/∂xkl

We will now show that this algebra is closed, i.e. the lie bracket of two left-invariant vector fields indeed gives us another left-invariant vector field. We can express these vector fields explicitly, since they are just defined by the pushforward map:

[ Vg, Wg ]ij = xin Vekl Wemj δkn δlm - xin Wekl Vemj δkn δlm

[ Vg, Wg ]ij = xin Venl Welj - xin Wenl Velj

[ Vg, Wg ]ij = (x • Ve • We - x • We • Ve)ij

But x's action on a vector is just the pushforward map:

[ V, W ]g = Lg ∗ (Ve • We - We • Ve)

And this is manifestly left-invariant. Thus, the lie derivative has provided us with a multiplication law between two left-invariant vector fields which produces another left-invariant vector field. However, something much more exciting is also visible in this calculation. On the left side of the equation is the lie bracket, or vector commutator, of the two vector fields V and W. On the right side is the matrix commutator of the two sets of components, expressed in matrix form, Vij and Wij. So, on a lie group, the vector commutator of two left-invariant vector fields is exactly the matrix commutator of the corresponding matrix-coordinate components! Symbolically,

[ V, W ]vector = [ V, W ]matrix
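
This equality can be verified numerically. The sketch below builds two left-invariant fields V(x) = x • Ve and W(x) = x • We on GL2R, evaluates the lie bracket formula with finite-difference partial derivatives, and compares against the pushed-forward matrix commutator (numpy assumed; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
Ve, We = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x = rng.standard_normal((n, n)) + 2 * np.eye(n)   # coordinates of a generic point g

Vfield = lambda x: x @ Ve   # left-invariant fields: V(g) = g . Ve
Wfield = lambda x: x @ We

# Lie bracket: [V, W]_ij = V_kl dW_ij/dx_kl - W_kl dV_ij/dx_kl
eps = 1e-6
bracket = np.zeros((n, n))
for k in range(n):
    for l in range(n):
        dx = np.zeros((n, n)); dx[k, l] = eps
        dW = (Wfield(x + dx) - Wfield(x - dx)) / (2 * eps)
        dV = (Vfield(x + dx) - Vfield(x - dx)) / (2 * eps)
        bracket += Vfield(x)[k, l] * dW - Wfield(x)[k, l] * dV

# The result is the matrix commutator [Ve, We], pushed forward to x.
assert np.allclose(bracket, x @ (Ve @ We - We @ Ve), atol=1e-6)
```
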

From the Group to the Algebra and Back Again

It should come as no surprise that we are able to construct the lie algebra from our knowledge of the group structure. What may come as a surprise is that we can construct most of the structure of a group out of our knowledge of its lie algebra. That is, if we have a set of matrices {Xa} satisfying [ Xa, Xb ] = cabc Xc, we can reconstruct a group manifold corresponding to the lie algebra given by the coefficients cabc, which are known as structure constants. In order to get there from here, we must first study one-parameter subgroups and the exponential map.

A one-parameter subgroup of G is a connected curve in G whose elements form a subgroup. In the example of rotations SO3, a one-parameter subgroup might be rotations about a single axis. This set of transformations forms a curve parameterized by the angle θ being rotated about the axis. There is a one-parameter subgroup in SO3 for each axis-direction. Each one-parameter subgroup must, of course, contain the identity, meaning these curves all intersect at this point on the manifold.

Now, general curves in a manifold M are often used to describe tangent vectors. Naturally, we can specialize to the case of one-parameter subgroups, now benefiting from the additional group structure. First, we search for a natural way of constructing all the one-parameter subgroups H(t) of a group G.

Since all the one-parameter subgroups intersect at the identity, we should be able to find them by looking in a neighborhood of the identity. Each one-parameter subgroup H(t) has an associated velocity vector Ve = dH/dt at the identity, where we will conveniently choose t = 0. We can generate the one-parameter subgroup HV(t) from the velocity vector Ve, by making infinitesimal translations in the manifold. As ε tends to zero,

HV(ε)ij = xij(e) + ε Veij

= 1ij + ε Veij

Now, we've only moved an infinitesimal distance away from the identity, so we haven't yet constructed much of the subgroup at all. However, we know we can always get new group elements by multiplying old group elements together. In other words, we can translate a macroscopic distance by making a large number of microscopic translations in the subgroup, via matrix multiplication:

HV(N × ε) = [ HV(ε) ]^N = [ 1 + ε Ve ]^N

Now, if we take t = N × ε, we get:

HV(t) = [ 1 + (t/N) Ve ]^N

Then take the limit as N → ∞ (while ε → 0), and we find that our result is simply the matrix exponential:

HV(t) = expmatrix{t Ve}, evaluated via power series expansion:

HV(t) = 1 + t Ve + ½ t² VeVe + ...
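
The compound-interest limit above is easy to test against the power series. In this sketch (numpy assumed), Ve is the 2 × 2 rotation generator, so expmatrix{t Ve} should come out as the rotation matrix by angle t:

```python
import numpy as np

Ve = np.array([[0.0, -1.0], [1.0, 0.0]])   # a sample velocity vector at the identity
t, N = 0.7, 200_000

# [1 + (t/N) Ve]^N, the limit construction from the text
limit_form = np.linalg.matrix_power(np.eye(2) + (t / N) * Ve, N)

# The power series 1 + t Ve + (1/2) t^2 Ve Ve + ...
series, term = np.zeros((2, 2)), np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ (t * Ve) / k

assert np.allclose(limit_form, series, atol=1e-4)
# For this generator, exp(t Ve) is rotation by angle t:
expected = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert np.allclose(series, expected, atol=1e-8)
```
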

We have found one-parameter subgroups HV(t) by just looking at the tangent space TeG. We can be sure that this exhausts all connected one-parameter subgroups, because they must all pass through the identity, and hence be generated by this procedure. Since TeG = £[ G ] is just the lie algebra of G, we see that there is a correspondence between one-parameter subgroups and elements of the lie algebra. The relationship becomes more apparent upon realizing that the velocity of a one-parameter subgroup is always left-invariant:

LH(t) ∗ Ve = VH(t) = dH/dt.

This claim can be verified immediately, since we have an explicit formula for both H(t) and the pushforward map. The pushforward of a vector is just given by matrix multiplication on the matrix-coordinate components of Ve:

LH(t) ∗ Ve = H(t) • Ve

= (1 + t Ve + ½ t² VeVe + ...) • Ve

and dH/dt is easily computed:

dH/dt = Ve + t VeVe + ½ t² VeVeVe + ...

= H(t) • Ve

We could have also simply noted that the equation dH/dt = H • Ve is the differential equation often used to define the matrix exponential.

So, the velocity of a one-parameter subgroup is left-invariant. We can also go the other direction. Given a left-invariant vector field Vp, we can produce a one-parameter subgroup H(t) whose tangent vector is VH(t) at every point H(t) along the curve. Thus, we will show that this relationship between the lie algebra and one-parameter subgroups is one-to-one.

On a general manifold M, we have seen that there is a correspondence between curves and vectors; vectors are the velocities of curves. Therefore, if we have a smoothly varying nonintersecting family of curves filling M, we can produce a smooth vector field on M, by taking the velocities of these curves at each point. Now let's turn that around. Given a vector field V(p) on M, we can (at least locally) produce a unique family of curves, known as the set of integral curves of V. Each point p lies on exactly one curve, and the velocity of the curve at that point is exactly the vector given by V(p) at that point. The transition from vectors to curves in M is carried out via the vector exponential map (not to be confused with the matrix exponential used earlier, though we shall soon see that they conveniently produce the same curve in a lie group).

Computationally, expvector{tV} translating points along curves through g ∈ G can be expressed in the following manner:

expvector{tV} xij(g) = (1 + t Vkl ∂/∂xkl + ½ t² Vkl ∂/∂xkl Vmn ∂/∂xmn + ...) xij(g)

Now, ∂xαβ/∂xγδ = δαγ δβδ, so the derivatives that land directly on xij just produce Kronecker deltas, and we get:

expvector{tV} xij(g) = xij + t Vij + ½ t² Vkl ∂Vij/∂xkl + ...

Now we see what this map looks like when V is a left-invariant vector field and g is the identity, i.e. xij(e) = 1ij, Vgij = xim(g) Vemj:

expvector{tV} xij(e) = 1ij + t Vij + ½ t² Vkl Vmj δik δml + ...

= 1 + t V + ½ t² VV + ...

In other words,

expvector{tV} = expmatrix{tV}

When V is a left-invariant field, the abstract exponential map from tangent vectors TeG to the manifold G is just the matrix exponential acting on the matrix-valued lie algebra element. We can intuitively interpret from this that the manifold structure of left-invariant fields mimics the group structure; the vector commutator is the same as the matrix commutator, and the vector exponential map is just given by matrix exponentiation.

Given a left-invariant vector field V(p), we look at the integral curve passing through the identity. Since the vector exponential map is the same as the matrix exponential map given above, the curve generated in this fashion is always a one-parameter subgroup. Thus, we have a natural map between the lie algebra and one-parameter subgroups. We are almost ready to use this to generate the group, G.

Every group element in the connected component of a compact lie group G lies in some one-parameter subgroup, and can therefore be expressed as g = exp{Ag}.

We now appeal to a mathematical theorem in linear algebra, the Baker-Campbell-Hausdorff formula. If A and B are matrices,

exp{A} • exp{B} = exp{A + B + ½[ A,B ] + (more commutators of A and B)}

If we can express an element of G as an exponential of an element of the lie algebra, we can derive the group multiplication structure from the commutation relations:

g1 • g2 = exp{A1} • exp{A2} = exp{A1 + A2 + commutation relations}.

Thus, we should always be able to derive the group structure of G from the algebraic structure of £ [G]. We can't quite get back the whole group, for a few reasons which will be explained in a moment, but we can come close.
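
We can sanity-check the Baker-Campbell-Hausdorff behavior numerically: for small A and B, exp{A} • exp{B} agrees with exp{A + B + ½[ A,B ]} up to third-order terms, and dropping the commutator makes the agreement visibly worse. (A sketch assuming numpy; expm_series is our own small helper, not a library routine.)

```python
import numpy as np

def expm_series(M, terms=40):
    # Matrix exponential by its power series (fine for small matrices).
    out, term = np.zeros_like(M), np.eye(len(M))
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

rng = np.random.default_rng(3)
A = 0.005 * rng.standard_normal((3, 3))
B = 0.005 * rng.standard_normal((3, 3))
comm = A @ B - B @ A

lhs = expm_series(A) @ expm_series(B)
rhs = expm_series(A + B + 0.5 * comm)   # BCH truncated after the first commutator

assert np.allclose(lhs, rhs, atol=1e-5)   # agrees to third order in A, B
# Dropping the commutator term gives a strictly worse approximation here:
assert np.abs(lhs - expm_series(A + B)).max() >= np.abs(lhs - rhs).max()
```
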


A Few Examples of Lie Algebras

  1. Let's simply start with the group G = GLnR. What would the lie algebra look like?

    We can represent an element close to the identity g(ε), like so:

    g(ε) = 1 + εA + O(ε²), where A ∈ £ [ G ].

    For sufficiently small ε, g(ε) is invertible no matter what A is, so A can be any n × n matrix (including matrices with zero determinant). Thus, the lie algebra of GLnR is the set of all n × n matrices, with no restrictions.

  2. G = SLnR

    We have one restriction this time; det(g) = 1. Let's see how this manifests itself in the algebra:

    g(ε) = 1 + εA + O(ε²)

    Since the off-diagonal elements are O(ε), the only term contributing to the determinant at first order in ε is the product along the diagonal:

    det(g) = (1 + εA11)(1 + εA22)...(1 + εAnn) + O(ε²)

    = 1 + ε(A11 + A22 + ... + Ann) + O(ε²).

    This second term is just the trace of A:

    det(g) = 1 + ε tr(A) + O(ε²)

    We see that the restriction det(g) = 1 is equivalent to the requirement that tr(A) = 0. Thus, the lie algebra of SLnR is the set of all traceless n × n matrices.

  3. G = SOn

    Now our group requirement is that gᵀ • g = 1:

    (1 + εAᵀ) • (1 + εA) = 1 + ε(A + Aᵀ) + O(ε²) = 1

    Thus, our requirement is Aᵀ = -A. The lie algebra of SOn is just the set of antisymmetric n × n matrices.

  4. G = On

    It should come as no surprise that this group has the same lie algebra as SOn, since they locally look the same; SOn is just the connected component of On containing the identity. Since the exponential map acts locally, in infinitesimal connected increments, it only maps to points in SOn. This is one of the ways that two distinct lie groups can have the same lie algebra.

  5. G = GLnC

    It should be clear that £ [G] is the set of all n × n complex matrices, with no restrictions.

  6. G = Un

    By analogy with our treatment of SOn, we can replace all transposes with hermitian conjugates. Our lie algebra is then just the set of all n × n anti-hermitian matrices, A† = -A. Physicists often relabel these as A = iB, where B is a hermitian matrix, B† = B.

  7. G = SUn

    There is an additional requirement, that det(g) = 1. This, we have seen, means that tr(A) = 0. In other words, the lie algebra of SUn is the set of all traceless anti-hermitian matrices (or, if you're a physicist, it's the set of traceless hermitian matrices).

  8. Special Case G = SU2

    The set of all 2 × 2 traceless, anti-hermitian matrices is a three-dimensional vector space, which can be spanned by the basis, {ek}:

     | 0  i/2 |   | 0 -1/2 |   |i/2  0  |  
     |i/2  0  |,  |1/2  0  |,  | 0 -i/2 |
    These are related to the pauli matrices σ1, σ2, σ3 by e1 = ½iσ1, e2 = -½iσ2, e3 = ½iσ3 (the pauli matrices themselves are more commonly used by physicists). You may check for yourself that the lie algebra is summed up by the commutation relation:

    [ ei, ej ] = εijk ek

    where εijk is the totally antisymmetric epsilon tensor. Thus, the structure constants of this lie algebra are given by cijk = εijk.

    Another useful relation you can quickly check: e1² = e2² = e3² = -¼

    (Notice that the multiplicative structure of the lie algebra of SU2 is very similar to that of the quaternion group given in "A Few Examples of Groups" near the beginning of this writeup; indeed, 2e1, 2e2, 2e3 multiply exactly like i, j, k.)

    We can use the {ek} to discover the manifold structure of SU2. Look at the set of matrices that can be expressed as a real linear combination of the {ek}'s and the identity:

    A = 2x e1 + 2y e2 + 2z e3 + w 1,

    where the factors of two were added for later convenience. This set, omitting the matrices with determinant zero, forms a group. It is larger than SU2, as is readily seen by noting that it is four-dimensional rather than 3-dimensional, as SU2 should be. Near the identity (x, y, z small, w close to 1), we can remain in the group SU2 by setting up a proper relationship between the coefficients (x, y, z, w). In fact, we can do this in the vicinity of any element of SU2, since the group structure is the same all over the manifold. The relationship can be determined by requiring A† • A = 1:

    A† • A = (-2x e1 - 2y e2 - 2z e3 + w) • (2x e1 + 2y e2 + 2z e3 + w)

    The cross terms between w and the matrices cancel:

    = -(2x e1 + 2y e2 + 2z e3)² + w²

    And the cross terms between matrices cancel, because they anticommute:

    = -4x² e1² - 4y² e2² - 4z² e3² + w²

    Now, ek² = -¼, so

    A† • A = (x² + y² + z² + w²) 1 = 1

    We restrict ourselves from what is essentially R⁴ with coordinates (x, y, z, w) to the three-dimensional subgroup SU2, using the condition x² + y² + z² + w² = 1. Therefore, the manifold SU2 is nothing other than the three-sphere, S³.
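
Several of the claims in these examples can be verified numerically in one go: traceless matrices exponentiate to determinant 1, antisymmetric matrices exponentiate to orthogonal matrices, the {ek} basis satisfies the stated commutation relations, and A†A reproduces the three-sphere condition. (A sketch assuming numpy; expm_series is our own helper.)

```python
import numpy as np

def expm_series(M, terms=60):
    # Matrix exponential by its power series.
    out, term = np.zeros_like(M, dtype=complex), np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

rng = np.random.default_rng(4)

# Example 2: traceless A gives det(exp(A)) = 1.
A = rng.standard_normal((3, 3))
A -= (np.trace(A) / 3) * np.eye(3)
assert np.isclose(np.linalg.det(expm_series(A + 0j)), 1.0)

# Example 3: antisymmetric S gives an orthogonal exp(S).
S = rng.standard_normal((3, 3)); S = S - S.T
R = expm_series(S + 0j).real
assert np.allclose(R.T @ R, np.eye(3))

# Example 8: the su(2) basis satisfies [e_i, e_j] = eps_ijk e_k and e_k^2 = -1/4.
e = [np.array([[0, 0.5j], [0.5j, 0]]),
     np.array([[0, -0.5], [0.5, 0]], dtype=complex),
     np.array([[0.5j, 0], [0, -0.5j]])]
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(e[i] @ e[j] - e[j] @ e[i], e[k])
for ek in e:
    assert np.allclose(ek @ ek, -0.25 * np.eye(2))

# The S^3 condition: A-dagger A = (x^2 + y^2 + z^2 + w^2) 1.
x, y, z, w = rng.standard_normal(4)
Amat = 2*x*e[0] + 2*y*e[1] + 2*z*e[2] + w*np.eye(2)
assert np.allclose(Amat.conj().T @ Amat, (x*x + y*y + z*z + w*w) * np.eye(2))
```
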

The Rest of the Lie Algebra Story

We've seen that a lie algebra can be produced from a lie group. Can distinct lie groups give rise to the same lie algebra? The answer is yes, for a couple of reasons.

First, there is the case that G might be disconnected, as in the case with On (example 4). The lie algebra for SOn is identical to that of On just because there is some additional global structure to On, which is unreachable through the exponential map. There are more interesting examples, summed up with the following claim:

If lie groups G1, G2, ..., Gi all have the same lie algebra, then amongst all the {Gk} there is one that is simply connected. Call it G. Then all the other {Gk} can be written in terms of G as Gk = G / Dk, where Dk is a discrete normal subgroup of G. The quotient map G → Gk = G / Dk tells us how to construct Gk: take G and create an equivalence relation by identifying g ≅ hg, for all h ∈ Dk and all g ∈ G. In particular, every h ∈ Dk gets mapped to the identity in Gk = G / Dk. An example of this follows.

The Manifold Structure of SO3

Recall that the lie algebra of SO3 is the set of 3 × 3 antisymmetric matrices (which are automatically traceless). This set has dimension three (it had better, since that's the dimension of the group). We can write down a basis for £ [ SO3 ]:

     | 0  0  0 |       | 0  0  1 |       | 0 -1  0 |
L1 = | 0  0 -1 |, L2 = | 0  0  0 |, L3 = | 1  0  0 |
     | 0  1  0 |       |-1  0  0 |       | 0  0  0 |
You can check that these three basis elements satisfy the commutation relation:

[ Li, Lj ] = εijk Lk

Exactly the same commutation relations we found for the lie algebra of SU2! In other words, the group structure of SO3 is locally identical to the group structure of SU2. To put it yet another way, there is an isomorphism between a neighborhood of the identity in SO3 and a neighborhood of the identity in SU2. Let's see if we can find a relationship between their global structures.

We can use the exponential map to find group elements in SO3 corresponding to group elements in SU2. Let's look at the one-parameter subgroups generated by e1 and L1. In the case of SU2,

exp{t e1} = Σ (1/k!) tᵏ e1ᵏ = 1 + t e1 + ½ t² e1² + ...

Remember, e1² = -¼, so (2 e1)² = -1, meaning powers of (2 e1) follow the same cyclic multiplication rules as i = √-1. In other words,

exp{t e1} = exp{½t (2e1)} = cos(½t) + 2e1 sin(½t).

If you are not convinced by this argument, just expand the power series for the exponential, and you will find this to be true by nearly the same proof as that which shows e^{iθ} = cos θ + i sin θ.

We can now write this in matrix form:

exp{t e1} = cos(½t) + 2e1sin(½t)

  | 1  0 |           | 0  i |
= | 0  1 |cos(t/2) + | i  0 |sin(t/2)

  | cos(t/2)  isin(t/2) |
= | isin(t/2)  cos(t/2) |
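
The closed form above can be confirmed by directly summing the exponential series for t e1 (numpy assumed; expm_series is our helper):

```python
import numpy as np

def expm_series(M, terms=40):
    # Matrix exponential by its power series.
    out, term = np.zeros_like(M), np.eye(len(M), dtype=M.dtype)
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

e1 = np.array([[0, 0.5j], [0.5j, 0]])
t = 1.3
expected = np.array([[np.cos(t/2), 1j*np.sin(t/2)],
                     [1j*np.sin(t/2), np.cos(t/2)]])
assert np.allclose(expm_series(t * e1), expected)
```
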
Now, we find the same one-parameter subgroup in SO3:

exp{t L1} = Σ (1/k!) tᵏ L1ᵏ

= 1 + t L1 + ½ t² L1² + ...

Now, L1² is the following matrix:

| 0  0  0 |
| 0 -1  0 |
| 0  0 -1 |
which is just -1, but omitting the first row and column. This means a couple of things: (1) the first row and column will not appear in this taylor series, except in the first term, which is just the identity element; (2) the 2 × 2 minor matrix left over will follow the same chain of reasoning as did 2e1 in the expansion for SU2 (again, expand it out for yourself if you need convincing). Thus, we find that the exponential becomes:

exp{t L1} = (1 in the first row and column) + (1 on the lower 2 × 2 block) cos(t) + L1 sin(t)

  | 1      0      0   |
= | 0  cos(t) -sin(t) |
  | 0  sin(t)  cos(t) |
We recognize this as the group of rotations by an angle t about the x-axis.

Now, we know that the exponential map gives us the same group structure when it acts on the same lie algebra. Thus, we can write down a homomorphism between these groups, based on equating the two exponential maps. Symbolically,

| 1      0      0   |
| 0  cos(t) -sin(t) | ↔ | cos(t/2)  isin(t/2) |
| 0  sin(t)  cos(t) |   | isin(t/2)  cos(t/2) |
Now, we can finally compare the global structures of these two groups, when viewed as manifolds. Notice what happens when we send t → t + 2π. For the group element R ∈ SO3, R → R. However, for the group element U ∈ SU2, U → -U. In other words, this mapping is two-to-one. You need to travel twice the parameter distance in SU2 to get back to the same point. Therefore, this equivalence is a two-to-one quotient map. To put it another way,

SO3 ≅ SU2 / Z2.

Since SU2 ≅ S3 is the three-sphere, SO3 is homeomorphic to the three-sphere after identifying antipodal points. You may recognize this space as RP3, three-dimensional projective space. It can also be thought of as the space of lines passing through the origin in R4.

SO3 ≅ RP3
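
The two-to-one behavior is easy to see numerically: advancing the parameter by 2π returns the SO3 element to itself but flips the sign of the SU2 element. (A sketch assuming numpy; expm_series is our helper.)

```python
import numpy as np

def expm_series(M, terms=60):
    # Matrix exponential by its power series.
    out, term = np.zeros_like(M), np.eye(len(M), dtype=M.dtype)
    for k in range(1, terms):
        out += term
        term = term @ M / k
    return out

e1 = np.array([[0, 0.5j], [0.5j, 0]])                         # su(2) generator
L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])    # so(3) generator

U = lambda t: expm_series(t * e1)   # one-parameter subgroup in SU2
R = lambda t: expm_series(t * L1)   # the corresponding subgroup in SO3

t = 0.9
assert np.allclose(R(t + 2*np.pi), R(t))    # R returns to itself...
assert np.allclose(U(t + 2*np.pi), -U(t))   # ...but U comes back with a minus sign
```
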

In summary, we start with a given lie algebra, and from this, we can produce two separate groups: SU2 and SO3. Since the entirety of each group can be expressed in terms of the exponential map, they are implicitly related by a two-to-one mapping which preserves the group structure: SU2 → SO3. SU2 is the simply connected lie group with structure constants εijk. SO3 is another lie group, with the same structure constants εijk. SO3 ≅ SU2 / Z2. This relationship can be viewed as both a group-quotient and a topology-quotient.

Realizations and Representations

Throughout this discussion, we've interchangeably viewed Lie Groups from two different perspectives:

  • On the abstract level, where lie groups are manifolds with a given multiplication law.
  • Via concrete parameterizations of points in a lie group, using matrices (for which the group multiplication law is turned into matrix multiplication).
We should briefly flesh out the second of these perspectives, as we've been using this description without really identifying it. To do so, we need a few definitions:

A realization of a lie group G is given by the action of G on a manifold M. An action of G on M is a differentiable map G × M → M, which can be symbolically written (with a slight abuse of notation) as g • p, where g ∈ G, p ∈ M. The following rules must hold:

  • e • p = p (where e is the identity in G)
  • (g1 • g2) • p = g1 • (g2 • p)
Each element of G is "realized" as a particular transformation of M. For example, G = SO3, M = S2. Each element in SO3 can be realized as a rotation, which maps points on the sphere to other points on the sphere.

There are a few ways that G can have an action on itself, setting M = G. For example, we have already seen the realization Lg, left multiplication by g. Lg always provides an action of G on itself. Another example is conjugation, Cg, where Cg(h) = g • h • g⁻¹.

A realization of G is called faithful if distinct group elements act as distinct transformations. That is, if g1 ≠ g2, then g1 • p ≠ g2 • p for at least one p ∈ M.

A realization of G on a vector space, M = V, is called a representation. If the vector space has dimension n, then we say that it is an n-dimensional representation.

There is a concrete, useful way of thinking about representations. We can always choose a basis for V, {ei}. We use this to express any vector X = Xi ei. Then the action of g ∈ G on X ∈ V is g • X = Y = Yj ej. Then the relationship between {Xi} and {Yj} can be summarized by a matrix:

Y = g • X

Yi = Mij(g) Xj

In other words, g • ej = Mij ei

Thus, we have an explicit representation of every g ∈ G via an n × n matrix. More formally, when a lie group G acts on an n-dimensional vector space V, we get a homomorphism Φ: G → {collection of n × n matrices}.

Note: Although we defined On, SOn, Un, and SUn in terms of n × n matrices, we can have m-dimensional representations of these groups.


For a trivial example, take G = SU4. We can construct an 8-dimensional representation by mapping each 4 × 4 matrix to a block-diagonal 8 × 8 matrix:

| M  0 |
| 0  M |
where M is a 4 × 4 matrix ∈ SU4. We could also add rows and columns by inserting the identity down the rest of the diagonal, e.g.
| M  0  0 |
| 0  1  0 |
| 0  0  1 |
but these are all rather silly examples.

For a more interesting example, take G = SU2. It has a 3-dimensional representation, {ei} → {Li}, which we fleshed out earlier. There is a subtlety, though, as this is not a faithful representation; we've already shown that this is a two-to-one mapping. ±1 in SU2 both get mapped to the identity in SO3. Thus, the unit-determinant orthogonal matrices can be thought of as either an unfaithful representation of SU2 or the defining representation of SO3.

At the Level of Lie Algebras

Representations can also be described at the level of the lie algebra of a group. A d-dimensional representation of £ [ G ] is a map Γ from elements Ai of the lie algebra to d × d matrices, Γ(Ai).

This map must preserve the vector structure of £ [ G ], by being a linear operator, but it must also preserve the algebraic structure, so that [ Γ(Ai), Γ(Aj) ] = Γ ( [ Ai, Aj ] ) = cijk Γ(Ak). The lie algebras we derived above for Un, SUn, SOn, etc. can be considered the defining representations of these lie algebras, but the basic vector and algebraic structure is independent of representation.
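We can verify this structure-preserving condition numerically for the defining representation of the su(2) algebra. In the sketch below (Python with numpy, an illustrative assumption), the basis elements are Ai = -iσi/2, built from the Pauli matrices, and the structure constants are the Levi-Civita symbol:

```python
import numpy as np

# Pauli matrices, and the basis A_i = -i sigma_i / 2 of the su(2) algebra.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = [-0.5j * s for s in (sx, sy, sz)]

def eps(a, b, c):
    # Levi-Civita symbol: the structure constants, [A_a, A_b] = eps_abc A_c.
    return (a - b) * (b - c) * (c - a) / 2

comm = lambda X, Y: X @ Y - Y @ X
for a in range(3):
    for b in range(3):
        # The commutator of basis elements matches the structure constants.
        rhs = sum(eps(a, b, c) * A[c] for c in range(3))
        assert np.allclose(comm(A[a], A[b]), rhs)
```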

Equivalent Representations

Let Φ(g) be a d-dimensional representation for G. Consider a new representation, Φ', given by conjugating every Φ(g) by a d × d matrix:

Φ'(g) = S • Φ(g) • S -1

You can check that Φ' has the same group structure, and since conjugation by S is one-to-one, Φ' is faithful exactly when Φ is. Φ' is said to be "equivalent" to Φ.
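A quick numerical check of equivalence (Python with numpy, assumed for illustration): conjugating the 2-dimensional rotation representation by an arbitrary invertible matrix S changes the individual matrices, but not the group structure.

```python
import numpy as np

def rot(theta):
    # The 2-dimensional rotation representation.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

S = np.array([[2.0, 1.0], [1.0, 1.0]])  # any invertible matrix will do
Sinv = np.linalg.inv(S)
phi_prime = lambda theta: S @ rot(theta) @ Sinv

a, b = 0.4, 1.1
# Conjugation preserves the group structure...
assert np.allclose(phi_prime(a) @ phi_prime(b), phi_prime(a + b))
# ...even though the individual matrices have changed.
assert not np.allclose(phi_prime(a), rot(a))
```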

A representation Φ of G is said to be completely reducible if it is equivalent to a block-diagonal representation,

| Φ1  0  0  ... |
| 0  Φ2  0  ... |
| 0  0  Φ3  ... |
| ...        Φn |
where each of the Φk is a representation of G. In other words, Φ can be built up from simpler representations using the direct sum operation (which we are about to describe).

Building Representations from Other Representations

There are two basic operations we can use to produce a representation Φ3 from two given representations Φ1 and Φ2:

  1. Direct Sum Representations

    Given two representations of a group, Φ1(g) and Φ2(g), we can create another representation by writing the two representations in block-diagonal form:

                | Φ1  0 |
    Φ1 ⊕ Φ2 =  | 0  Φ2 |
    This is a faithful representation of both Φ1 and Φ2, since we are really not changing any of the matrix multiplication. This gives us a d1 + d2 dimensional representation acting on the vector space, V1 ⊕ V2 (This is a direct sum of vector spaces, a related but distinct concept).

  2. Direct Product Representations

    We can also define an action on the product vector space, V1 ⊗ V2. Let's first make sure we understand the direct product of two vector spaces. Concretely, we can formulate a basis from the bases {ei}, {fj} of V1 and V2, respectively. Then, a given element of V1 ⊗ V2 can be written aij ei ⊗ fj. Thus, the set of {ei ⊗ fj} is a basis for the direct product space.

    Now, if V1 has a representation Φ1, and V2 has a representation Φ2, we can act on the direct product space by acting on the first basis vector with Φ1 and the second with Φ2:

    ΦV1 ⊗ V2 [ aij ei ⊗ fj ] = aij ΦV1 ⊗ V2 [ ei ⊗ fj ] = aij Φ1 [ ei ] ⊗ Φ2 [ fj ]

    We get a d1 × d1 matrix paired with a d2 × d2 matrix. The pair can be thought of as a single large matrix whose rows and columns are each specified by two indices: the (ij),(kl)th entry is the product of the (ik)th component of Φ1 with the (jl)th component of Φ2. This matrix representation has dimensions (d1d2) × (d1d2).
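This double-indexed matrix is exactly what numpy's Kronecker product builds. A small sketch (Python with numpy, an illustrative assumption), taking the direct product of the 2-dimensional rotation representation with itself:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Direct product of the 2-d rotation representation with itself.  numpy's
# Kronecker product builds exactly the (d1 d2) x (d1 d2) matrix whose
# (ij),(kl) entry is Phi1[i,k] * Phi2[j,l].
phi = lambda theta: np.kron(rot(theta), rot(theta))

a, b = 0.5, 1.3
assert phi(a).shape == (4, 4)
# The mixed-product property kron(A, B) @ kron(C, D) = kron(A @ C, B @ D)
# makes this a representation:
assert np.allclose(phi(a) @ phi(b), phi(a + b))
```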

The Adjoint Representation

As stated before, we can define an action of G on itself by conjugation. This specifies a realization, which we will call "Ad".

Adg': G → G, all g' ∈ G

h → g' • h • g' -1

We can extract from this realization an action of G on its tangent space, given by the pushforward map:

Adg' ∗: TgG → Tg' • g • g'-1G, all g' ∈ G

Now, this isn't an action unless it maps the same space to itself. Fortunately, conjugation maps the identity to itself, so if we take g = e, the result is a bona fide realization.

Adg' ∗: TeG → TeG, all g' ∈ G

In fact, since TeG is a vector space, this is a representation of G. It is fairly easy to show that the pushforward map also manifests itself via conjugation, but now acting on lie algebra elements. We show this by thinking of the pushforward map as a map from curves to curves. Specifically, we can look at Adg's action on the one-parameter subgroup corresponding to the lie algebra element, A:

Adg (eA) = g • eA • g -1

= g • (1 + A + ½ A2 + ...) • g -1

= 1 + g • A • g -1 + ½ g • AA • g -1 + ...

Now we use a common trick, which is to insert the identity 1 = g -1 • g between adjacent copies of A:

Adg (eA) = 1 + g • A • g -1 + ½ g • A • g -1g • A • g -1 + ...

Since we can do this in between every two copies of A, we can note that this is now an exponential expansion in the conjugate, g • A • g -1:

Adg (eA) = eg • A • g-1
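For matrix groups, this identity can be checked numerically. A sketch in Python (numpy and scipy.linalg.expm are assumptions of convenience): conjugating the exponential agrees with exponentiating the conjugate.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # a generic lie algebra element
g = expm(rng.standard_normal((3, 3)))  # an invertible group element
ginv = np.linalg.inv(g)

# Conjugating the exponential equals exponentiating the conjugate.
assert np.allclose(g @ expm(A) @ ginv, expm(g @ A @ ginv))
```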

In a nutshell, the pushforward of conjugation is still conjugation, just acting on elements of the lie algebra. Moreover, conjugation in this sense is now a representation, not just a realization. It is possible to construct matrices gij which act on the lie algebra elements Ak, via:

g Ai g -1 = gik Ak

This representation {gij} is known as the adjoint representation.
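For SU2, these adjoint matrices can be computed explicitly. In the sketch below (Python with numpy and scipy; the helper name adjoint is made up for illustration), the lie algebra is spanned by the Pauli matrices, and the coefficients gik are extracted with the inner product tr(σiσj) = 2δij. The resulting 3 × 3 matrix lands in SO3, and g and -g produce the same matrix — the two-to-one mapping discussed earlier.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

def adjoint(g):
    # The 3 x 3 matrix R with g sigma_i g^{-1} = R_ik sigma_k, extracted
    # using the inner product tr(sigma_i sigma_j) = 2 delta_ij.
    ginv = np.linalg.inv(g)
    R = np.zeros((3, 3))
    for i in range(3):
        conj = g @ sigma[i] @ ginv
        for k in range(3):
            R[i, k] = np.real(np.trace(conj @ sigma[k])) / 2
    return R

g = expm(-0.5j * 1.2 * sz)  # an SU2 element: rotation by 1.2 about z
R = adjoint(g)
assert np.allclose(R @ R.T, np.eye(3))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # unit determinant: R is in SO3
assert np.allclose(adjoint(-g), R)        # g and -g give the same R: two-to-one
```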

The Adjoint Representation, at the Level of Lie Algebras

The adjoint representation must have a manifestation in the lie algebra of G. What would this look like? We can find out by setting g = exp{Aa} and looking at the result of the adjoint map acting on Ab:

Adg ∗ Ab = exp{Aa} • Ab • exp{-Aa}

= (1 + Aa + ½ Aa2 + ...) • Ab • (1 - Aa + ½ Aa2 - ...)

We can write this expression in terms of commutators, if we multiply it all out:

= Ab + [ Aa, Ab ] + ½ [ Aa, [ Aa, Ab ] ] + ...

Of course, the commutator is how we define multiplication in the lie algebra:

= Ab + AaAb + ½ AaAaAb + ...

This is beginning to look a great deal like the exponential map, but in terms of the lie algebra multiplication. We can make this more explicit by invoking the structure constants:

AaAb = cabc Ac

The expansion looks like:

Ab + cabc Ac + ½ cabc cacd Ad + ...

Now, we express the structure constants in terms of a set of matrices {Ta}, where (Ta)cb = cabc (note the index order: the row index of Ta is the last index of the structure constant, so that Ta acting on components reproduces the expansion above)

Adg ∗ A = A + TaA + ½ Ta • TaA + ...

And we can now think of this as a matrix exponential map. In other words,

(Adg ∗)bc = exp{Ta}bc

So, at the level of the lie algebra, the adjoint representation is given by n matrices {Ta}, n × n in dimension. The matrix entries are given explicitly by (Ta)cb = cabc, where cabc are the structure constants of the algebra. Since this is a representation of the algebra, the {Ta} must satisfy:

[ Ta, Tb ] = cabc Tc
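For so(3), whose structure constants are the Levi-Civita symbol, these adjoint matrices can be built and checked directly. A sketch in Python with numpy (assumed for illustration), using the index convention (Ta)cb = cabc:

```python
import numpy as np

def eps(a, b, c):
    # Levi-Civita symbol: the so(3) structure constants.
    return (a - b) * (b - c) * (c - a) / 2

# Adjoint matrices with the index convention (T_a)_{cb} = c_abc.
T = [np.array([[eps(a, b, c) for b in range(3)] for c in range(3)])
     for a in range(3)]

comm = lambda X, Y: X @ Y - Y @ X
for a in range(3):
    for b in range(3):
        # [T_a, T_b] = c_abc T_c: the T's represent the algebra.
        rhs = sum(eps(a, b, c) * T[c] for c in range(3))
        assert np.allclose(comm(T[a], T[b]), rhs)
```

The three matrices that come out are precisely the familiar antisymmetric generators of rotations about the three axes.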

Summary and Application

The two mathematical concepts of a group and a manifold merge to form a lie group. Useful examples of lie groups can be formed as subgroups of GLnR and GLnC, in specific matrix representations. When we look at the tangent space of a lie group, we find that the group structure of the manifold endows its tangent space with a multiplication law, in addition to its additive group structure as a vector space. Thus, the tangent space of a lie group forms an algebra. The multiplication law is given specifically by the lie bracket of left-invariant vector fields, which are in one-to-one correspondence with tangent vectors at the identity.

The lie group structure therefore fixes the lie algebra structure, given by the structure constants cijk. The lie algebra can be used to reconstruct the group, or at least part of it. The exponential map reconstructs the manifold at the level of points, and the multiplication law of the group can be derived from the commutation relations, whenever group elements can be expressed as exponentials of lie algebra elements: eA • eB = eA + B + ½[ A, B ] + ....
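The ½[ A, B ] correction in that exponent can be seen concretely. In the sketch below (Python with numpy and scipy, assumed for illustration), A and B are strictly upper-triangular matrices whose commutator commutes with both of them, so the series in the exponent terminates and the formula is exact:

```python
import numpy as np
from scipy.linalg import expm

# Strictly upper-triangular matrices whose commutator commutes with both
# of them, so the exponent's series terminates after the [A, B]/2 term.
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
C = A @ B - B @ A  # the commutator [A, B]

# With the (1/2)[A, B] correction the formula is exact here...
assert np.allclose(expm(A) @ expm(B), expm(A + B + 0.5 * C))
# ...and without it, e^A e^B differs from e^(A + B).
assert not np.allclose(expm(A) @ expm(B), expm(A + B))
```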

Instead of being treated as an abstract mathematical object, a group can literally be realized by its action on a set. In the case that this set is a vector space, the realization acts by linear transformations, and it is known as a representation. Representations are usually presented by mapping group elements to matrices, while preserving the group structure. One choice of representation that appears often in physics is the adjoint representation, where the lie group acts on its own lie algebra by conjugation. At the level of the lie algebra, the adjoint representation can be embodied by viewing the structure constants cijk as a set of n matrices of dimension n × n.

In physics, lie groups manifest themselves as symmetries. For example, three-dimensional space exhibits the symmetry group SO3 as the set of rotations in space. Quantum mechanics tells us that these symmetries also manifest themselves at the level of particles. For example, electrons transform as 2-component spinors under rotations; in fact, it is the group SU2 under which electrons transform. It is possible (and, in fact, more mathematically natural) to think of the SO3 transformations that we are so familiar with as a mere representation of the fundamental transformation that is going on: that of SU2. This is because the transformations of SO3 are an unfaithful representation of SU2; SO3 only captures half of the transformation group. This is most apparent in the fact that the electron wavefunction picks up a minus sign when it undergoes a rotation of 2π -- something which appears quite peculiar, unless we can believe that physical rotations are fundamentally represented by SU2.

It is a subject somewhat beyond the scope of this writeup, but each of the fundamental forces of the universe can ultimately be described by lie groups. The electromagnetic force is given by the group U1, the weak nuclear force by SU2, and the strong nuclear force SU3. These groups act on physical particles in various representations (i.e. SU2 acts on electrons, showing that they respond to weak nuclear interactions), and the representations in which the particles transform can ultimately describe how the particles interact via the various forces. Certainly there are many details to be filled in, and perhaps they will be, elsewhere in everything2.

+I normally use boldface to describe vectors, so I continued this notation as a reminder that the lie algebra of G is equivalent to its tangent space. I probably lost some consistency, as there are situations where it's better to think of A as a vector, and some where it's better to just think of it as a matrix. I hope it doesn't cause more confusion than it's worth.
