The theory of combinators is an alternative to the lambda calculus as a formulation of the theory of functions. It closely resembles the lambda calculus, except that the operation of lambda abstraction is removed entirely: functions are defined solely in terms of a set of combinators taken as primitive. This approach was first devised by Moses Schönfinkel in 1924, several years before Alonzo Church developed the lambda calculus, and was later independently rediscovered by the logician Haskell B. Curry. The main benefit of using combinators is that the problems of variable clashes and renaming caused by the bound variables of lambda abstraction disappear completely.

Combinators are simply functions whose results do not depend on the values of any free variables, i.e. they are closed terms with a fixed meaning independent of any variable. Two basic combinators, called K and S, turn out to be sufficient to express any partial recursive function; in fact, the lambda calculus itself can be completely reformulated using only these two combinators in place of lambda abstraction.

The K combinator is the "constant" combinator (K from the German konstant; Schönfinkel published in German). Applied to any argument a, it produces the constant function whose value is always a. In the lambda calculus it is the function:

K = λx.λy.x

The S combinator is the "sharing" combinator. It takes two functions and an argument, shares the argument out between the two functions, and then applies the first result to the second. In lambda notation it is:

S = λf.λg.λx.(f x) (g x)

These two combinators are general enough to express any computable function.
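
As a concrete illustration, K and S can be written as ordinary functions in a language such as Haskell. This is only a sketch: the lowercase names k, s, and example are stand-ins chosen here, since capitalised identifiers are reserved for constructors in Haskell.

    -- K and S written as ordinary Haskell functions.
    k :: a -> b -> a
    k x _ = x                  -- K a is the constant function always returning a

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)        -- share the argument x between f and g

    -- Example: s (+) (*2) 3 gives both functions the argument 3,
    -- then applies the first result to the second: 3 + (3 * 2) = 9.
    example :: Int
    example = s (+) (*2) 3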

Some would add a third combinator, the identity combinator I, which is, however, expressible in terms of S and K as follows:

I = λx.x = S K K

This is included purely as a convenience: for any argument x, S K K x = (K x) (K x) = x.
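
Continuing the Haskell sketch (the definitions of k and s are repeated so the fragment stands alone), I can be defined exactly as S K K and behaves as the identity:

    -- Definitions repeated from the sketch above.
    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    -- I defined as S K K: for any x, s k k x = k x (k x) = x.
    i :: a -> a
    i = s k k

    check :: Bool
    check = i (42 :: Int) == 42 && i "hello" == "hello"   -- True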

For any term t in the lambda calculus that does not involve lambda abstraction, there is a term u involving only S, K, and I such that u = λx.t, i.e. u is lambda-equal to λx.t. This can be shown by structural induction on t. Because t does not involve lambda abstraction, there are three cases to consider:

  • If t is a variable, then there are two possibilities: if t = x, then λx.x = I. If t is some other variable y, then λx.y = K y.
  • If t is a constant c, then λx.c = K c.
  • If t is a combination s u, then by the induction hypothesis there are terms s' and u' involving only S, K, and I such that s' = λx.s and u' = λx.u. Then S s' u' = λx.(s u), since applying S s' u' to x gives (s' x) (u' x) = s u.

By applying this result repeatedly, working from the innermost abstraction outwards, every lambda abstraction in an expression can be removed.
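
The induction above is effectively an algorithm, often called bracket abstraction. The following Haskell sketch implements the three cases directly; the type Term and the function abstractVar are hypothetical names chosen for this illustration, not taken from any library.

    -- A small term language: variables (also serving as constants here),
    -- application, and the primitive combinators.
    data Term = Var String | App Term Term | S | K | I
      deriving Show

    -- abstractVar x t builds a term equal to \x. t, assuming t contains
    -- no lambda abstractions, following the three cases above.
    abstractVar :: String -> Term -> Term
    abstractVar x (Var y)
      | x == y    = I                                     -- \x. x  =  I
      | otherwise = App K (Var y)                         -- \x. y  =  K y
    abstractVar x (App a b) =
      App (App S (abstractVar x a)) (abstractVar x b)     -- \x. a b  =  S (\x.a) (\x.b)
    abstractVar _ c = App K c                             -- \x. c  =  K c for a constant c

    -- Example: translate \f. \x. f x by abstracting x first, then f.
    churchOne :: Term
    churchOne = abstractVar "f" (abstractVar "x" (App (Var "f") (Var "x")))
    -- churchOne is S (S (K S) (S (K K) I)) (K I)

Abstracting the innermost variable first, exactly as the worked example below does, produces the combinator form of the Church numeral 1.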

For instance, the Church numeral 1 can be expressed as follows. Start from

1 = λf.λx.f x

The body f x is a combination, so

λx.f x = S f' x'

where

x' = λx.x = I

f' = λx.f = K f

giving

1 = λf.S (K f) I

Abstracting f in the same way yields

1 = S (S (K S) (S (K K) I)) (K I)

Applied to any f and x, this term reduces to f x, exactly as I does (since I f x = f x), so the combinator form of 1 is extensionally equal to I. This reflects the fact that λf.λx.f x η-reduces to λf.f = I.
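
As a quick sanity check, using the lowercase Haskell stand-ins from the earlier sketches (the name one is arbitrary), the fully abstracted numeral behaves just like the identity on ordinary arguments:

    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    i :: a -> a
    i = s k k

    -- The combinator translation of the Church numeral 1 derived above.
    one :: (a -> b) -> a -> b
    one = s (s (k s) (s (k k) i)) (k i)

    main :: IO ()
    main = print (one (+ 1) (41 :: Int), i (+ 1) (41 :: Int))   -- prints (42,42)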

The Unlambda programming language is based on the S, K, and I combinators. Combinators are also useful for implementing functional languages: they can be thought of as a "machine language" into which (pure) functional languages can be compiled. Some implementations of Scheme, Lisp, and ML use this approach, and custom hardware has been built that evaluates combinator expressions directly. Combinators are also of great use in the study of algorithmic information theory and Kolmogorov complexity, as they provide what essentially amounts to a universal Turing machine that can easily be encoded in binary and is about as minimal as it can get.