A concise, universal formalism suitable for reasoning about mathematical functions. All expressions are anonymous functions, that is, instead of writing

f(x) = x+2 for every x

you just write

f = λ(x).(x+2)

So lambda calculus denotes functions without naming them. (λ(x).(x+2) is lambda calculus, the f = isn't.)

An interesting aspect is that + and even 2 can themselves be defined in terms of lambda calculus: it is a complete description of discrete computation.

There are three syntactical elements in the lambda calculus:

• variables, usually named by single lowercase letters
• abstraction, written with lambda. Abstraction takes the form `λ x . body', where x is a variable and body is any expression (wff) of the lambda calculus.
• application. Application takes the form `(f g)', where f and g are any expressions of lambda calculus.
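The three syntactic elements can be sketched as a tiny data structure. This is a minimal illustration, not part of the original writeup; the class names `Var`, `Lam`, and `App` are my own.

```python
# A minimal sketch of the three syntactic elements of the lambda calculus
# as a Python data structure (names Var, Lam, App are illustrative).
from dataclasses import dataclass

@dataclass
class Var:
    name: str          # a variable, e.g. x

@dataclass
class Lam:
    param: str         # abstraction: λ param . body
    body: object

@dataclass
class App:
    func: object       # application: (func arg)
    arg: object

# The term λx. (f x) would be represented as:
term = Lam("x", App(Var("f"), Var("x")))
```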

There are three basic rules for determining equality or equivalence (as opposed to identity) of expressions in the lambda calculus:

• alpha reduction: `λ x . E' is equivalent to `λ y . E{y/x}', where E{y/x} is E with each free occurrence of x replaced with y. This is subject to certain restrictions on bound occurrences of y in E.
• beta reduction: `((λ x . E) E2)' is equivalent to `E{E2/x}', subject to certain restrictions on bound variables in E.
• eta reduction: `λ x . (E x)' is equivalent to `E', assuming x does not appear free in E.

Alpha reduction says that, subject to some constraints, you can freely rename variables. Beta reduction is analogous to function application, with a syntactic flavour: it most closely resembles the call-by-name evaluation present in Algol-60. Finally, eta reduction expresses the notion of extensionality: two functions are (extensionally) equivalent if they produce the same output given the same input.
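The three rules have direct analogues in any language with first-class functions. Here is a rough sketch in Python (the analogy is illustrative, not exact: Python evaluates eagerly, and `+` and `2` are primitives rather than lambda terms):

```python
# alpha: the name of the bound variable does not matter
f1 = lambda x: x + 2
f2 = lambda y: y + 2          # alpha-equivalent to f1

# beta: applying an abstraction substitutes the argument into the body
assert (lambda x: x + 2)(3) == 3 + 2      # ((λx. x+2) 3) → 3+2

# eta: a lambda that merely forwards its argument is equivalent
# to the function it forwards to (extensionality)
g = lambda x: f1(x)           # eta-expansion of f1

assert all(f1(n) == f2(n) == g(n) for n in range(10))
```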

While I was an undergraduate, I had a class with a professor specializing in supercomputing... one with a grant from the NCSA, to study and refine (if I recall correctly) Lambda Calculus. Which was used, in her case, as a system for simplifying mathematical calculations which could, in the case of her prototype, be worked into some high-level optimization heuristics for High Performance Fortran.

By way of explanation, certain kinds of resource contention are very important in parallel computing problems - specifically network and network-like resources, as nodes in the computer may actually be slower collectively than an individual node if the problem they are attempting to solve requires too much communication with each other. Hence, optimizations that can automatically simplify out node interdependencies are very attractive to the community of scientists who make regular use of supercomputers; they're generally not the best programmers in the world to start with.

1 + 1 = 2, or Revisiting Kindergarten using mathematical formalism

Seeing the comment above, "that '+' and even '2' can also be defined in terms of lambda calculus", I couldn't help myself, and had to find a way to understand this strange, strange world.

First of all, lambda calculus finds its origins in Alonzo Church's work dealing with computation. It is, in a nutshell, a "purely functional" approach to the entire field of computation. (This is why most introductions to functional languages make a side note on the fact that their origins lie in the lambda calculus.) Every object in lambda calculus is a function, every one of these functions returns a function, and the arguments to these functions are also functions. So, let's get started.

A "lambda" (λ) is expressed as a single \ character. We write a lambda followed by a variable to denote that variable's usage in our function.
Example:

`\x. [f x]`

So, x is simply the function we're "passing" to our lambda abstraction. The function following our abstraction is applied within our function's body.
Example:

`(\x. [f x]) w = f w`

Which leads us to the understanding that `\x. [f x]` is simply the function f. So, logically, we can express other abstract functions using this same basic theory.
Example:

"`f(x) = x + 3`" would be expressed simply as `\x. [x + 3]`, where we bind x to our parameter functions.

So, how do we deal with numbers? Well, we first define 0 to be the empty set, or `\f x. [x]`. 1 would then, logically, be the set that contains 0, or `\f x. [f x]`. We define 2 as the iteration on 1, or "twice." We express this as `\f x. [f (f x)]`. When we apply this to another function, we end up with twice that function: `(\f x. [f (f x)]) y = \x. [y (y x)]`; or y acting twice upon x.
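These Church numerals can be tried out directly as curried Python lambdas. The names `ZERO`, `ONE`, `TWO`, and `inc` are mine, added for illustration; applying a numeral to an ordinary increment function and a starting value recovers the familiar integer:

```python
# The writeup's 0, 1 and 2, written as curried Python lambdas.
ZERO = lambda f: lambda x: x            # \f x. [x]
ONE  = lambda f: lambda x: f(x)         # \f x. [f x]
TWO  = lambda f: lambda x: f(f(x))      # \f x. [f (f x)]

inc = lambda n: n + 1                   # a concrete f to try them on
assert ZERO(inc)(0) == 0                # f applied zero times
assert TWO(inc)(0) == 2                 # f applied twice
```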

Now we can define a function for adding one to a function. Such a function would look like so:

`\x y z. [y ((x y) z)]`

If we applied this to "1", we would have:

`(\x y z. [y ((x y) z)]) 1 = \y z. [y ((1 y) z)] = \y z. [y (y z)]`

. . .Which is essentially the "2" / "twice" we defined above! Thus we come to the conclusion that, in fact, 1 + 1 does equal 2 (and, hopefully, that numbers are simply abstract concepts, and can be expressed purely as such!)
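The "add one" function above can also be checked in Python. `SUCC` is my name for the writeup's `\x y z. [y ((x y) z)]`; applying it to the numeral for 1 yields a function that behaves exactly like the "twice" numeral:

```python
# The writeup's "add one" function \x y z. [y ((x y) z)].
SUCC = lambda x: lambda y: lambda z: y(x(y)(z))
ONE  = lambda f: lambda x: f(x)
TWO  = lambda f: lambda x: f(f(x))

inc = lambda n: n + 1
# SUCC ONE behaves exactly like TWO on any arguments:
assert SUCC(ONE)(inc)(0) == TWO(inc)(0) == 2
```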

An interesting note about the origins of the use of the Greek letter lambda in the Lambda Calculus. Alfred North Whitehead and Bertrand Russell in their magnum opus Principia Mathematica originally used the notation t(^x) (the hat ^ should be over the x, but I have no idea how to do this in HTML) for a function of x yielding t(x). Alonzo Church later modified this to ^x.t(x), but because the typesetter for his papers couldn't place the hat on top of his x, it looked like Λx.t(x). Eventually, yet another typesetter mutated this to λx.t(x), which was the notation that eventually stuck and gave its name to the entire formalism.

A French researcher named Jean-Louis Krivine proved in 1999 that mathematics and natural languages are subsets of Lambda Calculus. This follows previous studies starting back in 1967.

## A Short Introduction To The Lambda Calculus

### History

The lambda calculus was created by Alonzo Church as part of an effort to provide a foundation for mathematics. Church's attempt failed, as his system was vulnerable to an analogue of Russell's Paradox.

However, lambda calculus proved to be useful as a model of computation. In 1936, Church used the lambda calculus to show that the Entscheidungsproblem is unsolvable. The problem of whether two expressions in the lambda calculus are equivalent was the first problem shown to be undecidable (even before Turing's halting problem). Later, Turing showed that machine computability (Turing computability) is equivalent to λ-definability. (see Church-Turing Thesis)

Today, the lambda calculus is used by computer scientists, primarily in programming language design and theory. The lambda calculus is related to other areas of mathematics such as recursion theory and category theory.

### Fundamental Concepts

The lambda calculus deals with functions: every function in the lambda calculus takes in a function, and returns a function. An expression in the lambda calculus is composed from three syntactical elements: variables, application, and abstraction.

• Variables are functions usually denoted by single lowercase letters.
• Abstraction is identified by the λ character; abstraction expressions are of the form
λ x. body
where x is a variable, and body is an expression in the lambda calculus. (For programmers, it might be easier to think of λ as defining an anonymous procedure that takes one formal parameter; in this case, x.)
• Application is denoted by the concatenation of two expressions. That is, the function f applied to the argument a is denoted as f a. (Again, think about it as "calling" the function f with parameter a.) Application is left associative; that is, f g h is equivalent to (f g) h.

For an expression λx. x y, x is said to be bound and y is free.

There are three basic rules for determining equivalence of expressions in the lambda calculus, alpha conversion, beta reduction and eta conversion.

• The intuition of alpha conversion is that the name of the bound variable does not matter (e.g. λx. x = λy. y). Formally: if X and Y are variables, E is an expression, and E{X/Y} means E with every free occurrence of X replaced by Y, then:
λX. E = λY. E{X/Y} --- subject to the following conditions:
Y does not appear freely in E, and Y is not bound by a λ in E wherever it replaces an X.
• Beta reduction expresses the concept of a function application, for example, that (λx. f x) a = f a. More formally:
(λX. E) E' = E{X/E'} --- if all free variables in E' remain free in E{X/E'}
• Eta conversion says that if two functions return the same value for all arguments, then the two functions are equivalent.

### Some Examples

Of course, if you are like me, the above definitions sound like mumbo-jumbo when encountered for the first time. So to clarify things, here is an example: a function traditionally written as f(x) = x + 3 is written in the lambda calculus as: λx. x + 3. So if you want to apply that function to 2, we say:

(λx. x + 3) 2
= 2 + 3 = 5
Of course, the plus expression is not really a well-formed expression, but it can be viewed as a shorthand notation for the addition lambda expression (we'll see how arithmetic can be done using the lambda calculus below).

Every function in the lambda calculus takes in one argument; multiple arguments can be expressed as nested lambda expressions applied to two successive parameters. To better understand this, here's an example. The abstraction for adding two numbers can be written as:

λ x. λy. x + y
The first lambda defines a function that takes in one argument x. When that function is applied, it binds the argument x, and returns a function that takes one parameter, y. So let's try adding 2 and 3 using this:
(λx. λy. x + y) 2 3
= (λy. 2 + y) 3
= 2 + 3
= 5
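This nesting trick (currying) works the same way in Python. A sketch, with `add` and `add2` as illustrative names:

```python
# The two-argument addition λx. λy. x + y as nested Python lambdas.
add = lambda x: lambda y: x + y

add2 = add(2)            # binds x = 2, returning λy. 2 + y
assert add2(3) == 5      # (λx. λy. x + y) 2 3 = 5
```

Each application peels off one lambda, exactly as in the reduction steps above.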

### Arithmetic

Here we show how rp's statement that "'+' and even '2' can also be defined in terms of lambda calculus" actually works out. At this point, let's remind ourselves that everything in the lambda calculus is a function; thus, we define numbers as functions too.

First we define the natural numbers, starting with 0

• 0 = λf. λx. x
• 1 = λf. λx. f x
• 2 = λf. λx. f (f x)
• 3 = λf. λx. f (f (f x))
• ...
You can think of the natural number n as a function that takes the function f as a parameter, and returns f to the nth power.
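That "f to the nth power" reading suggests a way to convert between Church numerals and ordinary integers, sketched here in Python (the helper names `church` and `to_int` are mine, not standard notation):

```python
# Build the Church numeral for a non-negative int n:
# church(n) = λf. λx. f applied n times to x.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

# Recover the ordinary integer by choosing f = (+1) and x = 0.
def to_int(c):
    return c(lambda k: k + 1)(0)

assert to_int(church(3)) == 3
```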

Now that we have a way to represent natural numbers, we can define addition:

PLUS = λm. λn. λf. λx. m f (n f x)
Though not a proof, we could try to convince ourselves that this was indeed the definition of addition by trying out an example. So let's try PLUS 2 3:
(λm. λn. λf. λx. m f (n f x)) 2 3
= (λn. λf. λx. 2 f (n f x)) 3
= λf. λx. 2 f (3 f x)
= λf. λx. (λf. λx. f (f x)) f (3 f x)
= λf. λx. (λx. f (f x)) (3 f x)
= λf. λx. f (f (3 f x))
= λf. λx. f (f ((λf. λx. f (f (f x))) f x))
= λf. λx. f (f ((λx. f (f (f x))) x))
= λf. λx. f (f (f (f (f x))))
= 5
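The same check can be run mechanically in Python. A sketch, with the numeral and helper names (`TWO`, `THREE`, `inc`) being mine:

```python
# PLUS = λm. λn. λf. λx. m f (n f x), as curried Python lambdas.
PLUS  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
TWO   = lambda f: lambda x: f(f(x))
THREE = lambda f: lambda x: f(f(f(x)))

inc = lambda n: n + 1
# PLUS 2 3 applied to inc and 0 applies inc five times:
assert PLUS(TWO)(THREE)(inc)(0) == 5
```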

We can then define multiplication as:

MULT = λm. λn. m (λk. PLUS k n) 0
(that is, apply "add n" m times, starting from 0)
We can also define a predecessor function PRED n --> n - 1:
PRED = λn. λf. λx. n (λg. λh. h (g f)) (λu. x) (λu. u)
Try it out! (or take my word for it :-)
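Taking the writeup up on "Try it out!", here is a Python sketch of MULT and PRED, with helper names (`church`, `to_int`) of my own for converting back to ordinary integers:

```python
# Church encodings of the definitions above, as curried Python lambdas.
ZERO = lambda f: lambda x: x
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: m(lambda k: PLUS(k)(n))(ZERO)
PRED = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

# Helpers to move between ints and Church numerals.
church = lambda n: lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))
to_int = lambda c: c(lambda k: k + 1)(0)

assert to_int(MULT(church(2))(church(3))) == 6   # 2 * 3
assert to_int(PRED(church(3))) == 2              # 3 - 1
```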