Predicate calculus is exactly what it says: a mathematical system to perform calculations on logical predicates.

A predicate is a logical statement. Unlike a proposition, a predicate is generic: it applies to a whole range of things. By filling in concrete examples of these things, a predicate becomes a proposition.

So we can think of a predicate as a proposition with (free) variables.


  The sun shines.
                         (a proposition: either true or false)
  The sun shines at time t
                         (a predicate: true or false depending on t)
  The sun shines right now.
                         (a proposition)

If you find this confusing, it is because in natural language there really is no clear separation between propositions and predicates. Whether The sun shines. is true depends on the date and time of day, but also on the place on earth; it can even apply to a fictional situation in which time and place are not those on earth. These "hidden variables" aren't indicated in the sentence itself. It is easy to find sentences for which it would be difficult to agree on exactly which variables their truth value depends on.

In formal mathematical logic, things are different. First, we have the propositional calculus, which is concerned with the logical relationships between propositions expressible through logical connectives (and, or, implies, not, etcetera). Expressions in propositional calculus consist entirely of atomic propositions and these connectives. Its calculations represent logical inferences. For example, you have things like De Morgan's Law: for all propositions P and Q, it is the case that

  not (P and Q)   is true exactly when   not P or not Q
The calculus is a bunch of such rules; it allows sets of such propositions to be mechanically validated for consistency, brought into simpler equivalent forms, etcetera.
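With only two truth values, laws like De Morgan's can be verified by brute force over all assignments. A minimal sketch in plain Python (illustrative only; note that the law negates the conjunction as a whole):

```python
from itertools import product

# De Morgan's law: not (P and Q) is equivalent to (not P) or (not Q).
# Two propositions, two truth values each: four assignments to check.
for P, Q in product([False, True], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))

print("De Morgan's law holds for all four truth assignments")
```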

Predicate calculus introduces the notion of variables mentioned above. For example, we can have

  S(t,p)

to represent the predicate "it rains at time t, place p".

What is more interesting, it allows propositions to be stated about these predicates, using quantors (more commonly called quantifiers). There are two quantors. The existential quantor is traditionally denoted as a reversed E; I will use a normal E here:

  E.t: E.p: S(t,p)
This states: "there is a t such that there is a p such that S(t,p)", in plain English: "sometimes, it rains somewhere".

The universal quantor, usually denoted with a reversed A, here with a normal A:

  A.t: E.p: S(t,p)
Literally: "for all t there is a p such that S(t,p)"; in English, "there is always a place in which it rains", or shorter, "it always rains somewhere". We can also say
  E.p: A.t: S(t,p)
"there is a place in which it always rains".

This simple system, combined with and, or, and not, does quite a good job of capturing the meaning of words such as "and", "or", "not", "some", "any", "all", "no", and the like, as used in English.
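On a finite domain, the quantors read directly as Python's any (E) and all (A). A toy model of S(t,p) with made-up rain data, evaluating the three formulas above:

```python
# A toy finite model of the predicate S(t, p) = "it rains at time t, place p".
# The times, places and rain data are invented for illustration.
times  = [0, 1, 2]
places = ["york", "leeds"]
rain   = {(0, "york"), (1, "leeds"), (2, "york")}

def S(t, p):
    return (t, p) in rain

# E.t: E.p: S(t,p)  -- "sometimes, it rains somewhere"
assert any(any(S(t, p) for p in places) for t in times)

# A.t: E.p: S(t,p)  -- "it always rains somewhere"
assert all(any(S(t, p) for p in places) for t in times)

# E.p: A.t: S(t,p)  -- "there is a place in which it always rains"
# (false in this model: no single place is rainy at every time)
assert not any(all(S(t, p) for t in times) for p in places)
```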

A very common addition is equality: A.t: A.p: A.q: (not (S(t,p) and S(t,q)) or p = q), that is, at any time, there are no two places in which it rains, unless they're the same, or in plainer English: it always rains in one place.

But wait a minute ... isn't "it always rains in one place" the same proposition as "it always rains somewhere"? I don't think so, but we could argue about that. The beauty of predicate logic is that we now have a way to state our arguments precisely. This is why it is indispensable in formal (mathematical) reasoning.

English and its speakers, on the other hand, tend to be unclear and sloppy. I feed "rains in one place" to Google, and the 4th hit returns

  It rarely rains in one place for long, and it's almost always sunny somewhere.
("Affordable Paradise", Hawaii). Aren't you tempted to read that when going to Hawaii, wherever you go, it will almost always be sunny there, and even in cases where it rains, the rain won't last? Well, technically, that is not implied: it might be the case that the islands have 99% rain, but there is this tiny patch of sun that wanders around, visiting every place with some regularity. These different interpretations correspond to different formulas of predicate logic, which can then be manipulated with the calculus, for example, to see if one is always true when the other is; a job that isn't all that easy, so it's best done by machines.

(The quantifiers "rarely" and "almost always" could be added just like E and A.)

The rules of propositional calculus also apply to predicate calculus, but more rules are present to deal with quantors; for example, an equivalent of De Morgan's law:

  A.x: P(x)   is true exactly when   not E.x: not P(x)
that is, a predicate holds for everything exactly when there is nothing for which it does not hold.
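The quantor version of De Morgan's law can likewise be checked exhaustively on a small finite domain, trying every possible predicate P on it (a sketch, not a proof for infinite domains):

```python
from itertools import product

# Check  A.x: P(x)  <=>  not E.x: not P(x)
# for every possible predicate P on a three-element domain.
domain = [0, 1, 2]

for bits in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, bits))          # one candidate predicate P
    lhs = all(P[x] for x in domain)      # A.x: P(x)
    rhs = not any(not P[x] for x in domain)  # not E.x: not P(x)
    assert lhs == rhs

print("quantifier De Morgan holds for all 8 predicates on the domain")
```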

We speak of first order predicate calculus if predicates cannot be quantified over. A formula such as

  A.Q: E.x: (Q(x) and not P(x)) or (not Q(x) and P(x))

is a statement of second order predicate calculus.

Relational calculus, or to be more exact domain calculus, is first order predicate calculus with use of the equality primitive = on variables. It is the basis of database languages such as SQL. In such languages, more primitives are present to express properties of the domains from which the variables are drawn, e.g. the < comparison on numerical variables.
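The flavour of a domain-calculus query can be sketched in Python with set comprehensions: variables range over a domain, and = and < appear as extra primitives on that domain. The table of people and ages is invented for illustration:

```python
# A domain-calculus-style query: variables range over a table of
# (name, age) rows; = and <= act as extra primitives. Data is made up.
people = [("alice", 34), ("bob", 34), ("carol", 51)]

# "names x such that E.y: age(x) = age(y) and x != y"
# -- people who share an age with someone else
same_age = {x for (x, ax) in people
              for (y, ay) in people
              if ax == ay and x != y}
assert same_age == {"alice", "bob"}

# "names x such that A.y: age(x) <= age(y)"  -- the youngest people
youngest = {x for (x, ax) in people
              if all(ax <= ay for (_, ay) in people)}
assert youngest == {"alice", "bob"}
```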

First order predicate calculus is limited in expressive power. It is impossible to express, for instance, that a predicate P holds more often than not. The attempt

  A.x: E.y: P(x) and not P(y)

fails for the reason that the same y can be used for different x. A second order expression is required, e.g.

  not E.R: A.x: E.y: not R(x,y) or
      ( P(x) and not P(y) and A.x': not R(x',y) or x'=x )

A common use for second order logic is the expression of transitive closure and other forms of iteration. For example, if P(p,c) expresses the relation "person p is a natural parent of person c", then you typically need second-order logic to say things about the ancestry relationship, for example, the circumstance that no person is his or her own ancestor.
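Although first-order logic cannot define ancestry from parenthood, the transitive closure is easy to compute. A sketch with an invented parent relation:

```python
# The ancestor relation is the transitive closure of the parent
# relation P(p, c). The family data here is invented.
parent = {("ann", "bea"), ("bea", "cal"), ("cal", "dan")}

def transitive_closure(rel):
    """Repeatedly compose the relation with itself until nothing new appears."""
    closure = set(rel)
    while True:
        extra = {(a, c)
                 for (a, b) in closure
                 for (b2, c) in closure
                 if b == b2}
        if extra <= closure:
            return closure
        closure |= extra

ancestor = transitive_closure(parent)
assert ("ann", "dan") in ancestor                # great-grandparent counts
assert all(a != d for (a, d) in ancestor)        # no one is their own ancestor
```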

Sometimes in logical analysis we need to go further than just analysing whole sentences, as propositional calculus does. Predicate calculus provides a system of logical analysis that goes deeper than propositional calculus and analyses within sentences.

It allows us to analyse the sentence: "Fido is a dog"

Into the "predicate": "x is a dog"

And the "designator": "Fido"


A designator is a single word or phrase that is used to refer to one single thing. Designators can be proper names, non-count nouns, singular personal pronouns, and definite descriptions. Some examples would be:

John Smith
New York
The Titanic
The Queen of England
My father


A predicate is simply a matrix into which designators can be placed to yield a declarative sentence. The spaces in the matrix into which designators are put are marked by "individual variables". These individual variables are usually letters at the end of the alphabet - "x", "y", "z", "u", "v", "w".

So we can see that "x is a cow" is a predicate as we can replace "x" with any designator and get a grammatical declarative sentence (even though it might not make any sense semantically).

Predicates can have different numbers of "places". The number of places is determined by the number of distinct individual variables used. It is important to note that if the same individual variable is repeated, then all of its occurrences must refer to the same designator.

Examples of 1-place predicates:

x is a cow

The oldest man in the world is y

x loves x.

Examples of 2-place predicates:

x loves y

x is y's mother

x likes y better than y's brother

Predicates can have any number of places. 0-place predicates also exist - they are just normal sentences: predicates with no individual variables.

Predicates and truth functors:

Just as in propositional calculus, propositions could be linked by truth functors*, so too in predicate calculus can predicates be linked by truth functors:

[x is a cow ^ x is brown]

[x is the son of y → y is the mother of x]


In predicate calculus we do not have to create sentences out of predicates by replacing individual variables with designators. Instead we can use "quantifiers" at the start of predicates. There are two types of quantifier:

The Existential Quantifier:

The existential quantifier is written "Ex" (NB: it is actually a backwards "E", but this does not seem to appear correctly on my browser) and is pronounced "There is x such that...", or "There is something that...", or some similar expression.


"Ex x is a cow" could read "Something is a cow"

The Universal Quantifier:

The universal quantifier is written "Ax" (NB: it is actually an upside-down "A", but this does not seem to appear correctly on my browser) and is pronounced "For all x…." or similar.


"Ax x is a cow" could read "Everything is a cow"

In predicates with a quantifier, we say that the individual variable is "bound" by the quantifier. A variable without a quantifier is said to be "free" - i.e., it is free to take a designator. Quantifiers with "x" bind only occurrences of the variable "x", quantifiers with "y" bind only occurrences of the variable "y", and so forth.

Important translation points:

"Ex x is a cow" means exactly the same as "Ey y is a cow".

"[Ex x is a cow ^ Ey y is a cow]" does NOT mean that there are two cows. There is a way to say this, which will be detailed later.

"Ex x is as heavy as x" translates to "Something is as heavy as itself".

"ExEy x is as heavy as y" does NOT translate to "Something is as heavy as something else". It actually translates to "Something is as heavy as something".
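The last point can be seen concretely: on a one-element domain, "ExEy x is as heavy as y" is satisfied with x and y picking out the very same object. A small sketch (the object and its weight are invented):

```python
# Two existential quantifiers need not pick out two different things:
# on a one-element domain, x and y both denote the single object.
domain = ["rock"]
weight = {"rock": 5}

def as_heavy_as(x, y):
    return weight[x] >= weight[y]

# ExEy: x is as heavy as y -- true, with x = y = "rock"
assert any(any(as_heavy_as(x, y) for y in domain) for x in domain)
```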


Quantifiers are similar to "¬" (NOT) in terms of scope. They affect whatever they are placed before. Placed before a predicate, a quantifier affects that predicate only. When placed in front of brackets, the quantifier affects the entire contents of the brackets.

The following is a 1-place predicate, but only the first occurrence of the variable is bound - the second one is free:

[Ex x is a cow ^ x likes Daisy]

We cannot have a quantifier with an "x" (for example) in it within the scope of another "x" quantifier. So the following is not allowed:

Ex[x is a cow ^ Ex x likes Daisy]

However, the following would be allowed:

[Ex x is a cow ^ Ex x likes Daisy]

Ex[x is a cow ^ Ey y likes Daisy]

ExEy[x is a cow ^ y likes Daisy]

All these expressions are equivalent, but it is important to note, with respect to the last example, that it is not always possible to move all the quantifiers to the front of an expression without altering its meaning somewhat.

There is a problem with the following expression:

[Ex y is a cow ^ x is a cow]

We call this sort of quantifier "vacuous", meaning that it binds no variables. Here, the "Ex" has scope over "y is a cow" only, which contains no occurrence of "x", so the quantifier binds nothing. The problem in this case can be solved by moving the quantifier outside the brackets:

Ex[y is a cow ^ x is a cow]

Some simple translations:

"Some cows are brown" - Ex[x is a cow ^ x is brown]

(Or "There is something that is a cow and brown" - if there are "some" cows, then there is "a" cow, so we don't need to take into account the number of cows (which is ambiguous anyway) in the translation).

"All cows are brown" - Ax[x is a cow → x is brown]

(Or, "If anything is a cow, then it is brown". This expression comes out true even if there are no cows. This is because "[x is a cow → x is brown]" is equivalent to "[¬x is a cow v x is brown]" (they have the same truth-table) so if there are no cows, then that is taken into account).
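This vacuous-truth behaviour can be seen directly by rewriting the implication as "¬x is a cow v x is brown" and evaluating it over a cow-free domain (the domain here is made up):

```python
# "All cows are brown" as: for every x, (not cow(x)) or brown(x).
# Over a domain with no cows the universal claim comes out true.
domain = ["mushroom", "badger"]          # no cows at all
is_cow   = lambda x: False
is_brown = lambda x: x == "mushroom"

# Ax[x is a cow -> x is brown] -- vacuously true: no counterexample exists
assert all((not is_cow(x)) or is_brown(x) for x in domain)
```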

"No cows are brown" - ¬Ex[x is a cow ^ x is brown]

(Or, "It is not true that some cows are brown ", or even, "There is nothing that is a cow and brown").

Another way of translating the above is - Ax[x is a cow → ¬x is brown] - (Or, "All cows are not brown").

Some common errors:

Ax[x is a cow ^ x is brown] does NOT mean "All cows are brown". It actually means "Everything is a brown cow".

Ex[x is a cow → x is brown] does NOT mean "Something is a brown cow". This expression has a somewhat peculiar meaning. It is equivalent to saying "Ex[¬x is a cow v x is brown]", or "There is something that is either not a cow or is brown". This expression will be true if there is a brown cow, or a brown mushroom, or a purple badger, and so forth.

The Domain of Quantification:

It can be helpful, if not essential, to restrict the domain of quantification somehow, that is, to set the range of things to which the quantifiers apply. It is not possible to specify multiple domains for multiple quantifiers - all quantifiers must refer to the same domain. So, if we set the domain to encompass "all cows", then all quantifiers refer to "everything that is a cow". So:

"Ex x is brown" translates to "Some cow is brown"

"Ax x is brown" translates to "All cows are brown"

Restricting the domain of quantification allows us to more simply translate "All" and "Some" sentences.

The relationship between Ax and Ex:

The relationship between "Axa" and "Exa" (where "a" is a predicate containing "x").

"Ax¬a" is equivalent to "¬Exa"

"Ex¬a" is equivalent to "¬Axa"

So, "Nothing is purple" can be translated "¬Ex x is purple", or "Ax ¬x is purple" ("There is nothing such that it is purple" and "Take anything: it is not purple").

And, "Everything is purple" can be translated "Ax x is purple", or "¬Ex ¬x is purple" ("Take anything: it is purple" and "There is nothing such that it is not purple").

Empty domains:

If the domain of quantification is empty (contains no things), then any expression in the form "Exa" is false, since there is nothing for "x" to refer to. Conversely, any expression in the form "Axa" is always true, as it is equivalent to "¬Ex¬a". So too is any expression in the form "¬Exa" or "Ax¬a" always true, and any in the form "¬Axa" or "Ex¬a" always false.
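Python's any() and all() behave exactly this way on an empty sequence, which makes the empty-domain rules easy to check:

```python
# Over an empty domain, every existential is false and every
# universal is true -- precisely any()/all() on an empty sequence.
empty = []
purple = lambda x: True      # the predicate itself is irrelevant here

assert any(purple(x) for x in empty) is False   # "Exa" is false
assert all(purple(x) for x in empty) is True    # "Axa" is true
```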

Predicate Formulae:

As with propositional formulae, in predicate calculus we symbolise sentences to produce a predicate formula. The process of creating a predicate formula is called "formalisation". A formalisation requires a domain to be specified and a "key" giving the symbols for each predicate and designator.

It is best to describe through example. Let's formalise the sentence:

"All boys like football"

We can specify our domain as "everything" (the "universal set"). The first stage of formalisation produces:

Ax[x is a boy → x likes football]

We can now provide a key of symbolisation of the predicates and designator. The convention is to use lower case letters at the beginning of the alphabet to represent designators (e.g., a, b, c). Capital letters followed by an appropriate number of lower case letters from the end of the alphabet are used to symbolise predicates with their individual variables (e.g., Fx, Gy, Hxy, Rxyz).

A key for the above expression could be:

a: football

Bx: x is a boy

Lxy: x likes y

(Notice that "Lxy" stands for "x likes y". We replace the designator "football", and any designator that we have in the sentence, with a variable.)

This process arrives at the following formula:

Ax[Bx → Lxa]

(The "y" variable is replaced by the designator "a")

It is important to say at this point that it is necessary for designators in the interpretation to stand for something definite. So, for instance, the noun phrase "the largest even number" could not be represented by "a" (for instance) since the largest even number does not exist.

It will be helpful to show some examples of other formalisations. We will use the following interpretation:

Domain: everything

a: Daisy

Cx: x is a cow

Px: x is a person

Lxy: x likes y

1) "Only cows like Daisy"

Various ways of expressing this:

Ax[¬Cx → ¬Lxa] - "Anything which isn't a cow doesn't like Daisy"

Ax[Lxa → Cx] - "Anything which likes Daisy is a cow"

¬Ex[Lxa ^ ¬Cx] - "There is nothing which likes Daisy but isn't a cow"

2) "Daisy likes all cows that like themselves"

Ax[[Cx ^ Lxx] → Lax] - "If anything is a cow and likes itself, then Daisy likes it" (the gist is there)

3) "If anyone that likes a cow likes Daisy, then someone likes Daisy"

[Ax[[Px ^ Ey[Cy ^ Lxy]] → Lxa] → Ex[Px ^ Lxa]] - "Something is a person and likes Daisy, provided that, if anything is a person and something is a cow and likes them, then they like Daisy" - (That's the best I can do to translate the expression "word for word" - formalisations can be very difficult to do as predicate logic is not nearly as expressive as English)

A point about scope:

The order in which quantifiers are placed can make significant differences to the meaning of a formalisation. Take the following two examples using the same interpretation as in the previous section:

1) AxEy Lxy

2) EyAx Lxy

The first formula reads "Everything likes something" whereas the second formula reads "There is something such that everything likes it". The English sentence "Everything likes something" is ambiguous: it could mean that there is one thing that everything likes (i.e., what formula two expresses), or that everything likes its own "thing" (what formula one expresses). The formulas themselves are not ambiguous: the second implies the first, but not the other way round.
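The difference made by quantifier order can be demonstrated on a tiny made-up model in which each of two things likes only the other:

```python
# AxEy Lxy ("everything likes something") versus
# EyAx Lxy ("there is something that everything likes").
# Toy data chosen so the two formulas come apart.
domain = ["a", "b"]
likes  = {("a", "b"), ("b", "a")}        # each likes only the other

AxEy = all(any((x, y) in likes for y in domain) for x in domain)
EyAx = any(all((x, y) in likes for x in domain) for y in domain)

assert AxEy is True      # everything likes something
assert EyAx is False     # but nothing is liked by everything
```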

Predicate Tableaux:

As with propositional calculus, we can analyse the validity of arguments of predicate logic using tableaux. All the same rules apply (as detailed in my propositional calculus write-up), except with predicate tableaux there are four new rules:

The "Axa" rule:

Axa
b

This rule allows us to add "b" to the branch, where "b" is the result of replacing every free occurrence of "x" in "a" with a designator that has already occurred in the branch.

The rule can be applied thus:

AxFx
Fa

So here (assuming the designator "a" has already occurred in the branch) we can take the "a" and replace the "x" in "Fx" with it. Another application of the rule follows:

Ax[Gx ^ Rxx]
[Ga ^ Raa]

The rule applies when "Ax" is at the front of the expression and only affects the variables it is bound to:

Ax[Gx → Ey[Rxy v Pxy]]
[Ga → Ey[Ray v Pay]]

The "Exa" Rule:

Exa
b

This rule allows us to replace all free occurrences of "x" in the predicate "a" with some designator that has not yet appeared in the branch. "b" is the result of this action. This rule is derived from the fact that since the predicate refers to "something", we will not introduce any inconsistency by giving that something a name.

The following is a correct application of the rule (where "a" has not yet appeared in the branch):

ExFx
Fa

So is:

Ex[Fxx ↔ ¬Ey[Gyx v Rx]]
[Faa ↔ ¬Ey[Gya v Ra]]

The "¬Axa" Rule:

¬Axa
Ex ¬a

The basis of this rule - the equivalence of "¬Axa" and "Ex ¬a" - has already been established.

The "¬Exa" Rule:

¬Exa
Ax ¬a

The basis of this rule - the equivalence of "¬Exa" and "Ax ¬a" - has already been established.


There is no limit to how many times these new predicate tableaux rules can be applied. In fact, it is sometimes necessary to repeat the rules in order to close branches. In reality, the only rule that is really of any use when applying multiple times is the "Axa" rule as it allows you to take any designator from a branch and apply it to a new expression that may serve to close the tableau.

It is worth highlighting an important difference between predicate tableaux and propositional tableaux. Propositional tableaux will always finish, either by all the branches being closed, or because no more rules can be applied. Predicate tableaux are not bound to finish in this way, as the predicate tableau rules can be reused over and over again. If in the construction of a predicate tableau this situation arises, there are two possible conclusions: the argument being tested may be invalid, or you have made a mistake or need to apply the rules differently. Constructing predicate tableaux is a much more creative process, as you need to think about each step, looking back at what you have already derived in each branch and seeing how you can apply rules in order to close branches. Propositional tableaux are pretty well automatic and will close or not close no matter how you apply the rules.

Example of a Predicate tableau proof of validity:

Say we have the following argument, formalised into the predicate calculus:

Ax[Fx → Ay[Gy → Hxy]]
Ax[Fx → Ay[Jy → ¬Hxy]]
ExFx
Therefore, Ax[Gx → ¬Jx]

So, the counter-example set** is:

Ax[Fx → Ay[Gy → Hxy]]
Ax[Fx → Ay[Jy → ¬Hxy]]
ExFx
¬Ax[Gx → ¬Jx]

And the tableau is (with line numbers for reference; "x" marks a closed branch):

1.  Ax[Fx → Ay[Gy → Hxy]]
2.  Ax[Fx → Ay[Jy → ¬Hxy]]
3.  ExFx
4.  ¬Ax[Gx → ¬Jx]
5.  Ex ¬[Gx → ¬Jx]
6.  Fa
7.  ¬[Gb → ¬Jb]
8.  Gb
9.  ¬¬Jb
10. Jb
11. [Fa → Ay[Gy → Hay]]
         |              |
12.     ¬Fa        Ay[Gy → Hay]
         x
13.                 [Gb → Hab]
                    |         |
14.                ¬Gb       Hab
                    x
15.                      [Fa → Ay[Jy → ¬Hay]]
                          |              |
16.                      ¬Fa       Ay[Jy → ¬Hay]
                          x
17.                              [Jb → ¬Hab]
                                  |        |
18.                              ¬Jb     ¬Hab
                                  x        x
Lines 1-4: The members of the counter-example set.

Line 5: From line 4 by the "¬Axa" rule. The "Exa" rule could have been applied from line 3. Note that we can't apply the "Axa" rule to lines 1 and 2 until we have some designators in the branch.

Line 6: From line 3 by the "Exa" rule.

Line 7: From line 5 by the "Exa" rule. It would have been possible to apply the "Axa" rule to line 1 or 2, but it is usually more beneficial to apply the "Exa" rule first, as this supplies new designators from which we can apply later using the "Axa" rule.

Lines 8-10: It is helpful to apply the rules here to simplify things. Breaking things up as far as we can will help in spotting contradictions.

Line 11: Application of the "Axa" rule to line 1. Using "a" should help, since we have "Fa" in line 6. It is helpful to try and proceed in this sort of way.

Line 12: Here we benefit from using "a" in line 11. It has allowed us to close the left branch, and also get "Ay[Gy → Hay]" from lines 6 and 11. In general, when we have "a" in one line and "[a → b]" in another we can get "b", because when we apply the "[a → b]" rule the left branch closes.

Line 13: Here it is best to use "b", as we have "Gb" in line 8.

Line 14: From that we get Hab.

Line 15: In the application of the "Axa" rule to line 2 using "a" is the sensible option given that we have "Fa" in line 6.

Line 16: We obtain "Ay[Jy → ¬Hay]"

Line 17: The presence of the "Jb" in line 10 means that using "b" is a good idea here.

Line 18: We get "¬Hab" here and now the right branch closes as well as the left one. Actually we can see that the left hand branch could have closed because of the presence of "¬¬Jb" in line 9: so as it turned out we didn't need to take the step in line 10 at all, but doing so is still good practice.

Predicate Calculus with Identity:

Many sentences in English have some sort of numeric quantification to them. We know how to say, for instance, "There is a dog" (where Dx: x is a dog):

ExDx
So what if we wanted to say; "There are two dogs"? Initially it seems that it would suffice to say:

ExEy[Dx ^ Dy]

While this is true if there are two (or more) dogs, it is also true if there is only one dog. The problem is that there is nothing here that says the "x" and "y" variables cannot be the exact same thing. We need to use the notion of "identity", using expressions "x = y" ("x" and "y" are one and the same thing) and "¬x = y" ("x" and "y" are not the same thing). We negate "x = y" by simply putting the "¬" sign in front of it - we do not need brackets, as it would be nonsense to negate just the "x" (thereby saying "something that is not 'x' is the same as 'y'"). These identities are also predicates themselves. So now we can effectively say "there are two dogs":

ExEy[[Dx ^ Dy] ^ ¬x = y]

This translates literally as something like: "There is something that is a dog, and something that is a dog, and those two things are not one and the same". Technically, this says that there are at least two dogs, but that is what "there are two dogs" is normally taken to mean. If we want to say "There are at least three dogs" we simply add another variable:

ExEyEz[[Dx ^ [Dy ^ Dz]] ^ [¬x = y ^ [¬x = z ^ ¬y = z]]]

Note that we limit each pair of brackets to containing two related terms. Sometimes it doesn't really matter (e.g., with the AND function), but it is best to impose this limitation at all times to avoid error.
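The effect of the identity clause can be checked on small domains: without "¬x = y", a single dog already satisfies the formula. A sketch (the dog names are invented):

```python
# "At least two dogs": ExEy[[Dx ^ Dy] ^ ¬x = y].
# The naive version without the identity clause is satisfied by one dog.
def at_least_two_dogs(domain, is_dog):
    return any(any(is_dog(x) and is_dog(y) and x != y
                   for y in domain) for x in domain)

def naive(domain, is_dog):               # ExEy[Dx ^ Dy], no "¬x = y"
    return any(any(is_dog(x) and is_dog(y)
                   for y in domain) for x in domain)

is_dog = lambda x: True                  # everything in these domains is a dog

assert naive(["fido"], is_dog) is True               # wrongly satisfied
assert at_least_two_dogs(["fido"], is_dog) is False  # one dog is not two
assert at_least_two_dogs(["fido", "rex"], is_dog) is True
```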

That is "at least" dealt with - all we need to do is have an appropriate number of variables. What about saying "at most"? This is simple. Saying "There is at most one dog" is equivalent to saying "There are not at least two dogs". So, we just negate "There are at least two dogs":

¬ExEy[[Dx ^ Dy] ^ ¬x = y]

This can also be said using the universal quantifier:

AxAy[[Dx ^ Dy] → x = y]

This says that if x is a dog and y is a dog, then x and y are one and the same thing - the essence of "At most one dog". Of course another way of saying the same thing again is by changing the type of quantifier:

AxAy¬[[Dx ^ Dy] ^ ¬x = y]

It's all very well being able to say "at least" and "at most", but how does this help us if there is exactly one dog? Some reasoning on this issue provides the answer. "Exactly one" is equivalent to saying "at least one, and at most one". So "Exactly one dog" can be said:

[ExDx ^ ¬ExEy[[Dx ^ Dy] ^ ¬x = y]]

Or alternatively (and more concisely):

Ex[Dx ^ ¬Ey[Dy ^ ¬x = y]] -- "There is at least one dog, and there isn't another"

ExAy[x = y ↔ Dy]

However, it is very easy to make a fairly drastic error with this last expression:

ExAy[x = y ↔ Dx]

This means that there is only one thing in the whole universe and that thing is a dog!
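The "exactly one" formula ExAy[x = y ↔ Dy] can also be tested over small domains (the names here are invented):

```python
# "Exactly one dog": ExAy[x = y <-> Dy] -- some x such that a thing
# is a dog precisely when it *is* x.
def exactly_one_dog(domain, is_dog):
    return any(all((x == y) == is_dog(y) for y in domain)
               for x in domain)

assert exactly_one_dog(["fido", "cat"], lambda x: x == "fido") is True
assert exactly_one_dog(["cat", "mat"], lambda x: False) is False   # no dogs
assert exactly_one_dog(["fido", "rex"], lambda x: True) is False   # two dogs
```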

We can now add identities to our logical armoury and formalise the following sentence:

"Farmer Giles owns exactly one brown cow"


Domain: everything

Cx: x is a cow

Bx: x is brown

Oxy: x owns y

a: Farmer Giles

Ex[[Cx ^ [Bx ^ Oax]] ^ Ay[[Cy ^ [By ^ Oay]] → y=x]]

Leibniz's Law:

Here is an opportune moment to define "Leibniz's law". It is important for the next section. Leibniz's law states that if two designators "a" and "b" are equivalent, then anything that is true of "a" must also be true of "b". So any true predicate containing "a" is also true if "a" is replaced by "b". It is the same with false predicates.

However, there is an important point to make here, related to, although not necessarily part of, Leibniz's law. Say we have the sentence:

"Lois Lane thinks Clark Kent wears glasses"

Now, Clark Kent is Superman, so presumably the designators "Clark Kent" and "Superman" are equivalent and interchangeable. But is it true to say:

"Lois Lane thinks Superman wears glasses"

There is a problem: Lois Lane does not think Superman wears glasses, because she does not know that Superman and Clark Kent are one and the same person. This point is purely academic and we do not complicate our predicate calculus interpretations by considering such things. Technically, the designator "Clark Kent" here is not a purely referential occurrence.

Tableau Rules for Identity:

Earlier I said there were four additional rules for predicate tableaux. However, there are actually six in total. The two new rules are very closely related and are practically identical to each other. The rules are based on "Leibniz's law".

The Left-Hand Rule:

P=Q
a
b

The Right-Hand Rule:

Q=P
a
b

Where in both cases "b" is the result of replacing the designator "P" in "a" with "Q" (which is equivalent to "P").

So the following can now be accomplished in predicate tableaux:

a=b
Fa
Fb

Identities can be a useful tool for closing branches: a branch containing "a=b", "Fa" and "¬Fb" can be closed, since the rules give us "Fb", which contradicts "¬Fb". And if we have "a=b" and "¬a=b" in the same branch, we can close the branch due to contradiction. The following - substituting identities into each other - is also a clever trick to close a branch:

a=b
b=c
¬a=c
a=c
* - truth functors are: ^ (AND), v (OR), → (IF), ↔ (IF AND ONLY IF), ¬ (NOT) - I have detailed them in my "propositional calculus" write-up.

** - the counter-example set is needed to test for validity: if the counter-example set is inconsistent (all the branches of the tableau close), then the original argument must be valid. Again, this is detailed in my propositional calculus write-up.

Wilfred Hodges, "Logic"
