A function f:I_{1}×...×I_{k}→O, where I_{1}, ..., I_{k}, O are all ordered sets or partially ordered sets (posets), is called *monotone ascending (or descending)* if whenever

x_{1} ≤ y_{1},

...

x_{k} ≤ y_{k},

we also have that f(x_{1}, ..., x_{k}) ≤ f(y_{1}, ..., y_{k}) (or that f(x_{1}, ..., x_{k}) ≥ f(y_{1}, ..., y_{k})).

For a function g of one variable, g is ascending if x≤y implies g(x)≤g(y), and descending if x≤y implies g(x)≥g(y). Note that a product of ordered sets (or of partially ordered sets) always carries a "natural" partial ordering: componentwise comparison. So we can consider the function f above as a function of one variable on the poset I_{1}×...×I_{k}. Under this view, the two definitions of monotonicity coincide.
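To make the definition concrete, here is a small sketch (the helper `is_monotone_ascending` and the example functions are illustrative, not part of the entry) that brute-force checks ascending monotonicity of a function on a finite product of chains, using the natural componentwise partial order:

```python
from itertools import product

def is_monotone_ascending(f, domains):
    """Brute-force check that f is monotone ascending on the product
    of the given finite ordered domains, where the product carries the
    componentwise ("natural") partial order."""
    points = list(product(*domains))
    for xs in points:
        for ys in points:
            # xs <= ys in the product order iff x_i <= y_i for every i
            if all(x <= y for x, y in zip(xs, ys)) and not f(*xs) <= f(*ys):
                return False
    return True

doms = [range(4), range(4)]
print(is_monotone_ascending(lambda x, y: x + y, doms))   # True
print(is_monotone_ascending(lambda x, y: x - y, doms))   # False
```

Note that f(x, y) = x - y fails precisely because it ascends in x but descends in y, so it is monotone in neither sense on the product.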

For real functions (functions taking arguments and values in the real numbers), and more generally for functions defined on sets which also carry a topology, ``monotonicity'' is sometimes taken to mean ``*local* monotonicity''. For instance, monotonicity of real Möbius functions holds only in this local sense: the function f(x)=1/x is locally descending, yet f(-1) < f(1).
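A quick numerical illustration of this local-versus-global distinction (the sample points are arbitrary choices for the sketch):

```python
# f(x) = 1/x is descending on (0, inf) and on (-inf, 0) separately,
# yet not descending globally, since f(-1) < f(1).
f = lambda x: 1 / x

pos = [0.5, 1.0, 2.0, 4.0]        # increasing points in (0, inf)
neg = [-4.0, -2.0, -1.0, -0.5]    # increasing points in (-inf, 0)

assert all(f(a) >= f(b) for a, b in zip(pos, pos[1:]))  # descending on the right branch
assert all(f(a) >= f(b) for a, b in zip(neg, neg[1:]))  # descending on the left branch
assert f(-1) < f(1)  # but -1 <= 1 with f(-1) < f(1): not globally descending
print("1/x is locally, but not globally, descending")
```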

Monotone functions aren't quite as boring as their name might imply. Some examples:

- Let f:U->**R** be a monotone function defined from some open subset U of **R** to the real numbers. Then, for any x∈U the one-sided limits
f(x^{+})=lim_{t->x, t>x} f(t)

and
f(x^{-})=lim_{t->x, t<x} f(t)

exist, and indeed, for f ascending, f(x^{-})≤f(x^{+}) (for f descending the inequality is reversed). (Understand "f(x^{+})" and "f(x^{-})" as complete symbols, not composed of parts, just like phrases "lim..." can only be understood as complete symbols; *there are no x*^{+} and x^{-}!)
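The one-sided limits can be approximated numerically by sampling just below and just above x. A crude sketch (the helper `one_sided_limits` and the choice of eps are mine, and this is an approximation, not a proof), using the monotone floor function, whose jump at x=1 makes f(x^{-}) and f(x^{+}) visibly differ:

```python
import math

def one_sided_limits(f, x, eps=1e-9):
    """Crudely approximate f(x^-) and f(x^+) for a monotone f by
    sampling just below and just above x."""
    return f(x - eps), f(x + eps)

left, right = one_sided_limits(math.floor, 1.0)
print(left, right)   # 0 1: the jump of floor at x=1, with f(x^-) <= f(x^+)
```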
- In Computer Science, boolean functions f:{0,1}^{n}->{0,1} are important for obvious reasons. Various measures of the *computational complexity* of such functions exist. Unfortunately, lower bounds for all the useful measures are extremely difficult to prove (a breakthrough in the general computational complexity of boolean functions might go a long way towards resolving whether P = NP...). One such measure is the size of the smallest circuit computing f; frankly, modern Computer Science has no leads on how to prove a super-linear lower bound on the circuit size of any explicit f.

But if f is *monotone*, we can compute it with a "monotone circuit" (a circuit using OR and AND gates, but no NOT gates). Super-polynomial lower bounds on the size of the smallest monotone circuit computing various monotone functions f are known. For instance, the majority function MAJ_{n} is clearly monotone: increasing the number of `1' voters may change the decision from `0' to `1', but never from `1' to `0'. MAJ_{n} itself can be computed by monotone circuits of polynomial size (e.g. via sorting networks). But Razborov proved that the perfect matching function, which is also monotone and can be computed by general circuits of polynomial size, requires monotone circuits of super-polynomial size; this shows the power of the NOT gate. (For the monotone clique function, believed to have no small circuits of any kind, even exponential monotone lower bounds are known.)
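For the smallest interesting case, a monotone circuit for MAJ_{3} can be written down directly; the sketch below (the function name is mine) evaluates it and brute-force verifies both correctness and monotonicity:

```python
from itertools import product

# A monotone circuit for MAJ_3: only AND (&) and OR (|) gates, no NOT.
def maj3_monotone(x, y, z):
    return (x & y) | (y & z) | (x & z)

# Correctness: agrees with "at least 2 of 3 inputs are 1".
for bits in product([0, 1], repeat=3):
    assert maj3_monotone(*bits) == (sum(bits) >= 2)

# Monotonicity: raising any input 0 -> 1 never drops the output 1 -> 0.
for bits in product([0, 1], repeat=3):
    for i in range(3):
        if bits[i] == 0:
            raised = bits[:i] + (1,) + bits[i + 1:]
            assert maj3_monotone(*bits) <= maj3_monotone(*raised)

print("MAJ_3 circuit is correct and monotone")
```

The general pattern, OR over all pairs (more generally, over all (⌈n/2⌉)-subsets) of ANDs, is a monotone circuit for every MAJ_{n}, though of exponential size; the polynomial-size monotone constructions mentioned above require more cleverness.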