A function f:I1×...×Ik→O, where I1, ..., Ik, O are all ordered sets or partially ordered sets (posets), is called monotone ascending (or descending) if whenever

x1 ≤ y1,
...
xk ≤ yk,
we also have that f(x1, ..., xk) ≤ f(y1, ..., yk) (or that f(x1, ..., xk) ≥ f(y1, ..., yk)).

For a function g of one variable, g is ascending if x≤y implies g(x)≤g(y), and descending if x≤y implies g(x)≥g(y). Note that a product of ordered sets (or of partially ordered sets) always carries a "natural" componentwise partial ordering, so we can regard the function f above as a function of one variable on the poset I1×...×Ik. Under this view, the two definitions of monotonicity coincide.
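The componentwise ("product") order, and the equivalence between the two views of monotonicity, can be sketched in a few lines of code. (This is only an illustration; the names leq_componentwise and is_monotone_ascending are mine, not from the text.)

```python
from itertools import product

def leq_componentwise(x, y):
    """Product (componentwise) partial order: x <= y iff x_i <= y_i for every i."""
    return all(a <= b for a, b in zip(x, y))

def is_monotone_ascending(f, domain):
    """Brute-force check: f(x) <= f(y) whenever x <= y in the product order."""
    return all(f(x) <= f(y)
               for x in domain for y in domain
               if leq_componentwise(x, y))

# Example on the finite poset {0,1,2} x {0,1,2}:
grid = list(product(range(3), repeat=2))
print(is_monotone_ascending(lambda p: p[0] + p[1], grid))  # True
print(is_monotone_ascending(lambda p: p[0] - p[1], grid))  # False
```

Note that the check quantifies only over comparable pairs; incomparable pairs such as (0,1) and (1,0) impose no constraint, which is exactly what makes this a partial order.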

For real functions (functions taking arguments and values in the real numbers), and more generally for functions defined on sets which also carry a topology, "monotonicity" is sometimes taken to mean "local monotonicity". For instance, the real Möbius function f(x)=1/x is descending on each of the two intervals (-∞,0) and (0,∞) where it is defined, yet f(-1) = -1 < 1 = f(1); so it is only locally descending.

Monotone functions aren't quite as boring as their name might imply. Some examples:

  • Let f:U->R be a monotone ascending function defined on some open subset U of R. Then, for any x∈U the one-sided limits
    f(x+) = lim_{t->x, t>x} f(t)
    and
    f(x-) = lim_{t->x, t<x} f(t)
    exist, and indeed f(x-)≤f(x+). (For a descending function the limits still exist, and the inequality reverses. Understand "f(x+)" and "f(x-)" as complete symbols, not composed of parts, just as the phrases "lim ..." can only be understood as complete symbols; there are no objects x+ and x-!)
  • In Computer Science, boolean functions f:{0,1}^n->{0,1} are important for obvious reasons. Various measures of the computational complexity of such functions exist. Unfortunately, lower bounds on all the useful measures are extremely difficult to prove (a breakthrough in the general computational complexity of boolean functions might go a long way towards resolving whether P = NP...). One such measure is the size of the smallest circuit computing f; frankly, modern Computer Science has no leads on how to prove a super-linear lower bound on the size of the smallest circuit computing almost any explicit f.

But if f is monotone, we can compute it with a "monotone circuit" (a circuit using OR and AND gates, but no NOT gates). Super-linear lower bounds on the size of the smallest monotone circuit computing many monotone functions f are known. For instance, the majority function MAJ_n is clearly monotone: increasing the number of '1' voters may change the decision from '0' to '1', but never from '1' to '0'. And it is known that exponentially large monotone circuits are required to compute MAJ_n. (MAJ_n can easily be computed with general circuits of polynomial size, e.g. by adding up all the input bits; this shows the power of the NOT gate.)
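For a tiny concrete example of a monotone circuit (my illustration, not from the text): MAJ_3 is the OR of the three pairwise ANDs of its inputs, using no NOT gates at all.

```python
def maj3(x, y, z):
    """Monotone circuit for MAJ_3: OR of the three pairwise ANDs.
    Only AND (&) and OR (|) gates are used -- no NOT."""
    return (x & y) | (y & z) | (x & z)

# Exhaustive check against the "at least two 1s" definition of majority:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert maj3(a, b, c) == (1 if a + b + c >= 2 else 0)
```

This generalizes to an OR over all subsets of size n/2+1, but that monotone circuit has exponential size, which is why the sorting-network construction below in this node is interesting.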

Yes, there are Boolean monotone functions where the smallest monotone circuit is exponentially larger than the smallest circuit that uses NOT gates.  However, the majority function does not have that property.

Here's how to build a polynomial-size monotone circuit for the majority function.  First, build a box with two inputs and two outputs, where one output is the AND of the inputs and the other is the OR of the inputs.  Clearly this is a monotone circuit, since it doesn't have any NOT gates.  But it is also a 2-input sorting function: if you send in two bits, the OR output will be the maximum of the two bits, and the AND output will be the minimum.  (Just check all 4 cases, if it isn't obvious that that is true.)

Now look up sorting networks in Knuth.  He shows a huge number of different ways of sorting N inputs using only a polynomial number of 2-input sorting functions.  Since his networks use nothing but 2-input sorting functions, and since each of those is a monotone circuit, the entire thing will be a monotone circuit.

Now it's easy to build a majority circuit.  Given N input bits, use Knuth's circuits to sort them.  Then output the middle bit of the sorted list.  That's the majority.
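The whole construction can be sketched in code. (A sketch under my own choices: I use an odd-even transposition network, which needs O(n^2) comparators and is simpler to write down than the O(n log^2 n) networks in Knuth; any sorting network built from the AND/OR box works.)

```python
def comparator(a, b):
    """The 2-input 'box' from above: AND is the min of two bits, OR is the max."""
    return a & b, a | b   # (min, max)

def monotone_majority(bits):
    """Majority via a monotone circuit: sort the bits with an odd-even
    transposition network built only from comparators (no NOT gates),
    then output the middle bit of the sorted list."""
    n = len(bits)
    bits = list(bits)
    for rnd in range(n):                       # n alternating rounds suffice
        for i in range(rnd % 2, n - 1, 2):
            bits[i], bits[i + 1] = comparator(bits[i], bits[i + 1])
    return bits[n // 2]                        # the median is the majority

print(monotone_majority([1, 0, 1, 1, 0]))  # 1 (three of the five bits are 1)
print(monotone_majority([1, 0, 0, 1, 0]))  # 0
```

Since the network sorts ascending, the middle position holds a 1 exactly when more than half the inputs are 1 (taking n odd so "majority" is unambiguous).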

The first known example of a monotone function whose smallest circuit shrinks from exponential to polynomial size when NOT gates are allowed was more complicated than the simple majority function.  It was something about finding perfect matchings in a bipartite graph, if I remember correctly.  It wasn't terribly simple.  I don't know of any examples that are as simple as the majority function.
