I'm proposing the concept of a Kolmogorov Quotient: a calculable number that represents how much a high-level language reduces complexity; that is, the expressivity of the programming language. This notion of "simplification" is partly a matter of text-wise reduction (fewer characters needed to express a complex concept, à la Algorithmic Information Theory) and partly a harder-to-quantify notion of maintainability. Fleshing out the latter, it comes down to how easily programmers of the language reach consensus on a given task: how many of them would write it back the same way you expressed it, or would otherwise agree on the best implementation of a given problem (even if they didn't come up with it themselves)?

It is a quotient, so a higher Kolmogorov number for a given language denotes a greater reduction in the complexity of solving problems in that language.

Once the basic premise/methodology above is agreed to, any specific implementation only introduces a roughly constant offset. (That is, as long as the implementation is the same across all measurements, the numbers should be valid and comparable.) It could go something like this: pick a language "close to the machine", like C or Assembly, and measure the number of bytes of machine code it takes to implement a standard "suite of common, independent programming tasks" (*) (base_language_count). Then code exactly the same functionality in the language you want to measure (without using external libraries) and count the number of bytes of source code (test_language_count).

KQuotient = base_language_count / test_language_count.

This should always be greater than 1.0.
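As a minimal sketch of the measurement step, here is how the quotient might be computed once the byte counts are in hand. The task names and counts below are purely illustrative placeholders, not real measurements, and aggregating over the whole suite before dividing (rather than averaging per-task quotients) is one design choice among several:

```python
def kquotient(base_language_count: int, test_language_count: int) -> float:
    """KQuotient = base_language_count / test_language_count."""
    if test_language_count <= 0:
        raise ValueError("test_language_count must be positive")
    return base_language_count / test_language_count

# Hypothetical per-task byte counts: "base" is the machine-code size of the
# close-to-the-machine implementation, "test" is the source-code size in the
# language being measured. Numbers are made up for illustration.
suite = {
    "sort_records": {"base": 4096, "test": 512},
    "parse_csv":    {"base": 8192, "test": 1024},
}

# Sum over the whole suite before dividing, so no single task dominates.
total_base = sum(t["base"] for t in suite.values())
total_test = sum(t["test"] for t in suite.values())
print(f"KQuotient = {kquotient(total_base, total_test):.2f}")
```

Summing first means a language that is terse on small tasks but verbose on large ones is weighted by where the bytes actually are, which seems closer to the maintainability intent than a simple mean of ratios.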

(*) A "suite of common programming tasks" can be broken down into two categories:
  • Data Processing Suite (i.e. simple or no text I/O) -- computation towards the machine
  • GUI Suite (tasks requiring graphical I/O) -- computation towards the user 

I believe this list is exhaustive.

An end to the language wars!!!