A real number x is called a (first-order) definable number if it is the unique real number satisfying some first-order formula with one free variable. Informally, this covers all the numbers you can specify exactly, by describing which equations and other properties hold for them. Numbers are also sometimes called "definable" relative to other logical languages.
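For instance, the square root of 2 is definable. A sketch of a defining formula, in the usual first-order language of ordered fields (the exact choice of language is an assumption here):

```latex
\varphi(x) \;\equiv\; (x \cdot x = 1 + 1) \,\wedge\, (x > 0)
```

Exactly one real number satisfies both conjuncts, so φ "singles out" √2.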

All the algebraic numbers are trivially definable, and so is every other mathematical constant and number you've bumped into. This should come as no surprise: we've essentially included every number which mathematics can "call by name" (in the sense of unambiguously singling out a specific number). On the other hand, the definable numbers form only a countably infinite set: each is described by a finite logical formula, so they may easily be Gödel numbered.
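The countability argument can be sketched in a few lines: since every formula is a finite string over a finite alphabet, listing all strings shortest-first assigns each formula a natural number. (The alphabet and the name `godel_number` below are illustrative assumptions, not a full treatment of syntax.)

```python
from itertools import count, product

# Hypothetical symbol set for a toy first-order language.
ALPHABET = "x0123456789=+*<>()&|!EA "

def all_strings():
    """Yield every finite string over ALPHABET, shortest first,
    lexicographic within each length -- so every string appears
    exactly once, at a finite position."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def godel_number(formula):
    """Position of `formula` in the enumeration: its 'Gödel number'.
    Every formula (a finite string over ALPHABET) gets a distinct
    natural number, so there are only countably many formulas."""
    for n, s in enumerate(all_strings()):
        if s == formula:
            return n
```

Since each definable number is picked out by at least one formula, mapping it to the least Gödel number of such a formula injects the definable numbers into the naturals.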

So where do all the *other* real numbers come from? Where else? From the Axiom of Choice (or one of its friends). If you allow yourself not merely to specify *exactly* which number (or set, or whatever) you want, but also to talk about "some number" satisfying a property, you open the door to diagonal arguments and the like, and uncountability ensues. Fortunately, requiring a unique value satisfying a first-order formula doesn't let you pull Cantor's trick. Disallow "Choice", and you can still keep all the numbers you can "single out", yet remain with countable sets only! This is more fully discussed in computable number. The computable numbers are a (proper) subset of the definable ones: once one turns to computability, there are various interesting numbers which can be defined but not computed, such as Chaitin's constant. So halting problem-type situations supply numbers which are definable but not computable.
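The diagonal argument mentioned above fits in a few lines. Given *any* enumeration of reals, presented here (an illustrative interface, not standard library code) as a function `digits(n, k)` returning the k-th decimal digit of the n-th real, we can produce a real missing from the list:

```python
def diagonal_escape(digits):
    """Cantor's diagonal construction. `digits(n, k)` gives the k-th
    decimal digit of the n-th real in some claimed enumeration.
    Returns a function giving the digits of a real that differs from
    the n-th enumerated real in its n-th digit, for every n."""
    def new_digit(k):
        d = digits(k, k)           # k-th digit of the k-th number
        return 5 if d != 5 else 4  # anything different will do
    return new_digit

# Toy enumeration: the n-th "real" has every digit equal to n % 10.
toy = lambda n, k: n % 10
escaped = diagonal_escape(toy)
```

For every k, `escaped(k) != toy(k, k)`, so the escaped real disagrees with each listed real somewhere and cannot appear in the enumeration. Note what this needs: the freedom to build "some number" from an arbitrary enumeration, rather than a single formula pinning down a unique value.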

You might opt to do your maths with definable numbers only. You gain "a few" more numbers (and answers) over the computable numbers, still without the horrors of accepting the existence of uncountable sets; but of course you give up the clean constructivist approach for mere definability.