Algorithmic Information Theory defines the extent to which a sequence of numbers is random by the length of the shortest algorithm (i.e. programme) that outputs it. So "11111111111111" (fourteen 1s) could be output by a little programme like

for i in range(14):
    print("1", end="")

while the shortest programme to output "346357538323523627567" might actually be

print "346357538323523627567"

so the second sequence is "more random" than the first. In fact, the theory defines a "random sequence" as one for which the shortest programme is essentially just "print the sequence", i.e. the sequence admits no description shorter than itself.
You can make this idea precise by fixing a reference universal Turing machine and asking for the shortest input that makes it output the sequence, rather than counting characters in some arbitrary programming language (whose quirks would otherwise skew the lengths).
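Spelled out in the standard notation (which the informal description above doesn't use), the Kolmogorov complexity of a string x relative to a universal Turing machine U is

K(x) = min { |p| : U(p) = x },

the length of the shortest programme p that makes U print x. Since "print x" is always one such programme, K(x) <= |x| + c for some constant c, and a string counts as random when K(x) is at least roughly |x|, which is just the "print the sequence" criterion above.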
Note that this ties in nicely with our everyday understanding of randomness: if there is a pattern, you can exploit it to compress the sequence into a short algorithm, and hence by this definition the sequence is not random.
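K(x) itself is uncomputable, but an off-the-shelf compressor gives a rough, computable stand-in for "length of the shortest description". Here is a minimal Python sketch of that idea (the strings are scaled up to 1000 characters because zlib's fixed overhead swamps inputs as short as the ones above; zlib is just a convenient proxy, not part of the theory):

import zlib
import random

random.seed(0)  # fixed seed so the demo is repeatable

patterned = "1" * 1000  # obvious pattern, like the first example
arbitrary = "".join(random.choice("0123456789") for _ in range(1000))  # digit soup, like the second

for name, s in [("patterned", patterned), ("arbitrary", arbitrary)]:
    compressed = len(zlib.compress(s.encode()))
    print(f"{name}: {len(s)} chars -> {compressed} bytes compressed")

The patterned string shrinks to a handful of bytes, while the digit string barely compresses at all. There is a nice irony here: the "arbitrary" string actually has a short description too (this very programme with its fixed seed), but zlib cannot see that pattern, which is exactly the gap between practical compression and true Kolmogorov complexity.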