The asymptotic equipartition property is about the only thing I remember from an information theory course. In the book Elements of Information Theory by Cover and Thomas (Wiley), we can read the following:

In information theory, the analog of the law of large numbers is the Asymptotic Equipartition Property (AEP). It is a direct consequence of the weak law of large numbers. The law of large numbers states that for independent, identically distributed (i.i.d.) random variables, (1/n) Σ Xi is close to its expected value EX for large values of n. The AEP states that (1/n) log(1/p(X1, X2, ..., Xn)) is close to the entropy H, where X1, X2, ..., Xn are i.i.d. random variables and p(X1, X2, ..., Xn) is the probability of observing the sequence X1, X2, ..., Xn. Thus the probability p(X1, X2, ..., Xn) assigned to an observed sequence will be close to 2^(-nH).

(...)

What really impressed me is how the authors sum this up:

We summarize this by saying, "Almost all events are almost equally surprising."
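
To see the AEP in action, here is a minimal sketch in Python. It is my own illustration, not anything from the book: I picked a Bernoulli(0.3) source and the sample sizes arbitrarily. It draws n i.i.d. samples and checks that -(1/n) log2 p(X1, ..., Xn), the per-symbol surprisal of the observed sequence, approaches the entropy H as n grows.

    # Sketch of the AEP for a biased coin (Bernoulli with p = 0.3).
    # The distribution and sample sizes are arbitrary choices for illustration.
    import numpy as np

    p = 0.3                                              # P(X = 1); P(X = 0) = 1 - p
    H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))     # entropy in bits, about 0.881

    rng = np.random.default_rng(0)
    for n in (10, 100, 10_000, 1_000_000):
        x = rng.random(n) < p                            # i.i.d. Bernoulli(p) sample
        # log2 p(X1, ..., Xn) = sum of log2 p(Xi) because the samples are independent
        log_prob = np.where(x, np.log2(p), np.log2(1 - p)).sum()
        print(f"n = {n:>9}:  -(1/n) log2 p(sequence) = {-log_prob / n:.4f}   (H = {H:.4f})")

For the larger values of n the printed per-symbol surprisal should sit within a few thousandths of a bit of H, which is the "almost equally surprising" statement made concrete: a typical observed sequence has probability close to 2^(-nH).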
