From a statistical analysis point of view:
The IQ score is mapped onto the standard normal distribution curve (see that writeup for a full explanation) like this:
          ..|..
         .  |  .
        .   |   .
       ..   |   ..
   ....     |     ....
IQ:  -   <- 100 ->   +
The mean IQ is 100, so we place it at the center of the curve; for a normal distribution, the mean is also the mode, the point of highest probability density.
According to the normal distribution, we can calculate the following:
- The 25th percentile (i.e. first quartile) is 89.9 points.
- The 50th percentile is 100 (obviously).
- The 75th percentile is 110.1.
(Notice that 110.1 sits the same distance above the mean as 89.9 sits below it? That's because the curve is symmetric about the mean.)
- The 85th percentile is 115.5.
(This is a curve, not a triangle, so don't expect a linear change!)
- The 90th percentile is 119.2.
- The 95th percentile is 124.7.
- The 99th percentile is 134.9.
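The percentile figures above can be reproduced with a few lines of Python, assuming (as the numbers in this writeup imply) that IQ is normally distributed with mean 100 and standard deviation 15 -- the convention used by most modern IQ tests:

```python
# Sketch: reproduce the percentile figures above, assuming IQ is
# normally distributed with mean 100 and standard deviation 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for pct in (0.25, 0.50, 0.75, 0.85, 0.90, 0.95, 0.99):
    # inv_cdf maps a cumulative probability back to an IQ score
    print(f"{int(pct * 100)}th percentile: {iq.inv_cdf(pct):.1f}")
# -> 89.9, 100.0, 110.1, 115.5, 119.2, 124.7, 134.9
```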
- The probability that a randomly selected person's IQ will be 140 or higher is about 0.3%.
If there are seven billion people in the world, about 21 million can join Mensa. (21 million sounds like a lot, but there are about that many living in Southern California. Compare that to everyone living everywhere else in the world.)
- The probability that a randomly selected person's IQ is greater than 100 (i.e., 101 or above, since scores are reported as whole numbers) is 47.3%.
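Both tail probabilities fall out of the same normal model (mean 100, standard deviation 15). Note the exact figure for the 140+ tail comes out near 0.4% under this model; the 0.3% above reflects the writeup's rounding, so the Mensa head-count is ballpark either way:

```python
# Sketch: check the tail probabilities quoted above, assuming
# IQ ~ Normal(mean=100, sd=15).
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# P(IQ >= 140): roughly 0.3-0.4% depending on rounding
p_140 = 1 - iq.cdf(140)
print(f"P(IQ >= 140) = {p_140:.4f}")

# Scale up to a world population of seven billion
print(f"~{7_000_000_000 * p_140 / 1e6:.0f} million people")

# P(IQ > 100), i.e. P(IQ >= 101) for whole-number scores
p_101 = 1 - iq.cdf(101)
print(f"P(IQ >= 101) = {p_101:.4f}")  # ~0.4734, i.e. 47.3%
```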
Now, if you interview 100 people and 90 say they're above average in intelligence, working out the probability that they're all right is a little trickier: we must carry out a hypothesis test to determine whether those 90 people are likely to be right. In said node (hypothesis test), I have done all the hard work and arrived at this conclusion:
If you select 100 people at random, the probability that 90 of them will have IQs of 101 or above is 4.02 x 10^-14, which is a really, really, really small probability.
And as the sample size increases, the probability only shrinks further, so the odds that all seven billion people have 101+ IQ scores are probably ten zillion to one against.
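The flavor of that calculation can be sketched as an exact binomial tail sum. Here I take p = 0.4734 from the normal model above; the hypothesis-test node may use a different p or approximation, so treat this as a sketch rather than a re-derivation of the exact 4.02 x 10^-14 figure:

```python
# Sketch: probability that at least 90 of 100 randomly chosen
# people have IQs of 101+, with p = 0.4734 taken from the normal
# model above (the node's own test may differ in detail).
from math import comb

n, k, p = 100, 90, 0.4734

# P(X >= 90) where X ~ Binomial(n=100, p=0.4734)
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n}) = {tail:.3g}")  # vanishingly small
```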
JerbolaKolinowski says that I should explain the assumption that IQ measures intelligence. If you have ever taken an IQ test, you may remember what the questions were like: word problems, 3-D spatial analysis, moving numbers around, identifying language syntax, etc. I think it may be fair to say that the IQ test is really a measure of how good you are at being a fancy parsing calculator. It's also fair to assume that most people who know what IQ is believe it is directly related to intelligence (subjective reality), which fits the claim that "90% of people think they are of above average intelligence".
An acquaintance of mine, having administered IQ tests to the same people at different ages, has noted that as people age, their IQ tends to drop, but not their problem-solving capability - so they're just as good, but not as fast.
Note that the time you take to finish the test affects your score: you aren't given unlimited time, so the result depends on your speed as well as the accuracy of your answers. I think we can infer that the "just as good, but not as fast" theory is consistent with this. Personally, I think speed and accuracy together are as good a measure of computational ability as any.
As far as relevance is concerned, IQ and "intelligence" may not be as relevant in a third-world, agrarian country as they are in a first-world country. Let me put it to you this way. You've grown up in the middle of a farm in a province with an agrarian economy. Which of these would you rather be?
- Terrible farmer, with an IQ of 130
- Highly skilled farmer, with an IQ of 70
I'll take #2 if I plan to stay in town, and #1 if I have an opportunity to go to university and do something other than farming (and if I'm in a third-world country that probably won't happen). There's no point in being an ultra-smart starving beggar who can't put food on the table.