I pick a number between 1 and 100. You try to guess my number, and each time all I say back is whether the value you guessed is higher or lower than my number. How many guesses is it going to take you to get my number?

My number is 57.

First guess: 50. I say higher.

Second guess: 75. I say lower.

Third guess: 62. I say lower.

Fourth guess: 56. I say higher.

Fifth guess: 59. I say lower.

Sixth guess: 57. Bingo.

That was a binary search. See how efficient it was? We got the **one** value we wanted out of a possible **one hundred** in only **six** guesses. Compare that to a linear search, which checks every value in turn (1, 2, 3, ...) and would have needed 57 guesses to reach my number.
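The guessing game above can be sketched in a few lines of Python. This is a minimal illustration, not production code: the function plays the guesser's role, always guessing the middle of the remaining range, and counts how many guesses it takes.

```python
# A sketch of the guessing game: the "secret" number is 57, and each
# guess is the midpoint of the remaining range, halving it every time.
def guess_number(secret, low=1, high=100):
    guesses = 0
    while low <= high:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:
            return guesses
        elif guess < secret:   # "higher" - discard guess and everything below it
            low = guess + 1
        else:                  # "lower" - discard guess and everything above it
            high = guess - 1
    return guesses

print(guess_number(57))  # → 6
```

Run it and you get the same sequence as the transcript above: 50, 75, 62, 56, 59, 57 in exactly six guesses.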

### Clarification on how a binary search (or binary chop) works

If we have a set of **ordered** data, then the computer first checks the middle value. Because the data is ordered, it's possible to see whether the middle value is higher or lower than the thing I'm searching for, so I can immediately discount half the data. For instance, in the example above, after my first guess I was able to totally discount all numbers at or below 50.

Now I look at the value in the middle of what remains. Is this higher or lower? I can now discount a subset of that subset (in the example above, all values from 75 up). This narrowing down process continues until we get to the value we want.
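The narrowing-down process described above translates directly into code. Here's a minimal sketch of a general binary search over any sorted list, returning the index of the target (or -1 if it isn't there); the sample list is made up for illustration.

```python
def binary_search(data, target):
    """Return the index of target in the sorted list data, or -1 if absent."""
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == target:
            return mid
        elif data[mid] < target:
            low = mid + 1    # target is higher: discount mid and everything below
        else:
            high = mid - 1   # target is lower: discount mid and everything above
    return -1

nums = [3, 8, 15, 21, 42, 57, 63, 91]
print(binary_search(nums, 57))  # → 5
print(binary_search(nums, 4))   # → -1
```

In practice Python's standard library already provides this via the `bisect` module, but writing the loop out shows exactly where each "higher or lower" decision discounts half the data.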

If you can't see why this would be useful, or want help conceptualising it, imagine a phone book. I'm looking for the phone number of the person with the name M. Brown. A phone book is sorted alphabetically by name, so the search I perform is on the name. This is known as the key field.

I open the phone book in the middle, in the M section. Brown comes before M, so that's too high, and I continue my binary chop, discounting **huge** subsets of the phone book (thousands of entries) with each chop. Pretty neat.
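The phone book idea can be sketched the same way: store (name, number) pairs sorted on the key field, and compare only the name at each step. The names and numbers here are entirely made up for illustration.

```python
# A hypothetical phone book: (name, number) pairs sorted on the key field,
# which is the name. All entries here are invented.
phone_book = [
    ("Adams, J.", "555-0101"),
    ("Brown, M.", "555-0137"),
    ("Mills, K.", "555-0192"),
    ("Young, T.", "555-0175"),
]

def lookup(book, name):
    """Binary search on the key field; return the number, or None if absent."""
    low, high = 0, len(book) - 1
    while low <= high:
        mid = (low + high) // 2
        key = book[mid][0]      # compare on the key field only
        if key == name:
            return book[mid][1]
        elif key < name:
            low = mid + 1       # name sorts after this page: chop away the front
        else:
            high = mid - 1      # name sorts before this page: chop away the back
    return None

print(lookup(phone_book, "Brown, M."))  # → 555-0137
```

Each comparison uses only the key field, exactly as you'd scan the surnames at the top of a phone book page without reading any of the numbers.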