Why Binary Search is so Fast
The running time of binary search is O(log n), where n is the number of elements in the haystack.
This means that the number of time units grows logarithmically with the size of the input, which is a very good time complexity. Under O-notation the base of the logarithm is not relevant, but let's fix it to base 2.
The base-2 logarithm is a function that, given a number n (such as the size of the haystack), tells us how many times we can halve n until it reaches 1.
For example, log_2 (1024) is 10, since:
1024 / 2 = 512 \
512 / 2 = 256 |
256 / 2 = 128 |
128 / 2 = 64 |
64 / 2 = 32 | 10 divisions necessary to reach 1
32 / 2 = 16 |
16 / 2 = 8 |
8 / 2 = 4 |
4 / 2 = 2 |
2 / 2 = 1 /
To understand intuitively just how efficient O(log n) is, let us imagine an adversary whose goal is to make our algorithm run for as long as possible.
Suppose this adversary hands you a needle and a haystack chosen to make binary search take as long as possible. They have a difficult job ahead of them: to make binary search take just one extra time unit, they would have to hand you a haystack twice as large.
In fact, even for a haystack as large as the working memory of a modern computer (say, 128 GB), binary search would need only a small number of time units to finish:
log_2 (128 GB) = log_2 (128 * 1024 * 1024 * 1024) = 37 time units.
Even if you are reading this well into the future on a machine with 1024 TB of RAM, binary search on a haystack the size of the entire memory would take
log_2 (1024 TB) = log_2 (1024 * 1024 * 1024 * 1024 * 1024) = 50 time units.
Go to the next page to read up on how to implement binary search in the C language.
- Page 1 – Introduction
- Page 2 – How It Works – Overview
- Page 3 – Example 1
- Page 4 – Example 2
- Page 5 – Why Binary Search is so Fast
- Page 6 – Implementation
- Page 7 – Conclusion