Cache-oblivious algorithms are algorithms that have asymptotically optimal cache complexity. Unlike cache-aware algorithms, cache-oblivious algorithms contain no tuning parameters for the cache size or line length and are, indeed, oblivious to the nature of the cache, yet they exploit locality of reference to achieve cache-optimality nonetheless. (Locality of reference is the tendency of memory accesses that occur within a short time of each other to refer to memory addresses that are close to each other.) In other words, cache-oblivious algorithms have no specific knowledge of the cache of the machine they are running on, but they achieve optimal cache performance anyway.
The simplest cache-oblivious algorithm is divide-and-conquer matrix multiplication. (Straightforward row-by-column matrix multiplication has poor cache complexity, but the divide-and-conquer version does not.) Researchers at the MIT Laboratory for Computer Science (LCS) discovered cache-oblivious algorithms in 1994, when they realized that divide-and-conquer matrix multiplication is cache-optimal yet requires no cache-specific tuning parameters. The term "cache-oblivious" was not adopted at LCS until 1997.
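The divide-and-conquer scheme can be sketched as follows. This is a minimal illustration, not taken from the sources above: it assumes square matrices whose dimension is a power of two, and the function names are invented for the example. The key point is that the recursion subdivides the problem until the blocks fit in cache, whatever the cache size happens to be.

```python
# Sketch of cache-oblivious (divide-and-conquer) matrix multiplication.
# Assumes square matrices with power-of-two dimension; a full
# implementation would also handle odd and rectangular shapes.

def matmul_dc(A, B, C, row, col, k0, n):
    """Accumulate the product of the n x n block of A starting at
    (row, k0) and the n x n block of B starting at (k0, col) into
    the n x n block of C starting at (row, col)."""
    if n == 1:
        C[row][col] += A[row][k0] * B[k0][col]
        return
    h = n // 2
    # Recurse on the eight half-size block products. The recursion
    # eventually reaches blocks small enough to fit entirely in
    # cache, regardless of the cache size -- hence "cache-oblivious".
    for i in (0, h):
        for j in (0, h):
            for k in (0, h):
                matmul_dc(A, B, C, row + i, col + j, k0 + k, h)

def multiply(A, B):
    """Multiply two n x n matrices (n a power of two) given as
    lists of lists, using the divide-and-conquer recursion."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    matmul_dc(A, B, C, 0, 0, 0, n)
    return C
```

For example, `multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, the same result as ordinary row-by-column multiplication; only the order of the scalar multiplications, and hence the memory access pattern, differs.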
A good introductory treatment of cache-oblivious algorithms is the master's thesis of Harald Prokop (also of MIT LCS).