A model, developed at the MIT Laboratory for Computer Science (LCS), for analyzing the cache complexity of an algorithm.
In the model, there are two levels of memory: a cache consisting of Z words, and an arbitrarily large main memory. (The word size is irrelevant to asymptotic analysis, as long as it is small and fixed.)
The cache is partitioned into cache lines of L consecutive words each. We assume that the cache is "tall", i.e. that Z = Ω(L²).
A program can only reference words that reside in the cache. Referencing a word that is in the cache is a cache hit; referencing a word that is not is a cache miss.
When a cache miss occurs, the line containing the word must be fetched from main memory into the cache and, if the cache is full, some cache line must be evicted. The line to evict is chosen by the optimal offline strategy: evict the line whose next access lies furthest in the future. (The ideal cache is also fully associative, meaning any line can be fetched into and stored at any position in the cache.)
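The rules above can be turned into a small miss-counting simulator. The sketch below (a hypothetical illustration; the function name and trace format are not from the source) maps each word address to its line, then replays the trace with optimal offline eviction, the rule of evicting the line whose next access is furthest in the future:

```python
def ideal_cache_misses(trace, Z, L):
    """Count cache misses for a word-address trace under the ideal-cache
    model with cache size Z words and line size L words."""
    capacity = Z // L                       # number of lines the cache holds
    lines = [addr // L for addr in trace]   # map each word to its cache line

    # Precompute, for each position i, the next position at which the
    # same line is accessed (infinity if it is never accessed again).
    next_use = [float("inf")] * len(lines)
    last_seen = {}
    for i in range(len(lines) - 1, -1, -1):
        next_use[i] = last_seen.get(lines[i], float("inf"))
        last_seen[lines[i]] = i

    cache = {}   # line -> position of its next access
    misses = 0
    for i, line in enumerate(lines):
        if line not in cache:
            misses += 1
            if len(cache) >= capacity:
                # Optimal offline eviction: drop the line whose next
                # access lies furthest in the future.
                victim = max(cache, key=cache.get)
                del cache[victim]
        cache[line] = next_use[i]           # refresh on hit or miss
    return misses
```

For example, a sequential scan of N distinct words incurs exactly ⌈N/L⌉ misses, since each fetched line serves the next L references; `ideal_cache_misses(range(8), Z=4, L=2)` reports 4 misses.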