In computer systems, a cache is a small, high-speed memory store that can be accessed much faster than the main store. The main store contains the full set of data; the cache contains a small subset of it, chosen by one of several methods to maximise the overall speed of program execution. When writing to a cache, some existing data in the cache may need to be replaced with the new data. In a direct-mapped cache this isn't really an issue, since there is only one possible location to cache the new data block in. In other schemes, however, there is a choice of possible cache locations to write to, and a good block replacement scheme must be selected in these cases.
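To see why direct mapping leaves no choice of victim, here is a minimal sketch in Python (the class name and the modulo-based index are illustrative assumptions, not any particular hardware's design): each block address maps to exactly one cache line, so a miss always evicts whatever is in that line.

```python
class DirectMappedCache:
    """Toy direct-mapped cache holding block tags only (no data)."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = [None] * num_lines  # one slot per cache line

    def access(self, block):
        """Return True on a hit; on a miss, install the block."""
        index = block % self.num_lines   # the single possible location
        if self.lines[index] == block:
            return True                  # hit
        # No replacement decision to make: the new block always
        # evicts whatever occupies its one possible line.
        self.lines[index] = block
        return False                     # miss


cache = DirectMappedCache(4)
print(cache.access(5))   # miss  -> False
print(cache.access(5))   # hit   -> True
print(cache.access(9))   # 9 % 4 == 1, evicts block 5 -> False
print(cache.access(5))   # miss again, 5 was evicted  -> False
```

Note how blocks 5 and 9 collide on the same line even though three other lines sit empty; set-associative and fully associative caches exist precisely to soften this, at the cost of needing a replacement scheme.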

Clearly, the best block to replace is the one that will not be read for the longest time. This scheme is impossible to implement in practice, since a cache cannot know in advance which blocks a program will need. For this reason, it's often assumed that the least recently used, or LRU, block is the best one to replace. If the overheads of keeping track of how recently each cached block has been read are too high, a random block is sometimes replaced instead. This is more effective than one might think.
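The LRU policy can be sketched in a few lines of Python using an ordered dictionary to track recency (a software illustration only; real hardware typically uses cheaper approximations of LRU):

```python
from collections import OrderedDict


class LRUCache:
    """Toy fully associative cache with LRU block replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # keys ordered oldest -> newest

    def access(self, block):
        """Return True on a hit; on a miss, evict the LRU block if full."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return True                      # hit
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = None
        return False                         # miss


cache = LRUCache(2)
print(cache.access(1))   # miss -> False
print(cache.access(2))   # miss -> False
print(cache.access(1))   # hit  -> True
print(cache.access(3))   # full: evicts block 2 (the LRU), not 1 -> False
print(cache.access(2))   # miss, 2 was the one evicted -> False
```

Random replacement would simply swap the `popitem(last=False)` line for evicting a randomly chosen key, removing the need for the recency bookkeeping entirely.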
