
Cache Replacement Algorithm Based on the Inclusion Property and the Least Recently Used Policy for Shared Secondary Caches

IP.com Disclosure Number: IPCOM000117821D
Original Publication Date: 1996-Jun-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 4 page(s) / 138K

Publishing Venue

IBM

Related People

Hicks, D: AUTHOR [+3]

Abstract

Cache replacement policy has become a critical concern because it is directly linked to overall system performance. When a block (a data item in the cache) in a processor's working set is needed, it is desirable that the block reside in the primary or secondary cache associated with that processor, since caches are closer to the processor and have much smaller access times than main memory. When a cache miss occurs, the requested block fetched from main memory is placed in the caches in anticipation of future accesses (locality of reference). To make room for the new block, another block must be discarded. If this victim block is selected at random, the probability that it belongs to the current working set is high.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 51% of the total text.


      Cache replacement policy has become a critical concern because it is directly linked to overall system performance. When a block (a data item in the cache) in a processor's working set is needed, it is desirable that the block reside in the primary or secondary cache associated with that processor, since caches are closer to the processor and have much smaller access times than main memory. When a cache miss occurs, the requested block fetched from main memory is placed in the caches in anticipation of future accesses (locality of reference). To make room for the new block, another block must be discarded. If this victim block is selected at random, the probability that it belongs to the current working set is high, and consistently discarding random blocks from the cache(s) degrades system performance. To improve on random replacement, the Least Recently Used (LRU) policy is often used, since it attempts to preserve the lines of the active working set. However, since no cache replacement policy is optimal, and as demands for reliable high-performance systems increase, computer architects are pushed toward more efficient replacement policies.
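The LRU policy described above can be sketched in a few lines. This is a minimal illustration, not the disclosure's mechanism: the class name, the `access` method, and the `load_from_memory` callback are all assumed for the example.

```python
from collections import OrderedDict


class LRUCache:
    """Fixed-capacity cache that evicts the least recently used block.

    Illustrative sketch of the generic LRU policy discussed in the text;
    all names here are assumptions, not the disclosure's interface.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # key -> data, ordered oldest -> newest

    def access(self, key, load_from_memory):
        if key in self.blocks:
            # Hit: mark the block most recently used.
            self.blocks.move_to_end(key)
            return self.blocks[key]
        # Miss: fetch the block from the next level of the hierarchy.
        data = load_from_memory(key)
        if len(self.blocks) >= self.capacity:
            # Evict the least recently used block (the oldest entry).
            self.blocks.popitem(last=False)
        self.blocks[key] = data
        return data
```

With a capacity of two, touching blocks `a`, `b`, then `a` again and finally `c` evicts `b`, since `a` was refreshed by the second access: the victim is always the block untouched for longest.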

      Although the LRU policy usually performs well, it sometimes proves undependable when used in secondary caches. In systems such as that shown in Fig. 1, the primary cache is accessed on every processor reference, while the secondary cache is accessed only when there is a primary cache miss. A frequently used block in the primary cache is therefore very likely to become a least recently used block in the secondary cache. Consequently, when a cache miss occurs and the LRU policy is applied in the secondary cache, there is a high probability that the LRU victim is a block that is frequently used in the primary cache. Discarding such a block from the secondary cache stalls the active programs using it, causing some degradation in overall performance. The above scenario is further worsened when the block size in the secondary cache is a multiple of the block size in the primary cache, a common practice in systems with shared (secondary) caches. The reason is that the secondary cache block becomes a collection of primary cache blocks that may be independently accessed by various processing units, so discarding such a secondary cache block may leave multiple processors idle, waiting for the data required in their working set to be r...
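The disclosure's full algorithm is not included in this abbreviated text, so the following is only a sketch of the idea the passage motivates: under the inclusion property, the secondary cache knows which of its blocks still have copies resident in some primary cache, and victim selection can scan from the LRU end for a block that no primary cache holds, falling back to plain LRU only when every candidate is resident. The function name and data structures are assumptions for illustration.

```python
def select_victim(secondary_lru_order, primary_resident_sets):
    """Choose an eviction victim from a shared secondary cache.

    Sketch of an inclusion-aware refinement of LRU (illustrative, not the
    disclosure's exact algorithm): scan blocks from least to most recently
    used and prefer the first one that is not resident in any processor's
    primary cache, since evicting a primary-resident block would stall the
    processors actively using it.

    secondary_lru_order    -- block IDs, least recently used first
    primary_resident_sets  -- one set of resident block IDs per primary cache
    """
    for block in secondary_lru_order:
        if not any(block in resident for resident in primary_resident_sets):
            return block  # safe victim: no primary cache still holds it
    # Every secondary block is primary-resident; fall back to plain LRU.
    return secondary_lru_order[0]
```

For example, if the secondary cache holds blocks `x`, `y`, `z` in LRU order and two primary caches hold `x` and `z` respectively, the plain LRU victim `x` is skipped in favor of `y`, which no processor is actively using.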