Cache Replacement Methods Using an Address History Stack

IP.com Disclosure Number: IPCOM000117979D
Original Publication Date: 1996-Aug-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 8 page(s) / 289K

Publishing Venue

IBM

Related People

Campbell, JE: AUTHOR [+3]

Abstract

Disclosed is a method to dynamically modify a Cache replacement algorithm based on Cache Cast-out address history. In this method, a mode is provided that can alter the way the choice is made for line allocation. Two methods are shown.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 23% of the total text.

      Programs that loop, repeatedly accessing the same cache set,
can exceed the set associativity.  This can result in situations
where the cache replacement algorithm continually replaces the next
(to-be-used) member of the set, causing excessive cache misses.

      More specifically, in a set-associative cache addressed by
"N" address lines, data is mapped into one of 2**N sets based on the
data's address bits.  These N address lines are generally selected
from the most frequently changing address bits to effect a uniform
mapping among the cache sets.  This usually means the least
significant end of the address, starting at the cache line size bit
and extending through enough more-significant bits to address
(total cache lines / associativity) cache sets.  As an example, if
the most-to-least significant address bits are 0:63 and the line
size is 128 bytes, bits 57:63 index into a cache line.  The sets of
cache lines may then be addressed by bits x:56, where "x" would be
50 for a 64-KByte, 4-way set-associative cache.  For caches mapped
in this manner, data references that are separated by an exact
multiple of (cache size / associativity) map into the same set.  In
the previous example, where the cache set address bits are 50:56,
any addresses whose bits 50:56 are the same map to the same cache
set.
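The set-index calculation above can be sketched in a few lines of Python (a sketch for illustration, not part of the original disclosure), using the example parameters of a 64-KByte, 4-way set-associative cache with 128-byte lines:

```python
# Set-index calculation for the example cache.  Bit numbering follows
# the document's convention: bit 0 is the most significant bit of a
# 64-bit address, bit 63 the least significant.

CACHE_SIZE = 64 * 1024     # 64 KBytes
ASSOCIATIVITY = 4          # 4-way set associative
LINE_SIZE = 128            # 128-byte lines

# (total cache lines / associativity) sets -> 128 sets here
NUM_SETS = CACHE_SIZE // (ASSOCIATIVITY * LINE_SIZE)

def set_index(address):
    """Return the cache set an address maps to.

    Dropping the line-offset bits (57:63 for a 128-byte line) and
    keeping the next bits (50:56 here, enough to address NUM_SETS
    sets) is equivalent to this divide-and-modulo.
    """
    return (address // LINE_SIZE) % NUM_SETS

# Addresses an exact multiple of (cache size / associativity)
# apart map into the same set:
stride = CACHE_SIZE // ASSOCIATIVITY   # 16 KBytes
assert set_index(0x1000) == set_index(0x1000 + 3 * stride)
```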

      If, in a given pass of a program loop, the number of unique
line references in a set exceeds the associativity, subsequent loop
passes that reference the same lines again will find none of them in
the cache.  The reason is that on each miss the set's replacement
algorithm chooses the Least Recently Used (LRU), or nearly LRU, line
in the set, so the data needed next (or soon) is replaced by the
data needed now.  For the purpose of example, assume the following:
  - Cache associativity = 4 Way (A, B, C, and D)
  - Line size = 128 Bytes
  - Cache discipline = store-back (write-back)
  - Cache replacement algorithm = Replace Least Recently Used
      (LRU) line on miss, new line becomes
      Most Recently Used (MRU) line when loaded and whenever
      a subsequent reference hits it.
  - Memory data fields (j, k, l, m, and n) each contain
      64 doubleword elements, i.e., the data for each field can
      be contained in 4 lines assuming arrays begin on a line
      boundary.
  - The fields are some exact multiple of (cache size /
      associativity) apart (they map to the same Cache set).
  - A program loop for summing respective data field elements.
      do i = 1 to 64;
      j(i) = k(i) + l(i) + m(i) + n(i);
      end;
      (Note: i = 8 Byte increment)...
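The thrashing behavior described above can be demonstrated with a small simulation (an illustrative sketch, not part of the original disclosure): five lines, one from each of the fields j, k, l, m, and n, compete for a single 4-way LRU set, so every reference misses.

```python
from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache with true-LRU replacement."""

    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # oldest (LRU) first, newest (MRU) last

    def access(self, tag):
        """Reference a line; return True on a hit, False on a miss."""
        if tag in self.lines:
            self.lines.move_to_end(tag)      # hit: line becomes MRU
            return True
        if len(self.lines) == self.ways:
            self.lines.popitem(last=False)   # miss: cast out the LRU line
        self.lines[tag] = None               # new line loaded as MRU
        return False

# Each loop iteration references the same five lines of one set:
# k(i), l(i), m(i), n(i) are read, then j(i) is written.
s = LRUSet(ways=4)
misses = 0
for _iteration in range(4):       # four loop iterations suffice to show it
    for tag in "klmnj":           # five distinct lines, 4-way set
        if not s.access(tag):
            misses += 1
print(misses)   # every reference misses: 20 misses out of 20
```

Because five distinct lines cycle through a four-way set, the line about to be re-referenced is always the one just cast out, which is exactly the pathology the disclosed address-history methods aim to detect.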