
Method of Determining Multiple Cache Targets With Hashing

IP.com Disclosure Number: IPCOM000120367D
Original Publication Date: 1991-Apr-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 2 page(s) / 90K

Publishing Venue

IBM

Related People

Liu, L: AUTHOR

Abstract

A technique is described whereby a mechanism provides a means for determining access targets in multiple cache environments, based on histories recorded in hash tables.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Method of Determining Multiple Cache Targets With Hashing

      In certain computer processor designs, multiple caches are
used; an example is the L0/L1 hierarchy.  It is often beneficial to
determine quickly, for a storage access, which cache the access will
hit.  Without such a mechanism, the access may need to be sent to
both directories in order to avoid delaying the L1 access; when the
access hits L0, the request to L1 is cancelled.  This arrangement
causes unnecessary busy cycles at the L1 directory whenever an L0 hit
occurs (although no L1 array access takes place).  Such busy
directory cycles can be detrimental when L1 is very busy (e.g., when
the pipeline/decode is issuing excessive storage accesses, or when L1
is shared by multiple instruction streams).  The concept described
herein provides a mechanism to predict, with high accuracy, the
target cache that an access will hit.

      Although L0/L1 is used for illustration, the technique
generalizes to other design environments.  The general design
criterion is to satisfy a storage access from L0 whenever possible.
At first, assume that L0 cannot be stored into (i.e., a store causes
invalidation of the line if it is found in L0).  With this assumption
there is no architectural ambiguity in accessing data from L1 when it
actually resides in L0.  Next, assume a hash table H with one bit per
entry.  The number of entries is a power-of-2 multiple of the number
of cache congruence classes in L0.  H is indexed by the virtual
(sometimes real) line address from the address generator (AGEN) or
the I-fetch unit.  For a given address L, certain bits (usually
lower-order ones) are used to index into H.  The indexed bit value is
denoted H[L].  When H[L] is ON, the line is predicted to be in L0.
The management of H is as follows:
   (a) When a line is fetched into L0 via a (virtual/real) line
address L, the bit H[L] is turned ON.
   (b) When a line with (virtual) address L is replaced from L0, the
         bit H[L] is turned OFF.
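The indexing and management rules above can be sketched in software.
This is a minimal illustrative model, not the disclosed hardware: the
line size (64 bytes), the number of L0 congruence classes (64), and
the 4x table-size multiple are assumed values chosen only to satisfy
the stated power-of-2 constraint, and all names are hypothetical.

```python
LINE_SIZE = 64               # bytes per cache line (assumed)
L0_CLASSES = 64              # L0 congruence classes (assumed)
H_ENTRIES = 4 * L0_CLASSES   # a power-of-2 multiple of the L0 classes

H = [0] * H_ENTRIES          # hash table H: one bit per entry

def h_index(addr):
    """Index H using low-order bits of the line address."""
    line_addr = addr // LINE_SIZE
    return line_addr % H_ENTRIES

def on_l0_fill(addr):
    """Rule (a): a line is fetched into L0, so turn H[L] ON."""
    H[h_index(addr)] = 1

def on_l0_replace(addr):
    """Rule (b): a line is replaced from L0, so turn H[L] OFF."""
    H[h_index(addr)] = 0

def predict_l0_hit(addr):
    """When H[L] is ON, predict that the access hits L0."""
    return H[h_index(addr)] == 1
```

Note that because H is smaller than the address space, distinct lines
can alias to the same entry; a mispredicted OFF bit merely sends the
access to L1, which remains architecturally correct under the
no-store-into-L0 assumption.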

      For an L0 access from AGEN to a line with (virtual) address L,
the following operation i...