Reducing Latency in Associative Caches

IP.com Disclosure Number: IPCOM000118212D
Original Publication Date: 1996-Nov-01
Included in the Prior Art Database: 2005-Apr-01
Document File: 2 page(s) / 35K

Publishing Venue

IBM

Related People

Venkatramani, J: AUTHOR

Abstract

The speed of set-associative caches is limited by their associativity. Any cache action (decode, data return on reads, data updates on writes, and so on) typically depends on which element of the set was selected for a particular transaction. This is especially limiting for the speed at which data can be returned to the requestor on cache hits.


Reducing Latency in Associative Caches

      The speed of set-associative caches is limited by their associativity. Any cache action (decode, data return on reads, data updates on writes, and so on) typically depends on which element of the set was selected for a particular transaction. This is especially limiting for the speed at which data can be returned to the requestor on cache hits.
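
      To illustrate why way selection gates the data return, the following C sketch models a conventional read hit in a small set-associative cache. The cache geometry, type names, and function name are assumptions made for this example and are not part of the disclosure.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define WAYS      4
    #define SETS      256
    #define LINE_SIZE 64

    typedef struct {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_SIZE];
    } CacheLine;

    static CacheLine cache[SETS][WAYS];

    /* Conventional read hit: the data multiplexer cannot select a way
       until the tag compares for the set have completed, so the data
       return is serialized behind way selection. */
    bool cache_read(uint32_t addr, uint8_t *out)
    {
        uint32_t set = (addr / LINE_SIZE) % SETS;
        uint32_t tag = addr / (LINE_SIZE * SETS);

        for (int way = 0; way < WAYS; way++) {                 /* compare tags */
            if (cache[set][way].valid && cache[set][way].tag == tag) {
                memcpy(out, cache[set][way].data, LINE_SIZE);  /* return data */
                return true;                                   /* read hit */
            }
        }
        return false;                                          /* miss */
    }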

      This invention proposes a solution that reduces this latency. On any transaction, the L2 cache machine predicts the element (or way) that will be selected. If the prediction is incorrect, there is a penalty of one additional cycle. The prediction algorithm can be either fixed (always use way 0) or variable (use the way that was selected for the previous operation). The Figure shows a timing diagram for read hits for both a correct prediction and an incorrect prediction.
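
      The following C sketch models the predicted-way read path described above, reusing the cache layout from the previous example. The global last-way register, the fixed_policy flag, and the cycle counts returned are illustrative assumptions; the disclosure does not specify an implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define WAYS      4
    #define SETS      256
    #define LINE_SIZE 64

    typedef struct {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_SIZE];
    } CacheLine;

    static CacheLine cache[SETS][WAYS];
    static int last_way;  /* way selected by the previous operation (variable policy) */

    /* Predicted-way read hit. Returns the modeled data-return latency in
       cycles: 1 when the predicted way is correct, 2 when an extra cycle
       is needed after a misprediction, or -1 on a cache miss. */
    int cache_read_predicted(uint32_t addr, uint8_t *out, bool fixed_policy)
    {
        uint32_t set = (addr / LINE_SIZE) % SETS;
        uint32_t tag = addr / (LINE_SIZE * SETS);

        /* The fixed policy always guesses way 0; the variable policy
           guesses the way selected for the previous operation. */
        int guess = fixed_policy ? 0 : last_way;

        /* Drive data speculatively from the predicted way while the tag
           compare runs in parallel. */
        if (cache[set][guess].valid && cache[set][guess].tag == tag) {
            memcpy(out, cache[set][guess].data, LINE_SIZE);
            last_way = guess;
            return 1;                 /* correct prediction: no penalty */
        }

        /* Misprediction: reselect from the full tag compare, one cycle later. */
        for (int way = 0; way < WAYS; way++) {
            if (cache[set][way].valid && cache[set][way].tag == tag) {
                memcpy(out, cache[set][way].data, LINE_SIZE);
                last_way = way;
                return 2;             /* incorrect prediction: one-cycle penalty */
            }
        }
        return -1;                    /* miss */
    }

      Under this model, the variable policy pays the one-cycle penalty only when consecutive operations to the cache select different ways, which is the behavior the timing diagram in the Figure contrasts with a correct prediction.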