Method for redundant ways to improve cache access latency

IP.com Disclosure Number: IPCOM000028867D
Publication Date: 2004-Jun-04
Document File: 3 page(s) / 47K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for redundant ways to improve cache access latency. Benefits include improved functionality and improved performance.

Background

              The performance of a cache is directly related to its size and access latency. A larger cache usually has higher access latency.

              Conventional caches are designed without functional redundant ways. Any redundant ways that are introduced serve fault tolerance, not performance.

General description

              The disclosed method improves the access latency of a cache by exploiting the expected access pattern. Additional access ways are introduced, and the most recently used way is copied into one of these redundant ways. Because of their layout, the redundant ways have lower access latency, so performance improves whenever an access hits a redundant way. The redundant way is expected to hit 85-90% of the time.
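The performance benefit follows from a weighted average of the two access latencies. The sketch below illustrates the arithmetic; the cycle counts are illustrative assumptions, not figures from the disclosure.

```python
def expected_latency(t_redundant, t_normal, hit_rate_redundant):
    """Average access latency when a fraction of hits are served
    by the fast redundant way and the rest by the normal ways."""
    return (hit_rate_redundant * t_redundant
            + (1 - hit_rate_redundant) * t_normal)

# Example: redundant way at 2 cycles, normal ways at 5 cycles,
# with 85% of accesses hitting the redundant way.
print(expected_latency(2, 5, 0.85))  # 2.45 cycles on average
```

At the disclosed 85-90% redundant-way hit rate, most accesses see only the fast path, so average latency approaches that of the small redundant structure.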

Advantages

              The disclosed method provides advantages, including:

•             Improved functionality due to introducing additional cache access ways for the purpose of improving performance

•             Improved performance due to improving cache access latency

Detailed description

              The disclosed method introduces redundant access ways to improve cache access latency. For example, a k-way set-associative cache is modified to include r additional ways for each group of n*k ways. The values of n and r can each be as small as 1. The most recently accessed way among the n*k ways is copied into one of the additional ways. The cache is organized so that the redundant ways can be accessed faster than the other ways because the redundant ways together occupy a smaller structure. Most recently used (MRU) lines are expected to be accessed most often, resulting in an overall improvement in access latency (see Figure 1).
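The lookup path described above can be sketched as a behavioral model in software. This is an illustration of the mechanism, not a hardware design; the class name, the single-set simplification, and the round-robin choice of redundant slot are assumptions.

```python
class RedundantWayCache:
    """Behavioral model of one set: k normal ways plus r fast
    redundant ways that shadow the most recently used lines."""

    def __init__(self, k=4, r=1):
        self.ways = [None] * k          # tags held in the normal ways
        self.redundant = [None] * r     # tags shadowed in fast ways
        self.next_slot = 0              # round-robin redundant slot

    def lookup(self, tag):
        # Fast path: probe the redundant ways first (lower latency).
        if tag in self.redundant:
            return "redundant-hit"
        # Slow path: probe the normal ways.
        if tag in self.ways:
            # Copy the MRU line into a redundant way so the next
            # access to it takes the fast path.
            self.redundant[self.next_slot] = tag
            self.next_slot = (self.next_slot + 1) % len(self.redundant)
            return "hit"
        return "miss"

cache = RedundantWayCache()
cache.ways[0] = 0x40            # pretend line 0x40 is resident
print(cache.lookup(0x40))       # first access: normal-way hit
print(cache.lookup(0x40))       # repeat access: redundant-way hit
```

The second access to the same line hits the redundant copy made on the first access, which is the expected common case for MRU-heavy access patterns.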

              A request can hit in a redundant way as well as in another way of the cache. The redundant-way hit occurs much earlier, and data is forwarded from the redundant way. Data stores that hit the redundant way update the redundant way as well as the original way in the cache. A line victimized from a nonredundant way invalidates a redundant way if a matching line is found there.
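These coherence rules for stores and victimization can be stated as a minimal sketch. The dict-backed model and the helper names `store` and `victimize` are assumptions for illustration, not terms from the disclosure.

```python
from types import SimpleNamespace

# Dict-backed model: main_data stands in for lines held in the
# normal ways, redundant_data for the shadow copies held in the
# fast redundant ways.
cache = SimpleNamespace(main_data={}, redundant_data={})

def store(cache, tag, data):
    """A store that hits a redundant way updates the redundant
    copy as well as the original line, keeping them coherent."""
    if tag in cache.redundant_data:
        cache.redundant_data[tag] = data
    cache.main_data[tag] = data

def victimize(cache, tag):
    """Victimizing a line from the normal ways invalidates any
    matching line found in the redundant ways."""
    cache.main_data.pop(tag, None)
    cache.redundant_data.pop(tag, None)

# A line resident in both places stays consistent after a store...
cache.main_data[0x40] = "old"
cache.redundant_data[0x40] = "old"
store(cache, 0x40, "new")
# ...and disappears from both when it is victimized.
victimize(cache, 0x40)
```

Updating both copies on a store avoids a stale redundant way; invalidating on victimization avoids a redundant-way hit for a line the cache no longer holds.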

              Alternatively, the cache can be designed so that its typical access latency is the same as the earlier-described (original) cache but the cache is larger. For example, a 4-way cache can be modified so that the redundant way and three ways are designed t...