
Method for a cache tag buffer

IP.com Disclosure Number: IPCOM000018761D
Publication Date: 2003-Aug-06
Document File: 5 page(s) / 224K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for a cache tag buffer (CTB). Benefits include improved power performance.

Background

Set-associative caches have higher hit rates than direct-mapped caches and are widely used in embedded applications. However, set-associative caches consume more power than direct-mapped ones, a disadvantage for processors used in power-sensitive applications. Caches account for a significant share of the total power consumed by a processor.

General description

The disclosed method is a CTB that stores the most recently read cache tags, eliminating the need to re-read the same tags on repeated accesses to the same cache lines. Each tag buffer entry stores the cache tag and the corresponding way in which the line is stored in the data array. When an access hits in the tag buffer, the stored way information is used to access the data array; the cache tag array access is avoided, saving power. Simulations indicate that just two data cache tag buffer registers eliminate over 50% of tag accesses in common speech codecs. The result is a reduction in the dynamic power consumption of set-associative data caches, which is beneficial for processors used in hand-held applications.
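As a rough illustration of this organization, the C sketch below models a small CTB as an array of entries, each pairing a buffered tag with the way that holds the line. The structure, field names, and two-entry size are illustrative assumptions (the two-register figure comes from the simulations mentioned above), not details taken from the disclosure.

#include <stdbool.h>
#include <stdint.h>

#define CTB_ENTRIES 2   /* the simulations above used two buffer registers */

/* One cache tag buffer (CTB) entry: a recently read tag (index bits included)
 * and the cache way in which that line resides in the data array. */
typedef struct {
    bool     valid;
    uint32_t tag_and_index;   /* tag plus set index of the buffered line */
    uint8_t  way;             /* hit way recorded for the buffered line */
} ctb_entry_t;

static ctb_entry_t ctb[CTB_ENTRIES];

/* Returns the buffered way on a CTB hit, or -1 if the tag array must be read. */
static int ctb_lookup(uint32_t tag_and_index)
{
    for (int i = 0; i < CTB_ENTRIES; i++) {
        if (ctb[i].valid && ctb[i].tag_and_index == tag_and_index)
            return ctb[i].way;
    }
    return -1;   /* CTB miss: fall back to the normal set-associative lookup */
}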

Advantages

The disclosed method provides advantages, including:

•  Improved power performance due to reducing the power consumed by the set-associative L1 cache

Detailed description

A CTB is introduced into the cache lookup path that stores the most recently accessed tag (along with any other line-specific information) and the corresponding hit way (see Figure 1). The tag stored in the tag buffer is wider than the tags in the tag array because the index bits must also be included. The modification to the cache operation is specified in the following basic algorithm for a cache read (see Figure 2).
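Figure 2 is not reproduced in this extract, so the following C sketch is only a hedged reconstruction of the basic read flow described above, reusing the ctb entries and ctb_lookup() helper from the earlier sketch. The helpers tag_array_lookup(), cache_refill(), and data_array_read(), as well as the offset and index constants, are hypothetical placeholders for the underlying cache structures, not names from the disclosure.

#include <stdint.h>

/* Hypothetical placeholders for the underlying cache structures. */
extern int      tag_array_lookup(uint32_t addr);      /* returns hit way or -1 */
extern int      cache_refill(uint32_t addr);          /* handles a cache miss  */
extern uint32_t data_array_read(uint32_t set, int way);

#define OFFSET_BITS 5u      /* assumed 32-byte cache lines */
#define INDEX_MASK  0x7Fu   /* assumed 128 sets */

/* Record the most recently accessed tag and way (simple round-robin fill). */
static void ctb_fill(uint32_t tag_and_index, int way)
{
    static unsigned next;
    ctb[next].valid = true;
    ctb[next].tag_and_index = tag_and_index;
    ctb[next].way = (uint8_t)way;
    next = (next + 1) % CTB_ENTRIES;
}

/* Basic cache read with a tag buffer (cf. Figure 2, reconstructed). */
uint32_t cache_read(uint32_t addr)
{
    uint32_t tag_and_index = addr >> OFFSET_BITS;   /* tag plus set index */
    uint32_t set = tag_and_index & INDEX_MASK;

    int way = ctb_lookup(tag_and_index);
    if (way >= 0)
        return data_array_read(set, way);           /* CTB hit: tag array not read */

    way = tag_array_lookup(addr);                   /* CTB miss: normal tag compare */
    if (way < 0)
        way = cache_refill(addr);                   /* line not present in the cache */

    ctb_fill(tag_and_index, way);                   /* buffer the new tag and way */
    return data_array_read(set, way);
}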

Any tag buffer algorithm must observe the following rules to guarantee correct operation:

1.  Consistency: A valid tag buffer entry must always correspond to a valid cache tag entry (one way to enforce this is sketched after this list).

2.  LRU correctness: The least-recently-used (LRU) bits in the cache tag array must remain in the same state as they would be if the tag buffers were not used.
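To illustrate Rule 1, the helper below (again an assumption built on the earlier sketch, not part of the disclosure) shows one way a line replacement or invalidation could be propagated to the buffer so that no CTB entry outlives its cache tag.

/* Rule 1 (consistency): when a cache line is replaced or invalidated, any CTB
 * entry still referring to it must be invalidated so a stale buffered tag can
 * never produce a false hit that points at the wrong datum. */
static void ctb_invalidate(uint32_t victim_tag_and_index)
{
    for (int i = 0; i < CTB_ENTRIES; i++) {
        if (ctb[i].valid && ctb[i].tag_and_index == victim_tag_and_index)
            ctb[i].valid = false;
    }
}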

Rule 1 is a fundamental requirement because a tag buffer hit that points to the incorrect datum is likely to result in a program error. This rule is observed by the basic algorithm. An important side effect of the tag buffer scheme is its effect on the LRU bits in the tag array. The most-recent status of the hit way is unchanged because any miss results in the tag buffer being updated with the most recently accessed tag. Violating Rule 2 does not affect correctness but may reduce the cache hit rate.

Using the disclosed method, the total number of cache tag array comparisons for the program shown in Figure 3 is:

256 + 3*256/16 = 304

This computation translates into a more than threefold reduction in cache tag array accesses for this case. Because the energy required...