Efficient Selective Caching Through Lazy Cache Promotion

IP.com Disclosure Number: IPCOM000008849D
Publication Date: 2002-Jul-17
Document File: 3 page(s) / 99K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method that filters traffic to the lower-level cache without requiring a new specialized cache structure (such as the assist cache or the annex cache). In addition, it uses existing information stored in the upper-level cache to decide whether a block should be stored in the lower level.


Background

Typical microprocessors use multiple levels of caches in a memory hierarchy in order to reduce the average memory latency. This approach takes advantage of temporal and spatial locality in the data access patterns of programs. A lower-level (LL) cache holds the most critical and frequently-accessed data; however, because a data block is usually entered into the LL cache upon the first access, the LL cache may be polluted with read-once or read-rarely data blocks. These data blocks may displace critical, high-reuse data blocks already in the cache, causing subsequent LL cache misses and performance loss.

General Description

The disclosed method is more selective when placing data blocks into the LL cache.

On LL and upper-level (UL) cache misses, data is read from memory (or the next higher cache), then written to the UL cache and sent to the processor; it is not immediately written (eagerly promoted) to the LL cache. Instead, data is written to the LL cache only upon a UL cache hit. In this case, the least recently used (LRU) or most recently used (MRU) information may be checked to ensure that only data that is accessed in the UL cache with sufficient frequency is written into the LL cache. Since this occurs only after multiple demands for the same data, it is termed “Lazy Promotion” (see Figure 2). Figure 1 shows an “Eager Promotion,” which is...
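The policy above can be illustrated with a toy two-level simulator. This is a minimal sketch, not the disclosed implementation: the class names (`LRUCache`, `Hierarchy`), the `lazy` flag, and the sizes are all assumptions, and the optional LRU/MRU frequency check is simplified to promoting on any UL hit.

```python
from collections import OrderedDict


class LRUCache:
    """Toy fully-associative cache with LRU replacement (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # insertion/access order tracks recency

    def __contains__(self, addr):
        return addr in self.blocks

    def touch(self, addr):
        # Mark a resident block as most recently used.
        self.blocks.move_to_end(addr)

    def insert(self, addr):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)
            return
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the LRU block
        self.blocks[addr] = None


class Hierarchy:
    """Two-level hierarchy: a small LL cache checked first, then a larger UL cache."""

    def __init__(self, ll_size, ul_size, lazy=True):
        self.ll = LRUCache(ll_size)
        self.ul = LRUCache(ul_size)
        self.lazy = lazy
        self.ll_hits = self.ul_hits = self.misses = 0

    def access(self, addr):
        if addr in self.ll:
            self.ll.touch(addr)
            self.ll_hits += 1
            return "LL hit"
        if addr in self.ul:
            self.ul.touch(addr)
            self.ul_hits += 1
            # The block has demonstrated reuse, so it enters the LL cache
            # now (lazy promotion). The disclosure's LRU/MRU check is
            # omitted here for brevity.
            self.ll.insert(addr)
            return "UL hit"
        self.misses += 1
        self.ul.insert(addr)       # a miss always fills the UL cache
        if not self.lazy:
            self.ll.insert(addr)   # eager promotion: fill LL on first access
        return "miss"
```

For example, with an access stream of one hot block interleaved with read-once blocks and an LL capacity of one, eager promotion lets each read-once block displace the hot block from the LL cache, while lazy promotion keeps the hot block resident after its first reuse.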