Second Level Cache With Compact Directory

IP.com Disclosure Number: IPCOM000038653D
Original Publication Date: 1987-Feb-01
Included in the Prior Art Database: 2005-Jan-31
Document File: 3 page(s) / 39K

Publishing Venue

IBM

Related People

Martin, DB: AUTHOR [+2]

Abstract

This article describes a second level cache (L2) in a storage hierarchy which utilizes a relatively simple directory, which is managed in a way different from that of the first level cache (L1). Both the first and second level caches are private to each CPU in a multiprocessor (MP) to avoid CPU contention for either cache and minimize the average storage delay time for each CPU request. They are also usable in a uniprocessor. The L2 cache is a store-through cache (i.e., L2 is fetched only by its CPU). The L1 cache may be a store-in cache or a store-through cache. Each CPU storage request is directed to the first level cache. The great majority of these references (called "storage traffic" or simply, "traffic") are for data found in the first level cache (L1).

At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 42% of the total text.

Second Level Cache With Compact Directory

This article describes a second level cache (L2) in a storage hierarchy which utilizes a relatively simple directory, which is managed in a way different from that of the first level cache (L1). Both the first and second level caches are private to each CPU in a multiprocessor (MP) to avoid CPU contention for either cache and minimize the average storage delay time for each CPU request. They are also usable in a uniprocessor. The L2 cache is a store-through cache (i.e., L2 is fetched only by its CPU). The L1 cache may be a store-in cache or a store-through cache. Each CPU storage request is directed to the first level cache. The great majority of these references (called "storage traffic" or simply, "traffic") are for data found in the first level cache (L1).

These are called "cache hits" and are desirable because they provide the fastest data return to the CPU. "Data return" means the number of cycles between the cycle in which the CPU request is made and the cycle in which the CPU receives the requested datum. L1 hits typically have a data return time of one cycle. Those storage references which are not cache hits (cache misses) are directed to the next fastest level in the storage hierarchy. In conventional designs the next fastest level is the main storage (MS), which provides a data return time which is much longer than L1. However, this article also directs cache misses to the next fastest level in the storage hierarchy, which is the second level cache (L2), which provides a data return, for example, in 7 cycles (e.g., 6 cycles longer than L1). By comparison in this example, main storage provides its fastest data return in 17 cycles, which can be increased by MP contention for MS services.

The figure shows the L2 cache directory as being organized and addressed in much the same way as a conventional L1 directory. The L2 directory comprises a two-dimensional matrix having congruence classes, shown as rows, containing two-way set-associative entries, shown as columns. Each L2 directory entry consists of a page identifier (PID) and a line map vector (LMV). The length of the PID is determined by the number of congruence classes in the L2 directory. In a 32-bit address word (in which bits 00-07 are zero), bits 20-31 address a specific byte within its page. If the L2 directory has (for example) 128 congruence classes, then bits 13-19 are used to select the L2 congruence class. It is necessary to store only bits 08-12 in the L2 PID. The value of a PID, together with the L2 directory congruence class from which it came, uniquely identifies a specific 4,096-byte page in real main storage which has a 4,096-byte area reserved in L2. The L2 directory is always addressed by an absolute address, so there is no cache synonym ambiguity. The line map vector (LMV) is 32 bits long in each L2 entry. Each 4,096-byte page is composed of 32 128-byte lines. The 32-bit positions in each LMV are respec...
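As a rough illustration (not part of the original disclosure), the address decomposition and directory lookup described above can be sketched in C as follows. The sketch assumes the example parameters given in the text (128 congruence classes, two-way set associativity, 4,096-byte pages, 128-byte lines, a 5-bit PID taken from address bits 08-12, and a 32-bit LMV) and IBM bit numbering, in which bit 00 is the most significant bit of the 32-bit address word. All identifiers are hypothetical, and the mapping of LMV bit positions to lines is assumed here to be bit i = line i.

    #include <stdint.h>
    #include <stdbool.h>

    #define L2_CLASSES     128   /* congruence classes (rows)              */
    #define L2_WAYS          2   /* set-associative entries per class      */
    #define LINES_PER_PAGE  32   /* 4,096-byte page / 128-byte lines       */

    struct l2_entry {
        uint8_t  valid;          /* entry currently holds a page           */
        uint8_t  pid;            /* page identifier: address bits 08-12    */
        uint32_t lmv;            /* line map vector: bit i set means line i
                                    of the page is present in L2 (assumed) */
    };

    static struct l2_entry l2_dir[L2_CLASSES][L2_WAYS];

    /* IBM bit numbering: bit 00 is the most significant bit of the 32-bit
       absolute address, so bit k is reached by shifting right (31 - k).   */
    static inline uint32_t bits(uint32_t addr, int hi, int lo)
    {
        return (addr >> (31 - lo)) & ((1u << (lo - hi + 1)) - 1u);
    }

    /* Returns true if the 128-byte line containing 'addr' is present in
       L2 according to the compact directory.                              */
    bool l2_line_present(uint32_t addr)
    {
        uint32_t pid    = bits(addr,  8, 12);  /* stored PID bits          */
        uint32_t cclass = bits(addr, 13, 19);  /* congruence class 0..127  */
        uint32_t line   = bits(addr, 20, 24);  /* line index within page   */

        for (int way = 0; way < L2_WAYS; way++) {
            const struct l2_entry *e = &l2_dir[cclass][way];
            if (e->valid && e->pid == pid)
                return (e->lmv >> line) & 1u;  /* test the LMV bit         */
        }
        return false;                          /* page not in directory    */
    }

Note that a conventional directory kept at 128-byte line granularity would need a separate tag per line, whereas here a single short PID plus a 32-bit LMV covers an entire 4,096-byte page, which is what keeps the directory compact.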