Second Level Cache Fast Access

IP.com Disclosure Number: IPCOM000042023D
Original Publication Date: 1984-Mar-01
Included in the Prior Art Database: 2005-Feb-03
Document File: 2 page(s) / 44K

Publishing Venue

IBM

Related People

Brenza, JG: AUTHOR

Abstract

This article describes an improved second-level cache design in a three-level hierarchy in which L1 is a CPU cache, L2 is a second-level cache dedicated to the CPU, and L3 is system main storage. The L2 serves as an intermediate buffer between L1 (a conventional cache) and main memory: L2 is typically larger but slower than L1, yet smaller and faster than main memory (L3). In general, the CPU controls initiate a C1 cache cycle for each machine clock cycle, as determined by the T1 read gate signal of Fig. 1. This cycles both the L1 cache and the L1 directory. If the desired data is present in the L1 cache, the L1 directory sends a select line to the L1 cache array to gate the addressed data to an output register for the CPU.

This is the abbreviated version, containing approximately 52% of the total text.

Second Level Cache Fast Access

This article describes an improved second-level cache design in a three-level hierarchy in which L1 is a CPU cache, L2 is a second-level cache dedicated to the CPU, and L3 is system main storage. The L2 serves as an intermediate buffer between L1 (a conventional cache) and main memory: L2 is typically larger but slower than L1, yet smaller and faster than main memory (L3). In general, the CPU controls initiate a C1 cache cycle for each machine clock cycle, as determined by the T1 read gate signal of Fig. 1. This cycles both the L1 cache and the L1 directory. If the desired data is present in the L1 cache, the L1 directory sends a select line to the L1 cache array to gate the addressed data to an output register for the CPU. If the desired data is not present in the L1 cache, an L1 "miss" is sensed by the L1 directory and the L2 cache is interrogated for the same data. An L2 "hit" indicates that the data is present in L2 and results in a select line between the L2 directory and the L2 cache array that gates the desired data to the L1 cache and to the output register. The dual read gate of this disclosure starts the L2 access cycle on each first L1 cycle (C1) in anticipation of an L1 "miss" (assuming L2 is not otherwise busy). Should it become known later in the cycle that the data is available in L1 (the usual case), the L2 access cycle is "aborted" and the process is repeated on the next machine cycle that carries a new CPU request. Should it become known in the midst of a particular C1 cycle that the data is not available in L1 (an L1 miss), then a second read gate (T2 of Figs. 1 and 2) is sent...
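The lookup and speculative-abort flow described above can be illustrated in software. The following Python model is a sketch only: the class and method names are invented here for illustration, set membership stands in for the directory lookups of Figs. 1 and 2, and cycle timing is reduced to flags indicating whether the speculative L2 access was started and whether it was aborted.

```python
class TwoLevelCache:
    """Illustrative model of an L1/L2 hierarchy with a dual read gate:
    the L2 access is started speculatively on each C1 cycle and aborted
    when the L1 directory reports a hit. (Names are hypothetical, not
    from the disclosure.)"""

    def __init__(self, l1_lines, l2_lines):
        self.l1 = set(l1_lines)   # addresses currently held in L1
        self.l2 = set(l2_lines)   # addresses currently held in L2
        self.l2_busy = False      # whether L2 is otherwise busy this cycle

    def access(self, addr):
        """One CPU request. Returns (source, l2_started, l2_aborted)."""
        # C1 cycle: the T1 read gate cycles the L1 array and directory;
        # in parallel, the dual read gate speculatively starts the L2
        # access, unless L2 is otherwise busy.
        l2_started = not self.l2_busy
        if addr in self.l1:
            # L1 hit (the usual case): abort the speculative L2 cycle.
            return ("L1", l2_started, l2_started)
        # L1 miss: the second read gate (T2) lets the already-started L2
        # access complete, saving the delay of a sequential L2 start.
        if addr in self.l2:
            self.l1.add(addr)     # L2 hit: gate the line into L1 as well
            return ("L2", l2_started, False)
        # L2 miss: fetch from L3 (main storage) and fill both caches.
        self.l1.add(addr)
        self.l2.add(addr)
        return ("L3", l2_started, False)
```

A short walk-through under these assumptions: a request that hits L1 starts and then aborts the L2 cycle; a request that misses L1 but hits L2 completes the speculative L2 access and copies the line into L1, so a repeat of the same request is then an L1 hit.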