
Interleaved Second-Level Cache Structure that Can Use Slow Static RAMs

IP.com Disclosure Number: IPCOM000102431D
Original Publication Date: 1990-Nov-01
Included in the Prior Art Database: 2005-Mar-17
Document File: 3 page(s) / 109K

Publishing Venue

IBM

Related People

Bakoglu, HB: AUTHOR

Abstract

This disclosure describes a Second-Level Cache implementation that does not require very fast SRAMs. Let us first describe why a second-level cache is useful. As CPU cycle time improves, the number of cycles it takes to reload the cache and the associated performance penalty increase. Second-level caches are used to reduce this performance degradation because the second-level cache access time is shorter than the main memory access time. Servicing the first-level cache misses from the second-level cache is much faster than servicing them from the main memory. This results in the following memory hierarchy: very fast access to the first-level cache (usually single cycle), relatively fast access to the second-level cache (a few cycles), slow access to main memory (10-30 cycles).

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Interleaved Second-Level Cache Structure that Can Use Slow Static RAMs

       This disclosure describes a Second-Level Cache
implementation that does not require very fast SRAMs.  Let us first
describe why a second-level cache is useful.  As CPU cycle time
improves, the number of cycles it takes to reload the cache and the
associated performance penalty increase.  Second-level caches are
used to reduce this performance degradation because the second-level
cache access time is shorter than the main memory access time.
Servicing the first-level cache misses from the second-level cache is
much faster than servicing them from the main memory.  This results
in the following memory hierarchy:  very fast access to the
first-level cache (usually single cycle), relatively fast access to
the second-level cache (a few cycles), slow access to main memory
(10-30 cycles).  One ends up with a small but fast first-level cache,
a larger but slower second-level cache, and a much larger but also
much slower main memory.
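      The benefit of this hierarchy can be sketched with an
average-memory-access-time (AMAT) calculation.  The latencies below
are the illustrative figures from the text (single-cycle L1, a few
cycles for L2, 10-30 cycles for main memory); the hit rates are
assumptions chosen purely for illustration.

```python
# Sketch: average memory access time with and without a second-level
# cache.  Latencies follow the text; hit rates are assumed values.
L1_LATENCY = 1      # cycles (single-cycle first-level cache)
L2_LATENCY = 4      # cycles ("a few cycles")
MEM_LATENCY = 20    # cycles (within the 10-30 cycle range)

L1_HIT_RATE = 0.90  # assumed
L2_HIT_RATE = 0.80  # assumed, fraction of L1 misses that hit in L2

def amat_without_l2():
    # Every L1 miss goes straight to main memory.
    return L1_LATENCY + (1 - L1_HIT_RATE) * MEM_LATENCY

def amat_with_l2():
    # An L1 miss first tries the L2; only L2 misses pay the memory latency.
    l1_miss_penalty = L2_LATENCY + (1 - L2_HIT_RATE) * MEM_LATENCY
    return L1_LATENCY + (1 - L1_HIT_RATE) * l1_miss_penalty

print(f"AMAT without L2: {amat_without_l2():.2f} cycles")  # 3.00
print(f"AMAT with L2:    {amat_with_l2():.2f} cycles")     # 1.80
```

With these assumed numbers the L2 cuts the average access time from
3.0 to 1.8 cycles, even though most references never leave the L1.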

      There are a number of key design parameters for the
second-level cache (L2 cache).  These include cache size, cache line
size, set associativity, latency, bandwidth between the CPU and the
L2 cache, and the line replacement algorithm.  This disclosure
concentrates on maximizing the bandwidth between the CPU and the L2
cache and minimizing the latency of the CPU accesses to L2 cache.
The method described in this disclosure achieves this while keeping
the complexity of the interface and cost of the L2 cache to a
minimum.
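      The first three design parameters listed above are not
independent: together they fix the number of sets in the cache.  The
sketch below shows the relationship; the concrete sizes in the example
are assumptions for illustration, not figures from the disclosure.

```python
# Sketch: how cache size, line size, and set associativity determine
# the number of sets.  All concrete values below are assumed.
def l2_sets(cache_bytes, line_bytes, ways):
    """Number of sets implied by total size, line size, and associativity."""
    assert cache_bytes % (line_bytes * ways) == 0, "sizes must divide evenly"
    return cache_bytes // (line_bytes * ways)

# Example: a 512 KB, 4-way set-associative L2 with 64-byte lines (assumed)
print(l2_sets(512 * 1024, 64, 4))  # 2048 sets
```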

      The proposed structure is an interleaved L2 cache.  This has
multiple advantages.  First, it relaxes the access time requirements
of the SRAMs.  The level of interleaving will be
determined by the CPU cycle time and the access times of the
available SRAMs.  For example, by using a two-way interleaved L2
cache, one can use SRAMs with roughly twice the CPU cycle time and
still get data from the L2 cache every cycle.  The interleaved L2
cache also keeps the pin count requirements for the CPU to a minimum.
Because an interleaved L2 cache provides data on the bus every cycle,
the bandwidth requirement can be satisfied by a narrower bus.  This
is especially important if the CPU is a single-chip microprocessor.
The int...
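      The two-way interleaving argument can be checked with a small
timing model.  The bank count and bank access time below follow the
example in the text (SRAMs roughly twice the CPU cycle time); the
issue discipline (one request per cycle, low address bit selecting the
bank) is an assumption of this sketch.

```python
# Sketch: a sequential burst against a two-way interleaved L2.  Each
# bank needs BANK_ACCESS cycles per access, but because consecutive
# words map to alternating banks, the cache still delivers one word
# per cycle after the initial access latency.
BANKS = 2
BANK_ACCESS = 2  # cycles per bank access (~2x the CPU cycle time)

def completion_cycles(n_words):
    """Cycle at which each word of a sequential burst becomes available."""
    free_at = [0] * BANKS              # cycle when each bank is next free
    done = []
    for word in range(n_words):
        bank = word % BANKS            # low address bit selects the bank
        start = max(free_at[bank], word)   # at most one request issued/cycle
        free_at[bank] = start + BANK_ACCESS
        done.append(start + BANK_ACCESS)
    return done

print(completion_cycles(6))  # [2, 3, 4, 5, 6, 7] -> one word per cycle
```

After the two-cycle startup latency of the first access, a new word
completes every cycle, even though each individual SRAM bank is only
capable of one access every two cycles.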