High-Performance Cascaded Cache Memory

IP.com Disclosure Number: IPCOM000113307D
Original Publication Date: 1994-Aug-01
Included in the Prior Art Database: 2005-Mar-27
Document File: 4 page(s) / 114K

Publishing Venue

IBM

Related People

Schorn, E: AUTHOR

Abstract

Disclosed is a method for providing a flexible cache memory, including a number of cascaded cache subsystems, which is expandable after the manufacture of a computing system.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

      Since a high-performance microprocessor typically provides more
physical address space than a system actually uses, cache controllers
often place a limit on the size of cacheable memory, typically at a
power of two.  This limit is implemented by requiring the unused
upper address bits to equal a constant, typically zero, before the
controller responds to a request; because those bits are constant,
they need not be stored, which saves space in the tag RAM.  Since the
compare operation needed to determine whether this requirement has
been met is relatively fast, it is not in the critical timing path.
In other words, this operation does not slow the cache controller.

      Fig. 1 shows how the address bits are used in a direct-mapped
128KB cache with 16-byte lines and 64MB of cacheable memory space,
using a fixed compare in which each of bits A31 through A26 is
compared with zero.  This figure applies to double-word (32-bit)
accesses.

      Fig. 2 is a similar chart for a look-aside cache controller
with a programmable compare feature, which compares the address bits
A31 through A26 to a programmable value before responding to a
request.  Since this compare operation is also relatively fast, it
is not in the critical path.  Again, this figure applies to
double-word (32-bit) accesses.

      Fig. 3 shows an example of a multiple cache system, in which
each cache has its own region of cacheable memory.  This type of
system can be formed by placing, in the same system, a number of
cache controllers of the type described in reference to Fig. 2.  The
number of controllers is a power of two.  Each controller is
programmed to respond to a unique, complementary region of memory,
with the regions stacked contiguously on top of one another.

      Fig. 4 shows how, within each cacheable region, the memory can
be organized into pages mapping into the data SRAM of the cache.
These pages are also stacked on top of each other.

      Fig. 5 shows a method for wiring the address lines from the
processor to each cache subsystem of the type discussed with
reference to Fig. 2, so that some of the bits in the line select
region are switc...