
Second-Level Shared Cache Implementation for Multiprocessor Computers With a Common Interface for the Second-Level Shared Cache And the Second-Level Private Cache

IP.com Disclosure Number: IPCOM000120378D
Original Publication Date: 1991-Apr-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 4 page(s) / 205K

Publishing Venue

IBM

Related People

Bakoglu, HB: AUTHOR

Abstract

Shared caches can be used to implement tightly coupled shared-memory multiprocessors without requiring complicated cache-consistency protocols. In a tightly coupled multiprocessor system, the main challenge is to keep the caches of the processors consistent with each other. This arises because a given data item can be cached in multiple caches that are updated by the multiple processors owning them. It must be ensured that when a processor reads a data item, it gets the most recent copy. With multiple caches in the system, it is possible to read stale data either from one's own cache or from main memory if another processor has a modified version of the data in its cache.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 35% of the total text.


      Shared caches can be used to implement tightly coupled
shared-memory multiprocessors without requiring complicated
cache-consistency protocols.  In a tightly coupled multiprocessor
system, the main challenge is to keep the caches of the processors
consistent with each other.  This arises because a given data item can
be cached in multiple caches that are updated by the multiple
processors owning them.  It must be ensured that when a processor
reads a data item, it gets the most recent copy of that data.  With
multiple caches in the system, it is possible to read stale data
either from one's own cache or from main memory if another processor
has a modified version of the data in its cache.  Compared with
multiple caches that must be kept consistent, a shared cache is a
simpler way of providing coherent data: if shared data can exist only
in a single shared cache, multiple copies of a given shared data item
cannot exist.

      One can provide a private local cache for the data that is not
shared by the processors.  This provides low-latency access to
nonshared data.  Since this data is used by only one processor, that
processor can place it in its private cache without being concerned
about coherency with the private caches of the other processors.  In
addition to the private caches, one common shared cache is provided
that contains all the shared data; since there is only one copy of
the shared data, it is coherent by definition.  This practically
eliminates the cache-consistency problem: the nonshared data is
placed in the private caches, which therefore have no consistency
problem, and the shared data is placed in the shared cache, which is
consistent by definition.  The data has to be separated into shared
and nonshared by software, either automatically or through user
directives, and the memory space has to be divided into shared and
nonshared regions, for example, by defining a bit in the translation
tables that the operating system maintains.  This bit would indicate
whether a given page contains shared data (to be placed in the shared
cache) or nonshared data (to be placed in the private cache).

      Another concept that needs to be described is the second-level
cache, also known as an L2 cache.  In general, second-level caches are
used to improve the performance of uniprocessor and multiprocessor
systems because the second-level cache access time is shorter than
the main memory access time.  Servicing first-level cache misses from
the second-level cache is much faster than servicing them from main
memory.

      As CPU cycle time improves, the number of cycles it takes to
reloa...