Control Queueing Structures for the Memory Directories in Multiprocessor Systems

IP.com Disclosure Number: IPCOM000116311D
Original Publication Date: 1995-Aug-01
Included in the Prior Art Database: 2005-Mar-30
Document File: 2 page(s) / 84K

Publishing Venue

IBM

Related People

Hicks, D: AUTHOR [+2]

Abstract

In a cache-based shared-memory MultiProcessor (MP) system, where each processor has a cache, it is necessary to have a communication structure among the caches and the memory (which may consist of multiple memory modules) to perform the following two functions: 1) to shuffle the data and requests and 2) to synchronize any concurrent accesses (in the sense of reads and writes) to the same piece of data in the memory.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 53% of the total text.

Control Queueing Structures for the Memory Directories in Multiprocessor Systems

      In a cache-based shared-memory MultiProcessor (MP) system, where
each processor has a cache, it is necessary to have a communication
structure among the caches and the memory (which may consist of
multiple memory modules) to perform the following two functions: 1) to
shuffle the data and requests and 2) to synchronize any concurrent
accesses (in the sense of reads and writes) to the same piece of data
in the memory.

      The synchronization includes arbitration of simultaneous
requests, staging of outstanding requests, and resolution of request
dependencies between caches.  These functions are easily attained when
the caches are connected to the memory through a centralized bus.
However, in a high-performance MP system in which a switch is used to
connect the caches and the memory, they are difficult to implement.
Most system architects employ a directory for each memory module such
that every directory entry corresponds to a line, in that module,
currently loaded into one of the caches.  Through the directory it can
be rapidly determined whether a cache miss is to be served, i.e., the
data loaded, by the memory module or by a cache which currently
contains the latest copy of the missing line.  But the directory alone
is not adequate to perform the second function above.
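
      To illustrate the directory lookup described above, the
following C sketch shows how a memory module might decide whether a
miss is served by the memory itself or forwarded to the cache holding
the latest copy of the line.  The structure layout, field names, and
sizes are illustrative assumptions, not taken from this disclosure:

  #include <stdbool.h>
  #include <stdint.h>

  #define DIR_ENTRIES 1024          /* hypothetical directory size      */

  /* One entry per memory line currently loaded in some cache.          */
  typedef struct {
      bool     valid;               /* entry is in use                   */
      uint32_t line_addr;           /* address (tag) of the memory line  */
      uint8_t  owner;               /* cache holding the latest copy     */
      bool     exclusive;           /* exclusive (possibly dirty) copy   */
  } dir_entry_t;

  static dir_entry_t directory[DIR_ENTRIES];

  /* Returns the id of the cache that must supply the missing line,
   * or -1 if the memory module itself holds the latest copy.           */
  int serve_miss(uint32_t line_addr)
  {
      for (int i = 0; i < DIR_ENTRIES; i++) {
          if (directory[i].valid &&
              directory[i].line_addr == line_addr &&
              directory[i].exclusive)
              return directory[i].owner;
      }
      return -1;
  }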

      Disclosed is a queueing structure to efficiently control all
the concurrent memory accesses from the caches to a memory module and
its memory directory.  There are two types of memory accesses: 1)
requests - fetch or store misses, or memory updates, and 2) replies -
responses or acknowledgments from the caches.  The components of the
queueing structure and their respective operations are described
below:
  1.  The queue contains requests of the following types: a) memory
       requests which cannot be served in their arriving cycles, and
       b) requests which hit remote caches and are waiting for the
       remote caches to take one or more of the following MP cache
       actions:
      a.  transfer of the latest version of the missing line to the
           memory module or to the requesting cache,
      b.  change from an exclusive to a shared state, and
      c.  invalidation of the line being requested.
  2.  The basic fields of an entry of the queue (see the sketch after
       this list) are:
      a.  valid bit: indicates whether the entry contains a valid
           request,
      b.  tag: request address,
      c.  ready...
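
      Only the first few fields of a queue entry survive in this
abbreviated extract (valid bit, tag, ready...).  The C sketch below
declares just those fields together with an assumed encoding of the
pending MP cache actions from item 1; every name and field beyond the
ones listed above is an illustrative assumption, not part of the
disclosure:

  #include <stdbool.h>
  #include <stdint.h>

  /* Assumed encoding of the remote-cache actions a queued request may
   * be waiting on (see item 1 above).                                  */
  typedef enum {
      WAIT_NONE,            /* no remote action outstanding             */
      WAIT_LINE_TRANSFER,   /* latest copy being transferred            */
      WAIT_EX_TO_SHARED,    /* exclusive-to-shared state change         */
      WAIT_INVALIDATE       /* invalidation of the requested line       */
  } wait_action_t;

  /* One entry of the control queue; only valid, tag, and ready are
   * named in the disclosure, the remaining fields are assumptions.     */
  typedef struct {
      bool          valid;        /* entry contains a valid request     */
      uint32_t      tag;          /* request address                    */
      bool          ready;        /* request can now be serviced        */
      wait_action_t waiting_on;   /* assumed: pending remote action     */
      uint8_t       requester;    /* assumed: id of requesting cache    */
  } queue_entry_t;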