Array Address Relocation Mechanism

IP.com Disclosure Number: IPCOM000100864D
Original Publication Date: 1990-Jun-01
Included in the Prior Art Database: 2005-Mar-16
Document File: 4 page(s) / 137K

Publishing Venue

IBM

Related People

Christensen, NT: AUTHOR

Abstract

This disclosure applies to a large associatively-addressed array structure with an associativity of one. Such arrays do not have multiple compartments in each congruence class that could be used when one of the compartments is deleted. It describes a mechanism for deleting elements of the array and assigning the deleted addresses such that alternate entries become shared between two or more addresses. No array entries are reserved as spares for the purpose of relocating deleted addresses.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 51% of the total text.

Array Address Relocation Mechanism

       This disclosure applies to a large
associatively-addressed array structure with an associativity of one.
Such arrays do not have multiple compartments in each congruence
class that could be used when one of the compartments is deleted.  It
describes a mechanism for deleting elements of the array and
assigning the deleted addresses such that alternate entries become
shared between two or more addresses.  No array entries are
reserved as spares for the purpose of relocating deleted addresses.

      An example of such an array may be a cache directory, or a
cache in which the cached data resides in the same arrays as the
associative address.

      Of particular concern are large arrays (e.g., 64K entries).  It
is often not practical to provide a deletion (or delete) bit for each
entry due to the size of storage required for the Delete Array
itself.  Some previous designs put the delete bits within the data
array or directory, but this means that the determination of
alternate locations must be done serially, since only one entry can
be read at a time.  One of the benefits of this disclosure is that
detecting deleted entries and determining an alternate location are
done in a single access of the Delete Array.  To reduce the
size of the Delete Array, the data array is logically divided into
segments; entire segments of the array are deleted instead of single
entries.  Since the array is large, modestly sized segments may be
deleted without significantly impacting the array performance (if it
is a cache).  In fact, it may make a certain amount of sense to
delete a number of entries corresponding to the effect of a common
array failure mechanism.  For instance, a bit line failure may affect
a block of addresses simultaneously.  The segment size could be
chosen to be equal to the number of addresses affected by such a
failure.  For an array size of 64K (64 x 1024) entries and a segment
size of 64 entries, the size of the Delete Array is 1024 bits.
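
      The following is a minimal sketch, in C, of how such a
segment-granular Delete Array might be organized and consulted.  The
figures (a 64K-entry data array, 64-entry segments, and a 1024-bit
Delete Array) follow the example above; the names, the bit packing,
and the routines themselves are illustrative only and are not taken
from the disclosure.

/* One delete bit per 64-entry segment of a 64K-entry data array. */
#include <stdint.h>
#include <stdbool.h>

#define ARRAY_ENTRIES  (64u * 1024u)                   /* 64K entries     */
#define SEGMENT_SIZE   64u                             /* entries/segment */
#define NUM_SEGMENTS   (ARRAY_ENTRIES / SEGMENT_SIZE)  /* 1024 segments   */

/* The Delete Array: 1024 bits, packed into 32-bit words. */
static uint32_t delete_array[NUM_SEGMENTS / 32u];

/* Mark an entire segment deleted, e.g., after a bit-line failure. */
void delete_segment(uint32_t segment)
{
    delete_array[segment / 32u] |= 1u << (segment % 32u);
}

/* A single Delete Array access determines whether the entry addressed
   by 'index' (the 16 array-address bits) lies in a deleted segment.  */
bool entry_is_deleted(uint32_t index)
{
    uint32_t segment = index / SEGMENT_SIZE;
    return (delete_array[segment / 32u] >> (segment % 32u)) & 1u;
}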

      When an element is deleted, the address of that element must be
redirected to an alternate entry in the array.  Another benefit of
this invention is that no entries are reserved as spares for this
relocation.  Instead, a
deleted address now shares an entry with some other address.  If this
assignment is allowed to be arbitrary, then all of the address bits
now become associative.  Consider that the data of interest in the
array is associated with a 20-bit address (bits 0-19).  For an array
having 64K entries, only 16 address bits (4-19) are used to access
the array.  The remaining bits (0-3) are associative bits and are
stored in the array or directory for associative addressing.  With an
arbitrary relocation of deleted addresses, all 20 address bits must
be stored in the array.  This is costly.  The solution proposed here
is to group the segments of the array into clusters...
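
      As an illustration of the address split described above (the
clustering scheme itself is not shown here, since its description is
truncated), the following C sketch separates a 20-bit address into
the 16 index bits (4-19) used to access a 64K-entry array and the 4
associative bits (0-3) stored with the entry.  Bit 0 is taken as the
most significant bit, and all names are hypothetical.

/* Baseline address decomposition for a 64K-entry array: bits 4-19
   index the array, bits 0-3 are stored as the associative field.
   With an arbitrary relocation of deleted addresses, the stored
   field would have to grow to all 20 bits, which is the cost the
   clustering of segments is intended to avoid.                    */
#include <stdint.h>

#define ADDR_BITS   20u
#define INDEX_BITS  16u                       /* address bits 4-19 */
#define ASSOC_BITS  (ADDR_BITS - INDEX_BITS)  /* address bits 0-3  */

/* Index field: the 16 low-order bits (address bits 4-19). */
uint32_t array_index(uint32_t addr)
{
    return addr & ((1u << INDEX_BITS) - 1u);
}

/* Associative field: the 4 high-order bits (address bits 0-3),
   stored in the array or directory and compared on each access. */
uint32_t assoc_field(uint32_t addr)
{
    return addr >> INDEX_BITS;
}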