Data Steering Mechanism for Reconfigurable Memory

IP.com Disclosure Number: IPCOM000106014D
Original Publication Date: 1993-Sep-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 4 page(s) / 144K

Publishing Venue

IBM

Related People

Bishop, JW (and 6 others): AUTHOR

Abstract

Disclosed is a mechanism that gives Storage Sub-Systems the flexibility to meet availability, reliability and performance requirements. The tasks that this mechanism addresses are dynamic memory reconfiguration, memory mirroring and sub-system partitioning.


Data Steering Mechanism for Reconfigurable Memory

      Disclosed is a mechanism that will allow Storage Sub-Systems
the flexibility to meet availability, reliability and performance
requirements.  The tasks that this mechanism chooses to address are
as follows:

1.  Dynamic Memory Reconfiguration, the ability to move data from one
    physical location to another while the system is running.
2.  Memory Mirroring, used for high-availability systems.  It
    supports multiple copies of the data in main memory, updating all
    copies and retrieving one copy.  If the fetched data is damaged,
    another copy is available for use (a hedged sketch of this
    behavior follows this list).
3.  Sub-System Partitioning, the ability to move critical data in
    main memory and take part of the sub-system off line for
    concurrent maintenance.
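
      As a rough illustration of the mirroring task in item 2, the
following C sketch (hypothetical; port_write, port_read and
data_damaged are assumed interfaces, not the disclosed design)
updates every copy of the data on a store and, on a fetch, falls
back to another copy when the one retrieved is damaged:

    /* Hypothetical sketch of the memory-mirroring task.  copies[] lists
       the memory ports holding a copy of the addressed data; in the
       disclosed mechanism this membership is encoded in the pointer
       arrays described later in the text.  The port_* operations are
       assumed interfaces.                                              */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    bool port_write(int port, uint64_t addr, const void *buf, size_t len);
    bool port_read(int port, uint64_t addr, void *buf, size_t len);
    bool data_damaged(const void *buf, size_t len);  /* e.g. an ECC check */

    /* Store: update every mirrored copy so all copies stay identical. */
    bool mirrored_store(const int *copies, int ncopies,
                        uint64_t addr, const void *buf, size_t len)
    {
        bool ok = true;
        for (int i = 0; i < ncopies; i++)
            ok = port_write(copies[i], addr, buf, len) && ok;
        return ok;
    }

    /* Fetch: retrieve one copy; if it is damaged, fall back to another. */
    bool mirrored_fetch(const int *copies, int ncopies,
                        uint64_t addr, void *buf, size_t len)
    {
        for (int i = 0; i < ncopies; i++)
            if (port_read(copies[i], addr, buf, len) &&
                !data_damaged(buf, len))
                return true;
        return false;               /* every copy unavailable or damaged */
    }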

      To achieve these tasks and maintain sub-system performance, the
Storage Controller (SC) was split.  The splitting of the SC gives
greater bandwidth to main memory and greater sub-system flexibility,
but also makes it more difficult to manage memory.

      The management of data in a Storage Sub-System has to be
flexible enough that the data in the Sub-System can be manipulated
while the system is running.  This flexibility provides hooks for
system reconfiguration, sub-system partitioning and memory mirroring
without a system power-on reset.

      In the data steering mechanism, pointer arrays are placed at
the entry points of the memory sub-system so that requests can be
steered to the correct port.  These points are at the processor and
at the I/O.  In this example the pointer arrays are placed in the
Level 2 Cache (L2) for the processor and in the I/O Switch (IOS) for
the I/O.  At these points, a request to memory is directed to the
correct SC.  The number of SCs that this mechanism supports
determines the width of the pointer-array entries that the L2 or IOS
maintains.  Each bit in an entry is set to indicate which SC has
access to the requested data.  The system structure for this example
can be seen in Fig. 1.
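
      As a rough illustration of this steering step, the following C
sketch (hypothetical; names such as ptr_array, NUM_SC and
block_index_of, and the entry and block sizes, are assumptions rather
than the disclosed design) keeps one bit per SC in each pointer-array
entry and forwards a request to a port whose bit is set:

    /* Hypothetical sketch of pointer-array steering at an entry point
       (L2 or IOS).  Sizes and the address-to-entry mapping are assumed. */
    #include <stddef.h>
    #include <stdint.h>

    #define NUM_SC      4            /* SCs supported = entry width in bits */
    #define NUM_BLOCKS  (1u << 20)   /* steerable memory blocks (assumed)   */

    static uint8_t ptr_array[NUM_BLOCKS];  /* one bit per SC in each entry  */

    /* Map a real address to its pointer-array entry (assumed block size). */
    static size_t block_index_of(uint64_t addr)
    {
        return (size_t)(addr >> 12) & (NUM_BLOCKS - 1);
    }

    /* Steer a memory request: return the port of an SC that currently has
       access to the requested data, or -1 if no SC does.                  */
    int steer_request(uint64_t addr)
    {
        uint8_t entry = ptr_array[block_index_of(addr)];
        for (int sc = 0; sc < NUM_SC; sc++)
            if (entry & (1u << sc))
                return sc;           /* forward the request to this port */
        return -1;
    }

      Presumably, mirroring then corresponds to more than one bit of an
entry being set: a store is presented to every SC whose bit is set,
and a fetch is satisfied from one of them.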

      If the processor has a cache associated with it, as in this
example, the pointer arrays at this level should be used only to
steer requests to the SC ports.  They should not be used as part of
the address into the cache.  If the pointer bit were used for
addressing the cache, all fetch requests and cache cast-outs would be
tied together.  This would make replacement management easier, but it
would reduce performance by locking parts of the cache to a
particular SC port.  If memory localization occurred, then only a
subset of the cache would be available for use.  Also, using the
pointer-array bits as part of the cache's address increases the
possibility of making this path's timing critical.  For these
performance reasons, the pointer array is not used in the cache's
addressing in this example.  Since the pointer bit is not involved
with the c...