
Bin in Transit Mechanism

IP.com Disclosure Number: IPCOM000041355D
Original Publication Date: 1984-Jan-01
Included in the Prior Art Database: 2005-Feb-02
Document File: 3 page(s) / 61K

Publishing Venue

IBM

Related People

Krygowski, MA: AUTHOR [+2]

Abstract

This article describes a mechanism to provide narrow bus addressing to avoid performance penalties due to interprocessor cache cross-interrogation servicing delays and asynchronism. This is done by having a system controller (SC) send a short cache address (e.g., nine bits) to the processor, rather than a long data address (e.g., 31 bits), for the data it wishes to have cast out or invalidated. This address is obtained from a copy directory in the SC and is substantially narrower than a data address. In a tightly coupled multiprocessor environment, many main storage accesses occur concurrently. A store-in-cache processor (CP) design usually dictates the use of copy directories, located in a storage controller (SC), to control interference with data in the caches that reside within the other CPs.



Bin in Transit Mechanism

This article describes a mechanism that provides narrow bus addressing to avoid the performance penalties caused by interprocessor cache cross-interrogation servicing delays and asynchronism. This is done by having a system controller (SC) send a short cache address (e.g., nine bits), rather than a long data address (e.g., 31 bits), to the processor for the data it wishes to have cast out or invalidated. This address is obtained from a copy directory in the SC and is substantially narrower than a data address.

In a tightly coupled multiprocessor environment, many main storage accesses occur concurrently. A store-in-cache processor (CP) design usually dictates the use of copy directories, located in a storage controller (SC), to control interference with data in the caches that reside within the other CPs. Each copy directory is updated on each miss in the associated processor cache; a miss causes a line fetch access command to main storage to locate and transfer the most recent copy of the data line and/or to invalidate any copy of the same line in any other CP's cache. Because of storage access parallelism, several CPs may be requesting data from the cache in another processor at the same time, so a queue of cache castout requests for the requested processor can build up rapidly. It is thus possible for a castout request to be serviced by the requested CP long after that CP has overlaid the requested line in its cache, because the SC copy directory and its associated CP directory can operate asynchronously for periods of time.

The SC can present to a requested (hit) CP the address of the data line the SC wishes to have cast out or invalidated for another CP. However, this presents two problems to the system design: (1) the width of the address bussing, and (2) cache interference. Communication signalling bus width is highly constrained by pin scarcity in the LSI technology package. Bus width can be reduced by time slicing, but that introduces an access delay while waiting for the complete address transfer. Once the SC-requested address arrives at the requested CP, its cache directory must be searched for the presence of that data. This search interferes with normal local CP cache accessing, which in turn degrades performance.

Fig. 1 shows, within the SC, an invalidation bus request (IBR) stack for each CP to absorb the rapid build-up of inter-cache activity for each CP caused by requests from other CPs and channels. Those requests are serviced in first-in/first-out (FIFO) order. In the m...
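To make the narrow-address idea concrete, the following C sketch contrasts a full 31-bit data address with the short cache address the SC copy directory could place on the cross-interrogation bus. The cache geometry (128 congruence classes, 4-way set associativity, 128-byte lines) and all identifiers are illustrative assumptions chosen only so that the nine-bit figure works out; the article does not specify the actual geometry or encoding.

    /* Hypothetical geometry: 128 congruence classes (7 bits) x 4-way set
       associativity (2 bits) = 9-bit cache address; 128-byte lines.
       These numbers are illustrative assumptions, not taken from the article. */
    #include <stdio.h>
    #include <stdint.h>

    #define LINE_BYTES  128u   /* bytes per cache line (assumed) */
    #define CLASS_BITS  7u     /* 128 congruence classes         */
    #define SET_BITS    2u     /* 4-way set associative          */

    /* Short "cache address" the SC copy directory can supply: the congruence
       class plus the slot within that class where the line resides. */
    typedef struct {
        uint16_t class_index;
        uint16_t slot;
    } cache_addr_t;

    /* Congruence class selected by a full 31-bit data address. */
    static uint16_t congruence_class(uint32_t data_addr)
    {
        return (uint16_t)((data_addr / LINE_BYTES) & ((1u << CLASS_BITS) - 1u));
    }

    int main(void)
    {
        uint32_t data_addr = 0x12345678u & 0x7FFFFFFFu;   /* 31-bit data address */
        cache_addr_t short_addr = { congruence_class(data_addr), 3u };

        printf("data address width : 31 bits\n");
        printf("cache address width: %u bits (class %u, slot %u)\n",
               CLASS_BITS + SET_BITS,
               (unsigned)short_addr.class_index, (unsigned)short_addr.slot);
        return 0;
    }

Because the copy directory already records which congruence class and slot hold the line in the remote CP, sending only that pair spares both the wide address bus and the remote directory search that a full data address would require.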
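The per-CP IBR stacks can be modeled as simple ring-buffer FIFOs holding these short cache addresses, as in the sketch below. The stack depth, the CP count, and the field names are assumptions made for illustration; only the one-stack-per-CP organization and the FIFO service order come from the description above.

    /* Minimal model of the SC's per-CP invalidation bus request (IBR) stacks.
       Depth, CP count, and names are assumed; FIFO order and the
       one-stack-per-CP organization follow the text above. */
    #include <stdbool.h>
    #include <stdint.h>

    #define N_CPS     4u     /* number of processors (assumed)  */
    #define IBR_DEPTH 8u     /* entries per IBR stack (assumed) */

    typedef struct {
        uint16_t cache_addr;  /* short (e.g., nine-bit) cache address     */
        bool     castout;     /* true = cast out line, false = invalidate */
    } ibr_entry_t;

    typedef struct {
        ibr_entry_t entry[IBR_DEPTH];
        unsigned    head, tail, count;   /* ring-buffer bookkeeping */
    } ibr_stack_t;

    static ibr_stack_t ibr[N_CPS];       /* one IBR stack per CP */

    /* SC side: queue a cross-interrogation request for CP 'cp'. */
    static bool ibr_push(unsigned cp, ibr_entry_t e)
    {
        ibr_stack_t *s = &ibr[cp];
        if (s->count == IBR_DEPTH)
            return false;                /* stack full: requester must retry */
        s->entry[s->tail] = e;
        s->tail = (s->tail + 1u) % IBR_DEPTH;
        s->count++;
        return true;
    }

    /* CP side: remove and return the oldest pending request (FIFO order). */
    static bool ibr_pop(unsigned cp, ibr_entry_t *out)
    {
        ibr_stack_t *s = &ibr[cp];
        if (s->count == 0u)
            return false;                /* nothing pending for this CP */
        *out = s->entry[s->head];
        s->head = (s->head + 1u) % IBR_DEPTH;
        s->count--;
        return true;
    }

The handshake that returns the castout data or the invalidation acknowledgement to the requesting CP is omitted here; the sketch only captures how per-CP queuing lets cross-interrogation requests accumulate and be drained in arrival order.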