

IP.com Disclosure Number: IPCOM000013739D
Original Publication Date: 2000-Nov-20
Included in the Prior Art Database: 2003-Jun-18

In an environment where a storage adapter or storage controller shares memory with the host processor (as in the S/390 Internal Disk implementation), a new instruction would provide quicker access to data residing in the storage "cache" by having the processor access it directly, rather than performing an I/O operation only to discover that the data is in the controller cache. Furthermore, given a reasonable cache hit ratio, the utilization of the processors and of the adapter/controller can actually be lowered, because retrieving the data this way takes less overhead than building and executing an I/O operation and taking an I/O interrupt. In an implementation such as S/390 Internal Disk this is important because the total capacity of the controller function is limited by the capability of the engine on which it is implemented. Use of this instruction distributes much of the work among the larger number of normal processors, leaving less work for the engine performing the controller function, and hence increasing the total capacity that the controller can handle.

A new processor instruction is defined which checks whether the identified record is in the disk cache in the shared memory; if it is, the instruction moves the record to the specified target buffer. If the record is not in the cache, the instruction sets a condition code causing the normal software path to be taken, which involves actually constructing a channel program and issuing an I/O request to the control unit; this will result in a cache miss, causing the disk control unit to stage the data to the cache. For sequentially accessed data, a subsequent issuance of the instruction for the next record should then find it in the cache.

Service time for an I/O request is improved because data residing in the "cache" is obtained synchronously and with less overhead than performing an I/O operation, which involves task switching, additional path length, an I/O interrupt, and processor cache disruption.
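The hit/miss semantics described above can be sketched as a small software model. This is illustrative only; the names (`try_cache_read`, `cache_entry`, `CC_HIT`/`CC_MISS`) and the fixed-size cache layout are assumptions, not part of the disclosure, which defines an actual processor instruction rather than a C function.

```c
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 8
#define RECORD_SIZE 16

/* Condition code the instruction would set for the caller to test. */
enum cond_code { CC_HIT = 0, CC_MISS = 1 };

struct cache_entry {
    int      valid;
    uint32_t record_id;            /* identifies the disk record */
    uint8_t  data[RECORD_SIZE];    /* record image staged by the controller */
};

/* The disk cache lives in memory shared between host and controller. */
static struct cache_entry shared_cache[CACHE_SLOTS];

/* Check the shared-memory cache for the identified record. On a hit,
 * copy the record to the caller's target buffer and return CC_HIT.
 * On a miss, return CC_MISS so the caller takes the normal software
 * path: build a channel program and issue an I/O to the control unit. */
enum cond_code try_cache_read(uint32_t record_id, uint8_t *target)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (shared_cache[i].valid && shared_cache[i].record_id == record_id) {
            memcpy(target, shared_cache[i].data, RECORD_SIZE);
            return CC_HIT;         /* data obtained synchronously */
        }
    }
    return CC_MISS;                /* fall back to the I/O path */
}
```

A caller would branch on the returned condition code exactly as software would branch on the instruction's condition code: on `CC_MISS` it constructs and issues the I/O, after which the control unit stages the record into `shared_cache` for subsequent requests.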
Controller and processor utilization is also reduced, improving total system capacity. When a cache miss occurs, the new instruction could be extended to signal the controller to initiate a "pre-fetch" of the data, overlapping the fetching of the data with the construction and initiation of the channel program.
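One plausible way to realize the pre-fetch signal is a small hint queue in the shared memory region: on a miss, the host posts the record identifier before it starts building the channel program, and the controller consumes hints and begins staging in parallel. The mailbox layout and function names below are assumptions for illustration, not from the disclosure.

```c
#include <stdint.h>

#define MAILBOX_SLOTS 4

/* Single-producer (host) / single-consumer (controller) ring of
 * pre-fetch hints placed in the shared memory region. */
struct prefetch_mailbox {
    uint32_t record_id[MAILBOX_SLOTS];
    unsigned head;                 /* next slot the host writes */
    unsigned tail;                 /* next slot the controller reads */
};

static struct prefetch_mailbox mbox;

/* Host side: post a pre-fetch hint on a cache miss.
 * Returns 0 if the queue is full (the hint is simply dropped;
 * the normal I/O path still stages the data). */
int post_prefetch_hint(uint32_t record_id)
{
    unsigned next = (mbox.head + 1) % MAILBOX_SLOTS;
    if (next == mbox.tail)
        return 0;                  /* controller has fallen behind */
    mbox.record_id[mbox.head] = record_id;
    mbox.head = next;
    return 1;
}

/* Controller side: take the next hint and begin staging that record
 * into the cache. Returns 0 when no hint is pending. */
int next_prefetch_hint(uint32_t *record_id)
{
    if (mbox.tail == mbox.head)
        return 0;
    *record_id = mbox.record_id[mbox.tail];
    mbox.tail = (mbox.tail + 1) % MAILBOX_SLOTS;
    return 1;
}
```

Because the hint is advisory, losing one (full queue) costs nothing beyond the overlap opportunity: the channel program issued on the miss path stages the data regardless.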