
Utilizing a Client's I/O Buffer to Stage Data to a Shared Cache

IP.com Disclosure Number: IPCOM000108915D
Original Publication Date: 1992-Jul-01
Included in the Prior Art Database: 2005-Mar-23
Document File: 2 page(s) / 107K

Publishing Venue

IBM

Related People

Clark, CE: AUTHOR [+2]

Abstract

Both on-line transaction processing and background batch job data processing are common types of operations at large brokerage institutions such as Merrill Lynch, Pierce, Fenner & Smith Inc. and Shearson Lehman Hutton Inc. Typically, after the close of business each day, a number of batch jobs are scheduled to execute against data collected during first shift operations by the transaction systems. The time between the close of business on one day and the start of business on the next day is commonly referred to as the "batch window".

This is the abbreviated version, containing approximately 52% of the total text.

Utilizing a Client's I/O Buffer to Stage Data to a Shared Cache

       Both on-line transaction processing and background batch
job data processing are common types of operations at large brokerage
institutions such as Merrill Lynch, Pierce, Fenner & Smith Inc.
and Shearson Lehman Hutton Inc.  Typically, after the close of
business each day, a number of batch jobs are scheduled to execute
against data collected during first shift operations by the
transaction systems.  The time between the close of business on one
day and the start of business on the next day is commonly referred to
as the "batch window".

      The batch window problem occurs when many batch jobs require
concurrent access to the same data resident on external direct access
storage devices.  Because a large number of jobs require access to
the shared data, significant performance degradation results from
delays inherent in the serial nature of the I/O processing against
the data.  As the amount of data processed within the batch window
increases, it becomes critical to reduce the time required to process
the data in order to be ready to start the transaction systems for
the next day's business.

      In order to reduce the time required to complete the batch
jobs, a new facility was designed to cache the data in processor
storage and eliminate much of the I/O activity responsible for the
performance degradation.  To accomplish this, it was necessary that
the use of the facility be transparent to the application programs
accessing the data.  It was also important to minimize modifications
to the access methods used by the applications so that programming
development costs would remain acceptable.

      Modifications were made to the access methods to place data
read from a file by one batch job into a shared data object resident
in processor storage such that subsequent attempts to access the same
data by other batch jobs could be satisfied without the need for
physical I/O to the external storage device.  By allowing the data to
be read into the first job's I/O buffer, and then copied to the
shared cache, much of the access methods' existing I/O code could be
reused without the need for error-prone modification.

PROBLEM DESCRIPTION

      I/O buffers are allocated to a batch job when the application
requests initial access to the file.  These buffers are allocated in
the private region of the address space and are accessed under the
authority of the requesting program.  Since each batch job executes
in a different address space,...