
Solutions to Hot Spot Problems in a Data Sharing Transaction Environment

IP.com Disclosure Number: IPCOM000120901D
Original Publication Date: 1991-Jun-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 3 page(s) / 136K

Publishing Venue

IBM

Related People

Mohan, C: AUTHOR [+3]

Abstract

This article describes a mechanism by which frequently updated blocks of a database can be handled in a multimachine, shared disks environment (Data Sharing) (1). The invention provides solutions for high-performance applications which make frequent updates to the same records from multiple systems. In a single-system environment, such applications achieve high performance by using a main storage database (such as IMS Fast Path MSDB) and do field-level updates, which require operations logging (2). An example is the banking application of debit/credit (3).

Solutions to Hot Spot Problems in a Data Sharing Transaction Environment

      This article describes a mechanism by which frequently
updated blocks of a database can be handled in a multimachine, shared
disks environment (Data Sharing) (1).  The invention provides
solutions for high-performance applications which make frequent
updates to the same records from multiple systems.  In a
single-system environment, such applications achieve high performance
by using a main storage database (such as IMS Fast Path MSDB) and
do field-level updates, which require operations logging (2).  An
example is the banking application of debit/credit (3).
Problem

      Assuming that one has done everything to reduce hot spots in a
single system, data sharing can still cause hot spots because of
intersystem contention on a page.  These hot spots can degrade
performance in data sharing.  This article describes solutions to the
intersystem hot spot problem, such as when the same record is updated
by different systems, or when different records on the same page are
updated by different systems.
Assumptions

      This invention assumes that a merged log, which is obtained by
merging the local logs produced by the different systems sharing the
database, is available.  In such a log, every log record is uniquely
identified by a log sequence number which is obtained by
concatenating a global time stamp to the identifier of the system
which wrote the log record.
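
      As a concrete illustration of this assumption, a minimal sketch in
Python follows.  The names (LSN, LogRecord, merge_local_logs) are
hypothetical, not from the article: the log sequence number is modeled as
a global time stamp paired with the identifier of the writing system, and
the per-system local logs, each already in LSN order, are merged into one
totally ordered global log.

    import heapq
    from dataclasses import dataclass

    @dataclass(order=True, frozen=True)
    class LSN:
        timestamp: int    # global time stamp
        system_id: int    # identifier of the system that wrote the log record

    @dataclass
    class LogRecord:
        lsn: LSN
        payload: dict     # what was updated (page, record, field, value)

    def merge_local_logs(*local_logs):
        """Merge local logs (each already sorted by LSN) into one global log."""
        return list(heapq.merge(*local_logs, key=lambda rec: rec.lsn))

Ordering first on the time stamp and then on the system identifier keeps
every LSN unique across systems while preserving global time order.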
Key Idea

      The key idea is that if a record is updated such that the
updates to its fields are commutative (for example, an increment of
an existing value, as opposed to the assignment of a new value), then
the record is neither read nor locked.  The update is sent via the
log to a tracking process, which may reside in any one of the
systems.  It is the tracking process that actually performs the disk
reads and writes involving the affected data.  This process applies
the log records (from all systems) in time sequence before writing
the page containing the affected data to disk.
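
      A minimal sketch of this commutative-update path, again with
hypothetical names (LogRecord, TrackingProcess): the updating transaction
neither reads nor locks the record, it only appends an increment-style log
record; the tracking process applies the merged log in LSN order and is
the only component that would write the affected page to disk.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class LogRecord:
        lsn: tuple        # (global time stamp, system id), unique across systems
        page_id: int
        record_id: int
        field: str
        delta: int        # commutative update: an increment, not an assignment

    class TrackingProcess:
        """Sole reader/writer of the affected pages on disk."""
        def __init__(self):
            # page_id -> record_id -> field -> value (in-memory page image)
            self.pages = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
            self.applied_lsn = (0, 0)

        def apply_merged_log(self, merged_log):
            # Apply increments from all systems in time (LSN) sequence; only
            # after this would the page containing the data go out to disk.
            for rec in sorted(merged_log, key=lambda r: r.lsn):
                self.pages[rec.page_id][rec.record_id][rec.field] += rec.delta
                self.applied_lsn = rec.lsn

    # Debit/credit example: two systems increment the same record's balance
    # without ever reading or locking it.
    log = [LogRecord(lsn=(101, 1), page_id=7, record_id=42, field="balance", delta=-50),
           LogRecord(lsn=(102, 2), page_id=7, record_id=42, field="balance", delta=+50)]
    tracker = TrackingProcess()
    tracker.apply_merged_log(log)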

      For commutative updates, where reads are infrequent, a read may
have to wait for the tracking process to catch up with the current
log position and merge the intervening log records.
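
      A sketch of that catch-up behavior, with hypothetical names: the
reader blocks until the tracking process has applied the merged log up to
the log position that was current when the read was issued.

    import threading

    class CommutativeRecordReader:
        """Serves reads only after the tracker has caught up to a given LSN."""
        def __init__(self):
            self._cv = threading.Condition()
            self._applied_lsn = (0, 0)       # (global time stamp, system id)
            self._records = {}               # record_id -> current field values

        def on_applied(self, lsn, record_id, values):
            # Called by the tracking process as it applies merged log records.
            with self._cv:
                self._records[record_id] = values
                self._applied_lsn = lsn
                self._cv.notify_all()

        def read(self, record_id, current_log_position):
            # Wait until the intervening log records have been merged and applied.
            with self._cv:
                self._cv.wait_for(lambda: self._applied_lsn >= current_log_position)
                return self._records.get(record_id)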

      When the update of a record is such that the updates to its fields
are noncommutative, the record must be locked and read before
the update.  The read operation is a frequent occurrence in this
case.  The key idea here is that a copy of the record should be kept
in the global locking function.  That way, the database process can
get the current state of the record with its lock grant.  Records
would be kept by the lock function for some time (seconds, minutes)
after locks were released.  If the lock function were unable to keep
a copy for space reasons, then the current version would be available
from the tracking process.  Therefore, the record can be read from
one of the following possible sources:  the loc...
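
      A sketch of this noncommutative path, with hypothetical names and a
simple time-based retention policy standing in for the space-constrained
one described above: the global locking function keeps a copy of the
record for a while after the lock is released, so a later lock grant can
carry the current record state with it; if the copy has been dropped, the
reader falls back to the tracking process.

    import time

    class GlobalLockFunction:
        """Caches record copies for a short time after their locks are released."""
        def __init__(self, retention_seconds=60.0):
            self._cache = {}                 # record_id -> (record state, release time)
            self._retention = retention_seconds

        def release_lock(self, record_id, record_state):
            # On lock release, remember the record's current state for a while.
            self._cache[record_id] = (record_state, time.time())

        def grant_lock(self, record_id):
            # Grant the lock; piggyback the cached record copy if still retained.
            entry = self._cache.get(record_id)
            if entry is not None and time.time() - entry[1] <= self._retention:
                return entry[0]
            return None                      # copy dropped: caller asks the tracker

    def read_for_update(lock_fn, read_from_tracker, record_id):
        # Lock and read before a noncommutative update; prefer the copy that
        # arrives with the lock grant, otherwise fetch it from the tracking process.
        state = lock_fn.grant_lock(record_id)
        return state if state is not None else read_from_tracker(record_id)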