Hierarchical Latching Method -- Splitting Locks/Latches to Increase Concurrency

IP.com Disclosure Number: IPCOM000119806D
Original Publication Date: 1991-Mar-01
Included in the Prior Art Database: 2005-Apr-02
Document File: 3 page(s) / 114K

Publishing Venue

IBM

Related People

Boyd, CT: AUTHOR [+4]

Abstract

Increased concurrency of multiprocessing systems can be introduced by creating a hierarchical locking/latching structure that represents the activity on a single lock/latch that has been partitioned. A device is needed to allow the lock/latch to be taken efficiently to perform necessary operations. This article describes an innovative and efficient technique to reduce the latch contention in Tightly Coupled Multi-Processor (TCMP) systems.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.


      TCMP Serialization Problem:

      In a TCMP, multiple CPUs exist on the same system.
Therefore, access to any shared control structure by multiple
CPUs in the same CEC (Central Electronics Complex) must be
serialized.  Otherwise, we could end up with partially updated
records or broken chains, causing an integrity exposure.

      This is a general problem area for TCMP.  If the serialization
is not done carefully, it usually becomes a performance bottleneck
for the system.
        [Figure: Shared data structure protected by a single latch]

      Considering the above figure as an example, the shared data
structure resides in virtual storage and can be accessed by multiple
CPUs; therefore, it needs to be protected.  Typically, a single
latch is used to protect the entire data structure.  A latch is
essentially a flag which is set using Compare and Swap (C/S) to
indicate a busy condition.  Only the CPU that sets the latch has the
authority to access the shared data structure.  If any other CPU
attempts to set the latch while it is busy, i.e., if there is latch
contention, the requestor must be suspended until the latch is
freed, which causes a large performance overhead.
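The latch described above can be sketched in C.  The article's C/S is the System/370 Compare and Swap instruction, approximated here by a C11 atomic compare-exchange; the names and the spin-retry policy are illustrative assumptions (the article suspends a contending requestor rather than spinning):

```c
#include <stdatomic.h>

/* A latch is essentially a flag set with Compare and Swap (C/S):
 * 0 = free, 1 = busy. */
typedef struct {
    atomic_int busy;
} latch_t;

/* Only the CPU whose compare-and-swap succeeds holds the latch and
 * may access the shared data structure. */
static int latch_try_acquire(latch_t *l) {
    int expected = 0;
    return atomic_compare_exchange_strong(&l->busy, &expected, 1);
}

/* On contention the requestor must wait until the holder frees the
 * latch -- the performance overhead the article describes.  Here we
 * simply spin and retry; the article suspends the requestor instead. */
static void latch_acquire(latch_t *l) {
    int expected = 0;
    while (!atomic_compare_exchange_strong(&l->busy, &expected, 1))
        expected = 0;   /* a failed C/S rewrites 'expected'; reset it */
}

static void latch_release(latch_t *l) {
    atomic_store(&l->busy, 0);
}
```

With a single latch, every CPU competes for the same flag regardless of which part of the data structure it needs, which is exactly the bottleneck discussed next.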

      It is evident that if the latch holding time is long, that is,
if the instruction path length running under the latch is large, or
if there are many CPUs, this approach will cause a bottleneck.

      The Hierarchical Latching Method:

      The hierarchical latching method is illustrated on the next
page.  The idea is to break up the single latch (called the main
latch, ML) into multiple latches, each controlling a much smaller
portion of the data structure.  These smaller latches are called
Secondary Latches: SLj, j = 1, 2, ..., N.
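A minimal sketch of the split, assuming the main latch ML is replaced by an array of secondary latches and that a record is mapped to its SLj by hashing its address; the mapping rule is an assumption here, since the figure describing the method is not included in this abbreviated text:

```c
#include <stdatomic.h>
#include <stdint.h>

#define N 8   /* number of secondary latches SL1..SLN (illustrative) */

typedef struct {
    atomic_int busy;   /* 0 = free, 1 = busy */
} latch_t;

/* The single main latch (ML) is split into N secondary latches,
 * each protecting a much smaller portion of the data structure. */
static latch_t sl[N];

/* Map a record to the secondary latch guarding its partition; a hash
 * of the record address is one plausible mapping (an assumption). */
static latch_t *sl_for(const void *record) {
    uintptr_t h = (uintptr_t)record >> 4;   /* drop low alignment bits */
    return &sl[h % N];
}

/* Contention is now confined to CPUs touching the same partition,
 * instead of serializing every access behind one latch. */
static void sl_acquire(latch_t *l) {
    int expected = 0;
    while (!atomic_compare_exchange_strong(&l->busy, &expected, 1))
        expected = 0;   /* a failed C/S rewrites 'expected'; reset it */
}

static void sl_release(latch_t *l) {
    atomic_store(&l->busy, 0);
}
```

Two CPUs working on records in different partitions now take different secondary latches and proceed concurrently; only accesses to the same partition still serialize.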

      In this picture, instead of usin...