Queueing and Serialization Method for Software Pipes on Large Scale Operating Systems

IP.com Disclosure Number: IPCOM000118300D
Original Publication Date: 1996-Dec-01
Included in the Prior Art Database: 2005-Apr-01
Document File: 4 page(s) / 138K

Publishing Venue

IBM

Related People

Bobak, RA: AUTHOR [+4]

Abstract

Disclosed is a method for managing a FIFO queue in a multiprocessing system using a shared latch with obligation passing to minimize serialization and to enable multiple asynchronous requests to delete and add elements.

      This method is used specifically to manage a pipe where there
may be multiple readers and writers contending for access to the pipe
and associated data.  Specifically, there must be space in the pipe
for writers to place data, and data must be there for readers to
read.

      For this discussion, the logical pipe is a FIFO queue with a
depth of n, where n is the number of active elements allowed in the
queue.  Operational states for the logical pipe are:
  1.  EMPTY: no data in the queue
  2.  FULL: all storage allocated for queue in use
  3.  NOT_EMPTY: data in the queue, but queue not full

Writers are processes that add elements to the queue.  Readers are
processes that delete elements from the queue.  Writers and readers
have the following operational states (a code sketch of both state
sets follows this list):
  1.  IDLE: not accessing the queue
  2.  WAIT:
      o  writer waits when queue full (n elements in queue) -
          waiting for reader
      o  reader waits when queue empty - waiting for writer
  3.  ACTIVE: accessing queue
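
      The two state sets above map directly onto simple
enumerations.  The C sketch below is illustrative only; the type
and constant names are assumptions introduced here, not identifiers
from the original disclosure.

   /* Operational states of the logical pipe (FIFO of depth n). */
   enum pipe_state {
       PIPE_EMPTY,       /* no data in the queue                    */
       PIPE_FULL,        /* all n element slots in use              */
       PIPE_NOT_EMPTY    /* data present, but queue not full        */
   };

   /* Operational states of a writer or reader process. */
   enum proc_state {
       PROC_IDLE,        /* not accessing the queue                 */
       PROC_WAIT,        /* writer: queue FULL; reader: queue EMPTY */
       PROC_ACTIVE       /* currently adding or deleting an element */
   };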

The physical pipe comprises the following components (a structural
sketch in C follows this list):
  1.  LIFO ADD FREE queue - free elements
  2.  LIFO ADD BUSY queue - elements associated with data
  3.  LIFO DELETE FREE queue - free elements
  4.  LIFO DELETE BUSY queue - elements associated with data
       for readers
  5.  Four queue headers:  ADD FREE and BUSY, and DELETE FREE
       and BUSY
  6.  n elements on the ADD FREE queue
  7.  n elements on the DELETE FREE queue
  8.  n data elements - pointed to first by ADD BUSY elements,
       then later by DELETE BUSY elements
  9.  Physical pipe states
      a.  EMPTY: no elements on the ADD BUSY and DELETE BUSY
           queues
      b.  FULL: no elements on the ADD FREE queue
      c.  BUSY: elements being moved from ADD BUSY to
           DELETE BUSY queue, i.e., queue reversal lock held
      d.  NOT_EMPTY: elements exist in queue and queue not full
 10.  Shared latch for each queue and each queue element
 11.  Queue reversal lock for serializing movement of elements
       from ADD BUSY to DELETE BUSY queue.
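
      The component list above can be rendered as a small set of C
structures.  The sketch below is a hypothetical layout: the names
are invented for illustration, and POSIX read-write locks and
mutexes stand in for the shared-latch and lock services of the
host operating system.

   #include <pthread.h>

   typedef pthread_rwlock_t latch_t;  /* stand-in for a shared latch    */
   typedef pthread_mutex_t  lock_t;   /* stand-in for the reversal lock */

   struct element {
       struct element *next;   /* LIFO chain pointer                 */
       void           *data;   /* data element, when on a BUSY queue */
       latch_t         latch;  /* shared latch for this element      */
   };

   struct lifo_queue {
       struct element *top;    /* queue header: top of the LIFO chain */
       latch_t         latch;  /* shared latch for this queue         */
   };

   struct physical_pipe {
       struct lifo_queue add_free;      /* n free elements for writers         */
       struct lifo_queue add_busy;      /* elements pointing at written data   */
       struct lifo_queue delete_free;   /* n free elements for the reader side */
       struct lifo_queue delete_busy;   /* data elements staged for readers    */
       lock_t            reversal_lock; /* serializes ADD BUSY -> DELETE BUSY  */
   };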

      The objective of the solution is to minimize serialization
between ADD and DELETE operations.  The multiple queues facilitate
this.

      Based on the following element flow, add and delete requests
can be performed asynchronously and independently of each other,
without any
serialization.  Atomic operations exist for enqueueing and dequeueing
elements on each of the queues.  There is minimal interference in
this activity.  The exception is when there are no elements on the
delete busy queue and there are e...
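
      A minimal C sketch of this element flow, building on the
structures sketched earlier, is shown below.  The function names are
hypothetical.  For brevity the sketch moves the ADD-side elements
themselves onto the DELETE BUSY queue and recycles them to ADD FREE
after consumption, rather than handing the data across to elements
taken from the DELETE FREE queue as the component list describes,
and it omits the wait/post logic used when the pipe is FULL or
EMPTY.  The key step is the queue reversal: the entire ADD BUSY
chain is detached in one operation and reversed under the queue
reversal lock, so readers receive elements oldest-first.

   /* Atomic dequeue from a LIFO queue, serialized by its latch. */
   static struct element *lifo_pop(struct lifo_queue *q) {
       pthread_rwlock_wrlock(&q->latch);
       struct element *e = q->top;
       if (e != NULL)
           q->top = e->next;
       pthread_rwlock_unlock(&q->latch);
       return e;
   }

   /* Atomic enqueue onto a LIFO queue. */
   static void lifo_push(struct lifo_queue *q, struct element *e) {
       pthread_rwlock_wrlock(&q->latch);
       e->next = q->top;
       q->top  = e;
       pthread_rwlock_unlock(&q->latch);
   }

   /* Writer: take a free element, attach the data, post it. */
   int pipe_add(struct physical_pipe *p, void *data) {
       struct element *e = lifo_pop(&p->add_free);
       if (e == NULL)
           return -1;                    /* pipe FULL: writer must wait */
       e->data = data;
       lifo_push(&p->add_busy, e);
       return 0;
   }

   /* Reader: take a posted element; when DELETE BUSY is empty,
      reverse the ADD BUSY queue onto it under the reversal lock. */
   void *pipe_delete(struct physical_pipe *p) {
       struct element *e = lifo_pop(&p->delete_busy);
       if (e == NULL) {
           pthread_mutex_lock(&p->reversal_lock);   /* pipe is BUSY */
           if ((e = lifo_pop(&p->delete_busy)) == NULL) {
               /* Detach the whole ADD BUSY chain at once; elements
                  written during the reversal wait for the next one. */
               pthread_rwlock_wrlock(&p->add_busy.latch);
               struct element *chain = p->add_busy.top;
               p->add_busy.top = NULL;
               pthread_rwlock_unlock(&p->add_busy.latch);

               /* Reverse the chain: it is newest-first, so the
                  reversed list is oldest-first for the readers. */
               struct element *reversed = NULL;
               while (chain != NULL) {
                   struct element *next = chain->next;
                   chain->next = reversed;
                   reversed = chain;
                   chain = next;
               }

               /* DELETE BUSY is empty and only the lock holder adds
                  to it, so the batch can be published in one step. */
               pthread_rwlock_wrlock(&p->delete_busy.latch);
               p->delete_busy.top = reversed;
               pthread_rwlock_unlock(&p->delete_busy.latch);

               e = lifo_pop(&p->delete_busy);
           }
           pthread_mutex_unlock(&p->reversal_lock);
       }
       if (e == NULL)
           return NULL;                  /* pipe EMPTY: reader must wait */
       void *data = e->data;
       lifo_push(&p->add_free, e);       /* recycle the element for writers */
       return data;
   }

      Because writers touch only the ADD-side queues and readers
touch the DELETE-side queues except during a reversal, the two
sides proceed without serializing against each other, which is the
stated objective of the multiple-queue arrangement.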