Fast Algorithm for Queue, Circular Buffer and Stack Operations that minimises Lock Contention in Multi-threaded Applications.

IP.com Disclosure Number: IPCOM000019620D
Original Publication Date: 2003-Sep-23
Included in the Prior Art Database: 2003-Sep-23
Document File: 13 page(s) / 98K

Publishing Venue

IBM

Abstract

This article describes an existing algorithm (fast Queue) and extends it to apply to Circular Buffers and Stacks. It benefits multi-threaded applications by reducing lock contention compared to the obvious "get-Mutex; add/remove from Queue/Buffer/Stack; release-Mutex" approach.


An algorithm is disclosed which extends a pre-existing algorithm so that it applies to Circular Buffers and Stacks. The pre-existing algorithm applies to Queues, and a Java* example is downloadable from the Internet (see Ref. 1).

    Classical implementations of Queues, Circular Buffers and Stacks have a problem in multi-threaded applications. When one thread is pushing data into the Queue/CircularBuffer/Stack and another thread is popping data off it, the traditional solution is to use a Mutex lock, so only one thread can access the Queue/CircularBuffer/Stack at one instant. This is because the same control-variables are used to control the process of "putting" data in as well as "getting" data out. An improvement to this simple approach is shown in Ref 1 (where a Java implementation for a fast Queue is given).

    The "obvious" solution applies a Mutex to the Queue/Buffer/Stack as a whole. Any thread wanting to "put" or "get" must acquire this Mutex, which serialises all actions. This solution is not optimal, and is not discussed further.

    The solution in Ref 1 is a Java implementation of the algorithm described below, but this approach can be extended to apply to Circular Buffers and Stacks. Figs 1.1 through to Fig 4.3 describe in essence the algorithm used in Ref 1. Figs 5.1 to 10.2 describe how to extend this approach to cover Circular Buffers and Stacks, including random-access/delete/insert operations.

    The algorithm given here uses a "rolling-dummy" LinkObject in the Queue/CircularBuffer/Stack structure, which prevents the "put" operations from interfering with the "get" operations. This permits the use of two Mutex locks in the most general case, instead of one. One Mutex serialises access for all the "getting" threads. The other Mutex serialises access for all the "putting" threads. When only one "getter" is active at any time, no Mutex is required to protect "get" operations, and similarly when only one "putter" is active at any one time, no Mutex is required to serialise "put" operations. So the simplest scenario of one putter and one getter reduces to an algorithm where no mutexes are required at all. Even in the most complex cases of many putters and many getters, the requests to acquire/free mutexes are split between two mutexes rather than one (as is the case with the "obvious" solution), so the chance of a get-mutex request succeeding is increased, leading to greater system throughput. "Put" operations can proceed at the same time as "get" operations, further increasing throughput over the traditional "obvious" approach.
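    A minimal Java sketch of the two-Mutex, rolling-dummy scheme described above is given for illustration. The class TwoLockQueue and names such as LinkObject, putLock and getLock are assumptions made for this sketch; Ref 1 should be consulted for the original implementation. Putters touch only the tail pointer and getters touch only the head pointer, which always refers to the current dummy, so the two sides never contend for the same lock.

import java.util.concurrent.locks.ReentrantLock;

public class TwoLockQueue<T> {

    // LinkObject holds the data and a pointer to the next LinkObject.
    private static final class LinkObject<E> {
        E data;
        volatile LinkObject<E> next;   // volatile so a getter sees a putter's newly linked node
        LinkObject(E data) { this.data = data; }
    }

    private LinkObject<T> head;   // always points at the current "rolling dummy"
    private LinkObject<T> tail;   // last LinkObject in the chain

    private final ReentrantLock putLock = new ReentrantLock(); // serialises putters only
    private final ReentrantLock getLock = new ReentrantLock(); // serialises getters only

    public TwoLockQueue() {
        // The queue is constructed with one dummy LinkObject in it.
        head = tail = new LinkObject<>(null);
    }

    public void put(T item) {
        LinkObject<T> node = new LinkObject<>(item);
        putLock.lock();
        try {
            tail.next = node;   // link the new node after the current tail
            tail = node;        // advance tail; head is never touched here
        } finally {
            putLock.unlock();
        }
    }

    public T get() {
        getLock.lock();
        try {
            LinkObject<T> first = head.next;   // first real entry, if any
            if (first == null) {
                return null;                   // only the dummy remains: queue is empty
            }
            T value = first.data;
            first.data = null;                 // the emptied node becomes the new dummy
            head = first;                      // the dummy "rolls" forward; the old dummy is discarded
            return value;
        } finally {
            getLock.unlock();
        }
    }
}

    As noted above, if only one putting thread exists the putLock can be omitted, and likewise the getLock when only one getting thread exists, since each side manipulates only its own end of the structure.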

How it works

A simple FIFO Queue example works like this:

1) Construct the Queue, with one "Dummy LinkObject" in it. This can be thought of as being a "used-up, old" entry in this empty queue.

    LinkObjects hold two pointers in them. "nextPointer" points to the next LinkObject in the Qu...