Algorithm for Managing Multiple First-In, First-Out Queues From a Single Shared Random-Access Memory

IP.com Disclosure Number: IPCOM000036060D
Original Publication Date: 1989-Aug-01
Included in the Prior Art Database: 2005-Jan-28
Document File: 5 page(s) / 56K

Publishing Venue

IBM

Related People

Engbersen, AP: AUTHOR

Abstract

In the following, a technique is described for efficiently managing a plurality of first-in, first-out (FIFO) queues in a single shared random-access memory (RAM). This technique obviates the necessity for garbage collection procedures, i.e., operations to reorganize memory containing scattered records and to gather unused memory cells.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 28% of the total text.


FIFO queues can be implemented in a RAM, with associated control logic to maintain the head and tail pointers of each queue. When multiple FIFO queues are required, the classical approach is to divide the RAM physically or logically and implement each queue in its own partition. The total required memory is then the sum of these partitions, where the size of each partition is determined by the estimated maximum (or average) size of the corresponding queue. A disadvantage of this method is that the collection of queues cannot effectively share the whole RAM. It is, for example, not possible for a queue that temporarily exceeds its allocated space to 'steal' RAM from a queue that is not using its allocation. In general, this wastes RAM, because every partition is sized for its maximum expected length. This implementation is denoted as the fixed-partitioned case.

Examples of queues implemented in shared RAM are known from some operating systems. However, the algorithms involved usually require a 'garbage-collection' operation to reorganize fragmented memory after some period of operation, and such operations are not well suited to hardware implementation.

An algorithm is proposed which allows full sharing of the total available RAM space among a set of queues. An extension to this algorithm is given which guarantees every queue a certain amount of 'private' space. The proposed algorithm does not require garbage collection and therefore lends itself easily to hardware implementation. Key to the algorithm is that the collection of unoccupied cells in the shared RAM is treated as an extra queue: for every element which leaves one of the data queues, this 'empty-queue' grows by one element, and for every element which enters one of the data queues, the empty-queue shrinks by one element.

In the figure, a hardware implementation of the algorithm is shown. The shared data RAM 19 contains the queue elements. The n_part RAM 20 together with the data RAM 19 forms a linked list for every queue. In the n_part RAM 20, the address of the next element in a queue is found at the same address at which the current element of that queue is stored. Thus, whenever an element is taken from a queue, the corresponding n_part entry points to the next element in that queue. A reading of zero indicates that the queue is empty. This also holds for the 'empty-queue', and therefore the n_part RAM 20 is initialized such that the contents of every addr...