Eliminating a bottleneck in resource allocation for multiprocessor systems

IP.com Disclosure Number: IPCOM000015820D
Original Publication Date: 2002-Oct-29
Included in the Prior Art Database: 2003-Jun-21
Document File: 2 page(s) / 38K

Publishing Venue

IBM

Abstract

Problem - In multiprocessor systems, when resources (such as empty records from a free record list, or free blocks on a storage device) have to be shared among many processes, a lock on the list is needed to prevent two or more processes from obtaining the same resource and to keep the count of free resources from being corrupted. The lock serializes access to the list and so prevents such data corruption, but it also slows down every process that has to wait for the lock to be released. This can become a serious bottleneck when many processes compete for the same resource.

    In particular, this problem may arise when processors are added to a multiprocessor system that is already in operation. The extra lock contention may cancel out the benefit of the added processing power, and can even make response times worse than in the original system.
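
    For illustration only, the sketch below shows the conventional arrangement in C with POSIX threads: one shared free list protected by a single global lock. The names used here (record_t, global_free_list, alloc_record, free_record) are hypothetical, chosen for this sketch rather than taken from the disclosure; the point is simply that every allocation and every release must pass through the same lock.

#include <pthread.h>
#include <stddef.h>

/* Hypothetical record type and shared free list; all names are illustrative. */
typedef struct record {
    struct record *next;
} record_t;

static record_t       *global_free_list;   /* head of the shared free list */
static size_t          global_free_count;  /* count of free records        */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every process or thread funnels through this one lock, which is the
   bottleneck described above. */
record_t *alloc_record(void)
{
    record_t *r;

    pthread_mutex_lock(&global_lock);
    r = global_free_list;
    if (r != NULL) {
        global_free_list = r->next;
        global_free_count--;
    }
    pthread_mutex_unlock(&global_lock);
    return r;
}

void free_record(record_t *r)
{
    pthread_mutex_lock(&global_lock);
    r->next = global_free_list;
    global_free_list = r;
    global_free_count++;
    pthread_mutex_unlock(&global_lock);
}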

Solution - This invention describes a method that could reduce interprocess synchronization overhead in such cases.

    The invention can be described simply using the example of a hash table that takes empty records from, and returns them to, a global free record list. The generalization to other applications is straightforward.

    The basic idea for reducing the synchronization overhead is for each process or processor to maintain a small pool of empty records for itself. All hash table operations use the local pool, so there is minimal contention on the global lock. In fact, in non-preemptive systems, no lock is required at all. When the pool size dips below a low threshold, the process fetches a batch of free records from the global free record list. When the pool size passes...
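
    A minimal sketch of the pooled scheme follows, again in C with POSIX threads. The thresholds, batch size, and names (LOW_WATER, REFILL_BATCH, local_pool, alloc_record_pooled) are assumptions made for this sketch, not values taken from the disclosure, and the global list and lock play the same role as in the sketch above. Ordinary allocations come out of a per-thread pool with no locking at all; the global lock is taken only when the pool has to be refilled in a batch.

#include <pthread.h>
#include <stddef.h>

typedef struct record {
    struct record *next;
} record_t;

/* Shared state, playing the same role as in the previous sketch. */
static record_t       *global_free_list;
static size_t          global_free_count;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

#define LOW_WATER    4    /* refill when the local pool drops below this  */
#define REFILL_BATCH 16   /* target size of the local pool after a refill */

/* One pool per thread (GCC-style thread-local storage), so the pool itself
   needs no lock at all. */
static __thread record_t *local_pool;
static __thread size_t    local_count;

/* The only place the global lock is taken: move a batch of records from
   the global free list into the local pool. */
static void refill_local_pool(void)
{
    pthread_mutex_lock(&global_lock);
    while (local_count < REFILL_BATCH && global_free_list != NULL) {
        record_t *r = global_free_list;
        global_free_list = r->next;
        global_free_count--;
        r->next = local_pool;
        local_pool = r;
        local_count++;
    }
    pthread_mutex_unlock(&global_lock);
}

/* Ordinary allocation: served from the local pool without any locking. */
record_t *alloc_record_pooled(void)
{
    record_t *r;

    if (local_count < LOW_WATER)
        refill_local_pool();

    r = local_pool;
    if (r != NULL) {
        local_pool = r->next;
        local_count--;
    }
    return r;
}

    The batch size is the main tuning knob: a larger batch means the global lock is taken less often, at the cost of more free records being held idle in per-thread pools.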