
Stream locking algorithm for synchronizing global configuration switches
Disclosure Number: IPCOM000240948D
Publication Date: 2015-Mar-13
Document File: 6 page(s) / 141K

Publishing Venue

The Prior Art Database


Disclosed is a global resource sharing algorithm that minimizes the number of costly resource state changes in a concurrent environment. It is most suitable for synchronizing global configuration switches, e.g. Java virtual machine or operating system settings that affect all running threads and cannot be easily isolated. In such a scenario, the algorithm provides several benefits over standard synchronization mechanisms.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 50% of the total text.



A multi-threaded computer program operates in a concurrent environment with any number of threads (potentially unbounded), which are created and become active at random time intervals.

Assumptions:
• Each thread requires a specific setup of global/shared system resources (e.g. JVM SSL settings, application/OS settings) to work properly;

• Each global/shared system resource:
◦ must be properly set up prior to use (exactly once) and cleaned up after use (exactly once);

◦ has a finite number of states (values) in which it can be; switching between states requires performing a cleanup and setup cycle.

• Once the right resources are set up, any number of threads that require this system configuration can work concurrently without the need for further synchronization.

A synchronization algorithm that performs the following tasks is necessary:
• Minimizing the number of global resource switches (costly setup and cleanup operations);

• Maximizing thread throughput (threads finishing their jobs per second);

• Minimizing thread waiting (idle) times.

Known solutions:

A - simple classical lock/mutex mechanism.

This mechanism is the simplest way to manage multiple threads that access shared resources.

Only one thread can work at a time and has exclusive access to the given resources.

• Easy to implement.

• Each thread is obligated to set up and clean up the resource, which is a redundant operation from a global point of view;

• Long waiting (idle) times, causing long overall program run time in pessimistic scenarios.
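As a sketch of solution A (the class and method names below are illustrative, not from the disclosure), a single mutex serializes all threads, and each thread pays the setup/cleanup cost even when the next thread needs exactly the same configuration:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of solution A: one mutex guards the shared resource,
// and every thread performs its own setup and cleanup, even when
// consecutive threads need exactly the same configuration.
public class SimpleMutexDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static final AtomicInteger setupCount = new AtomicInteger();

    static void runJob(String requiredState, Runnable job) {
        lock.lock();
        try {
            setupCount.incrementAndGet(); // costly setup, paid by every thread
            job.run();                    // only one thread works at a time
        } finally {
            // costly cleanup would also run here, paid by every thread
            lock.unlock();
        }
    }
}
```

Two jobs requesting the same resource state still trigger two setup cycles, which is the redundancy the disclosed algorithm removes.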

B - Use any other popular algorithms and synchronization mechanisms available (monitors with conditions, semaphores, read-write locks etc.)

• Known, documented solutions;

• All such solutions let only one or a fixed number of threads execute at a time. A custom group lock is created for each resource that can be shared.


Page 02 of 6

Custom group lock operation logic:
• It acquires the lock for a given resource state;

• The call is blocked indefinitely until the lock is acquired or the thread is interrupted;

• When the lock is successfully acquired, the resource is set up by calling a one-time setup handler provided by the user;

• If the same resource is already locked by a different thread, or no resource is currently locked at all, the call returns immediately.

Custom group unlock operation logic:
• It releases the resource, indicating that the calling thread no longer needs it. If the thread was the last one using the resource, it performs resource cleanup by calling the one-time cleanup handler provided by the user.
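The lock and unlock logic above could be sketched in Java as follows (the class name, the `Handler` interface, and the field names are our assumptions, not part of the disclosure): threads requesting the currently active state join immediately, while threads requesting a different state wait until the last user releases it and triggers the cleanup/setup switch.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the custom group lock described above.
public class GroupLock {
    // User-provided one-time handlers (assumed interface, not from the source).
    public interface Handler { void apply(Object state); }

    private final ReentrantLock mutex = new ReentrantLock();
    private final Condition stateFree = mutex.newCondition();
    private Object activeState = null; // currently configured resource state
    private int users = 0;             // threads currently sharing that state
    private final Handler setup, cleanup;

    public GroupLock(Handler setup, Handler cleanup) {
        this.setup = setup;
        this.cleanup = cleanup;
    }

    public void lock(Object state) throws InterruptedException {
        mutex.lock();
        try {
            // Block while a DIFFERENT state is active; join immediately if
            // the same state is active or nothing is locked yet.
            while (activeState != null && !activeState.equals(state)) {
                stateFree.await();
            }
            if (activeState == null) {
                setup.apply(state);   // one-time setup for this state
                activeState = state;
            }
            users++;
        } finally {
            mutex.unlock();
        }
    }

    public void unlock() {
        mutex.lock();
        try {
            if (--users == 0) {
                cleanup.apply(activeState); // one-time cleanup by last user
                activeState = null;
                stateFree.signalAll();      // wake threads waiting for a switch
            }
        } finally {
            mutex.unlock();
        }
    }
}
```

Note that any number of threads requesting the already-active state pass through `lock` without a second setup, which is how the algorithm avoids redundant resource switches.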

• The number of global resource switches (costly setup and cleanup operations) is minimized.

• Thread throughput (threads finishing their jobs per second) is maximized.

• Thread waiting (idle) times are minimized.

• Unlimited number of threads can share a resource without additional cost.

Formal desc...