
Technique for efficient feedback driven synchronization between multiple threads on shared data in Java

IP.com Disclosure Number: IPCOM000239298D
Publication Date: 2014-Oct-28
Document File: 9 page(s) / 75K

Publishing Venue

The IP.com Prior Art Database

Abstract

This publication outlines a technique to wait intelligently for a lock under contention by leveraging instantaneous progress information from the lock owner. It also shows how this approach leads to optimal usage of computer resources by the threads participating in locking, thereby improving the overall efficiency of the software application.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 29% of the total text.



The Java language provides synchronization primitives for protecting shared data from uncoordinated access by multiple threads. These primitives come in two forms: synchronized blocks and synchronized methods. The first form can use any object as the locking medium, while the second form uses the invoker object (also known as the receiver object) as the locking medium. Both forms block any other thread from executing the code protected by the lock, if entered through the same path.
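The two forms can be sketched as follows; this is an illustrative example (the class and method names are hypothetical), not code from the disclosure:

```java
public class Counter {
    private final Object lock = new Object(); // any object can serve as the lock
    private int count = 0;

    // Form 1: synchronized block -- locks on an arbitrary chosen object.
    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }

    // Form 2: synchronized method -- locks on the receiver object ("this").
    public synchronized void incrementWithMethod() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```

Note that the two forms here use different lock objects, so they do not exclude each other; threads are mutually excluded only when they contend on the same locking medium.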

A variety of optimization techniques exist across Java virtual machine implementations to improve overall application performance with respect to synchronized blocks. A few of them are: i) lock elimination, also called lock elision, whereby the lock object is proved, through a compiler technique called escape analysis, to be accessible only to the owning thread in the current context, and the locking code is therefore eliminated; ii) adaptive locking, whereby the locking code makes use of profiling information collected by the virtual machine on the past history of lock contention and takes runtime decisions on whether to spin on the lock (in expectation of an imminent release by the owner) or yield the processor and wait for notification by the system when the lock is released; iii) mixed-mode spin-block, where the waiter thread tries a few spinning iterations in expectation of an early release of the lock and, if that fails, falls back to blocking. While blocking a thread is costly because it involves virtual machine abstractions, operating system routines and hardware support, spinning is inherently costly as well, since it consumes processor power and, if not employed properly, proves to be highly inefficient. A few of these techniques are discussed in this article [*]
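The mixed-mode spin-block strategy (iii) can be sketched in user code as a bounded spin phase followed by a blocking acquire. This is a hypothetical illustration of the idea, not the JVM's internal implementation; SPIN_LIMIT is an assumed tuning constant:

```java
import java.util.concurrent.locks.ReentrantLock;

public class SpinThenBlockLock {
    private static final int SPIN_LIMIT = 100; // assumed tuning constant
    private final ReentrantLock lock = new ReentrantLock();

    public void acquire() {
        // Spin phase: cheap non-blocking retries, hoping the owner
        // releases the lock imminently.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (lock.tryLock()) {
                return;
            }
            Thread.onSpinWait(); // hint to the CPU that this is a busy-wait
        }
        // Block phase: spinning failed, so yield to the scheduler and
        // sleep until the lock becomes available.
        lock.lock();
    }

    public void release() {
        lock.unlock();
    }
}
```

The spin phase pays with processor cycles in the hope of avoiding the much larger cost of a block/unblock round trip through the operating system; the fallback bounds that cost when the owner holds the lock for a long time.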

While all of the above mentioned techniques are useful in their own ways and help improve application performance by optimizing around locks, each of them has limitations, which the sections below illustrate.

In the lock elision technique, the lock has to be proven, through code analysis, to be entered by a single thread at a time. This means only a small subset of contention scenarios falls under the purview of lock elision, because a well designed application will employ synchronization code precisely for true concurrent access by many threads.
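A minimal sketch of the kind of code escape analysis can prove safe: the lock object is local to the method and never escapes to another thread, so the JIT compiler may elide the lock entirely. The class and method names here are hypothetical:

```java
public class ElisionCandidate {
    public int sumLocally(int[] values) {
        // The lock object is created locally and never published, so
        // escape analysis can prove no other thread can ever lock on it.
        Object localLock = new Object();
        int sum = 0;
        synchronized (localLock) { // provably uncontended -> elidable
            for (int v : values) {
                sum += v;
            }
        }
        return sum;
    }
}
```

As the paragraph above notes, such provably single-threaded locking is rare in practice, since synchronization is usually written exactly where concurrent access is expected.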

In adaptive locking, the waiter thread relies on the profiled data collected by the virtual machine in past contention scenarios. However, the code that accesses the shared data in a synchronized manner can consume an arbitrary amount of time each time it executes. Apart from static factors such as the number of instructions in the block, the execution time of the synchronized code depends on the number of threads currently executing, the instantaneous load on the processor, input parameter...