Efficient queue locking during syncpoint of a set of message operations.
Original Publication Date: 2002-Jul-14
Included in the Prior Art Database: 2003-Jun-21
High-performance server applications that process persistent messages sent to a queue tend to batch multiple persistent operations into a single unit of work, reducing the number of log forces required to guarantee the updates. Typically each operation involves getting a request message and putting a reply message. The request messages are usually got from a single server queue, while the reply messages are sent to one of many destinations. When the unit of work is committed, other applications must typically see all of the messages in the correct chronological order.

The technique disclosed herein is an efficient locking policy for syncpointing in a messaging environment that uses coarse-grained queue locking. On a queue manager with a coarse-grained queue locking policy, a single lock per queue may be used to serialize access to the messages within that queue. At syncpoint, all of the queues involved in the syncpoint are locked concurrently before any of the messages are made visible; this prevents other applications from seeing the messages out of sequence. The set of locks involved in a transaction is acquired in a well-known order to avoid the possibility of deadlock.

The technique disclosed herein takes advantage of the fact that the locks can be released in any order. A common implementation is to process all of the message operations in chronological order and then release all of the locks.
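The commit-time locking policy above can be sketched with a simplified in-memory model. This is an illustrative sketch only: the `MsgQueue`, `Op`, and `UnitOfWork` names are assumptions, not part of the disclosure. At commit, the locks of every queue touched by the unit of work are acquired in a fixed, well-known order (here, by queue name) to avoid deadlock; the buffered messages are then made visible in chronological order; finally the locks are released, in no particular order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantLock;

// Coarse-grained queue: a single lock serializes access to all messages
// on the queue; "visible" holds the messages other applications can see.
class MsgQueue {
    final String name;
    final ReentrantLock lock = new ReentrantLock();
    final List<String> visible = new ArrayList<>();
    MsgQueue(String name) { this.name = name; }
}

// One deferred message operation recorded inside the unit of work.
class Op {
    final MsgQueue queue;
    final String message;
    Op(MsgQueue queue, String message) { this.queue = queue; this.message = message; }
}

class UnitOfWork {
    // Operations in chronological order; nothing is visible until commit.
    private final List<Op> ops = new ArrayList<>();

    void put(MsgQueue q, String msg) { ops.add(new Op(q, msg)); }

    void commit() {
        // Collect each involved queue once, keyed by name: acquiring the
        // locks in this well-known order avoids deadlock with other committers.
        SortedMap<String, MsgQueue> involved = new TreeMap<>();
        for (Op op : ops) involved.put(op.queue.name, op.queue);
        for (MsgQueue q : involved.values()) q.lock.lock();
        try {
            // With every lock held, make the messages visible in
            // chronological order, so no reader sees them out of sequence.
            for (Op op : ops) op.queue.visible.add(op.message);
        } finally {
            // The locks can be released in any order once the messages
            // are visible; the iteration order here is not significant.
            for (MsgQueue q : involved.values()) q.lock.unlock();
        }
        ops.clear();
    }
}
```

A typical unit of work in this model gets a request from the single server queue and puts replies to several destination queues, then commits once, making all of the messages visible together.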