
Algorithm to reduce lock contention and provide generic cache interfaces to allow caching with minimal lock contention

IP.com Disclosure Number: IPCOM000014169D
Original Publication Date: 2002-Nov-09
Included in the Prior Art Database: 2003-Jun-19
Document File: 2 page(s) / 43K

Publishing Venue

IBM

Abstract

This disclosure presents a scheme for managing serialization to a shared cache of objects. The scheme keeps lock contention on the cache data structures extremely low, even under heavy load, while still providing the strong consistency guarantees that most multi-processor applications need when manipulating objects in a cache. Additionally, it provides features for efficiently grouping cached objects into subsets and for maintaining alternate indexes into the cache, also with little or no lock contention.


Essentially, the cache is managed with an LRU queue of objects and two hash tables. The main hash table is used to locate items; the pending hash table holds items that have been created in the cache but are still undergoing initialization. Each row of the main hash table has a standard lock, obtainable in shared (read) or exclusive (write) mode, that protects concurrent access, and each row of the pending hash table similarly has its own lock. The LRU queue is also protected by a lock, obtained in the same read or write modes as the hash table locks but also manipulated in an asynchronous fashion.
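
As a rough illustration, the layout described above might be declared as in the following C sketch, assuming POSIX read-write locks; all names (cache_t, hash_row_t, HASH_ROWS, and so on) are hypothetical and not taken from the disclosure.

    #include <pthread.h>

    #define HASH_ROWS 256                     /* rows per table; size is assumed */

    typedef struct cache_item cache_item_t;   /* per-item structure, sketched below */

    typedef struct hash_row {
        pthread_rwlock_t  lock;               /* per-row shared/exclusive lock */
        cache_item_t     *chain;              /* items hashing to this row */
    } hash_row_t;

    typedef struct cache {
        hash_row_t        main_rows[HASH_ROWS];    /* main table: locates items */
        hash_row_t        pending_rows[HASH_ROWS]; /* pending table: items initializing */
        pthread_rwlock_t  lru_lock;                /* protects the LRU queue */
        cache_item_t     *lru_head;                /* least recently used end */
        cache_item_t     *lru_tail;                /* most recently used end */
    } cache_t;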

Each object (also referred to as an item) in the cache is protected by a lock and carries a reference count. The reference count tracks how many program threads and data structures refer to the object; it determines when the object can be reused to represent another entity in the cache or physically deleted from it. Each object contains its key as well as a pointer to a pending key, which is the key for the pending hash table. When an item is undergoing reuse, the original logical entity the item represents is marked for cleanup while the new logical entity is represented by the pending key. The item resides in both hash tables while reuse is in progress; once the original entity is cleaned up, the item is removed from the pending hash table and inserted into the main hash table using the pending key as the key (the item's key is set to the pending key).
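
Continuing the sketch above, the per-item structure might look as follows; the field names are again illustrative assumptions, not the original implementation.

    #include <pthread.h>
    #include <stdatomic.h>

    struct cache_item {
        pthread_mutex_t        lock;          /* per-item lock */
        _Atomic unsigned long  refcount;      /* referring threads and structures */
        void                  *key;           /* key in the main hash table */
        void                  *pending_key;   /* key in the pending hash table,
                                                 set while the item is being reused */
        struct cache_item     *hash_next;     /* chain link, main-table row */
        struct cache_item     *pending_next;  /* chain link, pending-table row */
        struct cache_item     *lru_prev;      /* LRU queue links */
        struct cache_item     *lru_next;
    };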

The reference count contained in the object data structure has two reserved bits: LOCKED, which means the object is locked for reuse, and PEEK, which means the object is locked for "consideration for reuse". Each object also has a state field describing its current state and a cleanup_waiters field counting the threads waiting for cleanup to occur.
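
One plausible encoding, purely an assumption since the disclosure gives neither bit positions nor the primitives used, reserves the top two bits of the count for the flags; a compare-and-swap loop can then refuse to hand out a new reference while either flag is set.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define REF_LOCKED (1UL << (sizeof(unsigned long) * 8 - 1))  /* locked for reuse */
    #define REF_PEEK   (1UL << (sizeof(unsigned long) * 8 - 2))  /* considered for reuse */
    #define REF_COUNT(r) ((r) & ~(REF_LOCKED | REF_PEEK))        /* numeric count only */

    enum item_state { ITEM_PENDING, ITEM_VALID, ITEM_REUSING, ITEM_DEAD }; /* names assumed */

    /* The state and cleanup_waiters fields would be further members of
       struct cache_item, e.g.: enum item_state state; unsigned cleanup_waiters; */

    /* Take a reference unless the item is locked for (possible) reuse. */
    static bool item_try_ref(_Atomic unsigned long *refcount)
    {
        unsigned long old = atomic_load(refcount);
        do {
            if (old & (REF_LOCKED | REF_PEEK))
                return false;                 /* reuse in progress or pending */
        } while (!atomic_compare_exchange_weak(refcount, &old, old + 1));
        return true;
    }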

When more than one of the locks described above is needed, they are obtained in the following order (a sketch follows the list):
1. item lock
2. hash row lock
3. pending hash row lock
4. LRU queue lock
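
Using the structures sketched earlier, the ordering discipline might be expressed as below; acquiring whatever subset of the locks is needed in this same fixed order is what prevents deadlock.

    /* Acquire all four locks in the mandated order (sketch; names assumed). */
    static void lock_all_for_reuse(cache_t *c, cache_item_t *it,
                                   hash_row_t *row, hash_row_t *pending_row)
    {
        pthread_mutex_lock(&it->lock);              /* 1. item lock */
        pthread_rwlock_wrlock(&row->lock);          /* 2. hash row lock */
        pthread_rwlock_wrlock(&pending_row->lock);  /* 3. pending hash row lock */
        pthread_rwlock_wrlock(&c->lru_lock);        /* 4. LRU queue lock */
    }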

A straightforward solution simply uses a single lock and a single hash table to protect access to the cache; the lock is obtained in exclusive mode when updating th...