Method for improving cache utilization during potentially slow update operations
Disclosure Number: IPCOM000125126D
Original Publication Date: 2005-May-19
Included in the Prior Art Database: 2005-May-19
Document File: 3 page(s) / 128K

The utilization of a cache can be negatively impacted when updates to cache entries take an indeterminate amount of time. Cache utilization can be improved by setting a state value in the cache entry to indicate that an update is pending and then releasing the resources used to serialize access to the entry being updated. Other threads or processes in a multi-threaded or multi-tasking environment can then continue to reference cache entries that would otherwise have been blocked by the same resources that serialize updates and accesses to the entry being updated.


Method for improving cache utilization during potentially slow update operations

The problem addressed here is the performance of a cache with multiple users (a multi-threaded or multiprocessor environment), which can be limited by the mechanisms intended to protect cache consistency. To maximize performance by allowing high levels of concurrency while maintaining consistency of the contained data and of the data structure that implements the cache, locks must be taken on changeable data elements before any operation that can potentially change the cache data. However, if the update operation can take a long time, access to the cache data is restricted and performance of the cache suffers. This is especially true when the lock required to update an element also protects other entries in the cache; i.e., a lock on a cache line that contains multiple cache entries.
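The contention described above can be sketched as follows. This is an illustrative baseline, not code from the disclosure: a single lock stands in for the lock on a cache line, so a slow update of one entry blocks readers of every other entry guarded by the same lock.

```python
import threading
import time

class NaiveCache:
    def __init__(self):
        self._lock = threading.Lock()   # one lock for the whole cache line
        self._entries = {}

    def update(self, key, fetch):
        # The lock is held for the full duration of the possibly slow fetch.
        with self._lock:
            self._entries[key] = fetch()

    def get(self, key):
        # A reader of *any* key waits until an in-flight update completes.
        with self._lock:
            return self._entries.get(key)

cache = NaiveCache()
cache.update("a", lambda: 1)
cache.update("b", lambda: 2)

def slow_fetch():
    time.sleep(0.2)                     # stands in for a remote access
    return 99

updater = threading.Thread(target=cache.update, args=("a", slow_fetch))
updater.start()
time.sleep(0.05)                        # let the updater take the lock
start = time.monotonic()
b = cache.get("b")                      # unrelated entry, yet it blocks
blocked_for = time.monotonic() - start
updater.join()
```

Here the read of entry "b" is delayed for roughly the full duration of the slow update of "a", even though the two entries are unrelated.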

The solution presented here is to mark the element as being updated and drop all locks on the cache when updating a cache element via an operation that may take a relatively long time (e.g., one that accesses remote resources). Any subsequent operation on the same cache element will not attempt to update the element while this flag is set, since the flag indicates that the update is still in progress (this ASSUMES that there is only one mechanism for updating cache data). When the new data for the cache element has been retrieved, the locks for the cache data are obtained and the data is updated. This may slightly increase the size of the cache (if no existing data field in the cache element can be used for the bit flag), but it should improve cache performance and utilization when update operations may be slow.
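A minimal sketch of this scheme, with illustrative names (the pending flag is kept in a separate set rather than a bit in the entry, purely for brevity): the updater sets the flag under the lock, drops the lock for the slow fetch, and reacquires it only to install the result.

```python
import threading
import time

class PendingUpdateCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}          # key -> cached value
        self._pending = set()    # keys with an update in flight

    def update(self, key, fetch):
        with self._lock:
            if key in self._pending:
                return           # an update is already running; it would
                                 # produce the same value, so do nothing
            self._pending.add(key)
        value = fetch()          # the slow part runs with NO lock held
        with self._lock:
            self._data[key] = value
            self._pending.discard(key)

    def get(self, key):
        # Returns (value, pending). What to do when pending is True is
        # application specific; here the possibly stale value is returned.
        with self._lock:
            return self._data.get(key), key in self._pending

cache = PendingUpdateCache()
cache.update("a", lambda: 1)
cache.update("b", lambda: 2)

def slow_fetch():
    time.sleep(0.2)              # stands in for a remote access
    return 99

updater = threading.Thread(target=cache.update, args=("a", slow_fetch))
updater.start()
time.sleep(0.05)                 # the slow update of "a" is now in flight
start = time.monotonic()
b, b_pending = cache.get("b")    # not blocked by the update of "a"
waited = time.monotonic() - start
a, a_pending = cache.get("a")    # stale value, flagged as pending
updater.join()
a_after, _ = cache.get("a")      # updated value once the fetch completes
```

Unlike the fully locked variant, the read of entry "b" returns immediately while the update of "a" is still running, and the read of "a" itself reports the pending state so the caller can decide how to react.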

The method described here assumes the following properties of the cache:

1. There is only one method of updating the elements of a cache. For example, if the data in cache entry A is being updated to ensure that it is accurate, then there is only one function, UPDATE(), that is used to generate the data for the cache entry. This allows the assumption that if an update of A is currently being executed, there is no need to perform a second update before the first has finished, since both would generate the same data value.

2. The update of a cache element can take an indeterminate amount of time. (During the update, the action that must be taken by an accessor of the cache element is application specific: it may be acceptable to use potentially stale cached data, or the accessor may be required to pend awaiting the updated data, or to return an error if the data is not available in an acceptable time frame.)

3. The lock(s) necessary to update a cache element affect access to other elements in the cache.
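The three application-specific accessor behaviors named in the second assumption can be sketched as follows. The cache interface and all names here are hypothetical; a stub cache whose `get(key)` returns the current value together with the entry's update-pending flag stands in for the real structure, and a condition variable signals update completion.

```python
import threading
import time

class StubCache:
    """Minimal stand-in for the real cache."""
    def __init__(self):
        self.data = {}
        self.pending = set()
    def get(self, key):
        return self.data.get(key), key in self.pending

class Accessor:
    def __init__(self, cache, updated):
        self.cache = cache
        self.updated = updated   # Condition notified when an update lands

    def read_stale_ok(self, key):
        value, _ = self.cache.get(key)
        return value                         # accept possibly stale data

    def read_wait(self, key, timeout):
        deadline = time.monotonic() + timeout
        with self.updated:
            value, pending = self.cache.get(key)
            while pending:                   # pend awaiting the new data
                remaining = deadline - time.monotonic()
                if remaining <= 0 or not self.updated.wait(remaining):
                    raise TimeoutError("update did not finish in time")
                value, pending = self.cache.get(key)
        return value

    def read_or_error(self, key):
        value, pending = self.cache.get(key)
        if pending:
            raise RuntimeError("entry is being updated")
        return value

# Usage: entry "a" holds stale value 1 while an update to 2 is in flight.
updated = threading.Condition()
cache = StubCache()
cache.data["a"] = 1
cache.pending.add("a")
accessor = Accessor(cache, updated)

stale = accessor.read_stale_ok("a")          # policy 1: stale data is fine

try:                                         # policy 3: fail fast
    accessor.read_or_error("a")
    errored = False
except RuntimeError:
    errored = True

def finish_update():                         # the updater completes later
    time.sleep(0.1)
    with updated:
        cache.data["a"] = 2
        cache.pending.discard("a")
        updated.notify_all()

threading.Thread(target=finish_update).start()
fresh = accessor.read_wait("a", timeout=1.0) # policy 2: pend for the data
```

Which policy is appropriate depends on the application: serving stale data maximizes availability, pending trades latency for freshness, and failing fast pushes the decision to the caller.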

Caches are very useful for improving performance, but periodically the cache data must be refreshed to ensure that the data in...