Browse Prior Art Database

Cache Replacement Algorithm With Fine-Grained Locking

IP.com Disclosure Number: IPCOM000201622D
Publication Date: 2010-Nov-16
Document File: 3 page(s) / 22K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is an invention for a cache replacement algorithm which allows highly parallel access and good hit rate characteristics. The method incorporates both a concurrent hash map and a least-frequently-used algorithm to achieve this.

Cache Replacement Algorithm With Fine-Grained Locking

A large variety of cache replacement algorithms exist which offer excellent hit rate characteristics for various types of workloads. Consideration of the cost of managing the replacement algorithm is usually secondary to hit rate considerations.

In today's multiprocessor systems, this trade-off needs to change. With multiple concurrently active threads, a cache needs a replacement algorithm that is not only inexpensive to manage but also supports highly concurrent access. Otherwise, locking and thread contention can become a performance bottleneck for the cache. This is particularly true for caches which are tuned well: a well-tuned cache may have a hit rate over 95%. In this case the performance of the replacement algorithm becomes critical, and particularly its performance under multithreaded access.

There are a large number of cache replacement algorithms based on discarding the least-recently-used (LRU) item in the cache. Variations of these algorithms may implement precise least-recently-used tracking, such as a list of cache entries ordered by time of use. Other variations attempt to approximate an LRU algorithm. One example, well known in the current art, approximates an LRU algorithm through access counting. Each time the system accesses an item, it increments that item's access count. Cache replacement is based on a pointer which cycles through the cache, examining one item at a time. The system discards an item if its access count is 0 (or below some threshold), and clears or reduces by some factor the access counts of items above the threshold.
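As an illustration only (this sketch is not part of the disclosure, and the class and method names are hypothetical), the access-counting approximation described above might look like the following single-threaded Java sketch:

    // Hypothetical sketch of a clock-style, access-counting LRU approximation.
    // Single-threaded for clarity; the disclosure's point is that such scans
    // need concurrency support to avoid contention.
    class ClockCache {
        static class Entry {
            final Object key;
            Object value;
            int accessCount;   // incremented on each access
            Entry(Object key, Object value) { this.key = key; this.value = value; }
        }

        private final Entry[] slots;   // one slot per cached item
        private int hand = -1;         // replacement pointer cycling through the table

        ClockCache(int capacity) { slots = new Entry[capacity]; }

        // Called on every cache hit.
        void recordAccess(Entry e) { e.accessCount++; }

        // Advance the pointer until a discardable item is found: reuse the first
        // slot whose count has reached zero, halving the counts of entries passed over.
        int findVictim() {
            while (true) {
                hand = (hand + 1) % slots.length;
                Entry e = slots[hand];
                if (e == null || e.accessCount == 0) return hand;  // discard here
                e.accessCount /= 2;   // reduce by some factor and keep scanning
            }
        }
    }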

Note that replacement algorithms are only one part of the cache. A real cache must also implement the key-to-value lookup code. For in-memory caches, this is usually implemented through a standard Map data structure, such as a hash table or a lookup tree. For an in-memory cache implemented in a specific programming language, developers can use a concurrent hash map class that provides good single-threaded performance and allows safe multithreaded concurrent access.
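In Java, for example, java.util.concurrent.ConcurrentHashMap is one such class. A minimal usage illustration (hypothetical, not from the disclosure):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class LookupDemo {
        public static void main(String[] args) {
            // A concurrent map allows safe access from many threads
            // without a single global lock.
            ConcurrentMap<String, String> lookup = new ConcurrentHashMap<>();
            lookup.put("key1", "value1");              // safe from any thread
            System.out.println(lookup.get("key1"));    // reads do not block writers
        }
    }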

The invention concerns a cache replacement algorithm which allows highly parallel access while maintaining good hit rate characteristics.

The core data structure is the access table. When the system creates the cache, it allocates this table with a size equal to the maximum number of entries allowed in the cache. Thus, a cache with a size limit of 2,000 entries has an access table of 2,000 entries. Each entry in the table is a pointer to a cache entry information structure, which holds the cache key, the value, the access count, and any other metadata required (such as lifetime information).
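A minimal Java sketch of these structures follows; the names CacheEntry, AccessTable, and expiresAtMillis are assumptions for illustration, not taken from the disclosure:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Holds one cached item's key, value, access count, and other metadata.
    class CacheEntry<K, V> {
        final K key;
        volatile V value;
        final AtomicInteger accessCount = new AtomicInteger();  // bumped on each hit
        volatile long expiresAtMillis;  // hypothetical "lifetime information" field

        CacheEntry(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }

    // Fixed-size access table: one slot per entry allowed in the cache.
    class AccessTable<K, V> {
        private final AtomicReferenceArray<CacheEntry<K, V>> slots;

        AccessTable(int maxEntries) {
            // A cache limited to 2,000 entries gets a 2,000-slot table.
            slots = new AtomicReferenceArray<>(maxEntries);
        }

        CacheEntry<K, V> get(int i) {
            return slots.get(i);
        }

        // Atomically claim an empty slot for a new entry.
        boolean claim(int i, CacheEntry<K, V> e) {
            return slots.compareAndSet(i, null, e);
        }
    }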

The other data structure used is the key-to-value (k-v) lookup table. Any existing k-v lookup table can be used. The key requirement is that the value placed in the lookup table is not the cache value itself, but rather the cache entry information structure.
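A hedged sketch of how this requirement might look in code, reusing the hypothetical CacheEntry type from the sketch above:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class CacheFront {
        // Values in the lookup table are entry structures,
        // never the cached values themselves.
        private final ConcurrentMap<String, CacheEntry<String, byte[]>> lookup =
                new ConcurrentHashMap<>();

        byte[] get(String key) {
            CacheEntry<String, byte[]> e = lookup.get(key);
            if (e == null) {
                return null;                      // cache miss
            }
            e.accessCount.incrementAndGet();      // cache hit: record the access
            return e.value;
        }
    }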

On a re...