
Least Frequently Used (LFU) Cache Replacement Policy With O(1) Algorithmic Complexity

IP.com Disclosure Number: IPCOM000199282D
Publication Date: 2010-Aug-28
Document File: 144 page(s) / 2M

Publishing Venue

The IP.com Prior Art Database

Abstract

A high performance computer memory cache class is introduced. The cache is fully associative and implements the Least Frequently Used (LFU) cache element replacement policy. The LFU policy may be used in conjunction with an implementation of the Least Recently Used (LRU) policy, or it may be used standalone. Implementations of the LFU cache replacement policy are typically O(log(n)) in algorithmic complexity for insert and retrieve operations; for this reason, such implementations of LFU may be avoided in situations where high performance is needed. The methods presented here may be used in such situations.

First, an implementation of LFU is presented that uses a doubly-linked list of frequency sets, where each abstract set class is instantiated as a tree. This implementation is O(log(n)) in complexity, but allows for fast cache operations (insert and retrieve) due to the use of a tree structure.

Second, an implementation of LFU is presented that uses a doubly-linked list of frequency sets, where each abstract set class is instantiated as a bit map. This implementation is O(1) in complexity for retrieve/get operations. It is, however, O(n) in complexity for an insert operation that performs a replacement, where n is the number of elements in the least frequently used frequency set (the LFU set). Such an insert involves a worst-case O(n) search of the bit map that represents the LFU set of cache entry indices (the LFU set contains one or more cache entry indices); the cache index that is found selects the cache element victim for eviction from the cache.

Third, an implementation of LFU is presented that is O(1) in worst-case algorithmic complexity for all basic operations (insert and retrieve) once the cache reaches the "cache full" state. This approach adds a doubly-linked list of nodes, each containing a cache index, called the "frequency list". The frequency list is separate from the cache and from the list of frequency sets. Each frequency set gains a pointer data member that points to the start of the sub-list of the frequency list corresponding to that frequency set. During cache operation in the "cache full" state, these pointers track the sub-lists of the master frequency list, so that when a cache entry is accessed, the frequency list node representing that entry can be relocated within the master frequency list without searching the current frequency sub-list to reach its end, which is the beginning of the next more-frequent sub-list (the insertion point).
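The third, O(1) approach can be illustrated with a minimal sketch. Note that the disclosure's own design keeps a separate master frequency list with per-set sub-list pointers; the sketch below uses a simpler but behaviorally equivalent arrangement in which each frequency node directly owns the ordered set of keys accessed that many times, so that gets and inserts are O(1). All class and method names here are illustrative and are not taken from the disclosure's source code (Appendix 1).

```python
from collections import OrderedDict

class LFUCache:
    """O(1) LFU sketch: a key->value map plus a doubly-linked list of
    frequency nodes in ascending frequency order; each node owns the
    set of keys that have been accessed exactly node.freq times."""

    class _FreqNode:
        __slots__ = ("freq", "keys", "prev", "next")
        def __init__(self, freq):
            self.freq = freq
            self.keys = OrderedDict()   # insertion order breaks eviction ties
            self.prev = self.next = None

    def __init__(self, capacity):
        self.capacity = capacity
        self.value = {}                 # key -> cached value
        self.node_of = {}               # key -> owning frequency node
        self.head = self._FreqNode(0)   # sentinel; real nodes follow it

    def _link_after(self, node, new):
        new.prev, new.next = node, node.next
        if node.next:
            node.next.prev = new
        node.next = new

    def _unlink(self, node):
        node.prev.next = node.next
        if node.next:
            node.next.prev = node.prev

    def _bump(self, key):
        """Move key to the freq+1 node in O(1), no sub-list search."""
        cur = self.node_of[key]
        nxt = cur.next
        if nxt is None or nxt.freq != cur.freq + 1:
            nxt = self._FreqNode(cur.freq + 1)
            self._link_after(cur, nxt)
        del cur.keys[key]
        nxt.keys[key] = None
        self.node_of[key] = nxt
        if not cur.keys:                # drop an emptied frequency node
            self._unlink(cur)

    def get(self, key):
        if key not in self.value:
            return None
        self._bump(key)
        return self.value[key]

    def put(self, key, val):
        if self.capacity <= 0:
            return
        if key in self.value:
            self.value[key] = val
            self._bump(key)
            return
        if len(self.value) >= self.capacity:
            lfu = self.head.next        # least-frequent node is first
            victim, _ = lfu.keys.popitem(last=False)
            del self.value[victim]
            del self.node_of[victim]
            if not lfu.keys:
                self._unlink(lfu)
        one = self.head.next            # new keys enter at frequency 1
        if one is None or one.freq != 1:
            one = self._FreqNode(1)
            self._link_after(self.head, one)
        one.keys[key] = None
        self.node_of[key] = one
        self.value[key] = val
```

Because the frequency nodes are kept in ascending order and an accessed key only ever moves to the adjacent freq+1 node, no operation scans a frequency set, which mirrors the role of the disclosure's sub-list pointers.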

This text was extracted from a Microsoft Word document.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 1% of the total text.

Copyright 2010 Glenn Hofford. All rights reserved.

Contents:

1. Disclaimer

2. Brief Descriptions of the Drawings

3. Description of the Preferred Embodiments

4. Figures 1 through 10

5. Appendix 1: Source Code

6. Appendix 2: Test Summary and Test Results

7. Appendix 3: Memory Requirements for LFU when Set is instantiated as a Bitmap

Disclaimer:

The methods and techniques described herein are hereby placed in the public domain. All computer source code presented herein is covered by the following:

Copyright 2010 Glenn Hofford. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

   1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

   2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY GLENN HOFFORD "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL GLENN HOFFORD OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

BRIEF DESCRIPTIONS OF THE DRAWINGS

Figure 1 is a schematic diagram of a cache, a map, and a doubly-linked list of frequency sets;

Figure 2 is a schematic diagram of a doubly-linked list of frequency sets where a frequency set is instantiated as a binary tree;

Figure 3 is a schematic diagram of a doubly-linked list of frequency sets where a frequency set is instantiated as a bit map;

Figure 4 is a schematic dataflow view of the cache operation;

Figure 5 is a schematic dataflow view of the cache operation while the cache is not full;

Figure 6 is a schematic dataflow view of the cache operation while the cache is full;

Figure 7 is a schematic dataflow view of the insert operation while the cache is not full;

Figure 8 is a schematic dataflow view of the insert operation while the cache is full;

Figure 9 is a schematic dataflow view of the retrieve operation;

Figure 10 is a schematic diagram of a cache, a frequency list, and a single set of a list of frequency sets.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

A method for a high performance computer memory cache is introduced. The method in the first place implements a cache entry replacement policy commonly referred to as “Least Frequently Used” (LFU)...