
Cache Data Update to MRU

IP.com Disclosure Number: IPCOM000038783D
Original Publication Date: 1987-Mar-01
Included in the Prior Art Database: 2005-Feb-01
Document File: 1 page(s) / 11K

Publishing Venue

IBM

Related People

Iskiyan, J: AUTHOR [+3]

Abstract

With the standard Least Recently Used (LRU) algorithm in a cache store, a directory entry is made Most Recently Used (MRU) every time the record associated with that entry is accessed. Time can be saved if the entry is promoted to MRU only after several accesses rather than after each one. A counter is incremented each time the record is accessed; when the counter reaches a set count, the directory entry is placed at the MRU position. The set count can be determined by the user, for instance by trying different values and checking whether cache misses occurred because data was aged out at the LRU end and deleted before it could be promoted to MRU.


Cache Data Update to MRU

With the standard Least Recently Used (LRU) algorithm in a cache store, a directory entry is made Most Recently Used (MRU) every time the record associated with that entry is accessed. Time can be saved if the entry is promoted to MRU only after several accesses rather than after each one. A counter is incremented each time the record is accessed; when the counter reaches a set count, the directory entry is placed at the MRU position. The set count can be determined by the user, for instance by trying different values and checking whether cache misses occurred because data was aged out at the LRU end and deleted before it could be promoted to MRU. The value of the set count could also be varied according to the type of workload being processed to ensure cache hits on data requests.
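The scheme above can be sketched as follows. This is a minimal illustration, not code from the disclosure: the class and parameter names (`DeferredMRUCache`, `set_count`) are invented for the example, and an ordered dictionary stands in for the cache directory, with the front of the ordering as the LRU end and the back as the MRU end.

```python
from collections import OrderedDict

class DeferredMRUCache:
    """LRU cache whose entries are promoted to MRU only after a set
    count of accesses, rather than on every access."""

    def __init__(self, capacity: int, set_count: int):
        self.capacity = capacity
        self.set_count = set_count      # accesses required before MRU promotion
        self.entries = OrderedDict()    # key -> [value, access_counter]; front = LRU

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None                      # cache miss
        entry[1] += 1                        # count this access
        if entry[1] >= self.set_count:       # counter reached the set count:
            self.entries.move_to_end(key)    # promote the entry to MRU
            entry[1] = 0                     # restart the count
        return entry[0]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)            # updated entry becomes MRU
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)         # evict the LRU entry
        self.entries[key] = [value, 0]
```

Tuning `set_count` as the text describes amounts to running a workload against different values and watching for misses on keys that were read, but evicted before reaching the set count and being promoted.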
