Working Set Development Through Dual Layered Caching

IP.com Disclosure Number: IPCOM000045751D
Original Publication Date: 1983-Apr-01
Included in the Prior Art Database: 2005-Feb-07
Document File: 5 page(s) / 44K

Publishing Venue

IBM

Related People

Dixon, JD: AUTHOR [+4]

Abstract

There is a general notion of desiring cache read hit ratios to be above 60 percent. In general data base caching, this desired result is achieved when the amount of cache is on the order of 1/10 of 1 percent of the size of the associated data base. In the case of caching paging data sets, the amount of cache required for a 60 percent hit ratio is much higher -- on the order of 20 percent of the size of the paging data set. Often, when the amount of cache provided is less than this minimum amount, even a slight reduction in cache size relative to data set size produces a marked reduction in the hit ratio.

While the problem of efficient cache management is always important, it is particularly acute when cache size is less than the target budget figures given above. Given current packaging considerations, the ability to provide adequate amounts of cache is more severely limited in the paging data set case, which has the 20 percent target budget figure. Thus, special management of the cache is of greatest interest in the case of supporting paging data sets.

The mechanism described herein provides for a special grouping of items maintained in the cache. This grouping makes it possible to perform additions to, and removals from, the cache in units of multiple blocks at one time. Processing multiple blocks is in sharp contrast to normal LRU (least recently used) management of a cache. In the normal LRU scheme, items are processed on a single-item basis: each user request for data, no matter how many bytes long, is manipulated and processed as though it were a single item.

Normal LRU action dictates that on a read miss for a block, that block is allocated into the cache and placed in the MRU (most recently used) position. To make room for this added block, the single block presently at the end of the LRU list (the LRU position) is deleted from the cache.
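
A minimal sketch of this conventional single-block LRU policy follows (Python, with illustrative names; the disclosure itself gives no implementation):

from collections import OrderedDict

class SingleBlockLRUCache:
    """Conventional LRU cache: every request is handled as one item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # ordered from LRU (front) to MRU (back)

    def read(self, block_id):
        if block_id in self.blocks:
            # Read hit: promote the block to the MRU position.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        # Read miss: stage the block from disk and place it at MRU.
        self.blocks[block_id] = self._read_from_disk(block_id)
        if len(self.blocks) > self.capacity:
            # Delete the single block at the LRU end to make room.
            self.blocks.popitem(last=False)
        return self.blocks[block_id]

    def _read_from_disk(self, block_id):
        return f"<data for block {block_id}>"  # stand-in for the backing store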

The notion of sequentiality is needed in order to clarify the distinction between single and multiple block management. By definition, all of the individual data items in a given block are required to be in sequential locations on the parent disk data set. By way of contrast, the data items in one block need not be sequential relative to the items of an associated block. Thus, data items are sequential within a block, but not sequential across different blocks. An important feature of the present invention is its ability to deal with non-sequential blocks.
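
One way to picture this constraint (an illustrative representation, not drawn from the disclosure) is to describe a block by a starting disk address and a length, so that its items are contiguous while the starting addresses of different blocks remain unrelated:

from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    start_address: int  # disk address of the first data item in the block
    length: int         # number of contiguous data items in the block

    def item_addresses(self):
        # Data items within a block occupy sequential disk locations.
        return range(self.start_address, self.start_address + self.length)

# Blocks associated with one another need not be adjacent on disk:
group = [Block(4096, 16), Block(90112, 16), Block(1024, 16)]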

Returning to the block method of processing, management policy still involves initial consideration of a single block consisting of data items put forth by the user. The new feature is that the algorithm acts to associate additional blocks of data with the single block identified by the user. This collection of multiple blocks is allocated in the cache on a read miss for the identified block. To make room...
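
A sketch of this multi-block policy follows (Python; the associate function and all other names are hypothetical, since this abbreviated text does not say how the additional blocks are chosen). The eviction step shown is likewise an assumption, as the text is cut off before describing how room is made:

from collections import OrderedDict

class GroupedLRUCache:
    """Sketch: a read miss stages a whole group of associated blocks,
    and eviction may remove several blocks to make room."""

    def __init__(self, capacity, associate):
        self.capacity = capacity
        self.associate = associate   # hypothetical: block_id -> associated block ids
        self.blocks = OrderedDict()  # ordered from LRU (front) to MRU (back)

    def read(self, block_id):
        if block_id in self.blocks:
            # Read hit: promote only the requested block.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        # Read miss: allocate the identified block together with its
        # associated blocks as one multi-block operation.
        for bid in (block_id, *self.associate(block_id)):
            if bid not in self.blocks:
                self.blocks[bid] = self._read_from_disk(bid)
            self.blocks.move_to_end(bid)
        # Assumed eviction rule: delete from the LRU end, possibly
        # several blocks, until the cache fits its capacity again.
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)
        return self.blocks[block_id]

    def _read_from_disk(self, bid):
        return f"<data for block {bid}>"  # stand-in for the backing store

An associate function might, for example, return blocks that last entered the cache together with the requested block; the disclosure's actual grouping rule is not recoverable from this abbreviated text.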