
Cache Hash Overflow Table for Computers Using Cache Memory

IP.com Disclosure Number: IPCOM000035915D
Original Publication Date: 1989-Aug-01
Included in the Prior Art Database: 2005-Jan-28
Document File: 8 page(s) / 321K

Publishing Venue

IBM

Related People

Oliver, JK: AUTHOR

Abstract

A technique is described whereby the management of cache hash tables, as used in computer cache memory functions, is simplified so as to reduce cache algorithm execution time and minimize the exposure to microcode errors. The technique extends the cache hash table into an overflow table, which reduces the complexity of handling hash table entries.



Typically, cache memory attachments are utilized in computer operations to enhance the performance of data transfer functions. The attachments generally implement cache algorithms that try to maintain the most recently or most often requested data in the cache. When the operating system requests data, the request can often be satisfied entirely from data already in the cache, thereby reducing the access time required to retrieve it. The algorithms used in cataloging and maintaining the data in the cache are often complex and can affect the overall performance of the system. In addition, the complexity of these algorithms can lead to an increase in microcoding errors. The concept described herein provides a method of simplifying the management of the cache hash operation so as to improve the performance of the computer.

Cache management algorithms generally utilize a table which contains an entry for each data block in the cache storage unit. When a data item is to be read, it is necessary to determine whether or not the item is already resident in the cache. This could be done by searching every entry in the table. However, the search is commonly sped up by hashing, which generally involves computing an index from the data block address. This index points to the position in the table where the entry for the particular data block would normally be stored if the data block were in cache. The entry is then checked to determine whether or not that particular data block is resident in the cache.
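
The following is a minimal C sketch of the directory lookup just described, assuming a fixed-size table indexed directly by a hash of the block address. The names (cache_entry, is_resident, HASH_TABLE_SIZE) and the simple modulo hash are illustrative assumptions, not details taken from the disclosure.

    #include <stdint.h>
    #include <stdbool.h>

    #define HASH_TABLE_SIZE 1021u            /* illustrative table size */

    /* One directory entry per data block held in the cache. */
    struct cache_entry {
        bool     valid;                      /* entry describes a resident block */
        uint32_t block_address;              /* address of the cached data block */
        uint32_t cache_slot;                 /* where the block lives in cache   */
    };

    static struct cache_entry hash_table[HASH_TABLE_SIZE];

    /* Check the single table position the hash index points to. */
    static bool is_resident(uint32_t block_address, uint32_t *slot)
    {
        uint32_t index = block_address % HASH_TABLE_SIZE;    /* hash index */
        const struct cache_entry *e = &hash_table[index];

        if (e->valid && e->block_address == block_address) {
            *slot = e->cache_slot;
            return true;                     /* block is already in the cache */
        }
        return false;                        /* miss, or another block owns this entry */
    }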

Because there are more data blocks than hash codes, though, a number of data blocks can generate the same hash code. Thus, another data block with the same hash index may be using the table entry pointed to by the hash index. In this case, the entry would have to be checked to determine whether it is "chained" to any other entries belonging to data blocks with the same hash code. These entries would also need to be checked to determine whether any of them correspond to the particular data block involved.
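
One common way to express such chaining is to give each entry a link to the next entry that produced the same hash code. The sketch below assumes that layout; the names (chained_entry, find_in_chain, NO_CHAIN) and the linked-index scheme are illustrative, not taken from the disclosure.

    #include <stdint.h>
    #include <stdbool.h>

    #define NO_CHAIN 0xFFFFu                 /* sentinel: end of a hash chain */

    /* Directory entry extended with a link to the next entry that
     * produced the same hash code (illustrative layout).            */
    struct chained_entry {
        bool     valid;
        uint32_t block_address;
        uint32_t cache_slot;
        uint16_t next;                       /* index of next entry in the chain */
    };

    extern struct chained_entry directory[];

    /* Walk the chain of entries sharing one hash index. */
    static bool find_in_chain(uint16_t first, uint32_t block_address,
                              uint32_t *slot)
    {
        for (uint16_t i = first; i != NO_CHAIN; i = directory[i].next) {
            if (directory[i].valid &&
                directory[i].block_address == block_address) {
                *slot = directory[i].cache_slot;
                return true;                 /* the requested block is resident */
            }
        }
        return false;                        /* no entry in the chain matches   */
    }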

The hashing is generally performed by dividing the relative block address (RBA) of the data block by a prime number. The remainder is then used to index the hash table, and the quotient is used for comparison purposes to determine whether the entry in the hash table is for that particular RBA. It is possible for two or more RBAs with the same hash index (i.e., when divided by the prime number, each RBA yields a different quotient but the same remainder) to be re...
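
A short sketch of this division hash is given below; the prime 1021 and the function name hash_rba are assumptions chosen for illustration only.

    #include <stdint.h>

    #define HASH_PRIME 1021u                 /* illustrative prime divisor */

    /* Split a relative block address (RBA) into a hash index and a tag.
     * The remainder selects the hash table position; the quotient is
     * stored in the entry and compared on lookup to confirm which RBA
     * the entry describes.                                             */
    static void hash_rba(uint32_t rba, uint32_t *index, uint32_t *tag)
    {
        *index = rba % HASH_PRIME;           /* remainder: position in the table */
        *tag   = rba / HASH_PRIME;           /* quotient: kept for comparison    */
    }

With this divisor, for example, RBA 5000 hashes to index 916 with quotient 4, while RBA 6021 also hashes to index 916 but with quotient 5; the stored quotients are what distinguish two RBAs that collide on the same index.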