Prefetch Cache for Data Search With Limited Multiple-Porting

IP.com Disclosure Number: IPCOM000044442D
Original Publication Date: 1984-Dec-01
Included in the Prior Art Database: 2005-Feb-05
Document File: 3 page(s) / 82K

Publishing Venue

IBM

Related People

Affinito, FJ: AUTHOR [+2]

Abstract

Cache prefetching schemes have the advantage that they overlap CPU processing with memory-to-cache data transfers. In order to maximize the value of this overlap, processor access to the cache must be allowed while memory-to-cache transfer is taking place. A "two-port cache" is a cache that allows such multiple access. Two-port caches are generally much more expensive than single-port caches. This disclosure describes an invention that permits a certain class of prefetching without the full expense of a general two-port cache. A significant amount of database activity involves searching (e.g., for the correct record to process). A specific portion of each record is examined for a match against a given criterion. If the criterion is not met, the next record is located from information in the current record.



This invention allows prefetching of the next record while the current record is being examined to determine whether it meets the given criterion.

The described architecture consists of:

(1) a cache, a portion of which has two ports, permitting simultaneous CPU access and memory-to-cache transfer;
(2) a database design rule ensuring that the information needed by the search routine falls in the two-ported region of the cache;
(3) a new instruction that triggers prefetching of the next database record; and
(4) a software protocol that uses (1)-(3) to increase the speed of the search.

A specific implementation is outlined below; however, the architecture can be extended to other implementations in a straightforward manner. All of the following assumptions are based on current design points or trends. Assume that the machine has a 64K cache with a 128-byte line and 4 associative entries per congruence class, giving 128 congruence classes. All addresses are 32 bits, with the leftmost bit numbered 0. Bits 19-24 of the address determin...
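The stated geometry is internally consistent: 64K of cache divided into 128-byte lines gives 512 lines, and 4-way associativity groups them into 128 congruence classes. A minimal sketch in Python, using only the numbers given in the disclosure (the exact address-bit assignment is truncated in the source, so the class-index computation below is an illustrative assumption, not the disclosed one):

```python
# Assumed cache geometry from the disclosure:
# 64K cache, 128-byte lines, 4 associative entries per congruence class.
CACHE_SIZE = 64 * 1024
LINE_SIZE = 128
ASSOCIATIVITY = 4

# 64K / (128 bytes * 4 ways) = 128 congruence classes, as the text states.
NUM_CLASSES = CACHE_SIZE // (LINE_SIZE * ASSOCIATIVITY)

def congruence_class(addr: int) -> int:
    """Illustrative class selection: the line-number bits of the address,
    taken modulo the number of classes. (The disclosure's exact bit-field
    choice is cut off in the extracted text.)"""
    return (addr // LINE_SIZE) % NUM_CLASSES
```

Under this assumption, addresses one line apart map to consecutive classes, and addresses 16K apart (128 lines) collide in the same class.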
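The search-and-prefetch protocol at the heart of the disclosure can be sketched as a loop over a chain of linked records: while the CPU examines the key field of the current record (which the design rule keeps in the two-ported cache region), the new instruction starts the memory-to-cache transfer of the next record named by the current record's link. The following Python simulation is a hypothetical illustration; `Record`, `search`, and `prefetch` are names introduced here, not the disclosure's, and `prefetch` is a no-op stand-in for the proposed hardware instruction:

```python
class Record:
    """A database record with a search key and a link to the next record."""
    def __init__(self, key, next_addr):
        self.key = key          # field examined against the search criterion
        self.next_addr = next_addr  # address of the next record, or None

def prefetch(memory, addr):
    # Stand-in for the proposed instruction: in hardware this would start a
    # non-blocking memory-to-cache line transfer for the record at `addr`,
    # overlapping with the CPU's examination of the current record.
    pass

def search(memory, start_addr, criterion):
    """Follow the record chain until criterion(key) is satisfied.

    Returns the address of the matching record, or None if the chain ends.
    """
    addr = start_addr
    while addr is not None:
        record = memory[addr]               # current record, cache-resident
        prefetch(memory, record.next_addr)  # overlap: fetch next record now
        if criterion(record.key):           # CPU reads via two-ported region
            return addr
        addr = record.next_addr
    return None

# Usage: three records chained 100 -> 200 -> 300.
memory = {
    100: Record(key=5, next_addr=200),
    200: Record(key=9, next_addr=300),
    300: Record(key=7, next_addr=None),
}
```

The point of the protocol is that by the time the criterion test on the current record fails, the transfer of the next record is already under way, so the loop rarely stalls waiting on memory.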