Simple Prefetching Scheme for Memory Hierarchy

IP.com Disclosure Number: IPCOM000046122D
Original Publication Date: 1983-Jun-01
Included in the Prior Art Database: 2005-Feb-07
Document File: 3 page(s) / 29K

Publishing Venue

IBM

Related People

So, K: AUTHOR

Abstract

This prefetching scheme uses simple rules for determining, with a high degree of accuracy, whether or not to prefetch, from a higher-level memory M2, lines of data that lie in proximity to other lines already fetched from M2 into a lower-level memory M1 for referencing by a processor. The prefetching decisions are made in accordance with the settings of certain control bits in each of the lines concerned, these bits comprising a reference bit and two prefetching bits.

This prefetching scheme uses simple rules for determining, with a high degree of accuracy, whether or not to prefetch, from a higher-level memory M2, lines of data that lie in proximity to other lines already fetched from M2 into a lower-level memory M1 for referencing by a processor. The prefetching decisions are made in accordance with the settings of certain control bits in each of the lines concerned, these bits comprising a reference bit and two prefetching bits. Whenever the first of two consecutive lines in M2 has been fetched to M1, and the second of the two lines has not yet been referenced by the CPU, the prefetching bits of the second line are set to a tentative pattern which represents a "guess" that this second line is likely to be missed in M1 within a relatively short time, while the first line is still in M1. Subsequent events will either confirm or disprove this guess, and the prefetching bits of the second line are accordingly set to a pattern which indicates whether or not the second line should thereafter be prefetched whenever the first line is fetched.
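
As a concrete illustration of the per-line state just described, the control bits might be modeled as follows. This is a minimal sketch; the field names ref, p, and q and the Python representation are assumptions made for illustration, not details given in the disclosure.

    from dataclasses import dataclass

    @dataclass
    class LineState:
        # Control bits kept for each line (names assumed for illustration):
        # ref -- reference bit, set once the CPU has referenced the line in M1.
        # p   -- first prefetching bit; set to 1 as the tentative "guess" that
        #        the line will soon be missed in M1 while its predecessor is
        #        still resident there.
        # q   -- second prefetching bit; while q remains 0 the guess is
        #        unconfirmed, and the final (p, q) pattern records whether the
        #        line should thereafter be prefetched with its predecessor.
        ref: int = 0
        p: int = 0
        q: int = 0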

The concept is described in the context of a memory hierarchy in which M1 is a small cache accessible to the CPU and M2 is a larger main memory. Each memory cell or line in M1 is assumed to have the same size as the traffic unit between M1 and M2. The prefetching scheme is employed to reduce the miss penalty which tends to occur when the CPU references a section of data in M2 that is larger than a traffic unit, i.e., one consisting of several consecutive lines. A "miss" is the event which occurs when the CPU needs a line that is not already present in M1 and therefore must be fetched from M2 to M1. Since these misses cost cycles of processor time, it is desirable to minimize them. The prefetching scheme utilizes a simple algorithm to establish a prefetching relation among consecutive lines in M2 so that these lines can ultimately be fetched successively from M2 to M1 without causing misses.
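
The cost being attacked can be made concrete with a short simulation: a CPU that walks a section spanning several consecutive lines takes one miss per line when nothing is prefetched. The fully associative M1 with FIFO replacement used below, and all names in it, are assumptions chosen for brevity rather than details of the disclosure.

    from collections import OrderedDict

    def count_misses(references, m1_capacity=4):
        # Count M1 misses for a sequence of line addresses, using an assumed
        # fully associative M1 with FIFO replacement.
        m1 = OrderedDict()               # resident lines, oldest first
        misses = 0
        for line in references:
            if line not in m1:
                misses += 1              # the line must be fetched from M2
                if len(m1) >= m1_capacity:
                    m1.popitem(last=False)   # evict the oldest resident line
                m1[line] = True
        return misses

    # A section spanning four consecutive lines: every reference misses.
    print(count_misses([100, 101, 102, 103]))   # -> 4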

The rules of the prefetching algorithm are explained with reference to three consecutive lines L0, L, and L1 in the higher-level memory M2, of which the first line L0 is already in M1 and has been referenced by the CPU. In the absence of any contrary information, the algorithm initially assumes that soon after L0 is referenced, the next line L will be missed.

Each line in M2 contains two prefetching bits p and q, each having an initial value of 0. When L0 is fetched to M1, the bit p of the next line L is set to 1, thereby indicating a "guess" by the algorithm that L will be missed soon. As long as bit q of L retains a 0 value, however, this guess is unconfirmed. As subsequent events show the guess to be right or wrong, the pq setting will be changed to 1...
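
The abbreviated text breaks off before giving the final bit patterns, so the sketch below implements only what is stated up to this point: setting the p bit of the next consecutive line on a fetch, confirming a pending guess when the predicted miss actually occurs, and prefetching a line whose guess has been confirmed. The pq = 11 encoding for a confirmed guess, the FIFO replacement policy, and every name here are assumptions made for illustration.

    from collections import OrderedDict, defaultdict

    M1_CAPACITY = 4
    bits = defaultdict(lambda: [0, 0])   # line -> [p, q], both initially 0
    m1 = OrderedDict()                   # resident lines in M1, oldest first

    def install(line):
        # Place a line in M1, evicting the oldest resident if M1 is full
        # (FIFO replacement is an assumed policy, chosen for brevity).
        if len(m1) >= M1_CAPACITY:
            m1.popitem(last=False)
        m1[line] = True

    def fetch(line):
        # Fetch `line` from M2 into M1 and set the p bit of the next
        # consecutive line -- the tentative guess that it will soon be missed.
        install(line)
        bits[line + 1][0] = 1
        # If that guess was confirmed on an earlier pass (pq = 11 is an
        # assumed encoding; the original text is truncated here), prefetch
        # the next line along with this one.
        if bits[line + 1] == [1, 1] and line + 1 not in m1:
            install(line + 1)

    def reference(line):
        # The CPU references `line`; a miss while the guess is still pending
        # (pq = 10) proves the guess right, so the q bit is set.
        if line not in m1:
            if bits[line] == [1, 0]:
                bits[line][1] = 1
            fetch(line)

    # First pass: L0 and L miss back to back, confirming the guess for L.
    reference(100); reference(101)
    # Later pass: fetching L0 now prefetches L, so L no longer misses.
    m1.clear(); reference(100)
    print(101 in m1)                     # -> True: L was prefetched with L0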