Modified Replacement Algorithm for Reducing Cross-Interrogate Penalties

IP.com Disclosure Number: IPCOM000047298D
Original Publication Date: 1983-Oct-01
Included in the Prior Art Database: 2005-Feb-07
Document File: 4 page(s) / 42K

Publishing Venue

IBM

Related People

Liu, L: AUTHOR

Abstract

A modification of the Least-Recently-Used (LRU) replacement algorithm is proposed for use in multiprocessor systems to reduce the performance penalties which tend to occur as a result of cross-interrogate (XI) activities between processors which share data lines. The proposed modification causes XI-sensitive data lines to have altered replacement priorities so that such lines are replaced sooner than they would have been replaced under a conventional LRU algorithm, thereby reducing the probability that these lines will remain in the cache long enough to become involved in cross-interrogations. Data lines which are not in the XI-sensitive category are assigned replacement priorities in the normal manner when they are referenced or fetched.


In a pure or unmodified LRU replacement algorithm, the data lines contained in a column or "congruence set" of a cache are assigned priorities in accordance with the order in which the lines have been most recently used. If a congruence set is fully occupied when a new data line is fetched into the cache, the least recently used line is replaced by the new line, which is then given the status of most recently used line.
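The pure LRU behavior just described can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the class and method names are hypothetical, and line tags stand in for actual cache line addresses.

```python
from collections import OrderedDict

class CongruenceSet:
    """Pure LRU replacement within a single congruence set (illustrative sketch).

    The OrderedDict keeps lines in use order: the first entry is the least
    recently used line, the last entry is the most recently used line.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def reference(self, tag):
        """Reference or fetch a line; return the tag of any replaced line."""
        victim = None
        if tag in self.lines:
            # A referenced line becomes the most recently used line,
            # regardless of its prior status.
            self.lines.move_to_end(tag)
        else:
            if len(self.lines) >= self.capacity:
                # Set is full: replace the least recently used line.
                victim, _ = self.lines.popitem(last=False)
            # The new line enters with most-recently-used status.
            self.lines[tag] = True
        return victim
```

For example, with a two-line set, referencing lines A and B, then A again, and then fetching a new line C causes B (now the least recently used line) to be replaced.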

Similarly, if a line is referenced in the cache, regardless of its status at the time of the reference, it becomes the most recently used line. In the present description it will be assumed that priority numbers are assigned to the lines in a cache according to the order of most recent use, so that the most recently used line has priority number 1 and the least recently used line has the highest priority number in the series. If a line other than line number 1 is referenced or fetched into the cache, it becomes line number 1 (assuming a pure LRU replacement algorithm is being used), and the numbers of all lines which formerly had lower numbers than that line are incremented by 1. At any given instant, the cache line having the lowest priority number is the one most likely to remain in the cache for the longest interval, and the line having the highest priority number is the one least likely to remain in the cache for an appreciable period.

When the pure LRU algorithm just described is employed in a multiprocessor environment where some data lines are shared among several processors, it can cause the system to function inefficiently in situations where a data line needed by one processor is residing in the cache of another processor. Before the first processor can use that line, it must consult the second processor to ascertain whether the line was modified in the remote cache. This is called a "cross-interrogate" (XI) action. An XI causes a castout if the referenced line has been modified. Such XIs and castouts can seriously penalize system performance. The condition is aggravated by the pure LRU replacement algorithm because it invariably tends to keep the most frequently used lines in the local caches...
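The modification summarized in the abstract can be sketched in terms of the priority-number scheme above. The excerpt states only that XI-sensitive lines receive altered replacement priorities so that they are replaced sooner; the specific insertion depth used below (halfway down the priority stack) is an illustrative assumption, not a detail given in the disclosure.

```python
class ModifiedLRUSet:
    """Sketch of the modified replacement algorithm for one congruence set.

    self.stack holds line tags in priority order: index 0 corresponds to
    priority number 1 (most recently used); the last index is the line
    with the highest priority number, i.e., the replacement candidate.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []

    def reference(self, tag, xi_sensitive=False):
        """Reference or fetch a line; return the tag of any replaced line."""
        victim = None
        if tag in self.stack:
            self.stack.remove(tag)
        elif len(self.stack) >= self.capacity:
            # Replace the line with the highest priority number.
            victim = self.stack.pop()
        if xi_sensitive:
            # Altered priority: an XI-sensitive line is placed partway down
            # the stack instead of at priority 1, so it ages out sooner than
            # it would under pure LRU. (The depth choice is an assumption.)
            depth = min(len(self.stack), self.capacity // 2)
            self.stack.insert(depth, tag)
        else:
            # Non-XI-sensitive lines are assigned priority 1 in the
            # normal manner when referenced or fetched.
            self.stack.insert(0, tag)
        return victim
```

With a four-line set, fetching non-XI lines W, X, and Y, then an XI-sensitive line S, places S behind X and Y in replacement order despite being referenced later; subsequent fetches therefore evict W and then S, while X and Y survive.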