Prefetching Pacing Buffer to Reduce Cache Misses

IP.com Disclosure Number: IPCOM000043872D
Original Publication Date: 1984-Oct-01
Included in the Prior Art Database: 2005-Feb-05
Document File: 2 page(s) / 41K

Publishing Venue

IBM

Related People

Pomerene, JH: AUTHOR [+4]

Abstract

To reduce the occurrence of memory subsystem lockouts caused by excessive prefetching of instruction lines, a bit in each prefetch target line is used as a tag that is set if a prefetch of that line takes place in the same interval in which a cache miss occurs. Tagged lines are then placed in a prefetching pacing buffer, where they are held until the next cache miss occurs, thereby ensuring that the cache miss is serviced by the memory subsystem before any further prefetching is done. The purpose of prefetching instruction lines into a cache may be partially defeated if a large number of the prefetched lines are not actually used.

Prefetching Pacing Buffer to Reduce Cache Misses

To reduce the occurrence of memory subsystem lockouts caused by excessive prefetching of instruction lines, a bit in each prefetch target line is used as a tag that is set if a prefetch of that line takes place in the same interval in which a cache miss occurs. Tagged lines are then placed in a prefetching pacing buffer, where they are held until the next cache miss occurs, thereby ensuring that the cache miss is serviced by the memory subsystem before any further prefetching is done. The purpose of prefetching instruction lines into a cache may be partially defeated if a large number of the prefetched lines are not actually used. Excessive prefetching causes lockout of memory subsystem resources (memory bus and cache bus bandwidth), so that other instruction lines needed in response to real cache misses cannot be fetched during the lockout intervals.

The solution proposed herein is to set a tag bit in the prefetch target line whenever an intervening miss is taken by the cache. Each tagged line then enters the pacing buffer, from which no tagged line can leave until the next miss occurs. Prefetch target lines fetched during intervals in which no cache miss occurs remain untagged and enter the cache directly, as shown in the figure. The pacing buffer prevents excessive use of memory subsystem resources for the prefetching of instruction lines (not all of which are needed) at the expense of...
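
To make the mechanism concrete, the following is a minimal software sketch in C of the tagging and pacing behaviour described above. The names and structure (pacing_buffer_t, on_cache_miss, on_prefetch_line, prefetch_allowed, the buffer capacity PACING_SLOTS, and the overflow policy) are illustrative assumptions; the disclosure describes hardware in the instruction-prefetch path, not a software interface.

/* A minimal, self-contained C model of the pacing-buffer behaviour
 * described above.  All names and the overflow policy are assumptions
 * made for illustration only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PACING_SLOTS 4                 /* assumed capacity; not given in the text */

typedef struct {
    uint64_t addr[PACING_SLOTS];       /* tagged prefetch target lines on hold */
    size_t   held;
} pacing_buffer_t;

static pacing_buffer_t pacer;
static bool miss_this_interval;        /* a cache miss occurred in the current
                                          interval, so new prefetch lines are tagged */

/* Stand-in for installing an instruction line into the cache. */
static void install_in_cache(uint64_t line_addr)
{
    printf("cache <- line 0x%llx\n", (unsigned long long)line_addr);
}

/* Demand path: a real cache miss is being serviced by the memory subsystem. */
void on_cache_miss(void)
{
    /* The miss is serviced first; lines parked in the pacing buffer
     * may now move into the cache. */
    for (size_t i = 0; i < pacer.held; i++)
        install_in_cache(pacer.addr[i]);
    pacer.held = 0;

    miss_this_interval = true;         /* tag prefetches made in this interval */
}

/* Prefetch path: a prefetched instruction line has arrived. */
void on_prefetch_line(uint64_t line_addr)
{
    if (!miss_this_interval)
        install_in_cache(line_addr);              /* untagged: enters cache directly */
    else if (pacer.held < PACING_SLOTS)
        pacer.addr[pacer.held++] = line_addr;     /* tagged: held until the next miss */
    else
        install_in_cache(line_addr);              /* assumed overflow policy */
}

/* The prefetch engine pauses while tagged lines are held, leaving the
 * memory and cache buses free for demand misses. */
bool prefetch_allowed(void)
{
    return pacer.held == 0;
}

/* Interval boundary (e.g., the prefetch window ends without a miss). */
void on_interval_end(void)
{
    miss_this_interval = false;
}

int main(void)
{
    on_prefetch_line(0x1000);          /* no miss this interval: straight to cache */
    on_interval_end();

    on_cache_miss();                   /* a real miss starts a new interval */
    on_prefetch_line(0x2000);          /* tagged and parked */
    on_prefetch_line(0x3000);          /* tagged and parked */
    printf("prefetch allowed: %d\n", prefetch_allowed());   /* 0: paced */

    on_cache_miss();                   /* the next miss releases the parked lines */
    printf("prefetch allowed: %d\n", prefetch_allowed());   /* 1 */
    return 0;
}

In this sketch, holding the tagged lines and pausing further prefetching while they are parked is one way to model the pacing effect: the demand miss gets first claim on the memory subsystem before prefetch traffic resumes.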