Selective Prefetching Based on Miss Latency

IP.com Disclosure Number: IPCOM000106289D
Original Publication Date: 1993-Oct-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 2 page(s) / 63K

Publishing Venue

IBM

Related People

Kaeli, DR: AUTHOR [+2]

Abstract

Disclosed is a mechanism that recognizes only cache misses with certain characteristics as candidates for presentation to an existing prefetching mechanism, so that prefetches can be initiated in advance of ensuing fetches.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 56% of the total text.

Selective Prefetching Based on Miss Latency

      A cache is a small, fast memory that holds recently accessed
instructions or data.  Prefetching cache lines is one method to mask
the memory access latency in computer systems.  One hardware
mechanism used to prefetch cache lines is called a Cache Miss History
Table (CMHT) [*].  A CMHT saves history of past misses and uses this
information to predict future misses.
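The miss-history idea can be sketched as follows. This is an illustrative approximation, not the IBM design: the class name, table structure, and prediction rule (remember which miss followed which) are all assumptions made for the example.

```python
class CacheMissHistoryTable:
    """Hypothetical sketch of a CMHT: record past misses and use the
    recorded miss-to-miss transitions to predict the next miss."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}        # miss address -> address of the miss that followed it
        self.last_miss = None  # most recently recorded miss address

    def on_miss(self, address):
        """Record a miss; return a predicted prefetch address, if any."""
        prediction = self.table.get(address)
        # Link the previous miss to this one so a repeating pattern
        # yields a prediction the next time it occurs.
        if self.last_miss is not None and (
            self.last_miss in self.table or len(self.table) < self.capacity
        ):
            self.table[self.last_miss] = address
        self.last_miss = address
        return prediction
```

For example, after the miss sequence 0x100, 0x200, a later miss to 0x100 would return 0x200 as a prefetch candidate.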

      Entry replacement strategies can significantly impact the
effectiveness of any hardware prefetching mechanism.  The goal of any
replacement strategy is to avoid replacing "useful" entries with
"useless" entries.  A useful entry is one that causes the initiation
of a prefetch that reduces execution time.  Additionally, usefulness
is gauged by re-use of the entry.  A useless entry is one that causes
the initiation of a prefetch that does not affect or even increases
execution time.

      For memory architectures such as shared memory, distributed
memory, or multilevel cache structures, cache misses have different
latencies.  The cost of fetching a cache line is the latency of a
miss to the cache line multiplied by the frequency of missing.
Ideally, high cost cache lines should be prefetched.  If there ar...
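The cost rule above (miss latency multiplied by miss frequency) can be sketched as a simple filter. The function name, the dict-based inputs, and the threshold are assumptions made for this example, not part of the original mechanism.

```python
def high_cost_lines(miss_latency, miss_count, threshold):
    """Select cache lines whose fetch cost (latency x miss frequency)
    meets the threshold; these are the preferred prefetch candidates.

    miss_latency and miss_count are dicts keyed by cache-line address."""
    costs = {line: miss_latency[line] * miss_count[line] for line in miss_latency}
    return {line for line, cost in costs.items() if cost >= threshold}
```

For instance, a line whose misses cost 20 cycles and occur 10 times has cost 200, while a remote-memory line at 200 cycles missed 3 times has cost 600; with a threshold between the two, only the remote line is selected.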