Tying Data Prefetching to Branch Prediction

IP.com Disclosure Number: IPCOM000106163D
Original Publication Date: 1993-Oct-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 2 page(s) / 58K

Publishing Venue

IBM

Related People

Kaeli, DR: AUTHOR [+2]

Abstract

Disclosed is a methodology for tying data-cache prefetching to branch prediction.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 56% of the total text.


      A data cache is a fast, local memory used to hold frequently
accessed data.  Misses in a data cache introduce large latencies that
reduce processor pipeline efficiency substantially.  By prefetching
data into the data cache before it is actually needed, the frequency
of misses is greatly reduced.  Issuing these prefetches in
conjunction with branch prediction can reduce the complexity of the
prefetch mechanism while improving its accuracy.
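As a minimal illustration of why a timely prefetch helps, the following sketch (all names, sizes, and the access trace are hypothetical, not from the disclosure) models a tiny fully-associative data cache with LRU replacement and counts demand misses on the same trace with and without prefetching each address one access ahead:

```python
# Toy model: a tiny fully-associative data cache with LRU replacement.
# A prefetch issued before the demand access turns a miss into a hit.
from collections import OrderedDict

LINE_SIZE = 64  # bytes per cache line (assumed)

class TinyDataCache:
    def __init__(self, num_lines=4):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # line address -> True, in LRU order

    def _touch(self, line):
        self.lines.pop(line, None)
        self.lines[line] = True
        if len(self.lines) > self.num_lines:
            self.lines.popitem(last=False)  # evict least recently used

    def access(self, addr):
        """Demand access; returns True on hit, False on miss (miss fills)."""
        line = addr // LINE_SIZE
        hit = line in self.lines
        self._touch(line)
        return hit

    def prefetch(self, addr):
        """Bring a line in ahead of the demand access."""
        self._touch(addr // LINE_SIZE)

trace = [0x1000, 0x2000, 0x1040, 0x3000, 0x1000, 0x2000]

# Without prefetching: every first touch of a line misses.
cold = TinyDataCache()
misses_plain = sum(not cold.access(a) for a in trace)

# With prefetching: each address is prefetched one access early, so the
# demand access finds the line already resident (only the very first
# access, which nothing could prefetch, still misses).
warm = TinyDataCache()
misses_prefetched = 0
for i, a in enumerate(trace):
    if i + 1 < len(trace):
        warm.prefetch(trace[i + 1])
    if not warm.access(a):
        misses_prefetched += 1
```

On this trace the plain cache takes four cold misses while the prefetched cache takes only the unavoidable first one; the open question, which the disclosure answers with branch prediction, is how the hardware knows which addresses to prefetch.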

      All program execution is made up of sequences of contiguous
instruction execution, separated by taken branch instructions.
Associated with these sequences are memory accesses to data.  A
single contiguous instruction-execution sequence may issue one or
several data accesses to the memory subsystem.
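The decomposition described above can be sketched as follows (the trace format and opcodes are hypothetical): a dynamic instruction stream is split into contiguous sequences at each taken branch, and the data addresses referenced within each sequence are collected per sequence:

```python
# Hypothetical dynamic trace: (operation, data address or None).
trace = [
    ("load",  0x9000), ("add", None), ("store", 0x9008),
    ("branch-taken", None),            # ends the first sequence
    ("load",  0xA000), ("load", 0xA040),
    ("branch-taken", None),            # ends the second sequence
]

sequences = []   # one list of data addresses per contiguous sequence
current = []
for op, data_addr in trace:
    if op == "branch-taken":
        sequences.append(current)      # a taken branch closes the sequence
        current = []
    elif data_addr is not None:
        current.append(data_addr)      # memory access within the sequence
```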

      A data cache is commonly used to satisfy requests for data from
the memory subsystem.  When data-cache misses occur, the efficiency
of the processor pipeline execution is greatly reduced.  It is highly
desirable to keep the frequency of data-cache misses as low as
possible.

      Implicit in the execution of a contiguous sequence of
instructions is the relationship between the beginning of the
sequence and the associated data accesses that are made during the
sequence's execution.  A common mechanism used to predict if...
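The extract breaks off here, so the remaining details are not available; still, the relationship described above suggests the following speculative sketch (table organization, method names, and addresses are all assumptions, not the disclosed design): a table indexed by the starting address of a sequence remembers the data addresses referenced the last time that sequence ran, and when the branch predictor predicts a taken branch to that start address, the remembered addresses are prefetched before the sequence executes.

```python
# Speculative sketch of a branch-prediction-directed data prefetcher.
class BranchDirectedPrefetcher:
    def __init__(self):
        self.table = {}        # sequence start address -> data addresses

    def on_sequence_end(self, start, data_addrs):
        """Train: record the accesses observed while the sequence ran."""
        self.table[start] = list(data_addrs)

    def on_predicted_taken(self, target, prefetch):
        """On a predicted-taken branch, prefetch the target sequence's data."""
        for addr in self.table.get(target, []):
            prefetch(addr)     # issue prefetches before the sequence executes

issued = []
pf = BranchDirectedPrefetcher()
pf.on_sequence_end(0x4000, [0x9000, 0x9008])   # first execution trains
pf.on_predicted_taken(0x4000, issued.append)   # next prediction prefetches
```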