Method for pipelining line predictors

IP.com Disclosure Number: IPCOM000018664D
Publication Date: 2003-Jul-30
Document File: 6 page(s) / 71K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method for pipelining line predictors (LPs). Benefits include improved functionality and improved performance.


Background

For this disclosure, assume each predictor is a line predictor using return prediction information.

General description

The disclosed method pipelines line predictors, enabling a higher processor clock rate and/or larger, more accurate line predictors. The LP caches nonsequential line predictions. On an LP cache miss, a line's successor is predicted to be the next sequential line; on a cache hit, the cache provides the successor's address.
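
To make this behavior concrete, the following is a minimal sketch of such a predictor cache, assuming a direct-mapped organization; the names (LinePredictor, LINE_SIZE, entries) and the training policy are illustrative assumptions, not taken from the disclosure.

LINE_SIZE = 64  # assumed line size in bytes

class LinePredictor:
    """Direct-mapped cache holding only nonsequential successor addresses."""

    def __init__(self, entries=1024):
        self.entries = entries
        self.tags = [None] * entries
        self.successors = [None] * entries

    def _index(self, line_addr):
        return (line_addr // LINE_SIZE) % self.entries

    def predict(self, line_addr):
        i = self._index(line_addr)
        if self.tags[i] == line_addr:
            return self.successors[i]   # cache hit: cached nonsequential successor
        return line_addr + LINE_SIZE    # cache miss: predict the next sequential line

    def train(self, line_addr, actual_successor):
        # Only nonsequential successors need an entry; a sequential
        # successor is already the default prediction on a miss.
        if actual_successor != line_addr + LINE_SIZE:
            i = self._index(line_addr)
            self.tags[i] = line_addr
            self.successors[i] = actual_successor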

Although the pipelining technique is illustrated on a line predictor using return prediction information, the disclosed method can be applied to any predictor that uses a cache to make its predictions.

Advantages

The disclosed method provides advantages, including:

•        Improved functionality due to enabling pipelined line prediction

•        Improved functionality due to enabling the budget and deluxe restart techniques

•        Improved performance due to enabling a higher processor clock rate

•        Improved performance due to larger and more accurate line predictors

Detailed description

The detailed description of the disclosed method covers the following topics:

•        Operation

•        Implementation

•        Restart

Operation

The operation of a single-cycle nonpipelined line predictor and a two-cycle pipelined line predictor is illustrated (see Figure 1). Each circle represents the final (or only) cycle of a prediction. The arrows indicate the order of the predictions. To the left of each circle is a letter which represents the address used to index the cache. This address is generated by the first-previous (nonpipelined) or second-previous (pipelined) prediction. Within each circle is a pair of letters separated by a colon. This pair represents the cache entry (both tag and data) at the index. The first letter represents the tag. The second represents the data, such as the successor's address.

For the nonpipelined predictor, the current fetch address is used to index the cache. At the end of the cycle, the current address is compared to the tag. If they match, the successor's address is selected as the prediction.
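
As a rough illustration, the single-cycle operation amounts to one complete lookup per cycle, each prediction feeding the next fetch directly; this sketch reuses the hypothetical LinePredictor above.

def nonpipelined_fetch_addresses(lp, start_addr, n_cycles):
    # One prediction per cycle: index the cache with the current fetch
    # address; at the end of the cycle, the tag comparison selects either
    # the cached successor or the sequential fallback.
    addrs = [start_addr]
    for _ in range(n_cycles - 1):
        addrs.append(lp.predict(addrs[-1]))
    return addrs

Because the full index, lookup, and tag compare must fit in a single cycle, the predictor's size and the clock rate constrain each other, which is the limitation the pipelined variant relaxes.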

For the pipelined predictor, the current fetch address is used to index the cache to obtain the prediction for the fetch two cycles in the future. At the beginning of the first cycle, the current address is used to index the cache. At the end of the first cycle, the address of the current line's immediate successor is predicted. That prediction was generated in the zeroth cycle by indexing the cache with the address just prior to the current fetch address. At the end of the second cycle, the cache entry has been accessed. The address of the immediate successor, which was compute...
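
Under the same assumptions, the two-cycle timing can be modeled by letting the access launched with the fetch address of cycle t supply the fetch address for cycle t+2; each cache entry must therefore map a line to the address fetched two lines later in the stream, and a miss falls back to two sequential lines ahead. This is a toy model; the class name, the two seed addresses a0 and a1 that prime the pipeline, and the miss policy are all assumptions.

class PipelinedLinePredictor:
    """Toy two-ahead predictor cache: each entry maps a line to the
    fetch address two lines later in the fetch stream."""

    def __init__(self, entries=1024):
        self.entries = entries
        self.tags = [None] * entries
        self.two_ahead = [None] * entries

    def predict(self, line_addr):
        i = (line_addr // LINE_SIZE) % self.entries
        if self.tags[i] == line_addr:
            return self.two_ahead[i]        # hit: cached nonsequential target
        return line_addr + 2 * LINE_SIZE    # miss: assume two sequential lines

def pipelined_fetch_addresses(lp2, a0, a1, n_cycles):
    # The access launched with the fetch address of cycle t completes
    # at the end of cycle t+1 and supplies the fetch address for cycle
    # t+2, so two accesses are always in flight (a0 and a1 prime the
    # pipeline). Assumes n_cycles >= 2.
    addrs = [a0, a1]
    for t in range(n_cycles - 2):
        addrs.append(lp2.predict(addrs[t]))
    return addrs

Splitting the access across two cycles means only half of the lookup must fit in each clock period, which is what permits a faster clock or a larger predictor array.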