Browse Prior Art Database

Cache Line Prefetch

IP.com Disclosure Number: IPCOM000044258D
Original Publication Date: 1984-Nov-01
Included in the Prior Art Database: 2005-Feb-05
Document File: 1 page(s) / 11K

Publishing Venue

IBM

Related People

Jeremiah, TL: AUTHOR

Abstract

The cache miss performance penalty and the wait time for instruction accesses are reduced by implementing separate caches, one containing instructions and the other containing data. An instruction buffer added at the output of the instruction cache holds several instructions beyond the one currently executing. Hardware attached to the instruction buffer decodes the instructions within it to determine whether any instruction remaining to be executed could branch out of the current instruction stream. The hardware also tracks where, in the cache line currently in use, the program is executing.



The cache miss performance penalty and the wait time for instruction accesses are reduced by implementing separate caches, one containing instructions and the other containing data. An instruction buffer added at the output of the instruction cache holds several instructions beyond the one currently executing. Hardware attached to the instruction buffer decodes the instructions within it to determine whether any instruction remaining to be executed could branch out of the current instruction stream. The hardware also tracks where, in the cache line currently in use, the program is executing. When the current cache line is exhausted, the attempt to refill the instruction buffer misses the cache, and main memory is accessed for the new line. Because the instructions remaining in the buffer continue to execute during this access, the main memory access time is overlapped, eliminating all or part of the main memory access penalty. The separate cache for data, by reducing interference from data accesses, further enhances the feasibility of this design.
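The scheme above can be sketched as a small timing model. This is only an illustrative simulation, not the disclosed hardware: the line size, miss penalty, and opcode names are assumptions, and the one-instruction-per-cycle execution model is a simplification. The scan over the buffer stands in for the added decode hardware, and the stall computation shows how executing the buffered instructions overlaps the main memory access.

```python
LINE_WORDS = 4     # instructions per cache line (illustrative assumption)
MISS_PENALTY = 10  # cycles for a main-memory line fetch (assumption)

def no_pending_branch(buffered):
    """Stand-in for the added decode hardware: scan the instructions still
    to be executed in the buffer and report whether any of them could
    branch out of the current instruction stream."""
    return not any(op in ("branch", "jump") for op, _ in buffered)

def stall_cycles(buffered):
    """Cycles lost waiting for the next cache line, assuming one
    instruction executes per cycle. If the scan finds no possible branch,
    the fetch of the next line is started immediately, so the memory
    access overlaps execution of the buffered instructions; otherwise the
    fetch must wait and the full miss penalty is exposed."""
    if no_pending_branch(buffered):
        return max(0, MISS_PENALTY - len(buffered))
    return MISS_PENALTY
```

For a full buffer of straight-line instructions, only the part of the miss penalty not hidden behind execution remains (here 10 - 4 = 6 cycles); if a possible branch sits in the buffer, the next sequential line cannot safely be requested early and the full penalty is paid.

```python
straight = [("op", a) for a in range(LINE_WORDS)]
taken    = [("op", 0), ("branch", 100), ("op", 2), ("op", 3)]
stall_cycles(straight)  # 6: fetch overlapped with 4 executing instructions
stall_cycles(taken)     # 10: cannot prefetch past a possible branch
```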
