Cache Miss Leading Edge Processing

IP.com Disclosure Number: IPCOM000037325D
Original Publication Date: 1989-Dec-01
Included in the Prior Art Database: 2005-Jan-29
Document File: 1 page(s) / 12K

Publishing Venue

IBM

Related People

Emma, PG: AUTHOR [+5]

Abstract

Improvements in the operation of a memory hierarchy are derived by utilizing the "leading edge" of a cache miss to anticipate additional hierarchical requirements. Current high-end processors are limited in several ways by the occurrence of an operand cache miss. In some processors, the architectural requirement that operands be accessed in sequence precludes accessing beyond the point where a cache miss has been recognized. Even if this limitation is relaxed, accessing of the cache is limited by the available queueing facilities within the processor. In the following, neither of these limitations applies, as the results of the cache accesses are used for the sole purpose of instructing the memory hierarchy.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 64% of the total text.

In the event of an operand cache miss, the processor is delayed until the operand is returned from the memory hierarchy. The processor can, however, continue to decode and generate operand addresses, sending them to the cache in a mode that marks them as purely instructive: they inform the hierarchy of the processor's future operand requirements. Since no operands are returned for such accesses, there is no need to queue these requests or their results.
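The mechanism described above can be sketched with a toy model. The cache, latencies, and method names below are illustrative assumptions, not details from the disclosure; the point is that an "instructive" probe returns no data and so needs no queuing, yet still starts a line fetch early while the processor is stalled on the demand miss.

```python
# Toy sketch of "leading edge" processing: a demand miss stalls the
# processor, but instructive (address-only) probes keep flowing to the
# cache and start fetches for lines the processor will need soon.
# All parameters below are assumed for illustration.

MISS_LATENCY = 10   # cycles to fetch a line from the next hierarchy level
LINE_SIZE = 64      # bytes per cache line

class ToyCache:
    def __init__(self):
        self.lines = set()    # resident line addresses
        self.pending = {}     # line -> cycle at which its fetch completes

    def line_of(self, addr):
        return addr // LINE_SIZE

    def access(self, addr, cycle, instructive=False):
        """Probe the cache. An instructive probe returns no data, so it
        needs no queuing; on a miss it merely starts the line fetch."""
        line = self.line_of(addr)
        if line in self.lines:
            return "hit"
        if line not in self.pending:
            self.pending[line] = cycle + MISS_LATENCY  # begin fetch now
        return "miss-started" if instructive else "miss"

    def tick(self, cycle):
        """Install any lines whose fetches have completed by this cycle."""
        done = [l for l, t in self.pending.items() if t <= cycle]
        for l in done:
            self.lines.add(l)
            del self.pending[l]

cache = ToyCache()
# Demand access misses at cycle 0; the processor now waits for the data...
print(cache.access(0, cycle=0))                      # -> miss
# ...but keeps decoding and sends future operand addresses instructively,
# so their fetches begin while the demand miss is still outstanding.
print(cache.access(128, cycle=1, instructive=True))  # -> miss-started
print(cache.access(256, cycle=2, instructive=True))  # -> miss-started
for c in range(3, 12):
    cache.tick(c)
print(cache.access(0, cycle=12))                     # -> hit
```

Because instructive probes carry no result, the model needs no queue for them, mirroring the disclosure's observation that such accesses sidestep the processor's queueing limits.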

The advantages associated with such requests occur in the following situations:

1. If a second cache miss is detected, the processor:

   - can initiate the second miss immediately, if more than one miss can be concurrently processed, so that an increase in miss overlap is achieved, or
   - can initiate t...
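The miss-overlap benefit mentioned above can be shown with back-of-envelope arithmetic. The latency and gap values are assumptions for illustration only: if the second miss must wait for the first to complete, the two latencies add; if it is initiated as soon as it is detected, all but the detection gap is hidden under the first miss.

```python
# Illustrative (assumed) figures: not from the disclosure.
MISS_LATENCY = 100   # cycles per miss to the next hierarchy level
GAP = 20             # cycles between detecting the first and second miss

# Serialized: the second miss is initiated only after the first returns.
serial = MISS_LATENCY + MISS_LATENCY

# Overlapped: the second miss is initiated at detection time (cycle GAP),
# so it completes at GAP + MISS_LATENCY, mostly hidden under the first.
overlap = GAP + MISS_LATENCY

print(serial, overlap)
```

Under these assumed numbers the overlapped case finishes in 120 cycles instead of 200, and the saving grows as the detection gap shrinks relative to the miss latency.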