Prefetching from the L2

IP.com Disclosure Number: IPCOM000105164D
Original Publication Date: 1993-Jun-01
Included in the Prior Art Database: 2005-Mar-19
Document File: 2 page(s) / 81K

Publishing Venue

IBM

Related People

Rechtschaffen, RR: AUTHOR [+3]

Abstract

In memory hierarchies with lengthy leading-edge-miss times, the concept of a prefetch arising from the processor is not as powerful as that of a prefetch arising from the L2. In such a circumstance, the prefetch should not simulate a miss from the processor but should use an independent means of providing the data to the processor without incurring a leading-edge or transmission penalty.

This is the abbreviated version, containing approximately 52% of the total text.

Prefetching from the L2

      In memory hierarchies with lengthy leading-edge-miss times, the
concept of a prefetch arising from the processor is not as powerful
as that of a prefetch arising from the L2.  In such a circumstance,
the prefetch should not simulate a miss from the processor but
should use an independent means of providing the data to the
processor without incurring a leading-edge or transmission penalty.

      To understand the concept of prefetching we must first grapple
with the manner in which a processor handles a miss.  When a
processor accesses its cache it anticipates that the data-back will
occur at the end of the cache latency time.  The processor, being
pipelined for this delay, coordinates the return of the data with an
execution unit that will consider the data-back as an operand for
the instruction execution.  If the data is not in the cache, a
signal arrives that allows the processor to delay until the first
data back from the cache miss restarts the process, and the
processor resumes as if no miss had occurred.  Aspects of the miss
within the cache structure include a miss-line-buffer into which the
miss is loaded and a set of valid bits that indicate when the
information can be accessed.  The line buffer is accessed in
parallel with the cache and can create a cache with capacity n or
n+1, depending on the design, where n is the cache size in lines.
The time between the arrival of the first data from a miss and when
it would have arrived, had it been in the cache, is the
leading-edge-time of the miss.  The time to transmit the miss to the
line buffer is called the transfer time of the miss, and the extent
to which the transfer time impacts the performance of the processor
is called the trailing-edge-effect.
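      The quantities defined above can be made concrete with a small
timing model.  The following is a minimal sketch in Python; the
latencies, line size, and bus width are illustrative assumptions,
not figures from the disclosure.

```python
# Illustrative timing model of the miss quantities described above.
# All parameter values are assumptions for the sketch.

CACHE_LATENCY = 2     # cycles until data-back on a cache hit
MEMORY_LATENCY = 30   # cycles until the first data of a miss arrives
LINE_SIZE = 128       # bytes in a cache line
BUS_WIDTH = 8         # bytes moved into the line buffer per cycle

def leading_edge_time():
    """Gap between when the first data of a miss arrives and when it
    would have arrived had the line been in the cache."""
    return MEMORY_LATENCY - CACHE_LATENCY

def transfer_time():
    """Cycles to transmit the whole line into the miss-line buffer;
    the extent to which this delays the processor is the
    trailing-edge-effect."""
    return LINE_SIZE // BUS_WIDTH

print(leading_edge_time())  # 28 cycles
print(transfer_time())      # 16 cycles
```

In this model a prefetch issued early enough to hide the 28-cycle
leading edge would leave only the trailing-edge-effect, which is the
motivation for the mechanisms that follow.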

      PREFETCHING FROM THE L1 OF THE PROCESSOR - If the processor
employs a mechanism with a predictive capability regarding what the
next miss might be, that miss address can be issued with slight
modification to the approach taken for a genuine miss.  For example,
the line buffer could now be reserved for prefetch targets while
genuine misses are brought directly into the cache.  The
leading-edge-time of the prefetched miss would still apply and
the...
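      The L1-side policy just described (the line buffer is reserved
for prefetch targets while genuine misses are brought directly into
the cache) can be sketched as follows.  This is a minimal sketch
assuming a simple next-line predictor as the predictive mechanism;
the class and method names are hypothetical, not from the
disclosure.

```python
# Hypothetical sketch of L1-side prefetching with a next-line predictor.
# The line buffer is reserved for prefetch targets; genuine misses are
# loaded directly into the cache, as the text describes.

class PrefetchingCache:
    def __init__(self):
        self.lines = set()       # line addresses resident in the cache
        self.buffer_line = None  # line buffer reserved for prefetch targets

    def access(self, line_addr):
        """Probe the cache and line buffer in parallel; return True on a hit."""
        hit = line_addr in self.lines or line_addr == self.buffer_line
        if not hit:
            self.lines.add(line_addr)  # genuine miss goes into the cache
        # Predictive mechanism (assumed): guess the next miss is the
        # sequentially next line and prefetch it into the buffer.
        predicted = line_addr + 1
        if predicted not in self.lines:
            self.buffer_line = predicted
        return hit

cache = PrefetchingCache()
cache.access(100)            # miss; line 100 enters the cache, 101 is prefetched
print(cache.access(101))     # True: hit in the reserved prefetch line buffer
```

Note that the sketch hides only the decision logic; as the text
observes, the leading-edge-time of the prefetched miss still applies
unless the prefetch is issued early enough to overlap it.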