Prefetching Cache in a Parallel Cache Architecture

IP.com Disclosure Number: IPCOM000106261D
Original Publication Date: 1993-Oct-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 2 page(s) / 51K

Publishing Venue

IBM

Related People

Aldereguia, A: AUTHOR [+4]

Abstract

Disclosed is an architecture, shown in the Figure, in which a processor unit 1, preferably including internal cache, is connected in a parallel fashion to both a main memory controller 2 and an external cache memory 3, which is also connected to the main memory controller to prefetch data.


      External cache prefetching works on the assumption that the
processor is likely to access main memory in a sequential fashion.
When processor 1 accesses a line of data from main memory, that line
is transferred to the processor, and the next line is stored in
external cache memory 3.  In this way, the next line is "prefetched."
This process is accomplished, for example, by driving the AHOLD
signal active at or before the last BRDY# signal of the cycle that
accesses the line requested by the processor; AHOLD prevents
processor 1 from requesting another cycle, while BRDY# indicates the
end of the current cycle.  When the prefetch cycle completes, AHOLD
is driven inactive.
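
A minimal software sketch of this next-line policy follows. The
class name, the line numbering, and the choice to prefetch on every
access (hits included) are illustrative assumptions, not part of the
disclosure, which describes a hardware bus mechanism:

```python
# Sketch of the next-line prefetch policy described above.  All
# names and parameters here are illustrative assumptions; the real
# mechanism is a hardware bus handshake, not software.

class PrefetchingCache:
    """Models external cache memory 3 in front of main memory."""

    def __init__(self, memory):
        self.memory = memory   # main memory: {line number: data}
        self.lines = {}        # lines currently held in the cache
        self.hits = 0
        self.misses = 0

    def read_line(self, line):
        if line in self.lines:
            self.hits += 1
            data = self.lines[line]
        else:
            self.misses += 1
            data = self.memory.get(line)
        # Prefetch the next sequential line.  The disclosure describes
        # this for main-memory accesses; prefetching on cache hits as
        # well is an assumption made here so a long sequential run
        # stays ahead of the processor.
        nxt = line + 1
        if nxt in self.memory:
            self.lines[nxt] = self.memory[nxt]
        return data

# A purely sequential access pattern: only the first read misses.
memory = {n: "line %d" % n for n in range(8)}
cache = PrefetchingCache(memory)
for n in range(8):
    cache.read_line(n)
print(cache.hits, cache.misses)  # 7 1
```

The sequential run shows the assumption the scheme rests on: once
the processor falls into a sequential pattern, every access after
the first is satisfied from the external cache.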

      If processor 1 includes internal cache, back-to-back accesses
from the processor to main memory are usually not required, so the
additional time required to read the next line into external cache
memory 3 does not affect the performance of processor 1 in most
cycles.  In any event, the time required to read an additional line
of data from main memory to cache 3 is much less than the time
required to access the line of memory chosen by the processor, and
su...
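
The timing claim above can be made concrete with assumed cycle
counts; every number below is hypothetical, chosen only to
illustrate how the prefetch overlaps the bus idle time that the
internal cache creates:

```python
# Hypothetical cycle counts (not from the disclosure) illustrating
# why the prefetch rarely stalls the processor.
DEMAND_READ = 10    # cycles for the line the processor requested
PREFETCH_READ = 4   # cycles to copy the next line into cache 3
                    # (shorter than a full demand access, as the
                    # disclosure states)
IDLE_GAP = 6        # cycles the internal cache typically keeps the
                    # bus idle between main-memory accesses

# The prefetch runs during that idle gap, so the next demand access
# is delayed only when the gap is shorter than the prefetch.
stall = max(0, PREFETCH_READ - IDLE_GAP)
print(stall)  # 0 cycles: the prefetch is completely hidden
```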