
Improved Cache Bypass

IP.com Disclosure Number: IPCOM000109974D
Original Publication Date: 1992-Oct-01
Included in the Prior Art Database: 2005-Mar-25
Document File: 2 page(s) / 61K

Related People

Boike, M: AUTHOR [+4]


This mechanism was developed for a future ESA/390* Processor. It was designed to improve the processor/cache performance.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.


      In a computing system with a Cache Memory, several variables
affect the finite cache performance.  Two of these are the cache
leading edge penalty and the cache trailing edge penalty.  The
leading edge penalty comprises the delays the processor experiences
on a cache miss before the first piece of data arrives.  The
trailing edge penalty consists of all the cycles the cache uses to
install a new line into the cache following a cache miss.

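The two penalties described above can be sketched as a small cost
model.  The function and parameter names below are illustrative
assumptions, not terms from the disclosure; the cycle counts are
placeholders.

```python
def miss_penalty(leading_edge_cycles, doublewords_in_line, cycles_per_doubleword):
    """Total cycles a cache miss costs: the leading edge delay before
    the first doubleword reaches the processor, plus the trailing edge
    cycles the cache spends installing the rest of the line."""
    trailing_edge_cycles = doublewords_in_line * cycles_per_doubleword
    return leading_edge_cycles + trailing_edge_cycles

# Example with assumed numbers: a 10-cycle leading edge delay and a
# 16-doubleword line installed at one doubleword per cycle.
print(miss_penalty(10, 16, 1))  # 26 cycles total
```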
      In previous machines with cache memories, a cache bypass path
was added to reduce the number of cycles needed to send the
requested data to the processor.  In the 3090* family of machines,
four cycles were used to bypass the first four doublewords of data
to the processor.  The problem with a fixed bypass amount is that in
many cases the processor needs only one or two doublewords.  In
those cases, the additional bypass cycles keep the cache busy and
unavailable to the processor for new requests.  This invention
improves performance by limiting the number of doublewords sent to
the processor on a cache miss to the minimum that might be used.
This frees up the cache to handle additional requests.

      This inv...