
Pipelined Cache Design for High Reliability

IP.com Disclosure Number: IPCOM000113051D
Original Publication Date: 1994-Jul-01
Included in the Prior Art Database: 2005-Mar-27
Document File: 8 page(s) / 216K

Publishing Venue

IBM

Related People

Bibler, BJ: AUTHOR [+2]

Abstract

Using a cache structure built from pipelined arrays, the protection of modified data in a cache can be expanded without affecting the performance of cache accesses. This is possible because of the natural characteristics of pipelined arrays: the write back to the array can be hidden, and error monitoring by the directory aids in sorting hard errors from soft errors in the cache. Soft errors are simply corrected, while hard errors are corrected and the physical cache location that held the data is removed from the cache's inpage selection pool. By applying these methods, cache error correction should have minimal impact and limit the number of cache locations removed due to errors.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 26% of the total text.

Pipelined Cache Design for High Reliability

      When the data in a write-in cache is modified, it becomes the
only copy of that data in the system.  If it is damaged, that data is
lost, and if it is critical to the system (global locks, part of a key
application, etc.), then that application or the whole system could
fail.  ECC has been added to caches to protect the data, but it may
not be a complete solution, since the data in the cache is very
dynamic.  Also, as SRAM cell size shrinks, the effects of alpha
particles and cosmic rays on each cell are increasing soft error
rates, significantly raising the possibility of damaged data.
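The kind of protection ECC provides can be illustrated with a minimal single-error-correcting Hamming code sketch (the disclosure does not specify its ECC word size or code; this small model is only illustrative):

```python
def hamming_encode(data_bits):
    """Encode data bits (list of 0/1) into a single-error-correcting
    Hamming codeword.  Parity bits sit at power-of-two positions."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:          # need 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)                 # 1-indexed for position math
    j = 0
    for i in range(1, n + 1):
        if i & (i - 1):                  # not a power of two: data bit
            code[i] = data_bits[j]
            j += 1
    for p in range(r):                   # set each parity bit so the
        pos = 1 << p                     # XOR over its covered
        for i in range(1, n + 1):        # positions comes out zero
            if (i & pos) and i != pos:
                code[pos] ^= code[i]
    return code[1:]

def syndrome(codeword):
    """XOR of the positions of all 1-bits.  Zero means no single-bit
    error; otherwise it is the 1-based position of the flipped bit."""
    s = 0
    for i, bit in enumerate(codeword, start=1):
        if bit:
            s ^= i
    return s
```

A checker can correct a single soft error by flipping `codeword[syndrome(codeword) - 1]` whenever the syndrome is nonzero.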

      What is needed is additional protection without a great deal of
added circuitry or delay.  This can be done with the following
supporting structure.
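One piece of that structure, the sorting of hard and soft errors and the retirement of hard-error locations from the inpage selection pool, might be sketched as follows (a hypothetical model; the disclosure does not give a threshold, so this sketch simply treats a repeat error at the same physical location as hard):

```python
class ErrorSorter:
    """Classify cache-location errors as soft (transient) or hard
    (repeating), and retire hard locations from the inpage pool."""
    def __init__(self, locations):
        self.inpage_pool = set(locations)  # locations eligible for inpage
        self.seen_errors = set()           # locations that erred before

    def report_error(self, location):
        """Called when ECC flags an error at a physical location.
        Returns 'soft' on a first error, 'hard' on a repeat."""
        if location in self.seen_errors:
            self.inpage_pool.discard(location)  # stop allocating here
            return 'hard'
        self.seen_errors.add(location)          # corrected; watch it
        return 'soft'
```

In a real design this bookkeeping would live in directory hardware rather than software, but the policy is the same: soft errors are only corrected, while hard errors also remove the location from future inpage selection.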

      The cache structure that aids in this function can be an L1 or
L2 type, store-in or store-through, but store-in caches are the
target cache type, since a store-in cache holds the only copy of a
modified line.  Also, the cache can be made up of a single unit or
many units, as seen in Fig. 1, creating a columnized data structure
to main memory.
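The distinction between the two write policies can be sketched with a tiny behavioral model (hypothetical names, not from the disclosure):

```python
class Cache:
    """Tiny cache model contrasting write policies.  In store-through
    mode every write also updates memory; in store-in mode the cache
    holds the only copy of a modified line until castout."""
    def __init__(self, memory, store_in=True):
        self.memory = memory          # backing store: dict addr -> value
        self.lines = {}               # cached lines: addr -> value
        self.dirty = set()            # store-in: lines not yet in memory
        self.store_in = store_in

    def write(self, addr, value):
        self.lines[addr] = value
        if self.store_in:
            self.dirty.add(addr)      # only copy now lives in the cache
        else:
            self.memory[addr] = value # store-through: memory stays current

    def castout(self, addr):
        """Evict a line, writing it back first if it is the only copy."""
        if addr in self.dirty:
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```

The store-in case is why the extra protection matters: between the write and the castout, a damaged cache line cannot be recovered from memory.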

      Each cache unit can represent many data elements.  These
elements can vary in size but are usually structured around an ECC
word.  The following discussion involves one of these data elements
(an ECC word); the idea can be expanded to a multi-element system.
The point to understand is that many of these words can be accessed
and handled in parallel.

      The cache and directory structure shown in Figs. 2 and 3 will
help us understand how the pipelined array aids in this design.
Examined closely, these arrays appear much like two-port arrays, but
they are not.  They have internal clocking that creates sub-cycles
that stage the data through the array.  Depending on how they are set
up at the beginning of the system cycle, the accesses appear to
happen in parallel, but they really occur serially.

      The pipelined structure used here has two sub-cycles.  The
first is always a read, and the second is always a write (Fig. 4).
This simplifies the array de...
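The read-then-write sub-cycle scheme above can be sketched as a behavioral model (illustrative only; a real array does this with internal clocking, not software):

```python
class PipelinedArray:
    """Each system cycle has two sub-cycles: sub-cycle 1 is always a
    read, sub-cycle 2 is always a write (as in Fig. 4)."""
    def __init__(self, size):
        self.cells = [0] * size

    def cycle(self, read_addr, write=None):
        value = self.cells[read_addr]   # sub-cycle 1: the read
        if write is not None:           # sub-cycle 2: the write, e.g.
            addr, data = write          # a corrected line written back
            self.cells[addr] = data
        return value
```

Because the read always precedes the write within the cycle, a queued write-back (such as an ECC-corrected line) is hidden behind a normal access and costs no extra array cycle.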