Caching Mechanism for 32-Bit ECC with Variable System Data Block Size

IP.com Disclosure Number: IPCOM000110883D
Original Publication Date: 1994-Jan-01
Included in the Prior Art Database: 2005-Mar-26
Document File: 4 page(s) / 203K

Publishing Venue

IBM

Related People

Amoni, S: AUTHOR [+4]

Abstract

System memory with ECC provides improved system reliability, but it requires more total memory and can also impact system performance. The additional memory requirement can be reduced by using a larger ECC data block size, but, depending on the total system, this can significantly degrade performance. This article describes a method of eliminating some of the performance penalty incurred when the ECC data width does not match the system data width, allowing memory overhead to be reduced while minimizing the performance penalties.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 30% of the total text.

Caching Mechanism for 32-Bit ECC with Variable System Data Block Size

      A cache is a smaller and faster memory used to improve system
performance during code and data access operations.  It is usually
situated between a microprocessor and main memory, and it acts as a
storage area for frequently used code and data.  The size of cache
memory can vary from a few to thousands of bytes depending on how it
is used in a system.  A cache improves system performance by taking
advantage of the fact that programs generally access instructions
and data sequentially.  In this disclosure we will use the word
"stache" to represent a buffer with many of the properties of a
cache.  The following terms are used to describe cache subsystems and
their operation:

Cache Hit.  An access to data that is available in the cache
subsystem is referred to as a cache hit.
Cache Miss.  An access to data that is not available in the cache
subsystem is referred to as a cache miss.
Write-through Cache.  A cache subsystem that updates the next level
of storage as the cache contents are updated is called a
write-through cache.
Write-back Cache.  A cache subsystem that updates the next level of
storage in a batch fashion, usually after multiple updates to the
cache subsystem data, is referred to as a write-back cache.  This
batch update usually occurs when an internal cache location is
required to hold other data.
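
      As a rough illustration of these terms (not part of the original
disclosure), the C sketch below models a small direct-mapped cache with
a write-back policy: a read either hits, or misses and writes a dirty
victim line back to the next level of storage before the line is
refilled, while a write updates only the cache and marks the line
dirty.  All sizes, names, and the flat main_memory array are
assumptions made for the example.

#include <stdint.h>
#include <string.h>

#define CACHE_LINES 64                 /* number of cache lines (assumed)  */
#define LINE_SIZE   16                 /* bytes per cache line (assumed)   */

struct cache_line {
    int      valid;                    /* line holds usable data           */
    int      dirty;                    /* modified since last write-back   */
    uint32_t tag;                      /* high-order address bits          */
    uint8_t  data[LINE_SIZE];          /* cached copy of main memory       */
};

static struct cache_line cache[CACHE_LINES];
static uint8_t main_memory[1 << 20];   /* next level of storage (assumed)  */

/* Read one byte; returns 1 on a cache hit, 0 on a cache miss. */
int cache_read(uint32_t addr, uint8_t *out)
{
    uint32_t offset = addr % LINE_SIZE;
    uint32_t index  = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag    = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {        /* cache hit */
        *out = line->data[offset];
        return 1;
    }

    /* Cache miss.  Under a write-back policy, a dirty victim line is
     * flushed to the next level of storage when its space is required
     * to hold other data.                                              */
    if (line->valid && line->dirty) {
        uint32_t victim = (line->tag * CACHE_LINES + index) * LINE_SIZE;
        memcpy(&main_memory[victim], line->data, LINE_SIZE);
    }

    /* Refill the line from main memory. */
    memcpy(line->data, &main_memory[addr - offset], LINE_SIZE);
    line->valid = 1;
    line->dirty = 0;
    line->tag   = tag;
    *out = line->data[offset];
    return 0;
}

/* Write one byte under a write-back policy: only the cache is updated
 * and the line is marked dirty.  A write-through cache would also
 * update main_memory[addr] here.                                      */
void cache_write(uint32_t addr, uint8_t value)
{
    uint8_t tmp;
    cache_read(addr, &tmp);            /* ensure the line is resident */
    cache[(addr / LINE_SIZE) % CACHE_LINES].data[addr % LINE_SIZE] = value;
    cache[(addr / LINE_SIZE) % CACHE_LINES].dirty = 1;
}

A write-through variant would differ only in cache_write, which would
update main memory immediately and leave the dirty flag unused.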

      Error checking and correction techniques are often used in data
transmission and storage to ensure the integrity of the data.  Any
error checking and/or correcting method requires some amount of
redundancy.  One method of error checking and correction (which we
will refer to as ECC) is a modified Hamming code which allows
single-bit error correction and double-bit error detection.  The
amount of redundancy required for this technique depends on the size
of the data block being used.  For example, if ECC is performed on
16-bit data blocks, then 6 additional bits are required per data
block.  However, if 32-bit data blocks are used, then 7 additional
bits are required per block.  The size of the ECC data block used is
typically the size of the system data bus since the whole data block
must be read and written together.  If the system reads a subset of
an ECC data block in any cycle, part of the data read from ECC memory
is not used in that cycle.  If the system writes a...
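
      The extra-bit figures above follow from the usual Hamming-code
counting: for an m-bit data block, the smallest r satisfying
2^r >= m + r + 1 gives the number of Hamming check bits, and one more
bit is added for double-bit error detection.  The short C sketch below
(not part of the original disclosure; the function name is
illustrative) reproduces that arithmetic.

#include <stdio.h>

/* Check bits required for single-error-correct, double-error-detect
 * (SECDED) coverage of an m-bit data block.                          */
static unsigned secded_check_bits(unsigned m)
{
    unsigned r = 0;
    while ((1u << r) < m + r + 1)   /* Hamming condition: 2^r >= m + r + 1 */
        r++;
    return r + 1;                   /* extra bit for double-bit detection  */
}

int main(void)
{
    printf("16-bit block: %u check bits\n", secded_check_bits(16));  /* 6 */
    printf("32-bit block: %u check bits\n", secded_check_bits(32));  /* 7 */
    printf("64-bit block: %u check bits\n", secded_check_bits(64));  /* 8 */
    return 0;
}

Going from 16-bit to 32-bit ECC blocks thus cuts the memory overhead
from 6 extra bits per 16 (37.5%) to 7 per 32 (about 22%), which is the
saving traded against the performance penalty of a mismatched data
width.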