
ERROR CORRECTION FOR LARGE MEMORIES

IP.com Disclosure Number: IPCOM000024860D
Original Publication Date: 1982-Jun-30
Included in the Prior Art Database: 2004-Apr-04
Document File: 4 page(s) / 221K

Publishing Venue

Xerox Disclosure Journal

Abstract

Currently, error correction is used on computer memories to allow for reliable operation in the presence of single-bit errors and the detection of all double errors. Since it is unlikely that a memory error will affect more than one memory chip, this technique greatly improves the reliability of a machine. The standard error correction scheme uses a Hamming code and requires 6 additional bits for a 16-bit word, 7 additional bits for a 32-bit word, and 8 additional bits for a 64-bit word. In order to minimize the overhead of the check bits, it is desirable to use long word lengths.

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 39% of the total text.



ERROR CORRECTION FOR LARGE MEMORIES
Sidney W. Marshall

Proposed Classification U.S. Cl. 365/200
Int. Cl. G11C 7/00

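The check-bit counts quoted in the abstract (6 bits for a 16-bit word, 7 for 32, 8 for 64) follow from the standard Hamming bound, with one extra bit for double-error detection. As a quick sketch, assuming the usual SEC-DED construction:

```python
def secded_check_bits(data_bits):
    # Single-error correction needs the smallest r with
    # 2**r >= data_bits + r + 1 (every single-bit error position,
    # plus "no error", must have a distinct syndrome).  One extra
    # overall parity bit adds double-error detection.
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

for k in (16, 32, 64):
    print(k, secded_check_bits(k))   # 16 -> 6, 32 -> 7, 64 -> 8
```

The relative overhead falls from 6/16 to 8/64 as the word grows, which is why long words are attractive.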

Another trend in the industry is the use of memory chips with more bits per chip. These chips are organized as 16K by 1 or 64K by 1. A problem with the error correction strategy now becomes apparent: if big chips and long words are used together, the minimum size of the memory is very large. For example, a scheme using 32-bit words and 64K RAM chips would have a minimum memory size of 2 million bits. It is the subject of this disclosure to indicate a way around this difficulty.
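The 2-million-bit figure follows directly from giving each bit position of the word its own one-bit-wide chip, so the memory can be no shallower than one full chip:

```python
def min_memory_bits(word_bits, chip_capacity_bits):
    # One chip per bit position; minimum depth = one chip's capacity.
    return word_bits * chip_capacity_bits

print(min_memory_bits(32, 64 * 1024))   # 2097152, about 2 million bits
```

Adding 7 check bits per word raises the minimum further still.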

Most large dynamic RAM chips can be operated in what is known as "page mode". This is a mode whereby references to adjacent memory locations can be performed more rapidly than the normal cycle time of the chip. Instead of using n chips for an n-bit word, we could use n/2 chips and access them twice to assemble the full word. Unfortunately, the conventional error correction scheme depends on the independence of errors in the bits of a word, and this condition is violated because a single memory chip can now cause 2 errors. A new error correction scheme is accordingly required.
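The failure-coupling problem can be made concrete with a sketch of the half-width scheme: 16 one-bit-wide chips deliver a 32-bit word in two page-mode accesses to adjacent columns, so bits 2i and 2i+1 both come from chip i. The chip layout and bit assignment here are illustrative assumptions, not the disclosed design:

```python
def read_word_page_mode(chips, row, col):
    # chips: 16 hypothetical one-bit-wide arrays indexed [row][column].
    # Two page-mode accesses at adjacent columns build a 32-bit word;
    # chip i supplies bits 2*i and 2*i + 1, so one failing chip can
    # corrupt at most one aligned 2-bit group of the word.
    word = 0
    for access in range(2):                 # two fast column accesses
        for i, chip in enumerate(chips):
            bit = chip[row][col + access]
            word |= bit << (2 * i + access)
    return word
```

Because both errors from a bad chip land in the same 2-bit group, a code that corrects any single 2-bit group, rather than any single bit, recovers from all single-chip failures.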

Some computers use 32 bits in the word and 7 check bits for a total of 39 bits. However, for packaging reasons there are actually 40 bits in a word with one bit not used. If this spare bit can be used to enhance the error correction ability of the system, it is virtually free.

The normal Hamming code corrects single errors over the binary field. There also exist Hamming codes that correct single errors over fields of more than two elements. In particular, it is possible to correct errors over the Galois field of 4 elements, GF(2^2). Let an element of GF(2^2) be called a "digit" and let each digit consist of 2 bits. Then there exists a code with 18 data digits and 3 check digits that will correct single-digit errors. This corresponds to a code with 36 bits and 6 check bits, but the present code is more powerful: it can correct 2-bit errors if the errors are in the same digit. Such a code can therefore be used with a page-mode access memory and correct all single-chip errors, because a single chip can only affect a single digit of the code. To detect double chip errors, a check digit can be appended on the end of the code the same way a parity bit is appended to a binary Hamming code. Th...
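The GF(4) construction can be sketched as follows. A Hamming code over GF(4) with 3 check digits has length (4^3 - 1)/(4 - 1) = 21 digits, i.e. 18 data digits plus 3 check digits, matching the figures above. The column ordering, digit encoding, and decoding loop below are illustrative assumptions, not the disclosed circuit:

```python
# GF(4) arithmetic on elements {0, 1, 2, 3}: 2 and 3 play the roles of
# w and w^2, where w^2 = w + 1.  Addition is bitwise XOR.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def make_H():
    # One parity-check column per 1-D subspace of GF(4)^3, represented
    # by the vector whose leading nonzero digit is 1: (4**3 - 1)/3 = 21.
    cols = []
    for v in range(1, 64):
        col = [(v >> 4) & 3, (v >> 2) & 3, v & 3]
        if next(x for x in col if x) == 1:
            cols.append(col)
    # Put the identity columns first so digits 0-2 are the check digits.
    ident = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return ident + [c for c in cols if c not in ident]

H = make_H()                       # 21 columns of height 3 over GF(4)

def syndrome(word):
    s = [0, 0, 0]
    for j, digit in enumerate(word):
        for r in range(3):
            s[r] ^= MUL[H[j][r]][digit]
    return s

def encode(data18):
    # In characteristic 2 each check digit equals the syndrome
    # component contributed by the 18 data digits.
    word = [0, 0, 0] + list(data18)
    word[0], word[1], word[2] = syndrome(word)
    return word

def correct(word):
    s = syndrome(word)
    if s == [0, 0, 0]:
        return list(word)
    for j in range(21):            # find the column the syndrome matches,
        for a in (1, 2, 3):        # up to a nonzero scalar multiple
            if [MUL[a][H[j][r]] for r in range(3)] == s:
                fixed = list(word)
                fixed[j] ^= a      # subtraction = addition in GF(4)
                return fixed
    raise ValueError("more than one digit in error")
```

Note that corrupting both bits of one digit (XOR with 3) is still a single-digit error, so the decoder repairs it; this is exactly the failure pattern a bad chip produces under the page-mode scheme.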