
Multibit error correction in a monolithic semiconductor memory Disclosure Number: IPCOM000019704D
Publication Date: 2003-Sep-26
Document File: 10 page(s) / 99K

Publishing Venue

The Prior Art Database

Related People

Mark G. Johnson: AUTHOR


Two-bit and multibit error correction in a monolithic three dimensional memory

This is the abbreviated version, containing approximately 14% of the total text.

Multibit error correction in a monolithic semiconductor memory

Author: Mark Johnson

Date: 8 September 2003

Problem Solved by the Invention

The number of bits per integrated-circuit memory chip has grown exponentially for the past 30 years, at a rate of approximately 4X every 3 years (an observation often called Moore's Law). The current state of the art is approximately 2 billion bits per chip, and the figure continues to grow.

Each of the bits on such a memory chip has a (very small) probability of "failure", i.e., an inability to be read and/or written. The probability of bit failure has decreased over time, due to improvements in semiconductor materials, processing, and design. Unfortunately, the bit failure probability has not fallen as steeply as the number of bits per chip has risen. As more and more memory cells are packed onto a single chip, the probability that one (or more) of them will fail is rising, and with it the failure rate of complete memory chips. This is extremely undesirable, since electronic systems containing integrated-circuit memories are becoming increasingly indispensable tools in daily life. Some of the highest-density memory chips are charge-storage memories (such as conventional flash), vertically stacked 3D antifuse/diode PROMs, and vertically stacked 3D charge-storage memories. These types are therefore the most likely to be afflicted with a rising chip-level failure rate, and the most in need of a solution.

It is desirable to find a way to design a memory chip that can internally detect bit failures and correct them, transparently. Such a design would appear, from the outside, to be a conventional memory chip except with a much lower failure rate. In the prior art this has been done by applying ECC (error correcting codes) within the memory chip.

In the prior art, each N-bit user data word is augmented by a set of K check bits, forming an (N+K)-bit codeword stored in the memory. A common choice in the art is (N=64 bits, K=8 bits), using a Hamming code for error correction. However, prior-art solutions have corrected only single-bit errors: their ECC implementations can correct at most one error in the (N+K)-bit codeword, and if more than one error occurs in a codeword they are unable to correct it. It is a goal of the present invention to correct not one but two erroneous bits in the (N+K)-bit codeword, which would decrease the chip-level failure rate dramatically.
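To make the prior-art baseline concrete, the following is a minimal sketch (not taken from the disclosure) of single-error-correcting Hamming encoding and decoding, with check bits placed at the power-of-two positions of the codeword. Note that this basic code corrects exactly one flipped bit; with two errors in a codeword it miscorrects, which is precisely the limitation the invention addresses.

```python
def hamming_encode(data_bits):
    """Encode a list of 0/1 data bits into a Hamming SEC codeword.
    Check bits occupy positions 1, 2, 4, 8, ... (1-indexed)."""
    n = len(data_bits)
    # smallest k with 2**k >= n + k + 1 (Hamming bound for SEC)
    k = 0
    while (1 << k) < n + k + 1:
        k += 1
    code = [0] * (n + k + 1)       # index 0 unused
    # place data bits at the non-power-of-two positions
    j = 0
    for i in range(1, n + k + 1):
        if i & (i - 1):            # i is not a power of two
            code[i] = data_bits[j]
            j += 1
    # each check bit is the parity of the positions it covers
    for p in range(k):
        mask = 1 << p
        parity = 0
        for i in range(1, n + k + 1):
            if (i & mask) and i != mask:
                parity ^= code[i]
        code[mask] = parity
    return code[1:]

def hamming_decode(codeword):
    """Return (corrected data bits, error position; 0 means no error)."""
    code = [0] + list(codeword)
    m = len(codeword)
    # the syndrome is the XOR of the positions holding a 1 bit
    syndrome = 0
    for i in range(1, m + 1):
        if code[i]:
            syndrome ^= i
    if syndrome:                   # single-bit error: syndrome is its position
        code[syndrome] ^= 1
    data = [code[i] for i in range(1, m + 1) if i & (i - 1)]
    return data, syndrome
```

For a 4-bit word this produces the classic (7,4) code; for N=64 it yields K=7 check bits, and the (72,64) code common in practice adds one further overall-parity bit for double-error detection.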

Error correcting codes are costly to implement, since the memory chip must be made larger to hold (N+K)-bit codewords instead of N-bit words. Thus it is a goal of this invention to decrease the overhead cost of ECC, i.e., to decrease the ratio K/N.
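As a worked illustration of the K/N overhead (a sketch based on the standard Hamming bound, not part of the disclosure): a single-error-correcting code needs the smallest K satisfying 2^K >= N + K + 1, so the relative overhead K/N shrinks as the protected word grows.

```python
def sec_check_bits(n):
    """Smallest k with 2**k >= n + k + 1 (check bits for a SEC Hamming code)."""
    k = 0
    while (1 << k) < n + k + 1:
        k += 1
    return k

# Overhead falls as the data word widens:
for n in (64, 128, 256, 512):
    k = sec_check_bits(n)
    print(f"N={n:4d}  K={k:2d}  K/N={k / n:5.1%}")
```

Running this shows the overhead dropping from roughly 11% at N=64 to about 2% at N=512, which is one motivation for protecting longer codewords.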

Finally it is a goal of the invention to implement all aspects of the error correcting code within a single monolithic memory chip. This makes the memory especially easy to use; external hardware and/or software is not required to perform any calculations or logic...