Method for Background Parity Update in a Redundant Array of Inexpensive Disks (RAID)

IP.com Disclosure Number: IPCOM000110007D
Original Publication Date: 1992-Oct-01
Included in the Prior Art Database: 2005-Mar-25
Document File: 3 page(s) / 93K

Publishing Venue

IBM

Related People

Crews, C: AUTHOR [+2]

Abstract

Disclosed is a method of improving the performance of a redundant array of disk storage devices attached to a host computer. Extra time is required to generate the redundant information to provide fault tolerance, and this increases the response time of the array. By performing only the required portion of the operation in the foreground, most of the generation of the redundant information can be accomplished in the background. A small amount of additional information is required to preserve the fault tolerance of the array while the background operations are going on.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.


      Multiple rigid disk drives are connected together in an array,
which is then controlled by an array controller.  The data does not
need to be stored on the drives in sequential order of host
addresses; a variety of algorithms exist to spread the data across
the drives.  This dramatically improves performance but increases the
probability of data loss in the event of a drive failure.
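
As an illustration of one way to spread data across the drives, the
sketch below assumes simple block-level round-robin striping.  The
disclosure does not name a particular algorithm, and the type and
function names here are hypothetical.

```c
/* Hypothetical mapping of a host block address onto an array of
 * num_drives drives using round-robin (block-level) striping.
 * This is only one common choice of data-spreading algorithm. */
typedef struct {
    int  drive;   /* which drive in the array holds the block */
    long offset;  /* block offset within that drive */
} BlockLocation;

BlockLocation map_host_block(long host_block, int num_drives)
{
    BlockLocation loc;
    loc.drive  = (int)(host_block % num_drives); /* rotate across drives */
    loc.offset = host_block / num_drives;        /* row within the stripe */
    return loc;
}
```

Consecutive host blocks land on different drives, which is what lets
independent requests proceed in parallel.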

      One way to reduce the probability of data loss is with parity.
Some portion of the array is allocated to maintain parity for the
rest of the data on the array.  Thus, if one drive fails, the data on
that drive can be reconstructed from the data that remains on other
drives and the parity information.
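
The reconstruction just described is an XOR across the surviving
drives.  A minimal sketch, with an illustrative function name not
taken from the disclosure:

```c
#include <stddef.h>
#include <stdint.h>

/* Rebuild a lost block as the XOR of the corresponding blocks that
 * survive on the other drives plus the parity block.  Because the
 * parity is itself the XOR of all the data blocks, XORing everything
 * that remains yields the missing block. */
void reconstruct_block(const uint8_t *surviving[], size_t num_surviving,
                       size_t block_len, uint8_t *out)
{
    for (size_t i = 0; i < block_len; i++) {
        uint8_t b = 0;
        for (size_t d = 0; d < num_surviving; d++)
            b ^= surviving[d][i];
        out[i] = b;
    }
}
```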

      Typically, the array is broken up into parity groups, each
consisting of some number of blocks from each drive in the array.  A
corresponding area on one drive (the parity drive) is set to the
parity of the matching blocks on the rest of the drives in the parity
group.  When a block is written, the parity in the associated block
on the parity drive must also be rewritten.  One way to do this is to
read all the other blocks in the parity group that correspond to the
written block, recalculate the parity, and write it out.  Usually it
is faster to read the data that is being overwritten, read the old
parity, subtract the old data fro...
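
The faster path being described is the read-modify-write parity
update: because parity is XOR, "subtracting" the old data and
"adding" the new data are both XOR operations, so only the old data
and the old parity need to be read, rather than every other drive in
the parity group.  A sketch, with an illustrative function name:

```c
#include <stddef.h>
#include <stdint.h>

/* Read-modify-write parity update.  The new parity is computed from
 * the old parity, the data being overwritten, and the new data:
 * XOR removes the old data's contribution and adds the new one's. */
void update_parity(const uint8_t *old_data, const uint8_t *new_data,
                   const uint8_t *old_parity, uint8_t *new_parity,
                   size_t block_len)
{
    for (size_t i = 0; i < block_len; i++)
        new_parity[i] = old_parity[i] ^ old_data[i] ^ new_data[i];
}
```

This costs two reads and two writes per host write, independent of
how many drives are in the parity group.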