
RAID Architecture with Variable Error Statistics in Sectors

IP.com Disclosure Number: IPCOM000014803D
Original Publication Date: 2001-Jun-27
Included in the Prior Art Database: 2003-Jun-20

Publishing Venue

IBM

Abstract

1. Introduction

Normally, in Redundant Arrays of Inexpensive Disks (RAID) architectures [1], a number of disks are XORed in order to obtain a redundant disk. This way, if a disk suffers a catastrophic failure, it can be reconstructed by XORing the surviving disks. There are several ways to accomplish this, and many related issues. For instance, in RAID 4, a redundancy disk that is the XOR of a plurality of information disks is added to that plurality. Such a scheme suffers from the disadvantage that each time information is written to an information disk, the redundancy disk must also be accessed in order to be updated. A bottleneck is therefore created at the redundancy disk, since it is accessed much more often than the information disks. This asymmetry is avoided in RAID 5 architectures, in which the redundancy is distributed among all disks. Thus, each disk has the same probability of being updated, assuming a random, equally likely distribution of writes among all disks in the RAID 5 array.

One assumption in the prior art is that sectors in each disk are XORed with the corresponding sectors in the other disks; for example, the m-th track is XORed across all disks. This is a natural assumption, since today's drives use zone recording, in which the number of sectors varies with the track: the outer diameter (OD) contains more sectors than the medium diameter (MD), which in turn contains more sectors than the inner diameter (ID). However, it may occur that the ID and the OD of a disk have different error statistics. For example, assume that a disk drive experiences problems at the OD (approximately 5% of its capacity), where repeated writes stress the track adjacent to the one being written, essentially wiping out the data recorded in that adjacent track. The more writes, the larger the probability of error; the reliability degrades linearly with the number of writes.
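To make the XOR redundancy concrete, the following is a minimal sketch, not taken from the disclosure, of the parity operation underlying RAID 4 and RAID 5: parity is the byte-wise XOR of the corresponding blocks, a lost block is rebuilt by XORing the survivors, and in RAID 5 the parity block rotates across the disks from stripe to stripe. All names (xor_blocks, parity_disk_for, reconstruct) are illustrative, not part of any standard API.

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks: the RAID parity operation.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def parity_disk_for(stripe_index, num_disks):
    # RAID 5: rotate the parity block across the disks so that no single
    # disk becomes the write bottleneck of RAID 4's dedicated parity disk.
    return stripe_index % num_disks

def reconstruct(surviving_blocks):
    # Rebuild the block of a failed disk by XORing the surviving blocks
    # of the same stripe (the remaining data blocks plus the parity block).
    return xor_blocks(surviving_blocks)

# Example: three data blocks plus parity; recover the block of disk 1
# after a failure by XORing the two surviving data blocks with the parity.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
parity = xor_blocks(data)
recovered = reconstruct([data[0], data[2], parity])
assert recovered == data[1]

The same XOR property is what the bottleneck argument above relies on: every write to a data block requires re-reading and re-writing the corresponding parity block, which is why concentrating all parity on one disk (RAID 4) overloads that disk, while rotating it (RAID 5) spreads the update load evenly.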