
Prevention of Hard Errors in Magnetic Files Due to Long Term Degradation

IP.com Disclosure Number: IPCOM000038922D
Original Publication Date: 1987-Mar-01
Included in the Prior Art Database: 2005-Feb-01
Document File: 2 page(s) / 14K

Publishing Venue

IBM

Related People

Cunningham, EA: AUTHOR [+3]

Abstract

Hard errors can result from degradation due to "aging": the gradual reduction of signal by the cumulative effect of many small stresses, such as stray magnetic fields, intertrack interference, mechanical or electrical changes affecting track misregistration, or chips or scratches in the head. This can be prevented by periodically reading all of the data in the file to test its quality. If any of the data falls below a predetermined quality, it is rewritten after recovery in order to refresh it to full quality. If the rewritten data is still not of high quality, the physical sector itself is suspect, and a warning is issued to the user recommending reallocation of that sector.



File design allows for the normal variations of many parameters so that they do not interfere with the operation of the file. By necessity, however, the design must place those variations near the limit so that a high capacity can be obtained. Given the statistical nature of the parameters, it is therefore normal for some aspect of the file's performance to be occasionally impacted by the more extreme, but still normal, variations of the parameters. The Data Recovery Procedure (DRP) of a file is designed to handle these events. The refresh procedure described above has been added to compensate for the cumulative effect of small stresses not previously considered in the file design.
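
Expressed as logic, the procedure is a read-test/rewrite/warn loop over every sector. The C sketch below is a toy simulation of that loop, not the disclosure's actual microcode: the 0-100 quality scale, the QUALITY_THRESHOLD value, and the sector model are invented here for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_SECTORS       8
    #define QUALITY_THRESHOLD 80   /* assumed quality floor on a 0-100 scale */

    /* Toy model of a sector: its current read quality and whether the
     * medium at that physical location is defective. */
    struct sector {
        int  quality;
        bool bad_medium;
    };

    /* Rewriting restores full signal quality unless the medium itself
     * is defective, in which case the rewrite cannot bring it back. */
    static void rewrite(struct sector *s)
    {
        s->quality = s->bad_medium ? 40 : 100;
    }

    /* One pass of the preventive scan: read-test every sector, refresh
     * any below threshold, and warn when a refreshed sector still reads
     * poorly. */
    static void refresh_scan(struct sector *disk, int n)
    {
        for (int lba = 0; lba < n; lba++) {
            if (disk[lba].quality >= QUALITY_THRESHOLD)
                continue;                       /* healthy: leave alone */
            rewrite(&disk[lba]);                /* recover and refresh */
            if (disk[lba].quality < QUALITY_THRESHOLD)
                printf("sector %d: rewrite did not restore quality; "
                       "recommend reallocation\n", lba);
            else
                printf("sector %d: refreshed to full quality\n", lba);
        }
    }

    int main(void)
    {
        struct sector disk[NUM_SECTORS];
        for (int i = 0; i < NUM_SECTORS; i++)
            disk[i] = (struct sector){ .quality = 95, .bad_medium = false };
        disk[2].quality = 60;        /* aged: squeezed by active neighbors */
        disk[5].quality = 55;
        disk[5].bad_medium = true;   /* defective spot: rewrite will not help */

        refresh_scan(disk, NUM_SECTORS);
        return 0;
    }

Running the scan refreshes sector 2 to full quality, while sector 5 triggers the reallocation warning because the rewrite fails to restore it.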

As an example of a cumulative effect, consider a sector that is written early in the life of a file. The sector may or may not be read often, but its contents never need updating, so the sector is never written again. Suppose that the sectors on each side of the sector under consideration are active and are rewritten very often. With respect to squeeze from the adjacent sectors, the question is not how large the squeeze is at any given time, but what the worst case is over the life of the file. While the odds of the left and right adjacent sectors both having a significantly large squeeze at the same time are essentially nil, the odds that each will do so at some point over years of operation are significant. If the sector of interest is never rewritten over those years, the worst-case squeeze from each side statistically increases with time, and the signal-to-noise ratio of the given sector statistically continues to decrease.

If the sector is read often, the decreased signal-to-noise ratio will eventually start to cause errors. In previous recovery procedures, the data would simply be recovered; the degrading factors would then continue over time, and the sector would get progressively worse until a hard error eventually resulted. It should be clear that when some given level of difficulty is encountered in reading a sector, something should be done about it. Some may consider that there...
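
To make the statistical argument above concrete, the short C program below computes the chance that a rare per-write event occurs at least once over many writes, 1 - (1 - p)^n. The per-write probability and rewrite rate are invented for illustration only; with these assumed numbers, the odds of at least one extreme squeeze pass 60% within the first year and approach certainty within a few years, even though the odds of both neighbors squeezing simultaneously remain negligible.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 1e-5;               /* assumed odds of an extreme squeeze
                                          on any one adjacent-track write */
        double writes_per_year = 1e5;  /* assumed rewrite rate of an active
                                          neighboring sector */

        for (int years = 1; years <= 8; years *= 2) {
            double n = writes_per_year * years;
            /* P(at least one extreme event in n writes) = 1 - (1 - p)^n */
            double p_once = 1.0 - pow(1.0 - p, n);
            printf("%d year(s): P(extreme squeeze at least once) = %.3f\n",
                   years, p_once);
        }
        return 0;
    }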