Data Shuffle for Hard Disk Drive Protection

IP.com Disclosure Number: IPCOM000032457D
Original Publication Date: 2004-Nov-05
Included in the Prior Art Database: 2004-Nov-05
Document File: 1 page(s) / 52K

Publishing Venue

IBM

Abstract

There is a known quality issue with some HDDs where lubricant from the disk bearings is deposited onto the disk surface, tending to build up in heavily used areas and causing loss of data or head damage. This disclosure describes a method of data manipulation that increases drive life and reduces the potential for loss of data.


Hard Disk Drive (HDD) Data Shuffling

This disclosure describes control software that manipulates the physical location of data recorded on HDD surfaces and records the locations of damaged segments. When an HDD has an operating system installed, e.g. OS/2 or Windows*, it resides in one area of the drive. Even though the heads "fly" over the disk surface, it is known that contamination and vibration can result in progressive material build-up or wear that damages the HDD recording media and heads, resulting in a loss of data, particularly on heavily used tracks. The disclosure is for software that, either by default setting or by user choice, shuffles data to different physical locations on the disk surface at regular intervals, preventing the heads from continuously visiting the same disk tracks and segments.
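
    Such a shuffle implies an indirection table that translates the logical addresses seen by the operating system into the current physical locations, together with a record of segments found to be damaged. The following sketch, in Python and using hypothetical names (SegmentMap, resolve, remap, mark_bad) not taken from the disclosure, illustrates one minimal form such a table could take; it is an illustration of the idea, not the disclosed implementation.

    class SegmentMap:
        """Maps logical segment numbers to their current physical segments
        and records physical segments found to be damaged."""

        def __init__(self, total_segments: int):
            # Initially the logical and physical layouts coincide.
            self.table = {logical: logical for logical in range(total_segments)}
            self.bad_segments = set()   # physical segments noted as damaged

        def resolve(self, logical: int) -> int:
            """Translate a logical segment to its current physical location."""
            return self.table[logical]

        def remap(self, logical: int, new_physical: int) -> None:
            """Record that a logical segment now resides at a new physical location."""
            self.table[logical] = new_physical

        def mark_bad(self, physical: int) -> None:
            """Record a damaged physical segment so it is not reused."""
            self.bad_segments.add(physical)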

    This is achieved by installing drive software programmed to read packets of information from one area of the disk surface and write them to new areas at regular intervals, noting any errors and their locations and updating the addresses accordingly. The software controls the size of the packets, the scheduling of the transfers, the re-addressing, and the handling of any errors encountered. It is used in conjunction with disk elevator scheduling and defragmentation to maintain minimal read times on an ongoing basis. The time taken to perform the shuffle is minimal and is invisible to the user.
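
    The sketch below outlines one such scheduled shuffle pass, again in Python and building on the hypothetical SegmentMap table sketched above; read_segment, write_segment, the free-segment list, and the packet size are all assumptions introduced for illustration rather than details taken from the disclosure.

    import random

    PACKET_SEGMENTS = 8   # assumed packet size, in segments

    def shuffle_pass(seg_map, read_segment, write_segment, free_segments):
        """Relocate each logical packet to fresh physical segments,
        noting any errors and updating the address table."""
        logicals = sorted(seg_map.table)
        for start in range(0, len(logicals), PACKET_SEGMENTS):
            for logical in logicals[start:start + PACKET_SEGMENTS]:
                old_physical = seg_map.resolve(logical)
                try:
                    data = read_segment(old_physical)
                except IOError:
                    # Read failure: note the damaged segment and move on.
                    seg_map.mark_bad(old_physical)
                    continue
                # Choose a new physical location, avoiding known-bad segments.
                candidates = [s for s in free_segments if s not in seg_map.bad_segments]
                if not candidates:
                    return   # nowhere left to shuffle to
                new_physical = random.choice(candidates)
                try:
                    write_segment(new_physical, data)
                except IOError:
                    # Write failure: note the damaged target and leave data in place.
                    seg_map.mark_bad(new_physical)
                    continue
                # Update the address table and recycle the old segment.
                seg_map.remap(logical, new_physical)
                free_segments.remove(new_physical)
                free_segments.append(old_physical)

    In this sketch the pass is intended to be invoked on the regular schedule described above; error handling simply records damaged segments in the table so that subsequent passes avoid them.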

The software can perform this...