
Smart Data Management - Autonomous Disk De-fragmentation Disclosure Number: IPCOM000238516D
Publication Date: 2014-Sep-02
Document File: 2 page(s) / 58K

Publishing Venue

The Prior Art Database


Magnetic hard disk drives offer a relatively cheap storage medium for large quantities of data. These electromechanical devices are often used in price-sensitive products and are likely to continue to hold a major share of future bulk storage despite emerging technologies such as solid-state storage devices (SSDs). By its nature, the technology is prone to file fragmentation, which reduces performance. This publication describes a novel method to move defragmentation activity away from the host system by embedding the capability within the mass storage device.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 53% of the total text.




The data stream sent to a Hard Disk Drive (HDD) often fragments across the platter(s) as data are written to the drive. De-fragmenting the HDD is generally performed by a host computer under the control of the operating system, tying up both host and system resources for a routine maintenance task. Some commercial systems run the defragmentation in the background or during periods of light loading, although this still consumes system resources. There is also a risk of data loss or corruption during defragmentation if the host OS performing the read/write operations crashes. The impact of these issues can be reduced if the defragmentation is handled by a dedicated process within the HDD.

    An individual HDD is usually blind to file-order sequences and fragmentation. This method enhances the hard disk by adding a processing function within the HDD electronics to analyse the Master File Table and make decisions on consolidating the fragments of individual files. It takes advantage of spare cycles within the HDD to reassemble fragmented files over an extended period without interrupting host activity.
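The analysis step might be modelled as below. The table layout (file id mapped to a list of (start LBA, length) extents) and the fragment threshold are simplifications invented for this sketch; they are not the on-disk NTFS format.

```python
# Illustrative model of the in-drive analysis step: scan a simplified
# MFT-like table and flag files whose allocation is split into many
# discontiguous runs. Layout and threshold are assumptions for the sketch.

def fragment_count(extents):
    """Number of discontiguous runs in a list of (start_lba, length) extents."""
    if not extents:
        return 0
    runs = 1
    for (s_prev, l_prev), (s, _l) in zip(extents, extents[1:]):
        if s != s_prev + l_prev:   # gap between consecutive extents
            runs += 1
    return runs

def files_to_consolidate(mft, threshold=4):
    """Return ids of files fragmented enough to be worth consolidating."""
    return [fid for fid, extents in mft.items()
            if fragment_count(extents) >= threshold]
```

A real implementation would run this scan incrementally in firmware during idle cycles rather than over the whole table at once.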


    Defragmenting under the control of the computer allows files to be ordered in a sequence to improve start-up times, for example. During a typical defragmentation operation the fragmented data are often read back into the computer from the HDD and subsequently stored on the HDD as one contiguous file or a contiguous string of files.
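The conventional host-side pass described above amounts to a read-back-and-rewrite loop. The sketch below models it over an in-memory disk image; the dict-based block device and the assumption of one pre-known free contiguous region are simplifications for illustration.

```python
def host_defragment(disk, mft, free_start):
    """Read each file's fragments back and rewrite them contiguously.

    `disk` maps LBA -> block, `mft` maps file id -> list of
    (start_lba, length) extents, and `free_start` is the first LBA of a
    free contiguous region (assumed large enough for this sketch).
    """
    next_lba = free_start
    for fid, extents in mft.items():
        # Read all fragments back (the host round-trip).
        data = [disk[lba] for start, length in extents
                          for lba in range(start, start + length)]
        # Rewrite as one contiguous run and update the table.
        start = next_lba
        for i, block in enumerate(data):
            disk[start + i] = block
        mft[fid] = [(start, len(data))]
        next_lba += len(data)
    return mft
```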

    An HDD formatted for NTFS operation contains a Master File Table (MFT), or equivalent, with information on the location of each fragment of each file on the disk. HDDs typically contain an onboard cache (e.g. 32MB or 64MB) to improve Input/Output (I/O) performance, processing logic to manage the transfer of data with a host system, and encoding and decoding logic for read/write operations.

    The modified HDD contains a processing element and additional storage to facilitate internal defragmentation. This processing element may be separate from the read/write logic to permit a degree of parallel operation and to avoid interfering with normal I/O routines. It performs continual analysis of the data fragments within the HDD. When the HDD decides to defragment a file, it may either initiate a defragmentation operation itself or append the operations to existing I/O data requests from the host computer as they are executed, in order to maximise transparency to the host system.
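The choice between the two dispatch strategies can be sketched as below. The command objects, the queue model, and the idle-depth threshold are hypothetical; a real drive would make this decision in firmware against its actual command queue.

```python
IDLE_THRESHOLD = 2  # assumed: queue depth below which the drive is "idle"

def dispatch_defrag_moves(host_queue, defrag_moves):
    """Schedule background defrag moves around pending host I/O.

    If the host queue is shallow, the drive initiates all pending moves
    itself; otherwise it appends at most one move per host request so
    the extra work stays largely transparent to the host.
    """
    schedule = []
    moves = list(defrag_moves)
    if len(host_queue) < IDLE_THRESHOLD:
        # Drive-initiated: run all pending moves in the idle window.
        schedule.extend(moves)
        moves.clear()
    for req in host_queue:
        schedule.append(req)
        if moves:
            schedule.append(moves.pop(0))  # piggyback one move per request
    return schedule
```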

    An additional storage element, such as a block of solid state memory, then acts as a high-speed, non-volatile buffer during the defragmentation process; it is independent of the onboard I/O cache, so any performance degradation is minimal.
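The non-volatile buffer permits a copy-then-commit style of fragment move: the data survives in the buffer (and at its old location) until the new copy is in place. A minimal sketch, with the buffer, the dict-based disk, and the commit convention all invented for illustration:

```python
def move_fragment(disk, nv_buffer, src, length, dst):
    """Relocate `length` blocks from `src` to `dst` via a non-volatile buffer.

    The data is staged in `nv_buffer` first, so a power loss mid-move
    leaves either the old copy or the staged copy intact. The returned
    extent is the commit point: the caller updates the file table only
    after the destination write completes, then reclaims the old blocks.
    """
    # Stage: copy source blocks into the non-volatile buffer.
    nv_buffer[:] = [disk[src + i] for i in range(length)]
    # Write: place the staged copy at the destination.
    for i, block in enumerate(nv_buffer):
        disk[dst + i] = block
    # Commit: clear the buffer and report the new extent to the caller.
    nv_buffer.clear()
    return (dst, length)
```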

The two scenarios mentioned previously are described in outline below.

1. Autonomous Defragmentation
This process uses...