
Profile Guided File System Restructuring

IP.com Disclosure Number: IPCOM000117258D
Original Publication Date: 1996-Jan-01
Included in the Prior Art Database: 2005-Mar-31
Document File: 2 page(s) / 63K

Publishing Venue

IBM

Related People

Heisch, RR: AUTHOR

Abstract

The technique of reordering the instructions in a program based upon actual execution profile data is a well-known method of improving program performance by eliminating inefficient use of the hardware architecture. This disclosure presents the idea and methodology of extending the technique of profile-based program optimization to file system restructuring.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Profile Guided File System Restructuring

      The technique of reordering the instructions in a program based
upon actual execution profile data is a well-known method of
improving program performance by eliminating inefficient use of the
hardware architecture.  This disclosure presents the idea and
methodology of extending the technique of profile-based program
optimization to file system restructuring.
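
      The program-side technique can be pictured with a small sketch
(the names and the simple hot-first ordering below are illustrative
assumptions, not part of this disclosure): given per-function
execution counts gathered from a profiling run, the functions are
reordered so that the most frequently executed code is packed
together and shares pages and cache lines.

    # Minimal sketch of profile-guided code reordering (illustrative).
    # 'profile' maps each function name to its execution count from a
    # representative training run.
    def hot_first_order(profile):
        """Order function names so the hottest code is placed first."""
        return [name for name, _count in
                sorted(profile.items(), key=lambda kv: kv[1],
                       reverse=True)]

    profile = {"parse": 12000, "init": 3, "lookup": 45000, "report": 10}
    print(hot_first_order(profile))
    # -> ['lookup', 'parse', 'report', 'init']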

      One common approach to improving file system performance on an
existing file system is to simply back up all its files, delete and
recreate the file system, and then restore the backed-up files.  This
process defragments each file by collecting its individual blocks,
which typically become spread out across the disk over time, and
writing them back in a contiguous, sequential fashion.  However,
defragmenting every file in this way assumes that maximum performance
is attained when each file is stored separately and sequentially,
which is not always the case.  Consider the case where many files
are referenced
in some typical, recurring sequence but only a few blocks are
accessed in each file.  In this situation, it would be advantageous
to physically place the commonly accessed blocks as close together on
the disk as possible so as to minimize disk seek time, rotational
latency, head switch/settle time, etc.  These commonly accessed files
would be fragmented "by design" so as to maximize performance for
typical access patterns.  Disk allocation would not be based on
static assumptions about file reference patterns but would
dynamically determine access patterns and allocate disk blocks to
max...
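
      A minimal sketch of the block-placement idea follows, assuming a
recorded profile trace of (file, block) references is available; the
trace format, names, and the greedy first-reference ordering are
illustrative assumptions rather than the disclosure's actual
allocation policy.  Blocks are assigned consecutive disk addresses in
the order they are first referenced, so the typical recurring access
sequence touches one short contiguous run of the disk instead of
seeking between many separately defragmented files.

    # Illustrative sketch: assign disk addresses in first-reference
    # order taken from a profile trace, so blocks that are accessed
    # together in the typical sequence end up physically adjacent.
    def layout_from_trace(trace, start_address=0):
        """Map each (file, block) reference to a contiguous address."""
        placement = {}
        next_addr = start_address
        for file_id, block_no in trace:
            if (file_id, block_no) not in placement:
                placement[(file_id, block_no)] = next_addr
                next_addr += 1
        return placement

    # A recurring pattern touching a few blocks in each of three files.
    trace = [("indx", 0), ("data", 7), ("indx", 1), ("log", 0),
             ("indx", 0), ("data", 7), ("indx", 1), ("log", 0)]
    print(layout_from_trace(trace))
    # -> {('indx', 0): 0, ('data', 7): 1, ('indx', 1): 2, ('log', 0): 3}

With such a placement, seek distance over the recurring sequence is
bounded by the length of that contiguous run rather than by how far
apart the individual files happen to sit on the disk.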