Storage Resource Management Based On I/O per Gigabyte per Second

IP.com Disclosure Number: IPCOM000011689D
Original Publication Date: 2003-Mar-11
Included in the Prior Art Database: 2003-Mar-11
Document File: 3 page(s) / 55K

Publishing Venue

IBM

Abstract

This article discusses a Storage Resource Management solution for systems gated by Input/Output (I/O) per Gigabyte per second capability. It discusses ways I/O per Gigabyte per second can be measured and some optimization policies.

  The ratio of "arm speed" to "data capacity" in a disk drive has been dropping for years and continues to drop. This metric is often referred to as Input/Output (I/O) per Gigabyte per second capability. Applications and operating systems tend to access data at a characteristic rate, and that demand is not dropping. There are now many instances where, even with aggressive memory caching, the arm speed is inadequate and data space must go unused.
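
As a rough illustration, the metric is simply a drive's sustainable I/O rate divided by its capacity. The sketch below (Python, with hypothetical drive figures that are not taken from this disclosure) shows how growing capacity erodes the metric even as arm speed improves:

    def io_per_gb_per_sec(max_iops: float, capacity_gb: float) -> float:
        """I/O per Gigabyte per second capability: arm speed over capacity."""
        return max_iops / capacity_gb

    # Hypothetical early-1990s drive: ~60 IOPS on a 2 GB drive.
    print(io_per_gb_per_sec(60, 2))      # 30.0 I/O per GB per second
    # Hypothetical 2003-era drive: ~150 IOPS on a 146 GB drive.
    print(io_per_gb_per_sec(150, 146))   # ~1.0 I/O per GB per second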

    This work builds on old Load Channel Balance disclosures filed on DFSMShsm some 20 years ago. Those disclosures described similar policies that balanced workload across Channel Path IDs, favoring volumes, controllers, or paths based on current load levels. The issue of I/O per Gigabyte per second did not exist then, but the idea that data must be balanced certainly did. This disclosure simply applies the same principle to another metric.

    This proposal looks at solving the problem by managing the I/O per Gigabyte per second demand of data through data placement, and by handling exceptions where the initial placement proves inadequate. With this approach, customers can use the disk space they have more efficiently and with better performance, and they may be able to reduce or eliminate tape drives by using spare disk space for archival or backup storage.

There are really three parts to this problem:
1. Measuring I/O per Gigabyte per second, both disk capability and usage: This sounds straightforward, but there are problems. The simple way is to measure by logical unit (LUN). However, if more than one host uses data on a LUN, this does not work; more likely, the data needs to be measured on LUN/host pairs. The metric can be obtained from network management (SNMP) data, such as the new SCSI MIB, or from the new Storage Networking Industry Association SMI-S interface, to get the storage perspective.
a. By LUN: This is fairly straightforward and available directly using standard data.
b. By LUN and host usage: A host-side filter driver might also gather this information from the host perspective, which ultimately is the only perspective or context in which it should be gathered, because that is the context of the application that has the data I/O requirements. (The host view cannot account for caching, so it has problems too; it takes a combination of the host view and knowledge of the cache hit ratio to arrive at something useful. A sketch of that combination follows this list.)
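
A minimal sketch of combining the two views, assuming a simple discount model in which only cache misses reach the disk arm (the function name, parameters, and model below are illustrative, not from this disclosure):

    def effective_demand_io_per_gb(host_iops: float,
                                   cache_hit_ratio: float,
                                   allocated_gb: float) -> float:
        """Back-end (arm-level) I/O per Gigabyte per second implied by one
        host's observed I/O rate against its share of a LUN.

        host_iops       -- I/O rate measured by the host filter driver
        cache_hit_ratio -- fraction of those I/Os satisfied from controller
                           cache (0.0 to 1.0), taken from the storage view
        allocated_gb    -- capacity this host actually uses on the LUN
        """
        # Assumed discount model: only cache misses reach the disk arm.
        backend_iops = host_iops * (1.0 - cache_hit_ratio)
        return backend_iops / allocated_gb

    # Example: 400 host IOPS, 70% cache hits, 120 GB in use
    # -> 120 back-end IOPS / 120 GB = 1.0 I/O per GB per second.
    print(effective_demand_io_per_gb(400.0, 0.70, 120.0))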

  Storage resource management (SRM) monitoring can provide the means by measuring this on a periodic basis (e.g., once every 60 seconds), generating the data from the file system filter or the database. We also want to make sure that reorganization of the file system or database is not required. If a simple reorganization would provide the I/O needed (e.g., head movement would be reduced), then we need to reorganize the underlying s...
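
A minimal sketch of such periodic SRM sampling, assuming a cumulative per-LUN I/O counter is available (the sampling stub, threshold, and exception handler below are hypothetical placeholders; a real implementation would read the SCSI MIB via SNMP or query SMI-S, as described above):

    import time

    SAMPLE_INTERVAL_SEC = 60     # the disclosure's example interval
    CAPABILITY_THRESHOLD = 1.0   # hypothetical drive capability, I/O per GB per sec

    def sample_io_count(lun: str) -> int:
        """Cumulative I/O counter for a LUN, e.g. read from the SCSI MIB
        over SNMP or via an SMI-S query; stubbed out here."""
        raise NotImplementedError

    def flag_exception(lun: str, demand: float) -> None:
        """Placeholder: mark the LUN as a candidate for data movement."""
        print(f"{lun}: {demand:.2f} I/O per GB per second exceeds capability")

    def monitor(luns: dict) -> None:
        """Sample each LUN's counter every interval and compute its I/O per
        Gigabyte per second demand. `luns` maps LUN id -> capacity in GB."""
        last = {lun: sample_io_count(lun) for lun in luns}
        while True:
            time.sleep(SAMPLE_INTERVAL_SEC)
            for lun, capacity_gb in luns.items():
                count = sample_io_count(lun)
                iops = (count - last[lun]) / SAMPLE_INTERVAL_SEC
                last[lun] = count
                demand = iops / capacity_gb
                if demand > CAPABILITY_THRESHOLD:
                    flag_exception(lun, demand)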