
Method for Autonomic Efficiency Optimization of Virtualized Storage Systems
Disclosure Number: IPCOM000200352D
Publication Date: 2010-Oct-07
Document File: 3 page(s) / 66K

Publishing Venue

The Prior Art Database


Disclosed is a method for the autonomic efficiency optimization of virtualized storage systems. The method removes the need to manually administer storage and assign it to servers and applications. It also addresses performance and capacity problems caused by changing workload patterns by applying pattern detection algorithms and virtualization techniques. The method allows much finer tuning, down to the storage block level, than is possible today, and constantly monitors changes in workload patterns, leading to autonomic optimization of the storage subsystem.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 39% of the total text.


Method for Autonomic Efficiency Optimization of Virtualized Storage Systems

Only 10 years ago, every application administrator hand-tuned storage subsystems directly attached to servers according to their application workload requirements. Storage consolidation (SAN, NAS, ...) and virtualization in recent years increased efficiency by most measures but also distanced application administrators from fine-granular tuning of storage for their workloads. Storage administrators, on the other hand, struggle to manage many different requirements such as cost, performance, space, cooling, and networking. Today, storage server utilization is not evenly distributed, with bottlenecks in some areas and idle capacity in others. Storage administrators with no application knowledge struggle to optimize storage subsystems and are only contacted by application administrators when it is too late and performance problems are already presenting themselves.

What is required today is a method for automatically (autonomically) optimizing the storage subsystem configuration for any dynamic workload from any number of servers and applications. Rather than ad-hoc, after-the-fact problem resolution, the optimization would occur constantly and transparently to the servers and applications. The method would analyze input/output workload patterns of servers and applications and optimize the underlying storage subsystem configuration through constant reconfiguration. This would increase efficiency in heterogeneous storage networks and harmonize storage utilization across the available storage resources.
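The constant, transparent optimization described above amounts to a monitor-analyze-reconfigure loop. The following is a minimal Python sketch of one cycle of such a loop; all function names, the 20% threshold, and the stubbed statistics are illustrative assumptions, not part of the disclosure:

```python
def collect_io_stats(volumes):
    # Gather per-volume I/O counters; stubbed with static data here.
    # A real monitor would read these from the storage subsystem.
    return {v: {"iops": 100, "read_pct": 0.7} for v in volumes}

def detect_pattern_change(history, current, threshold=0.2):
    # Flag volumes whose IOPS shifted by more than `threshold` (20%)
    # relative to the previous observation window.
    changed = []
    for vol, stats in current.items():
        prev = history.get(vol)
        if prev and abs(stats["iops"] - prev["iops"]) / prev["iops"] > threshold:
            changed.append(vol)
    return changed

def optimization_cycle(volumes, history):
    # One pass of the constant monitor -> analyze -> reconfigure loop.
    current = collect_io_stats(volumes)
    to_rebalance = detect_pattern_change(history, current)
    # A real system would now transparently migrate data of the
    # flagged volumes to better-fitting storage resources.
    return current, to_rebalance
```

Running such a cycle on a schedule, and feeding each cycle's measurements back in as the next cycle's history, gives the after-the-fact-free, continuous optimization the method calls for.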

Efficiency would be expressed in a function which would include parameters like: fixed costs, such as equipment depreciation, floor space, and performance and capacity characteristics, and variable costs, such as energy for powering and cooling, maintenance costs, labor, and maybe even CO2 emissions.

The goal is to use an efficiency function for any type of available equipment which balances performance and capacity against cost. The efficiency optimization would then always utilize only the storage systems that fit the application's performance and capacity demands at the lowest cost. It would even be possible to use the method to predict which storage should be added and which should be retired to reduce total cost of ownership. This would remove today's need for conservative storage configuration, which utilizes expensive storage systems very poorly and therefore wastes money and other resources.
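An efficiency function of the kind described could be sketched as follows. This is a hypothetical illustration under assumed parameter names (monthly cost components, IOPS, capacity); the disclosure does not prescribe a concrete formula:

```python
def monthly_cost(system):
    # Fixed costs (depreciation, floor space) plus variable costs
    # (energy, cooling, maintenance, labor), per the parameter list above.
    fixed = system["depreciation"] + system["floor_space"]
    variable = (system["energy"] + system["cooling"]
                + system["maintenance"] + system["labor"])
    return fixed + variable

def efficiency(system):
    # Delivered performance and capacity per unit of monthly cost
    # (higher is better); the weighting here is purely illustrative.
    return (system["iops"] + system["capacity_tb"]) / monthly_cost(system)

def best_fit(systems, required_iops, required_tb):
    # Pick the cheapest system that still meets the application's
    # performance and capacity demands; None if nothing qualifies.
    candidates = [s for s in systems
                  if s["iops"] >= required_iops
                  and s["capacity_tb"] >= required_tb]
    return min(candidates, key=monthly_cost) if candidates else None
```

With such a function, the optimizer can rank every available storage system for a given workload and place data only on the cheapest system that satisfies its demands, which is exactly the balance of performance and capacity against cost the text describes.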

The drawback is that initially the cost parameters might not be entered correctly, as this knowledge cannot always be easily obtained. Still, the method would be able to balance dynamic workloads much more finely and quickly than any human.

The core idea is to use pattern detection algorithms on detailed, fine-granular storage access patterns to understand how application input/output workload changes over time. The available storage subsystems have certain characteristic...
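The fine-granular, block-level pattern detection the core idea refers to could be illustrated as follows. This is a minimal sketch under assumed names and thresholds, not the disclosure's actual algorithm: it compares per-block access counts between two observation windows to find blocks whose access frequency has grown, the kind of change an autonomic optimizer would react to by relocating those blocks:

```python
from collections import Counter

def hot_blocks(window_old, window_new, growth=2.0):
    # Count accesses per block number in each observation window and
    # return the blocks whose access count at least doubled (growth=2.0).
    # Blocks unseen in the old window are treated as having count 1 so
    # that newly active blocks can also be flagged.
    old = Counter(window_old)
    new = Counter(window_new)
    return sorted(b for b in new
                  if new[b] >= growth * max(old.get(b, 0), 1))
```

The flagged block numbers would then feed the virtualization layer's remapping decisions, moving hot blocks to storage that matches their new access characteristics.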