
Method and apparatus to determine Storage usage and Backend Mapping by applications on Software Defined Storage Systems.

IP.com Disclosure Number: IPCOM000240938D
Publication Date: 2015-Mar-12

Publishing Venue

The IP.com Prior Art Database

Abstract

Mapping of storage resources from the application down to the Software Defined Storage system to gather historical usage patterns, which can then be used for better and more efficient storage provisioning to save costs, and for billing and chargeback.




Disclosed is a system that addresses a problem arising with the advent of Software Defined Storage in the cloud paradigm: because resources (storage, memory) are available in abundance, administrators have a free hand at allocating them to workloads. However, this practice results in wastage of resources that could otherwise be better utilized.

Figure 1: Software Defined Storage

With the implementation of the SDS (Software Defined Storage) paradigm, it becomes even more challenging for administrators to figure out which storage device is provisioned from which storage array. As shown in the figure above, all storage entities are viewed collectively by the Data Plane, which then provisions storage to the Virtual Machines (VMs) as requested. This blurs the distinction between the storage types underneath the Data Plane layer. This article enables administrators to trace a storage device all the way from the Virtual Machine to the Storage Array and the hard disks in it, and vice versa. This enables the administrator to plan storage usage better and to know the usage at any point in time.
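The following is a minimal sketch, not part of the original disclosure, of how such a VM-to-array trace could be represented in code. All class and field names (PhysicalDisk, BackendVolume, VirtualDisk, trace_to_backend) are hypothetical and stand in for whatever inventory the data plane actually exposes.

# Hypothetical data model: trace a virtual disk from its VM, through the SDS
# data plane, down to the backing storage array and its physical disks.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalDisk:
    disk_id: str
    array_id: str                     # storage array that houses this disk

@dataclass
class BackendVolume:
    volume_id: str
    array_id: str
    disks: List[PhysicalDisk] = field(default_factory=list)

@dataclass
class VirtualDisk:
    vdisk_id: str
    vm_id: str
    backend_volume: BackendVolume     # resolved by the data plane at provisioning time

def trace_to_backend(vdisk: VirtualDisk) -> List[str]:
    """Return the chain VM -> virtual disk -> array -> physical disks."""
    vol = vdisk.backend_volume
    return [vdisk.vm_id, vdisk.vdisk_id, vol.array_id] + [d.disk_id for d in vol.disks]

The same structure can be walked in reverse (disk to VM) by keeping an index from disk_id back to the virtual disks that map onto it, which is what makes the "vice versa" lookup possible.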

This free run over the cloud resources happens because the administrator is not aware of the pattern of resource usage, and hence the resource allocation is arbitrary. However, this could change, and the resource allocation could be managed much better, if there were historical resource usage data that could be used as a reference point at the time of allocation. The administrators would then have a fair idea of how much resource is required and how much should be allocated for optimal performance. At present, resource usage is available only per host and per virtual machine; the administrator has no idea which application uses which kind of storage, from which vendor, and what the performance of each kind of storage (SAN, NAS, DAS, Flash, etc.) from each vendor is. If the storage usage chain is further extrapolated, the storage usage for each application can also be obtained (as explained below), and the administrator would have a fair knowledge of the resources to be allocated to each workload.
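As an illustrative sketch only (names and sample values are hypothetical), per-application usage can be derived by tagging each VM with the application it serves and rolling raw per-VM samples up by application and storage type:

# Aggregate per-VM usage samples into per-application totals, assuming each VM
# is tagged with the application it serves.
from collections import defaultdict

def usage_per_application(samples, vm_to_app):
    """samples: iterable of (vm_id, storage_type, bytes_used);
       vm_to_app: dict mapping vm_id -> application name."""
    totals = defaultdict(lambda: defaultdict(int))
    for vm_id, storage_type, bytes_used in samples:
        app = vm_to_app.get(vm_id, "unassigned")
        totals[app][storage_type] += bytes_used
    return {app: dict(by_type) for app, by_type in totals.items()}

# Example: two VMs of the same application drawing from SAN and Flash tiers.
print(usage_per_application(
    [("vm1", "SAN", 500), ("vm2", "Flash", 200), ("vm1", "SAN", 300)],
    {"vm1": "billing-app", "vm2": "billing-app"}))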

This data, collected over a period of time, would serve as a historical reference and be vital in allocating resources to workloads, thereby reducing wastage, or in re-provisioning resources to the same workload in case of overuse.
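One way such a historical reference could feed allocation decisions is sketched below. The peak-plus-headroom and overuse-threshold policies are illustrative assumptions, not something prescribed by the disclosure.

# Use historical usage samples (byte counts over time) to suggest an allocation
# size and to flag workloads that are outgrowing what they were given.
import statistics

def recommend_allocation(history_bytes, headroom=0.2):
    """Suggest a provisioning size: peak observed usage plus a headroom fraction."""
    peak = max(history_bytes)
    return int(peak * (1 + headroom))

def is_overused(history_bytes, allocated_bytes, threshold=0.9):
    """Flag a workload for re-provisioning when its recent average usage
    exceeds a fraction of what was allocated."""
    recent_avg = statistics.mean(history_bytes[-10:])
    return recent_avg > threshold * allocated_bytes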

In a typical SAN setup, storage is centrally allocated from a Storage Array to the hosts via a fabric; thereafter the storage is distributed to the Virtual Servers (VSs) on the host. It is common practice to allocate different Virtual Servers to different applications. As of today, storage usage is available to the administrator only per Host or Virtual Server. However, it would be beneficial for the administrator to k...