
Publication Date: 2016-Oct-06

Publishing Venue

The Prior Art Database


Typically, a data center or server farm contains thousands of servers that require a large amount of power for operation and cooling, leading to high operating costs and hidden costs associated with the carbon footprint. Data center operators therefore face the challenging problem of running energy-efficient data centers. We seek to tackle this shortcoming by proposing a systematic approach to maximizing a green data center's profit, i.e., revenue minus cost. In this regard, we explicitly take into account practical service-level agreements (SLAs) that currently exist between data centers and their customers. Our model also incorporates other factors, such as the availability of local renewable power generation at data centers and the stochastic nature of data center workloads. Furthermore, we propose a novel optimization-based profit maximization strategy for data centers using two types of energy distribution. We show that the formulated optimization problems in both cases are convex programs; therefore, they are tractable and appropriate for practical implementation. Using experimental data and computer simulations, we assess the performance of the proposed optimization-based profit maximization strategy and show that it significantly outperforms two comparable energy and performance management algorithms recently proposed in the literature.
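The convexity claim above means the profit objective can be maximized efficiently. As a minimal sketch (the functional forms, parameter names, and constants below are illustrative assumptions, not the publication's actual model), consider a concave SLA revenue minus a convex energy cost as a function of service rate, maximized by a simple ternary search:

```python
import math

# Hypothetical sketch: profit(mu) = revenue(mu) - energy_cost(mu), where
# revenue is concave in the service rate mu (diminishing SLA reward) and
# energy cost grows convexly with mu. The difference is concave, so a
# one-dimensional ternary search finds the global maximizer.

def revenue(mu, price=5.0):
    # Concave revenue: diminishing returns as service rate grows.
    return price * math.log(1.0 + mu)

def energy_cost(mu, alpha=0.1):
    # Convex energy cost: dynamic power rises superlinearly with speed.
    return alpha * mu ** 2

def profit(mu):
    return revenue(mu) - energy_cost(mu)

def maximize_profit(lo=0.0, hi=50.0, iters=100):
    # Ternary search over a concave objective on [lo, hi].
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if profit(m1) < profit(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2
```

In a real formulation the decision variables would include per-server service rates and energy sourcing, but the same tractability argument applies: a convex program has no spurious local optima.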




The workload experienced by the servers varies throughout the day. If the data center's servers are on all the time, utilization is dramatically low during less active workload periods, because data center capacity is usually provisioned for peak rather than average load. Our idea is to maximize profit by reducing cost and increasing revenue based on parameters such as the number of incoming jobs, the number of outgoing jobs, the mode of energy (renewable vs. non-renewable resources), the operating cost of the data center, penalties, rewards, revenue, and profit.
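The profit relation among these parameters can be sketched as follows (the parameter names, energy prices, and penalty structure here are illustrative assumptions standing in for the factors listed above):

```python
# Assumed per-kWh prices; local renewable generation is modeled as cheaper.
RENEWABLE_PRICE = 0.05      # $/kWh (assumption)
NON_RENEWABLE_PRICE = 0.12  # $/kWh (assumption)

def data_center_profit(jobs_served, price_per_job,
                       renewable_kwh, non_renewable_kwh,
                       sla_violations, penalty_per_violation,
                       reward=0.0):
    """Profit = revenue - energy cost - SLA penalties + rewards."""
    revenue = jobs_served * price_per_job
    energy_cost = (renewable_kwh * RENEWABLE_PRICE
                   + non_renewable_kwh * NON_RENEWABLE_PRICE)
    penalty = sla_violations * penalty_per_violation
    return revenue - energy_cost - penalty + reward
```

For example, serving 1,000 jobs at $0.10 each using 200 renewable kWh and 100 non-renewable kWh, with 2 SLA violations penalized at $5 each, yields a profit of $68.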

Shutting servers down, however, raises several concerns. First, burst arrivals may experience latency or be unable to access services. Second, there is a power-consumption overhead caused by awakening servers from a power-off state too frequently. Third, in the worst case, shutting down servers may sacrifice quality of service (QoS) and thereby violate a service-level agreement (SLA). The SLA is an agreement in which QoS is a critical part of the negotiation; a penalty is incurred when a cloud provider violates the performance guarantees in an SLA contract. In short, reducing power consumption in a cloud system without violating the SLA constraint or causing additional power consumption is important. To avoid switching too often, a control approach called the N policy has been extensively adopted in a variety of fields, such as computer systems, communication networks, and wireless multimedia. A queuing system under the N policy turns a server on only when the number of items in the queue is greater than or equal to a predetermined threshold N, instead of activating a powered-off server immediately upon an item's arrival.
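The N policy described above can be sketched in a few lines (the class name and threshold value are illustrative; a real model would also track service times and wake-up energy):

```python
from collections import deque

class NPolicyServer:
    """Server that wakes only when the queue reaches the N threshold,
    avoiding a costly wake-up on every individual arrival."""

    def __init__(self, n_threshold):
        self.n = n_threshold
        self.queue = deque()
        self.on = False
        self.wakeups = 0  # count costly off->on transitions

    def arrive(self, job):
        self.queue.append(job)
        # Wake the server only when the backlog reaches the threshold.
        if not self.on and len(self.queue) >= self.n:
            self.on = True
            self.wakeups += 1

    def serve(self):
        # Serve one job; power off again once the queue drains.
        if self.on and self.queue:
            job = self.queue.popleft()
            if not self.queue:
                self.on = False
            return job
        return None
```

With N = 3, two arrivals accumulate while the server sleeps; the third arrival triggers a single wake-up, whereas an immediate-activation policy (N = 1) would have woken the server on the first arrival of every busy period.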

However, performance may degrade when a server stays in a power-saving mode too long under a larger controlled value of N. The main contributions are summarized as follows. A power-saving policy is introduced with transitions among the Busy, Idle, Sleep, DVFS-gradient, and DVFS-maximum modes. The main objective is to mitigate or eliminate unnecessary idle power consumption. A Mode* scheduling policy is proposed to optimize decision-making in service rates and mode switching within a response-time guarantee.
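The five modes named above can be modeled as a small state machine. The mode names come from the text, but the power figures and the allowed-transition table below are illustrative assumptions, not the publication's actual parameters:

```python
# Assumed per-mode power draw in watts (illustrative only).
POWER_WATTS = {
    "BUSY": 200, "IDLE": 100, "SLEEP": 10,
    "DVFS_GRADIENT": 140, "DVFS_MAX": 250,
}

# Assumed legal transitions; e.g., a sleeping server must pass
# through Idle before becoming Busy.
ALLOWED = {
    "IDLE": {"BUSY", "SLEEP", "DVFS_GRADIENT"},
    "BUSY": {"IDLE", "DVFS_GRADIENT", "DVFS_MAX"},
    "SLEEP": {"IDLE"},
    "DVFS_GRADIENT": {"BUSY", "DVFS_MAX", "IDLE"},
    "DVFS_MAX": {"DVFS_GRADIENT", "BUSY"},
}

class ServerMode:
    def __init__(self):
        self.mode = "IDLE"

    def switch(self, target):
        # Reject transitions the policy does not permit.
        if target not in ALLOWED[self.mode]:
            raise ValueError(f"illegal transition {self.mode} -> {target}")
        self.mode = target

    def power(self):
        return POWER_WATTS[self.mode]
```

A scheduling policy in this style would choose the cheapest mode whose service rate still meets the response-time guarantee, dropping to Sleep only when the expected idle period outweighs the wake-up overhead.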

Here, we propose a framework that maximizes the data vendor's profit by assigning complex jobs (by severity and criticality) to different modes of operation that use renewable and/or non-renewable energy.
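One simple assignment rule of this kind can be sketched as follows; the 0-to-1 severity/criticality scores, the thresholds, and the mode names are assumptions made for illustration:

```python
def assign_job(severity, criticality, renewable_capacity):
    """Pick an energy mode for a job given its severity and criticality
    (both assumed scored in [0, 1]) and the remaining renewable capacity.
    Thresholds and mode names are illustrative assumptions."""
    if criticality >= 0.8:
        # Highly critical jobs get guaranteed, non-renewable-backed capacity.
        return "non_renewable"
    if renewable_capacity > 0 and severity < 0.5:
        # Tolerant jobs can ride on intermittent renewable supply.
        return "renewable"
    # Everything else falls back to a mixed energy pool.
    return "mixed"
```

In the full framework this decision would be an output of the optimization rather than a fixed rule, but the example shows the shape of the mapping: job attributes in, energy mode out.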

Response time is the amount of time from when a request is submitted until the first response is produced, not the time to complete the output (in a time-sharing environment). We therefore leverage a framework that runs these jobs on both scalable and non-scalable modes of operation in a manner that minimizes idle time, energy consumption, and so on. The cloud computing paradigm promises a cost-effective solution for running business applications through the use of virtualization technologies, highly scalable distribu...