
Method and Apparatus for Autonomic Regulation of Database Resources

IP.com Disclosure Number: IPCOM000097855D
Original Publication Date: 2005-Mar-07
Included in the Prior Art Database: 2005-Mar-07
Document File: 2 page(s) / 42K

Publishing Venue

IBM

Abstract

A basic problem with many application server installations today is that the hardware must be sized to meet the maximum foreseen computational load. In a common topology where several application servers share a single database server, the database server must be sized to meet the maximum possible load on that server: if all of the application servers make concurrent requests on the database server, the database server must be large enough to handle those requests. In practice, this peak load rarely occurs. However, because database licenses are granted on a per-processor basis, customers often incur large costs for database capacity that mostly remains unused. This invention proposes a mechanism by which tuning parameters that directly reduce the load on a database can be adjusted in the application server in response to the detection of a high number of requests to the database. By adjusting the way in which the application servers make requests on the database, the peak load on the database can be reduced, which in turn reduces the size of the hardware needed to run the database and the number of licenses needed for the database software running on that hardware. The core idea of this invention is the ability to dynamically optimize an application environment from a holistic perspective, i.e., from the edge servers and request routers to the back-end databases. A preferred embodiment of this invention would utilize autonomic technologies and infrastructure, such as managed resources, policies, and autonomic managers, in a closed MAPE (Monitor, Analyze, Plan, Execute) loop. Each component involved in this invention would be a Managed Resource that conforms to the APIs specified by the autonomic computing specifications. The managed resources are assigned to an Autonomic Manager, which is responsible for correlating the information they produce, requesting any additional information required to determine the proper symptom, and executing the appropriate action.
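As a rough illustration of such a closed MAPE loop, the following Java sketch shows an autonomic manager polling two hypothetical managed resources and applying a tuning action. The ManagedResource interface, metric names, and action name are assumptions made for illustration; they are not the APIs defined by the autonomic computing specifications.

// Illustrative sketch of a closed MAPE (Monitor, Analyze, Plan, Execute) loop.
// ManagedResource, the metric names, and the action name are hypothetical.
interface ManagedResource {
    double readMetric(String name);   // sensor interface (Monitor)
    void applyTuning(String action);  // effector interface (Execute)
}

final class AutonomicManagerSketch {
    private final ManagedResource appServer;  // e.g. the application server tier
    private final ManagedResource dbHost;     // e.g. the database server's operating system

    AutonomicManagerSketch(ManagedResource appServer, ManagedResource dbHost) {
        this.appServer = appServer;
        this.dbHost = dbHost;
    }

    void runOnce() {
        // Monitor: gather metrics from the assigned managed resources.
        double responseTimeMs = appServer.readMetric("responseTimeMs");
        double dbCpuPercent = dbHost.readMetric("cpuPercent");

        // Analyze: correlate the metrics into a symptom.
        boolean databaseOverloaded = responseTimeMs > 2000 && dbCpuPercent > 85;

        // Plan + Execute: choose a load-reducing action and apply it.
        if (databaseOverloaded) {
            appServer.applyTuning("ENABLE_OPTION_A_CACHING");
        }
    }
}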




In order to meet quality of service (QoS) requirements, the customer must plan capacity accurately. Because there is currently no easy way to dynamically tune an environment of both application servers and databases (such as WAS and DB2), plans are based on maximum usage figures. This leads to underutilization of resources, in particular database servers, because the database server runs at peak load for only a small fraction of the time. To scale the applications to meet the QoS parameters, the customer has simply purchased new blade servers. Further, it is the end-to-end QoS, not just database utilization, that is important.

Assumption

The customer is using a product like WAS XD (Extended Deployment), which features a programmable router (called the On-Demand Router, or ODR) whose sensors can monitor attributes (such as response time) of incoming HTTP and other requests and respond appropriately. Events raised in WAS XD are captured by an Autonomic Manager (AM) that is part of its Deployment Manager process.
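To make this assumption concrete, the sketch below models the sensor side: a router component that reports the duration of each completed request to a listener. The class and method names are hypothetical and do not correspond to the actual WAS XD or ODR APIs.

// Hypothetical sketch of a router sensor reporting per-request durations to
// its autonomic manager; not the actual WAS XD / On-Demand Router APIs.
import java.util.function.Consumer;

record RequestDurationEvent(String uri, long durationMs) {}

final class RouterSensor {
    private final Consumer<RequestDurationEvent> autonomicManagerListener;

    RouterSensor(Consumer<RequestDurationEvent> autonomicManagerListener) {
        this.autonomicManagerListener = autonomicManagerListener;
    }

    // Called by the request path after the response has been written.
    void onRequestCompleted(String uri, long startMs, long endMs) {
        autonomicManagerListener.accept(new RequestDurationEvent(uri, endMs - startMs));
    }
}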

The customer has a QoS policy that states "Response time should be less than 2 seconds and database utilization should remain between 75 and 85%." Its business value is that it codifies the relationship between acceptable customer service (response time) and efficient resource management (database utilization).
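One possible in-memory encoding of this policy is sketched below; the class name and constants simply restate the two conditions quoted above and are not part of any product API.

// Hypothetical encoding of the QoS policy quoted above: response time below
// 2 seconds, database utilization held between 75% and 85%.
final class QosPolicy {
    static final long MAX_RESPONSE_TIME_MS = 2000;
    static final double DB_UTILIZATION_LOW = 0.75;
    static final double DB_UTILIZATION_HIGH = 0.85;

    // Pre-condition on acceptable customer service (response time).
    static boolean responseTimeAcceptable(long observedMs) {
        return observedMs < MAX_RESPONSE_TIME_MS;
    }

    // Target band for efficient resource management (database utilization).
    static boolean dbUtilizationInBand(double utilization) {
        return utilization >= DB_UTILIZATION_LOW && utilization <= DB_UTILIZATION_HIGH;
    }
}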

Note: For sake of clarity, the rest of this disclosure references this as the QoS policy.

Scenario

During the normal processing of requests, the ODR raises a notification that reports the duration of a request. Since the ODR is a managed resource, these events are sent to its Autonomic Manager. Over the course of processing requests, the AM detects that response time is increasing and is approaching the threshold set by the QoS policy's pre-condition.
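A minimal sketch of this Monitor step is shown below: the manager keeps a rolling average of recent request durations and flags when the average approaches the 2-second limit from the QoS policy. The window size and warning fraction are assumptions chosen for illustration.

// Hypothetical Monitor-step sketch: track a rolling average of request
// durations and flag when it approaches the QoS policy's 2-second limit.
import java.util.ArrayDeque;
import java.util.Deque;

final class ResponseTimeMonitor {
    private static final int WINDOW = 100;            // last 100 requests
    private static final double WARN_FRACTION = 0.9;  // warn at 90% of the limit
    private static final long LIMIT_MS = 2000;        // from the QoS policy

    private final Deque<Long> recentMs = new ArrayDeque<>();
    private long sumMs = 0;

    // Invoked for each request-duration event delivered by the ODR.
    boolean approachingThreshold(long durationMs) {
        recentMs.addLast(durationMs);
        sumMs += durationMs;
        if (recentMs.size() > WINDOW) {
            sumMs -= recentMs.removeFirst();
        }
        double averageMs = (double) sumMs / recentMs.size();
        return averageMs >= WARN_FRACTION * LIMIT_MS;
    }
}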

To determine the source of the problem, the AM queries one of its managed resources, the operating system, to determine the CPU utilization of the thread on which the database server is running. The AM correlates this information with event information from the ODR, as well as the response time of the application's JDBC calls (the application itself is a managed resource and surfaces this information to the AM). From this information, the AM determines that the database is running hot and cannot handle the load.
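The correlation performed in this Analyze step might look like the sketch below; the thresholds and the rule that all three signals must agree are illustrative assumptions rather than details taken from the disclosure.

// Hypothetical Analyze-step sketch: correlate router response time, database
// host CPU utilization, and JDBC call latency into a single symptom.
final class DatabaseLoadAnalyzer {
    enum Symptom { HEALTHY, DATABASE_OVERLOADED }

    Symptom analyze(double avgResponseMs,      // from ODR duration events
                    double dbCpuUtilization,   // from the operating system resource
                    double avgJdbcCallMs) {    // from the application resource
        boolean responseDegrading = avgResponseMs >= 0.9 * 2000;   // near the 2 s limit
        boolean databaseRunningHot = dbCpuUtilization > 0.85;      // above the QoS band
        boolean jdbcDominatesLatency = avgJdbcCallMs > 0.5 * avgResponseMs;

        // Only when all three signals agree is the database identified as the
        // source of the slowdown.
        return (responseDegrading && databaseRunningHot && jdbcDominatesLatency)
                ? Symptom.DATABASE_OVERLOADED
                : Symptom.HEALTHY;
    }
}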

When this occurs, the action from the QoS policy is enacted, which turns on replicated 'Option A' caching across the application server cluster. Option A caching (as described in the EJB 2.0 specification) is a caching mechanism in which values read from a database are held in memory for the lifetime of the application server instance. This reduces the load on the database, since only the first query of any number of identical queries is executed against the database server -- all further queries are served from the in-memory copy held by the application server.
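The load-reducing effect of this action can be illustrated with a minimal read-through cache that has the same "read once, then serve from memory" semantics; this is a sketch only, not the EJB container's actual Option A implementation.

// Minimal read-through cache with Option A-style semantics: the first read of
// a key queries the database, and every later read of that key is served from
// memory for the lifetime of the server instance.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class OptionAStyleCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> databaseLoader;   // e.g. wraps a JDBC query
    private volatile boolean enabled = false;      // toggled by the AM's action

    OptionAStyleCache(Function<K, V> databaseLoader) {
        this.databaseLoader = databaseLoader;
    }

    void enable() { enabled = true; }

    void disable() {
        enabled = false;
        cache.clear();
    }

    V read(K key) {
        if (!enabled) {
            return databaseLoader.apply(key);   // caching off: every read hits the database
        }
        // Caching on: only the first of any number of identical reads reaches
        // the database; the rest are served from the in-memory copy.
        return cache.computeIfAbsent(key, databaseLoader);
    }
}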