Preemptive Resource Reallocation Within System Partitions

Disclosure Number: IPCOM000015932D
Original Publication Date: 2002-May-27
Included in the Prior Art Database: 2003-Jun-21

Disclosed is a method of preemptively reallocating resources of a partitioned computing system within an operating system or tool such as IBM's WorkLoad Manager, in anticipation of an immediate increase in on-line transaction processing or other high-priority activity. This invention applies to operating systems for machines in which the hardware is divided into logical partitions, and to resources within those partitions. These machines may have a single processor or multiple processors; a partition may encompass one or more processors, and an individual processor may be shared across two or more partitions.

In current resource allocating/optimizing systems, the level of performance and/or resource utilization of each of several processes or applications is monitored. Periodically, the measured performance is compared with objectives and priority levels. When performance falls below objectives, resources are shifted from lower-priority processes to needy processes of higher priority. Performance is then monitored again, and resources are shifted again, depending on which processes have the greatest need, based on priority and performance. This process repeats continuously.

The capabilities of such reactive systems are inherently limited by the cycle time between resource reallocations. If the cycle time is shortened, the overhead of the tool becomes excessive and impacts overall performance. If the cycle time is lengthened, the response to rapid shifts in relative workload across the various partitions is very slow, creating particular problems for real-time or on-line functions such as web-serving applications. This invention relates to dynamic reallocation of resources within the operating system, based on variation in demand, outside the normal temporal cycles within which a load manager typically operates, and in anticipation of demand.
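The reactive monitor/compare/shift cycle described above can be sketched as follows. The partition fields, priority scheme, and one-share transfer step are hypothetical illustrations, not part of the disclosure; a real load manager would operate on hardware resource shares rather than Python objects.

```python
class Partition:
    """Hypothetical partition descriptor for illustration."""
    def __init__(self, name, priority, objective, shares):
        self.name = name
        self.priority = priority    # higher number = higher priority
        self.objective = objective  # target performance level (0..1)
        self.measured = objective   # most recently monitored performance
        self.shares = shares        # currently allocated resource shares

def reallocate(partitions, step=1):
    """One reactive cycle: shift shares from lower-priority donors that
    meet their objectives to higher-priority partitions that miss theirs."""
    needy = sorted((p for p in partitions if p.measured < p.objective),
                   key=lambda p: -p.priority)
    donors = sorted((p for p in partitions if p.measured >= p.objective),
                    key=lambda p: p.priority)
    for receiver in needy:
        for donor in donors:
            if donor.priority < receiver.priority and donor.shares > step:
                donor.shares -= step      # take from low priority
                receiver.shares += step   # give to needy high priority
                break
    return partitions
```

Because this function would be invoked once per monitoring cycle, the sketch makes the disclosure's trade-off concrete: calling it more often raises overhead, while calling it less often delays the response to a workload shift.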
For example, in a real-time purchasing system, a surge in price look-up requests may foreshadow high traffic through a tax-computing program, or high activity in a credit card or corporate account processing system. As the level of price requests jumps upward, additional resources may be allocated to the tax-computing and purchase/payment processing systems before the corresponding requests arrive in their queues. Likewise, when no price requests are active, some or all resources may be taken away from the tax and purchase processing systems, after completion of some or all current work in process.

Generally, the application manages its application-level queue for each important application-level resource. Application-level resources are entities such as the number of orders, number of logged-in users, or number of concurrent sessions. In contrast, system-level resources are entities such as memory, message queues, semaphores, disk, and processor time. As the application processes more requests against its application-level resources, it in turn makes additional requests for system resources. Thus, there is a direct relationship between the application resources and the system resources needed to handle increased workloads.
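The anticipatory policy in the purchasing example might be sketched as below. The queue-depth thresholds, share counts, and function name are assumptions for illustration: the price-lookup queue depth serves as the leading indicator, and shares are moved to the downstream tax and payment partitions before their requests arrive.

```python
def preemptive_adjust(price_queue_depth, tax_shares, payment_shares,
                      spare_pool, high_water=50, low_water=5, step=2):
    """Adjust downstream allocations based on a leading indicator.

    price_queue_depth: current depth of the price look-up queue,
    used to anticipate demand on the tax and payment partitions.
    Returns the new (tax_shares, payment_shares, spare_pool).
    """
    if price_queue_depth > high_water and spare_pool >= 2 * step:
        # Surge in price look-ups: preallocate to downstream systems
        # before their requests actually arrive in their queues.
        tax_shares += step
        payment_shares += step
        spare_pool -= 2 * step
    elif price_queue_depth < low_water:
        # No leading activity: reclaim shares (in a real system, only
        # after current work in process has drained).
        reclaimed = min(step, tax_shares) + min(step, payment_shares)
        tax_shares = max(0, tax_shares - step)
        payment_shares = max(0, payment_shares - step)
        spare_pool += reclaimed
    return tax_shares, payment_shares, spare_pool
```

Unlike the periodic reactive cycle, this adjustment can be triggered on the queue-depth change itself, which is how the disclosure sidesteps the cycle-time trade-off.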