
Method and mechanism to use prioritized classes as an optimization for shared processor partitioning resource usage

IP.com Disclosure Number: IPCOM000181074D
Original Publication Date: 2009-Mar-25
Included in the Prior Art Database: 2009-Mar-25
Document File: 2 page(s) / 29K

Publishing Venue

IBM

Abstract

Method and mechanism to use prioritized classes as an optimization for shared processor partitioning resource usage

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.



Shared processor partitioning allows multiple operating systems to time share physical processors. This technology has become commonplace in the marketplace today. One of the central tenets of shared processor partitioning is that virtual processors which are idle will call the Hypervisor to allow it to distribute processor cycles to other partitions which are not idle. This is typically accomplished by an explicit call from the active virtual processor to the Hypervisor. Partitions may be given an entitlement, or a guaranteed amount of central processing unit (CPU) resource available to them, should they need it. Most designs also allow active partitions to get more than their entitled CPU resource if it is available on the system. There are usually share/priority mechanisms in place to control the distribution of resources over and above entitlement. With POWER, the mechanism to distribute resources is called "variable weight", which is a share-based approach.

Typically, partitions which are very busy will consume as much CPU resource as the Hypervisor will grant them. This leads to the basic issue: if a partition contains a mix of high priority and low priority work, the Hypervisor cannot distinguish between the priorities of that work when distributing resources over and above the partition's entitlement.

Workload management (WLM) mechanisms within an operating system allow the creation of classes of tasks. These classes have different priorities. Thus, it is possible to identify the relative priority of work by classes.

Workload classes can already have a number of attributes to define their performance requirements. To keep things simple, consider a class that is given a minimum capacity requirement. A minimum capacity requirement could be defined as a numerical value indicating the number of processors of minimum capacity required. For example, a class might require a minimum of 0.5 processors of capacity.
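As a concrete illustration, a workload class carrying such an attribute might be represented as follows. This is a minimal sketch; the type and field names are illustrative and do not reflect the actual WLM interface.

```python
from dataclasses import dataclass

@dataclass
class WorkloadClass:
    """Hypothetical sketch of a WLM workload class with a
    minimum capacity attribute (names are illustrative only)."""
    name: str
    min_capacity: float  # minimum capacity, in units of whole processors

# A class that requires a minimum of 0.5 processors of capacity
db_class = WorkloadClass(name="database", min_capacity=0.5)
```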

The operating system will allow additional information attributes for workload classes. One new attribute will indicate if the class is low priority work. When running in a shared processor partition, the operating system periodically sums the processor usage of all of its classes. It subtracts the sum of processor usage of all the low priority classes from the total usage of classes. It then compares the result (the non-low priority usage) against the processor entitlement of the partition. If the result is...
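The periodic accounting step described above can be sketched as follows. This is an illustration only: the function and variable names are hypothetical, and the action taken on the comparison is not included in this abbreviated extract.

```python
def non_low_priority_usage(class_usage, low_priority):
    """Sum processor usage over all workload classes, then subtract
    the usage attributed to the low-priority classes.

    class_usage: dict mapping class name -> processor usage, in processors
    low_priority: set of class names flagged as low priority work
    """
    total = sum(class_usage.values())
    low = sum(usage for name, usage in class_usage.items()
              if name in low_priority)
    return total - low

# Example: three classes consuming 1.2 processors in total,
# of which the low-priority "batch" class consumes 0.5.
usage = {"database": 0.6, "web": 0.1, "batch": 0.5}
non_low = non_low_priority_usage(usage, {"batch"})

# The operating system then compares this result against the
# partition's processor entitlement.
entitlement = 1.0
exceeds_entitlement = non_low > entitlement
```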