
A method for improving virtual machine (VM) performance based on increasing inter-node memory access of VMs

IP.com Disclosure Number: IPCOM000202383D
Publication Date: 2010-Dec-14
Document File: 4 page(s) / 44K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method to increase the efficiency of a multi-processing environment in which a number of processors and associated memory are tied together by an inter-processor link. The method mines memory accesses on each physical NUMA compute node by using chipset counters to monitor that node's memory accesses on the server. Based on this information, the efficiency and performance of the VM within that compute node can be calculated by a central Control Manager within the cloud.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 49% of the total text.


In a multi-processing environment, a number of processors and associated memory are tied together by an inter-processor link. As applications are run within the non-uniform memory access (NUMA) cluster, memory locations are updated by different node processors.

Memory size or performance is usually the limiting factor in deploying virtual machines and their applications, and memory is often what limits how highly physical servers running virtual machines can be utilized. Further, as memory utilization on a NUMA-based physical server increases, the percentage of off-node memory accesses increases; that is, instructions executing on the processor of one node are accessing information in the memory of a different node. Off-node memory access is detrimental to VM performance.
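To illustrate why, the average memory latency seen by a VM can be modeled as a weighted blend of local and off-node access latency. The short Python sketch below uses assumed, illustrative latency values (off-node access over the inter-processor link is typically noticeably slower than local access); the specific numbers are not taken from this disclosure.

    # Illustrative only: how off-node accesses inflate average memory latency.
    # Both latency values are assumptions, not measurements from the disclosure.
    LOCAL_LATENCY_NS = 100.0    # assumed latency of an on-node memory access
    REMOTE_LATENCY_NS = 180.0   # assumed latency over the inter-processor link

    def average_latency_ns(off_node_fraction: float) -> float:
        """Weighted average latency for a given fraction of off-node accesses."""
        return ((1.0 - off_node_fraction) * LOCAL_LATENCY_NS
                + off_node_fraction * REMOTE_LATENCY_NS)

    for fraction in (0.0, 0.2, 0.4, 0.6):
        print(f"{fraction:4.0%} off-node -> {average_latency_ns(fraction):6.1f} ns average")

With these assumed numbers, a VM making 40% of its memory accesses off-node sees roughly a 30% higher average memory latency than a VM whose accesses are entirely local.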

If a virtual machine running on one physical node within the NUMA cluster accesses or stores data in the memory of another physical node, intra-compute node traffic occurs. If this intra-compute node memory traffic is too high, VM performance degrades or the inter-nodal link becomes a bottleneck (Figure 1): Node 1 is over-utilized, whereas Node 3 has excess capacity.

Figure 1: Intra-compute node memory traffic is too high, causing a disruption in performance
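One way to quantify the situation shown in Figure 1 is to sample per-node chipset counters periodically and compute, for each node, the fraction of memory accesses that had to be satisfied from another node's memory. The Python sketch below is a minimal illustration under that assumption; the sample values and the threshold are hypothetical, since chipset counter interfaces are platform-specific and not detailed in this excerpt.

    # Minimal sketch: flag NUMA nodes whose off-node memory traffic is "too high".
    # The samples dictionary stands in for values read from chipset counters:
    # {node_id: (local_access_count, off_node_access_count)}. All numbers are
    # hypothetical and only mirror the imbalance shown in Figure 1.

    OFF_NODE_THRESHOLD = 0.30   # assumed policy threshold; tunable in practice

    def off_node_ratio(local: int, remote: int) -> float:
        total = local + remote
        return remote / total if total else 0.0

    def find_overloaded_nodes(samples):
        """Return the IDs of nodes whose off-node access ratio exceeds the threshold."""
        return [node for node, (local, remote) in samples.items()
                if off_node_ratio(local, remote) > OFF_NODE_THRESHOLD]

    samples = {1: (600_000, 400_000),   # Node 1: 40% off-node -> over-utilized
               2: (900_000, 150_000),
               3: (950_000, 50_000),    # Node 3: mostly local -> excess capacity
               4: (800_000, 200_000)}
    print(find_overloaded_nodes(samples))   # -> [1]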


There is known art for moving an application or virtual machine among compute nodes within the same physical system to better utilize memory and reduce inter-processor bandwidth. In a cloud environment, however, the migration that can be performed is limited to within the same physical compute server; as stated above, off-node memory accesses degrade performance, not only of the VM requiring off-node access, but also of the VMs residing on the same node.

Currently there is no way to maximize overall memory utilization within a cloud computing environment (through inter-physical compute system migration) to eliminate bottlenecks and maximize virtual machine (VM) deployment. The method used today for migrating virtual machines within a physical compute node does not extend to a cloud environment. Because these applications run in a multi-processing environment, the memory associated with an application may not reside within the same processor/memory controller node; memory accesses for these applications can instead travel across the inter-processor link to the node where the associated application data are accessed or stored (Figure 2).

Figure 2: Memory access can travel across the inter-processor link

(Figure 2 shows four nodes, Node 1 through Node 4, connected by inter-processor links.)

The disclosed solution is a method to mine memory accesses on each physical NUMA compute node by using chipset counters to monitor that node's memory accesses on the server. Based on this information, the efficiency and performance of the VM within th...
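As a rough illustration of the kind of cloud-level decision such a central Control Manager could make, the Python sketch below takes per-node off-node access ratios (such as those computed above) and picks an over-utilized source node and an under-utilized destination node, possibly on a different physical compute server, as a VM migration pair. Every name, structure, and threshold here is hypothetical; the disclosure does not prescribe a specific interface or policy.

    # Hypothetical sketch of a cloud-level Control Manager choosing a migration
    # target across physical compute servers, based on per-node off-node ratios.
    from dataclasses import dataclass

    @dataclass
    class NodeStats:
        server: str            # physical compute server the node belongs to
        node_id: int           # NUMA node within that server
        off_node_ratio: float  # fraction of accesses served from other nodes
        free_memory_gb: float  # spare memory capacity on this node

    def choose_migration(vm_memory_gb, stats, hot_threshold=0.30):
        """Pick (source, destination) nodes: move a VM off the hottest node onto
        the candidate node with the lowest off-node traffic that can hold it."""
        hot = [s for s in stats if s.off_node_ratio > hot_threshold]
        if not hot:
            return None                           # no node is over-utilized
        source = max(hot, key=lambda s: s.off_node_ratio)
        candidates = [s for s in stats
                      if s is not source and s.free_memory_gb >= vm_memory_gb]
        if not candidates:
            return None                           # nowhere to place the VM
        destination = min(candidates, key=lambda s: s.off_node_ratio)
        return source, destination

    # Example mirroring Figure 1: Node 1 on server A is over-utilized while
    # Node 3 has excess capacity, so a 16 GB VM is moved from A/1 to A/3.
    stats = [NodeStats("A", 1, 0.40, 4.0),
             NodeStats("A", 3, 0.05, 64.0),
             NodeStats("B", 1, 0.10, 32.0)]
    print(choose_migration(16.0, stats))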