
Method to improve LPM performance by using shared disk
Disclosure Number: IPCOM000226906D
Publication Date: 2013-Apr-23
Document File: 3 page(s) / 57K

Publishing Venue

The Prior Art Database


This method improves Live Partition Mobility (LPM) performance by using a shared disk, rather than the network, for the memory migration. LPM already requires common storage between the source and target systems. When transactional activity is very heavy and the network is fully utilized, a faster way to migrate the memory would be to use the fastest shared storage available to the source and target systems. The same holds whenever the network runs slower than disk I/O. With current technology, where SAN and network performance are comparable, it is not yet apparent that this would be feasible. However, if disk I/O performance, or the ability to share solid-state devices, improves at a rate faster than networks and IP protocols, this invention becomes more practical.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 53% of the total text.


Method to improve LPM performance by using shared disk

An LPAR migration involves the following steps:

1. Validation of source and destination compatibility.

2. Move the partition into VPM mode so that the PHYP can track partition state changes:

a. The first transfer copies the entire memory in one pass.

b. Changed memory pages are then transmitted repeatedly.

3. Suspend the source VM and transfer the final changed memory pages.
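The iterative pre-copy scheme in step 2 can be sketched as follows. This is a minimal illustration with simulated dirty-page tracking; in real LPM the hypervisor (PHYP) tracks changed pages while the partition keeps running, and the function name and round limit here are assumptions for the sketch.

```python
# Illustrative sketch of iterative pre-copy memory migration.
# Page contents and the dirty-page workload are simulated.

def precopy_migrate(source, writes_per_round, max_rounds=10):
    """source: dict page_id -> value (the partition's memory).
    writes_per_round: list of dicts of writes landing during each
    transfer round (a simulated workload).
    Returns (dest, rounds_used)."""
    dest = {}
    dirty = set(source)                     # round 1 copies all memory
    rounds = 0
    while dirty and rounds < max_rounds:
        for p in dirty:                     # transfer current dirty set
            dest[p] = source[p]
        # writes that occurred during this round re-dirty their pages
        if rounds < len(writes_per_round):
            new_writes = writes_per_round[rounds]
        else:
            new_writes = {}
        source.update(new_writes)
        dirty = set(new_writes)
        rounds += 1
    # final stop-and-copy pass: source suspended, last dirty pages sent
    for p in dirty:
        dest[p] = source[p]
    return dest, rounds

mem = {0: "a", 1: "b", 2: "c", 3: "d"}
dest, rounds = precopy_migrate(dict(mem), [{1: "b2"}, {}])
```

The loop converges when a round ends with no newly dirtied pages (or the round limit is hit), at which point the remaining pages are sent in the stop-and-copy pass of step 3.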

During step 2, the performance of Live Partition Mobility (LPM) depends on two things: network speed and the rate at which the partition's memory state changes (that is, how busy the system is). With the current design, LPM uses the network to transfer the partition's memory. This is limited by the bandwidth of the VIOS's physical network adapter and is dependent on network speed and stability.
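A back-of-envelope comparison shows why the transfer path matters for a large partition. The link speeds and effective efficiencies below are illustrative assumptions, not measurements from the disclosure:

```python
# Rough transfer-time comparison for one full memory pass (assumed figures:
# 256 GiB partition, 10 Gb/s Ethernet vs 8 Gb/s Fibre Channel, with lower
# effective efficiency on the TCP/IP path due to per-packet processing).

def transfer_seconds(gib, link_gbps, efficiency):
    bits = gib * 1024**3 * 8
    return bits / (link_gbps * 1e9 * efficiency)

eth = transfer_seconds(256, 10, 0.6)  # network path, packet-processing overhead
fc = transfer_seconds(256, 8, 0.9)    # block I/O path, less per-transfer work
```

Under these assumed efficiencies the nominally slower 8 Gb/s block path completes the pass sooner, which is the scenario the disclosure targets.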

With large-memory systems, and with many network (web) applications hosted on the same managed system, the negative impact on LPM performance will be greatest.

The proposed solution uses disk-based memory page transfer with block reads and writes. This avoids the time spent in the VIOS network layer processing each network packet; disk reads and writes run at a higher speed.

In a data center where partitions have large amounts of memory and run at peak times with heavy memory activity, LPM migrations will consume a great deal of network bandwidth.

The proposed solution creates a data Logical Unit (LU) and uses it for fast memory page transfer. A migration already has the prerequisite that the source and destination VIOS have access to the same storage subsystem/storage pool (as they must serve storage to the migrating partition); because of this, both can access a new shared LU created during migration. If frequent LPM activity is expected in the data center environment, the administrator can instead define a dedicated LPM LU and share it across all the VIOSes.
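The shared-LU transfer can be sketched as follows: the source VIOS writes memory pages as fixed-size blocks onto a LU both VIOSes can see, and the destination reads them back. A local file stands in for the shared LU here, and the 4 KiB block size and simple header layout are assumptions for the sketch, not the disclosure's on-disk format.

```python
# Sketch of page transfer through a shared LU (a local file stands in
# for the LU). Pages are written as fixed-size blocks with a small
# header, then read back on the destination side.

import os
import struct

BLOCK = 4096  # assumed block/page size

def write_pages(lu_path, pages):
    """pages: dict page_id -> bytes (each <= BLOCK). Sequential block writes."""
    with open(lu_path, "wb") as lu:
        lu.write(struct.pack("<I", len(pages)))           # header: page count
        for pid, data in sorted(pages.items()):
            lu.write(struct.pack("<II", pid, len(data)))  # page id, data length
            lu.write(data.ljust(BLOCK, b"\0"))            # pad to block size

def read_pages(lu_path):
    """Read pages back from the shared LU; returns dict page_id -> bytes."""
    with open(lu_path, "rb") as lu:
        (count,) = struct.unpack("<I", lu.read(4))
        pages = {}
        for _ in range(count):
            pid, n = struct.unpack("<II", lu.read(8))
            pages[pid] = lu.read(BLOCK)[:n]
        return pages
```

Because both VIOSes already see the same storage pool, the only coordination needed beyond this layout is signaling when a batch of blocks is ready to be read.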


1. LPM performance will increase in the case explained above, as disk block reads and writes are much faster than network transfer with TCP packet processing. It also saves the VIOS CPU that would be spent on that packet processing.

2. The transfer is safer, as data is transmitted through storage, not over the network.

3. The adverse effect of LPM's network bandwidth consumption is reduced, especially if a network-based workload is running on the source or destination CEC.

4. With network-based transfer, if the physical adapter at one VIOS runs at a lower speed, that lower speed becomes the bottleneck for the LPM; the disk-based approach avoids this.



5. This technique will be helpful in a cloud environment, where frequent LPM is needed for consolidation and load balancing. Having dedicated storage (an LPM LU) for identified MSPs will help increase the performance of the cloud solution.

In the typical LPM environment as shown below storage to the victim partition may be allocated by different s...