
System and method for network load balancing in a cloud environment Disclosure Number: IPCOM000240937D
Publication Date: 2015-Mar-12
Document File: 4 page(s) / 55K

Publishing Venue

The Prior Art Database


A cloud computing environment offers an integrated computing solution as a service to an end user. In an Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) environment, the end user gets access to IT resources that are preconfigured and deployed as per the requirements. Beyond resource deployment, meeting service level agreements (SLAs) and quality of service (QoS) requirements is among the most important aspects of a cloud computing environment. In a cloud infrastructure, network performance parameters (bandwidth, latency, and throughput) play an important role in meeting SLA and QoS requirements. The legacy approach to network configuration in a cloud computing environment faces the following challenges related to network performance, SLA, and QoS:

1) The network for the physical and virtual server infrastructure is configured as per the network integration design, with only an initial-level understanding of the workloads. Network configurations are not changed as per the actual requirements of the workloads.

2) Network requirements for various workloads keep changing as per application usage and business requirements. There is no way to monitor and change the configurations dynamically.

3) Network configurations are mostly static in nature; most of the time, reconfiguration exercises are carried out only when performance issues arise.

4) Network optimizations are mostly driven by scalability requirements and performance issues.

5) There is no proactive approach to balancing and reconfiguring the network as per the workload requirements in the cloud.

Example: Consider a cloud infrastructure with two virtual machines (VMs) that are part of one workload (e.g., an application server and a web server) and exchange a significant amount of data over the network. The physical host for a VM depends on several factors:

a) The time of VM provisioning and the hosts available at that time

b) Availability of resources on the physical host

c) Relocation of VMs due to high availability (HA) and resource usage balancing features

Suppose that, due to any of the above factors, these two VMs are hosted on different physical servers in different server racks. Ethernet communication between the two VMs now involves several intermediate switches, each adding significant latency and hops. Ethernet traffic between the two VMs depends on business requirements and can lead to a significant increase in bandwidth requirements. Because the cloud is a highly shared infrastructure for all kinds of resources, meeting the SLA and QoS becomes a real challenge. In such a case, minimizing the number of hops and the latency between the two VMs becomes a key requirement. This article explains a solution that monitors such instances and automatically takes the optimization steps.
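The effect of placement on the communication path can be sketched as follows. This is an illustrative estimate only; the tier names, hop counts, and dictionary layout are assumptions for illustration, not part of the disclosure.

```python
def switch_hops(vm_a, vm_b):
    """Approximate switch hop count between two VMs based on placement.

    vm_a / vm_b are dicts with 'host', 'chassis', and 'rack' keys.
    Hop counts per tier are illustrative assumptions.
    """
    if vm_a["host"] == vm_b["host"]:
        return 0  # traffic stays in the hypervisor's virtual switch
    if vm_a["chassis"] == vm_b["chassis"]:
        return 1  # one chassis switch
    if vm_a["rack"] == vm_b["rack"]:
        return 2  # chassis switch -> top-of-rack (TOR) switch
    return 4      # TOR and aggregation switches between racks

web = {"host": "h1", "chassis": "c1", "rack": "r1"}
app = {"host": "h7", "chassis": "c4", "rack": "r3"}
print(switch_hops(web, app))  # VMs in different racks: 4 hops
```

Each extra hop adds latency, so moving the two VMs onto the same host, chassis, or rack shortens the path; this is the optimization the solution below automates.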


Proposed here is an innovative solution to address these challenges of network performance, SLA, and QoS in a cloud computing environment.

As per this solution, infrastructure management software or an appliance such as IBM Flex System Manager (FSM) or IBM Systems Director integrates with the Ethernet switch management and control interface, or with IBM System Networking Switch Center (SNSC), with the help of a plugin called the network balancer.

The network balancer can be implemented as a standalone application or as a plugin to FSM or Systems Director, which have virtual and physical infrastructure management capabilities.

The network balancer has the capability to monitor the traffic between physical and virtual servers and to take optimization actions to meet the SLA and QoS.

The network balancer implements the following set of rules and logic:

1. Monitor various physical-server-to-physical-server and virtual-server-to-virtual-server communications, and assign each communication a weight based on bandwidth requirement, latency, and number of hops.

2. Record and analyze various growth patterns in network requirements based on business needs and time slots.

3. As per the communication weight and business requirements, initiate virtual machine relocations to keep the communicating VMs closer to each other by satisfying one of the following conditions:

a) Communicating VMs on the same host, allowing the VMs to communicate through the virtual switch at the hypervisor layer

b) Communicating VMs in the same chassis, allowing the VMs to communicate through the chassis switches

c) Communicating VMs in the same rack, allowing the VMs to communicate through the top-of-rack (TOR) switches

4. After identifying a VM relocation candidate, the network balancer issues a relocation ticket to the respective hypervisor manager (e.g. VMControl, VMware vCenter) with source and destination details and the priority of the ticket.

5. The number of simultaneous relocations depends upon the VM size and the bandwidth reserved for VM relocation (e.g., the bandwidth assigned to the VMkernel port group in the case of VMware). The number of relocation tickets issued at a time is controlled accordingly.

6. When multiple VM relocations are required, the relocation sequence is decided based on priority. The priority of a relocation ticket is assigned based on the weight or cost of the participating VMs.



7. Based on the historical data and the physical location of the servers, suggest the best possible host for provisioning a new VM.

8. Execute the VM relocations for network balancing on a scheduled timeline or through manual intervention by the administrator.

9. Integrate with cloud management software to enable billing and metering based on the nature of the workload and the usage of the network balancer to meet the QoS and SLA.
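The weighting and ticketing steps above (rules 1, 4, 5, and 6) can be sketched as follows. This is a minimal illustration under stated assumptions: the weight formula, function names, VM names, and the traffic figures are all hypothetical, not the disclosed implementation.

```python
import heapq

def communication_weight(bandwidth_mbps, latency_ms, hops):
    # Hypothetical formula: a heavier weight means the VM pair would
    # benefit more from being relocated closer together.
    return bandwidth_mbps * latency_ms * (1 + hops)

def build_tickets(pairs, max_concurrent=2):
    """Turn observed VM-pair traffic into prioritized relocation tickets.

    pairs: list of (vm_pair, bandwidth_mbps, latency_ms, hops).
    Only max_concurrent tickets are issued per pass (rule 5).
    """
    queue = []
    for pair, bw, lat, hops in pairs:
        weight = communication_weight(bw, lat, hops)
        # heapq is a min-heap, so negate the weight for highest-first order.
        heapq.heappush(queue, (-weight, pair))
    return [heapq.heappop(queue)[1]
            for _ in range(min(max_concurrent, len(queue)))]

observed = [
    (("web1", "app1"), 900, 0.8, 4),  # cross-rack, heavy traffic
    (("db1", "app2"), 200, 0.3, 1),   # same chassis, light traffic
    (("web2", "app3"), 700, 0.6, 2),  # same rack
]
print(build_tickets(observed))  # the two heaviest pairs are ticketed first
```

With these example figures, the cross-rack web1/app1 pair gets the highest weight and is relocated first, while the same-chassis db1/app2 pair waits for a later pass because only two tickets are issued at a time.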

Features of the proposed solution:

1. Automatic balancing of the network traffic based on VM relocation.

2. Monitoring the east-west traffic in a large serv...