
System and method to optimize the service quality under SLM enforcement

IP.com Disclosure Number: IPCOM000228601D
Publication Date: 2013-Jun-21
Document File: 5 page(s) / 291K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a system and method to optimize service quality under SLM enforcement. A mechanism is provided to allow an appliance cluster to collaboratively process high priority messages at a faster pace than normal or low priority messages.

1. Background: An SLM (Service Level Monitoring) policy, also called a Service Level Agreement, plays an important role in allowing an enterprise edge appliance to protect the internal application server(s) from being overwhelmed by an unexpected traffic load or traffic pattern. A typical SLM policy defines the desired traffic pattern (e.g., the maximum number of concurrent transactions supported, or the maximum number of transactions the backend system can process within a certain period). However, existing SLM policy solutions do not scale well, because they require point-to-point communication among the peers to exchange current statistics. Also, the data being exchanged only reflects the current status of a particular appliance, and there is no way for other appliances in the cluster to alleviate the transaction processing pressure of a particular appliance, because transaction records cannot easily be transferred from one appliance to another. Finally, there is no concept of message priority in current SLM/SLA solutions, which prevents high priority messages from being processed faster than messages with normal or low priority when they are all buffered in an SLM queue. In addition, once a message has been routed or assigned to one appliance, only that appliance can process it.
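As an illustration of the kind of per-appliance limits such a policy enforces, the sketch below admits a transaction only while both a concurrency cap and a sliding-window rate cap are respected. All class and method names here are hypothetical and chosen for illustration; they are not part of the disclosure.

import threading
import time
from collections import deque


class SlmPolicy:
    """Illustrative per-appliance SLM limits: a concurrency cap plus a
    sliding-window rate cap (e.g. at most N transactions per T seconds)."""

    def __init__(self, max_concurrent, max_per_window, window_seconds):
        self.max_concurrent = max_concurrent
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self._in_flight = 0
        self._recent = deque()      # start times of transactions in the window
        self._lock = threading.Lock()

    def try_admit(self):
        """Return True if a new transaction may be forwarded to the backend."""
        now = time.monotonic()
        with self._lock:
            # Discard timestamps that have slid out of the rate window.
            while self._recent and now - self._recent[0] > self.window_seconds:
                self._recent.popleft()
            if self._in_flight >= self.max_concurrent:
                return False        # too many concurrent transactions
            if len(self._recent) >= self.max_per_window:
                return False        # rate limit for this window reached
            self._in_flight += 1
            self._recent.append(now)
            return True

    def release(self):
        """Call once the backend has finished processing the transaction."""
        with self._lock:
            self._in_flight -= 1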

    2. Description of the Disclosure: Disclosed is a system and method that provides a mechanism allowing an appliance cluster to collaboratively process high priority messages at a faster pace than normal or low priority messages. The administrator first defines the global SLM policy, which specifies how many resources should be allocated to each priority level of message (e.g., 50% of the bandwidth for high priority messages). One of the appliances in the cluster is selected as the management node of the cluster; it reads the global SLM policy to set up a virtual priority queue for each level. Each appliance in the cluster is then notified to subscribe to a certain priority queue in order to meet the requirements defined in the SLM policy. Another key point of this idea is the introduction of a "work ticket" system. A message might be queued locally, but its message id is published to the priority queue as a work ticket, so that other appliances can help to process the message if they have additional bandwidth. Previously, when a message was assigned to one appliance (e.g., by a load balancer), that appliance always had to process it, so this enhancement alleviates the message processing pressure on a particular appliance by sharing the load within the cluster.
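The following sketch illustrates the work ticket flow described above, using an in-memory stand-in for the cluster's publish/subscribe priority queues. The class names, priority levels, and methods are assumptions for illustration only, not the actual appliance APIs.

import queue
import uuid


class VirtualPriorityQueues:
    """Management-node view: one shared work-ticket topic per priority level."""

    def __init__(self, levels=("high", "normal", "low")):
        self.topics = {level: queue.Queue() for level in levels}

    def publish_ticket(self, priority, ticket):
        self.topics[priority].put(ticket)

    def take_ticket(self, priority):
        try:
            return self.topics[priority].get_nowait()
        except queue.Empty:
            return None


class Appliance:
    def __init__(self, name, cluster_queues):
        self.name = name
        self.cluster_queues = cluster_queues
        self.local_messages = {}    # message id -> payload, queued locally

    def enqueue(self, payload, priority):
        # The message body stays on this appliance; only a lightweight
        # work ticket (the message id plus the owner) is published to the
        # cluster-wide priority queue.
        msg_id = str(uuid.uuid4())
        self.local_messages[msg_id] = payload
        self.cluster_queues.publish_ticket(priority,
                                           {"msg_id": msg_id, "owner": self.name})
        return msg_id

In a real deployment, the virtual priority queues would be backed by the cluster's publish/subscribe infrastructure rather than an in-process queue, and a peer that takes a ticket would fetch the message body from the owning appliance before processing it.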

The core ideas of the disclosure are:

    1) Define the SLM policy at the cluster level. The management node decides which appliance in the cluster should subscribe to which priority level of messages.

    2) A virtual priority queue that leverages the publish/subscribe messaging pattern for large-scale deployment (see the sketch after this list).

    3) Each appliance in the cluster can help to alleviate the transaction processing pressure of other appliances by subscr...
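As a rough illustration of ideas 2) and 3), the sketch below shows an appliance consuming work tickets only for the priority levels the management node told it to subscribe to, and only while it has spare bandwidth. It reuses the VirtualPriorityQueues and Appliance classes from the earlier sketch, and fetch_message() is a hypothetical call that retrieves a queued message body from the owning appliance; none of these names come from the disclosure itself.

def drain_subscribed_levels(appliance, subscribed_levels,
                            has_spare_bandwidth, fetch_message, process):
    """Consume work tickets for the priority levels this appliance was told to
    subscribe to, but only while it has spare bandwidth to lend the cluster."""
    for level in subscribed_levels:             # e.g. ["high"] for this node
        while has_spare_bandwidth():
            ticket = appliance.cluster_queues.take_ticket(level)
            if ticket is None:
                break                           # no pending tickets at this level
            # Pull the queued message body from the appliance that owns it,
            # then process it locally on behalf of the cluster.
            payload = fetch_message(ticket["owner"], ticket["msg_id"])
            process(payload)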