
System and method for effective service chaining in a shared environment

IP.com Disclosure Number: IPCOM000244455D
Publication Date: 2015-Dec-13
Document File: 5 page(s) / 257K

Publishing Venue

The IP.com Prior Art Database

Abstract

This disclosure leverages the shared memory space between chained services to pass data efficiently, avoiding unnecessary data serialization and processing. When the first service passes a request to the second service, it checks a routing controller to see whether the second service is under the same "realm". A realm is defined by the combination of IP/port/protocol. If the second service is under the same realm, the first service stores the data to be passed in the shared memory and provides a pointer to that shared memory via a protocol header (e.g., X-Data-Pointer). When the second service receives the request from the first service, it checks whether a pointer exists in the header. If the header exists, it follows the address contained in that header to retrieve the data processed by the first service. This avoids the unnecessary steps of serializing the data over the network/OS stack.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 49% of the total text.


Service chaining is a common pattern in an enterprise service bus (ESB) deployment topology, and is growing rapidly in the cloud paradigm and microservices architecture. It means that multiple decoupled service entities (e.g., web servers, load balancers, database servers, etc.), which can run on a single or multiple hosting OS(s), virtual machine and/or physical appliance instance(s), cooperate to serve the same workload (such as B2B document exchange).

For example, in an SOA topology where many applications are exposed by a single interface: a "front" service receives web and mobile workloads, enforces security validations, and then determines routing and forwards the message to a "back" application service over various transport protocols. In such a setting, the "front" gateway service and the "back" application service can have a shared environment when they run on the same virtual machine instance.

Moreover, both services may run within the same product and share the same components loaded to interpret and compute data.

In a cloud environment, the two service nodes in the chain do not necessarily exist as processes on the same hosting OS or the same virtual machine instance.

Even when they do, inter-service communication still requires the entire message to be sent and received over the wire protocol, with the front sender serializing the data format (such as JSON or XML) and the back receiver parsing the data. When the data size is large, the overhead of network I/O and data parsing becomes significant and impacts performance (through CPU consumption and memory redundancy), especially when the services must handle high-performance workloads.

An intelligent mechanism is therefore needed to facilitate memory-efficient inter-service communication.

This disclosure presents an idea to boost the messaging efficiency between chained services by leveraging the shared resources.

The core idea of this disclosure is to leverage the shared memory space between chained services to pass data around efficiently. When the first service passes a request to the second service, it checks a routing controller to see whether the second service is under the same "realm". A realm is defined by the combination of IP/port/protocol. If the second service is under the same realm, the first service stores the data to be passed in the shared memory and provides a pointer to that shared memory via a protocol header (e.g., X-Data-Pointer). When the second service receives the request from the first service, it checks whether a pointer exists in the header. If the header exists, it follows the address contained in that header to retrieve the data processed by the first service. This avoids the unnecessary steps of serializing the data over the network/OS stack.
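As a minimal sketch of the handoff described above (assumptions: a Python runtime, Python's multiprocessing.shared_memory module standing in for the shared memory space, and a static REALM_TABLE dictionary standing in for the routing controller; the disclosure itself does not prescribe these), the sender and receiver sides might look like:

```python
import json
from multiprocessing import shared_memory

# Routing-controller lookup (illustrative): a realm is the
# combination of IP/port/protocol, per the disclosure.
REALM_TABLE = {
    "front-service": ("10.0.0.5", 8080, "http"),
    "back-service": ("10.0.0.5", 8080, "http"),
}

def same_realm(svc_a, svc_b):
    """True when both services resolve to the same IP/port/protocol."""
    return (svc_a in REALM_TABLE and svc_b in REALM_TABLE
            and REALM_TABLE[svc_a] == REALM_TABLE[svc_b])

def send(payload, sender, receiver):
    """First service: if realms match, place data in shared memory and
    pass only a pointer in the X-Data-Pointer header."""
    if same_realm(sender, receiver):
        data = json.dumps(payload).encode()
        shm = shared_memory.SharedMemory(create=True, size=len(data))
        shm.buf[:len(data)] = data
        # Segment name plus length serves as the "pointer" here.
        headers = {"X-Data-Pointer": f"{shm.name}:{len(data)}"}
        return headers, b""          # empty body: no serialization over the wire
    # Different realms: fall back to full serialization.
    return {}, json.dumps(payload).encode()

def receive(headers, body):
    """Second service: follow the pointer header if present,
    otherwise parse the body conventionally."""
    ptr = headers.get("X-Data-Pointer")
    if ptr is None:
        return json.loads(body)
    name, length = ptr.rsplit(":", 1)
    shm = shared_memory.SharedMemory(name=name)
    data = bytes(shm.buf[:int(length)])
    shm.close()
    shm.unlink()                      # receiver owns cleanup in this sketch
    return json.loads(data)

if __name__ == "__main__":
    hdrs, body = send({"order": 42}, "front-service", "back-service")
    print(receive(hdrs, body))        # {'order': 42}
```

A production implementation would also need segment lifecycle management (e.g., cleanup when the receiver never attaches) and access control on the shared segment; those concerns are outside this sketch.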





The claims of this disclosure would be:


1) Services located u...