
Dynamic deployment chains in a hybrid cloud environment
Disclosure Number: IPCOM000249223D
Publication Date: 2017-Feb-10
Document File: 3 page(s) / 84K

Publishing Venue

The Prior Art Database


An event processing system utilising a chain of event coordinators to identify the appropriate runtime environment in which to process an event, where each event coordinator is aware of only a subset of the potential event coordinators in the system, and optimising the deployment chain for subsequent events.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 49% of the total text.

Dynamic deployment chains in a hybrid cloud environment

In a cloud environment, an application that receives an event and then deploys code to process that event needs to decide where that code should run in order for it to work. In a hybrid cloud environment, a major difference between the possible environments that can process the event is their access to required resources. These might be remote resources such as databases, queueing systems and packaged applications like SAP, or local resources such as CPU, memory and file systems.

The system finds the best environment in which to deploy the code and then process the event, where the best environment is one that has:

- the minimal access to endpoint systems required to allow the code to process the event

- the minimal runtime capabilities required to run the flow

- satisfies the user entitlement criteria

- the cheapest cost of CPU and other operating system resources

In general, in a hybrid cloud system, the more access an environment has to private networks, the higher the cost of provisioning the CPU and memory required. The two extremes of this are:

- A pure cloud application that only has access to public networks but is very cheap to provision and can be scaled up as needed.

- A pure on-premises system that has access to all private networks owned by the company but is expensive to provision and run and cannot be scaled up easily.
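The selection criteria above can be sketched as a filter-then-minimise step. This is an illustrative sketch only; the `Environment` fields and `select_environment` function are assumptions, not names from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    endpoint_access: set     # endpoint systems reachable from this environment
    capabilities: set        # runtime capabilities the environment provides
    entitled_users: set      # users entitled to deploy here
    cost_per_unit: float     # relative cost of CPU/memory provisioning

def select_environment(envs, required_endpoints, required_caps, user):
    """Return the cheapest environment satisfying all selection criteria."""
    candidates = [
        e for e in envs
        if required_endpoints <= e.endpoint_access   # minimal endpoint access
        and required_caps <= e.capabilities          # minimal runtime capability
        and user in e.entitled_users                 # entitlement criteria
    ]
    # Of the environments that qualify, pick the cheapest to provision.
    return min(candidates, key=lambda e: e.cost_per_unit, default=None)

# The two extremes from the text: cheap public cloud vs. expensive on-premises.
cloud = Environment("cloud", {"public-api"}, {"http"}, {"alice"}, 1.0)
onprem = Environment("on-prem", {"public-api", "sap", "db"},
                     {"http", "jdbc"}, {"alice"}, 5.0)

select_environment([cloud, onprem], {"sap"}, {"jdbc"}, "alice")  # -> the on-prem Environment
```

An event that only touches public endpoints would be routed to the cheap cloud environment; only events needing private endpoints (here, `sap`) pay the on-premises provisioning cost.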

Enabling scalable applications in the cloud requires those applications to be stateless - in an integration solution this is achieved by retrieving, on an event-by-event basis, a flow document detailing the flow logic to be performed, and not keeping instances of said flows in memory. A flow is instantiated from the flow document, processes the event, and is then discarded.
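The per-event flow lifecycle can be sketched as follows. The flow-document shape and the `Flow`/`handle` names are hypothetical; the point is that no flow instance survives between events.

```python
def flow_document_for(event):
    """Retrieve the flow document describing the logic for this event.
    Here a trivial in-memory stub; in practice a repository lookup."""
    return {"steps": ["validate", "transform", "deliver"]}

class Flow:
    """A flow instantiated from its flow document, used once, then discarded."""
    def __init__(self, document):
        self.steps = document["steps"]

    def process(self, event):
        # Apply each step of the flow logic to the event in order.
        for step in self.steps:
            event = {**event, step: "done"}
        return event

def handle(event):
    # Nothing is cached between events: the flow document is fetched and a
    # fresh Flow is built per event, which keeps the flow engine stateless.
    flow = Flow(flow_document_for(event))
    result = flow.process(event)
    return result  # the Flow instance goes out of scope and is discarded
```

Because each invocation rebuilds the flow from its document, any number of identical engine instances can process events in parallel, which is what makes cloud scale-out possible.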

A more flexible system enables different flow engines to be utilised based on the logic being performed - where each flow engine or environment may provide different capabilities / qualities of service / security.

Only a subset of the available environments may be visible to the deployment component (and those that are visible may or may not be appropriate for processing). Once deployed to an environment, the code may have access to more appropriate environments that were not visible to the previous component. For example, the deployment component may be in the cloud and aware of a single on-premises system; that on-premises system is more secure than the deployment component's environment and as such may have visibility of other environments that are more appropriate.
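The chain of coordinators described above can be sketched as recursive forwarding, where each coordinator sees only its own environments and peers. All names here are illustrative assumptions, not from the disclosure.

```python
class EventCoordinator:
    """Knows only a subset of environments and peer coordinators. If none of
    its own environments suits the event, it forwards to a visible peer,
    extending the deployment chain until a match is found."""

    def __init__(self, name, environments, peers=None):
        self.name = name
        self.environments = environments  # {env_name: capability set}
        self.peers = peers or []          # the subset of coordinators visible here

    def route(self, required_caps, chain=None):
        chain = (chain or []) + [self.name]
        # First try the environments this coordinator can see directly.
        for env, caps in self.environments.items():
            if required_caps <= caps:
                return chain, env
        # Otherwise forward the event along to each visible peer in turn.
        for peer in self.peers:
            result = peer.route(required_caps, chain)
            if result:
                return result
        return None  # nothing suitable is reachable from this coordinator

# The example from the text: cloud sees one on-premises system, which in
# turn sees a more secure environment invisible to the cloud coordinator.
secure = EventCoordinator("secure-zone", {"deep-onprem": {"sap", "jdbc"}})
onprem = EventCoordinator("on-prem", {"edge": {"http"}}, peers=[secure])
cloud = EventCoordinator("cloud", {"public": {"http"}}, peers=[onprem])

cloud.route({"sap"})  # -> (['cloud', 'on-prem', 'secure-zone'], 'deep-onprem')
```

The returned chain is exactly the deployment chain the disclosure describes, and recording it is what allows the chain to be optimised for subsequent similar events.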

The solution consists of the following system:


The "Application" is an application that generates events that need to be processed by the central system, involving interactions with various endpoints. The flow of logic is as follows:

1. The application generates the event and sends it to the central system, where it is received by an event coordinator.

2. The event coordinator matches the event received to code in the code repository. The c...