
Method to Use Analytics to Produce a Forecast to Prevent Failures on Build and Packaging Task Execution on a Cloud Environment Based on Dynamic Dashboard Analysis and Balance of Specific Resources

IP.com Disclosure Number: IPCOM000247914D
Publication Date: 2016-Oct-11
Document File: 4 page(s) / 94K

Publishing Venue

The IP.com Prior Art Database

Abstract

In a software development cycle, build and release packaging is one of the processes that generates the highest number of failures, and the resulting rollbacks are a real loss of time, resources, and investment for the software company. The reasons for the failures are multiple: lack of network and CPU resources, node unavailability, ownership rules, and so on. The idea described in this paper resolves, or at least addresses, the problem of failures in a complex task by routing a job to an infrastructure that has the intrinsic capability to perform it, without requiring any prior knowledge of the environment that will be used to perform the task. End users need neither permissions to manage the cloud environment nor administrative rights to handle problem determination. A forecast is made available to customers that informs them, before running the build, of the percentage chance of having it accomplished, and that suggests a different time frame that will raise the expectation of success.


    When a software provider company must perform a product build or a release package, the following steps are executed in a standard balanced environment (a minimal sketch of this flow follows the list):
1) Identify the system that is able to provide the service (the service provider), based on specific hypervisor algorithms
2) Run the task (i.e., run the build or the package)
3) Wait for the task to end and for the task response
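
    The Python sketch below illustrates this standard flow. The helper functions, node names, and the random provider selection are illustrative stand-ins for the hypervisor and cloud APIs, not part of the method described in this disclosure.

    import random
    import time

    # Illustrative stand-ins for the hypervisor/cloud services; a real
    # environment would call the provider's own APIs instead.
    PROVIDERS = ["build-node-01", "build-node-02", "build-node-03"]

    def select_provider(task_name):
        # 1) Identify the service provider; a random pick stands in for the
        #    hypervisor placement algorithm.
        return random.choice(PROVIDERS)

    def submit_task(provider, task_name):
        # 2) Run the task (i.e., the build or the package) on that provider.
        print(f"submitting '{task_name}' to {provider}")
        return f"{provider}:{task_name}"

    def wait_for_response(job_id):
        # 3) Wait for the task to end and for the task response.
        time.sleep(0.1)  # placeholder for polling the provider
        return {"job": job_id, "status": "succeeded"}

    if __name__ == "__main__":
        provider = select_provider("product-build")
        job = submit_task(provider, "product-build")
        print(wait_for_response(job))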
This common process introduces the following points of failure:
1) If the task owner does not know the service provider, he/she cannot perform the task. A communication mechanism must therefore be in place to identify and maintain the list of providers available at the time the task has to be run
2) If the cloud provider is not available, for instance because it is down, busy, or lacking resources, the task waits on an execution queue that is platform dependent and may not even be defined. This can result in a process exception that must be handled. Even in this case, a process has to be created and maintained to resolve failures in task execution (i.e., to contact the provider administrator in order to retrieve the provider's state of service). This requires administrative privileges.

3) The build or package owner does not know the task execution state until the provider response comes back from the cloud engine.

4) The build or package owner needs to report the build failure to the cloud administrator in order to start problem determination

    The solution described below addresses all the failures depicted above in the case where the LAN (Local Area Network) used by the provider is a TCP/IP based network. The method uses, as the task Dispatcher, a packet/cell switching network that includes an origin access node providing client access services and at least one destination access node providing server access services. Both the server engines and the dispatchers are virtual servers in a cloud environment.
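
    To make the topology concrete, the Python sketch below models the Dispatcher with its origin and destination access nodes as simple records; the class names and host names are illustrative assumptions, not identifiers taken from this disclosure.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AccessNode:
        hostname: str
        role: str  # "origin" (client access services) or "destination" (server access services)

    @dataclass
    class Dispatcher:
        origin: AccessNode                                             # client-facing access node
        destinations: List[AccessNode] = field(default_factory=list)  # server-facing access nodes

        def register_destination(self, node):
            # Both the server engines and the dispatcher itself are virtual
            # servers in the cloud environment.
            self.destinations.append(node)

    dispatcher = Dispatcher(origin=AccessNode("vs-origin-01", "origin"))
    dispatcher.register_destination(AccessNode("vs-build-01", "destination"))
    dispatcher.register_destination(AccessNode("vs-build-02", "destination"))
    print([node.hostname for node in dispatcher.destinations])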

    The solution requires the cloud administrator to install a virtual server that tests the build environment parameters and produces a dashboard including all the information relevant for a build to be properly handled. The Dispatcher's main tasks are the following:
1) Probe all build nodes for availability and reliability: this is achieved by sending heavy TCP/IP packets to all nodes in the build environment and registering the time to respond. This yields both node availability and reliability in terms of network latency (a sketch of this probe-and-dashboard loop is given after this list).

2) For each build node, retrieve the hardware information relevant to build execution: typically, available RAM and CPU percentage as well as disk space
3) Create a database table (the dashboard) that includes, for each build node, all the reliability information
4) The Dispatcher periodically (this parameter can be...
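
    The following Python sketch shows one possible implementation of the probe-and-dashboard loop over a TCP/IP LAN. The node list, TCP port, payload size, and the query_node_hardware() placeholder are assumptions made only for illustration; in practice the hardware figures would come from an agent on each node or from the cloud provider's monitoring interface, and the refresh interval would be the configurable parameter mentioned in step 4.

    import socket
    import sqlite3
    import time

    BUILD_NODES = [("build-node-01", 22), ("build-node-02", 22)]  # (host, TCP port), illustrative
    PROBE_PAYLOAD = b"x" * 64 * 1024  # "heavy" TCP payload used to estimate latency

    def probe_node(host, port, timeout=2.0):
        """Return (available, latency_in_seconds) for one build node."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(PROBE_PAYLOAD)        # send the heavy packet
            return True, time.monotonic() - start  # time to respond = reliability measure
        except OSError:
            return False, None                     # node unavailable

    def query_node_hardware(host):
        """Placeholder: a real Dispatcher would ask an agent on the node for RAM, CPU, and disk."""
        return {"ram_mb": None, "cpu_free_pct": None, "disk_free_mb": None}

    def refresh_dashboard(db_path="dashboard.db"):
        """Create or update the dashboard table with one row per build node."""
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS dashboard (
                           node TEXT PRIMARY KEY, available INTEGER, latency_s REAL,
                           ram_mb INTEGER, cpu_free_pct REAL, disk_free_mb INTEGER,
                           updated_at REAL)""")
        for host, port in BUILD_NODES:
            available, latency = probe_node(host, port)
            hw = query_node_hardware(host) if available else {}
            con.execute("INSERT OR REPLACE INTO dashboard VALUES (?, ?, ?, ?, ?, ?, ?)",
                        (host, int(available), latency, hw.get("ram_mb"),
                         hw.get("cpu_free_pct"), hw.get("disk_free_mb"), time.time()))
        con.commit()
        con.close()

    if __name__ == "__main__":
        # The Dispatcher would call refresh_dashboard() periodically, at the
        # configurable interval referred to in step 4.
        refresh_dashboard()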