A method and system to efficiently orchestrate continuous integration and continuous deployment in an enterprise IaaS cloud environment

IP.com Disclosure Number: IPCOM000245585D
Publication Date: 2016-Mar-21

Publishing Venue

The IP.com Prior Art Database

Abstract

This article describes a method and system to efficiently orchestrate continuous integration and continuous deployment in an enterprise IaaS cloud environment. It addresses the efficiency problem in contexts such as mass cloud deployments to production environments, or Continuous Integration (CI) in cloud development and testing, by identifying in advance and avoiding unnecessary time spent on cloud deployment steps that are going to fail.


In an enterprise Infrastructure-as-a-Service (IaaS) cloud environment, the complex topology, configuration, and states of heterogeneous components mean that end-to-end integration and deployment often takes many hours. More importantly, any failure and recovery during the course of integration and deployment may cost the team twice the time, and therefore significantly impact the delivery schedule.

This article addresses these efficiency problems during continuous integration and continuous deployment for complex enterprise IaaS. The following are two typical use cases that can be optimized by applying the method and system described in this disclosure:


1) Topology with load balance and high availability architecture


Diagram: OpenStack-based cloud sample 2-node topology with Load Balance & High Availability.

A typical OpenStack cloud with a 2-node Load Balancer (LB) + High Availability (HA) configuration requires 20+ nodes, while a 3-node LB + HA configuration may have 30~40+ nodes in the topology. Due to this complexity, both the initial cloud deployment on Bare Metals (BMs) and the cloud update process take a long time; even in the best cases, the initial deployment costs a day or more when the process is fully end-to-end automated. If the cloud deployment is image-based (rather than Chef / Puppet / Ansible convergence based), the actual node provisioning time is shorter, but with the tradeoff that the image building process costs more time.
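To make the scale concrete, the following minimal Python sketch models an illustrative role layout and sums the per-role node counts for 2-node and 3-node LB + HA clusters. The role names and per-role counts are hypothetical assumptions chosen only to show how quickly the node total grows; they are not the exact topology shown in the diagram above.

# Minimal sketch (illustrative only): estimating total node count for an
# OpenStack-style topology with Load Balance + High Availability.
# Role names and per-role counts are hypothetical assumptions, not the
# exact topology from the diagram.
from dataclasses import dataclass

@dataclass
class Role:
    name: str
    nodes: int  # number of nodes carrying this role

def topology(cluster_size: int) -> list[Role]:
    """Return an example role layout for LB/HA clusters of `cluster_size` nodes."""
    return [
        Role("bare_metal_host", cluster_size),  # BMs hosting service VMs
        Role("load_balancer", cluster_size),
        Role("database", cluster_size),
        Role("message_queue", cluster_size),
        Role("storage", 3),                     # e.g. Ceph nodes
        Role("controller", cluster_size),       # OpenStack core services
        Role("dashboard", 2),
        Role("compute", 10),                    # scales with capacity needs
    ]

def total_nodes(roles: list[Role]) -> int:
    return sum(r.nodes for r in roles)

if __name__ == "__main__":
    for size in (2, 3):
        print(f"{size}-node LB + HA layout -> {total_nodes(topology(size))} nodes")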


2) Deployment with dependency on bootstrap sequence


Diagram: Sample OpenStack cloud node bootstrap sequence

Because of the dependencies between services on different node roles, the deployment flow has to follow a particular bootstrap sequence. For example, the BM servers hosting the OpenStack service VMs are brought up first, then the storage (e.g. Ceph) nodes; the VM nodes providing database, message queue, and load balancer services need to have their services configured and started before the OpenStack core services are brought up on the controller VM nodes. The dashboard services and additional features are converged after that. Combined with the design of the HA clusters, the nodes and services are bootstrapped in a sequence that mixes serialization and parallelization.
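The staged ordering described above is essentially a topological sort of a service-dependency graph, where roles with no remaining unmet dependencies can be converged in parallel within a stage. The following minimal Python sketch shows the idea; the role names and dependency edges are assumptions for illustration, not the exact sequence in the diagram.

# Minimal sketch: derive a bootstrap order from assumed role dependencies,
# grouping independent roles into stages that can be converged in parallel.
# Role names and dependency edges are illustrative assumptions, not the
# exact sequence from the diagram.

# role -> roles it depends on (must be bootstrapped first)
DEPENDENCIES: dict[str, set[str]] = {
    "bare_metal_host": set(),
    "storage":         {"bare_metal_host"},          # e.g. Ceph
    "database":        {"bare_metal_host"},
    "message_queue":   {"bare_metal_host"},
    "load_balancer":   {"bare_metal_host"},
    "controller":      {"database", "message_queue", "load_balancer", "storage"},
    "dashboard":       {"controller"},               # dashboard and add-ons last
}

def bootstrap_stages(deps: dict[str, set[str]]) -> list[list[str]]:
    """Return stages; roles within a stage have no mutual dependency."""
    remaining = {role: set(d) for role, d in deps.items()}
    stages: list[list[str]] = []
    while remaining:
        ready = sorted(role for role, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        stages.append(ready)
        for role in ready:
            del remaining[role]
        for d in remaining.values():
            d.difference_update(ready)
    return stages

if __name__ == "__main__":
    for i, stage in enumerate(bootstrap_stages(DEPENDENCIES), start=1):
        # roles listed in the same stage can be bootstrapped in parallel
        print(f"stage {i}: {', '.join(stage)}")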

In a DevOps / Continuous Delivery context, during the development and testing phases, after a change set / commit to the cloud deployment code or feature code is delivered into the code stream, the cloud deployment needs to complete before the code change becomes effective and ready for test in the deployed cloud environment. When many concurrent change sets are delivered by a large number of team members, the delta between builds that go through deployment and function verification can be a hundred change sets or even more. This makes it very hard to adopt Continuous Integration & Continuous Delivery practices for f...