This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Continuous delivery is a software strategy that seeks to provide new features to users as fast and efficiently as possible. The core idea is to create a repeatable, reliable and incrementally improving process for taking software from concept to customer. Think Agile principles. And, think frequent, reliable execution of repetitive tasks … i.e. think automation.
Like DevOps, Agile and similar recent initiatives, continuous delivery is fundamentally a set of practices and attitudes. Continuous delivery isn’t “solved” by putting a smart magic box, suite or toolset in a corner; it is implemented by committing to adopt a mindset and working towards a set of goals derived from those principles.
For most organizations, though, reaching the goals of continuous delivery means replacing error-prone, expensive and time-consuming manual processes with automation.
That being said, organizations cannot afford to risk significant disruption to a business-critical process such as application delivery through “big bang” automation approaches that attempt to move the entire process to supposed “one-stop-shop” suites in one go.
Instead, companies should adopt a phased approach to implementing continuous delivery, making smooth transitions that deliver measurable improvement.
Here are five considerations for continuous delivery:
* It lowers your costs. Traditional software deployments require mostly manual work such as expert scripting and frequent troubleshooting sessions, all of which add up to a significant investment. As the number of deployments grows, so does the expenditure. Rising cost (and deployment duration) also puts a hard ceiling on how many deployments are possible per hour.
In a continuous delivery environment, the number of deployments has little impact on overall costs. Once a deployment pipeline is configured, subsequent deployments happen automatically or at the push of a button. The maximum number of deployments is no longer limited by error-prone manual tasks.
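To make "the push of a button" concrete, here is a minimal sketch of a repeatable deployment step as a shell script. The application name, artifact name and target directory are illustrative assumptions, not part of any specific product; the point is that every run performs the same steps, so cost per deployment stays flat however often it is executed.

```shell
#!/bin/sh
# deploy.sh -- minimal, repeatable deployment step (illustrative sketch).
# Usage: deploy.sh [version]
set -eu

VERSION="${1:-1.0.0}"                    # version to deploy; default for demonstration
ARTIFACT="myapp-${VERSION}.tar.gz"       # hypothetical build artifact
TARGET="/tmp/myapp-releases/${VERSION}"  # hypothetical target directory

mkdir -p "$TARGET"
# In a real pipeline this step would fetch the artifact from a repository
# and unpack it; here we just record what would be deployed.
echo "deploying $ARTIFACT to $TARGET"
echo "$VERSION" > "$TARGET/DEPLOYED"
echo "deployment of $VERSION complete"
```

Because the script takes the version as a parameter, running it for deployment number 500 costs exactly as much as running it for deployment number one.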
* It shortens your time to market. In the traditional process, many changes are delivered in one big release. The time between releases is long and much effort is required to deploy the software. A big release with many changes almost inevitably gets delayed because you can’t get many features to work together in one go.
Furthermore, a manual release process is quite inefficient. For example, if three of the 100 changes in a release fail testing, the 97 correct changes cannot go into production until the three defective ones are fixed.
In a continuous delivery model, small batches of changes are moved continuously and become instantly visible. Changes can be made immediately available to customers -- and customer feedback can be gathered within minutes. When a feature is ready for production, it can be moved there without delay. This kind of quick feedback makes it easier to ensure the next thing you build is aligned with your customers' expectations. Such speed is vital, as a new feature can mean instant business value.
* It mitigates your risk. Generally, adding a large amount of changes to software introduces risk. Due to the long time gap between deployments, there is a high possibility that environments will also have to be changed. Every deployment becomes a “big bang” that touches many moving parts. The chance of hitting some untested combination is high. Every deployment is unique, making it difficult to rely on experience from previous deployments.
In contrast, a delivery pipeline only has to be configured and tested once, and from there on can be repeated many times in a row -- even for every code change, if desired.
As releases are proven on a continuous basis, the risk of poor or error-ridden releases is minimal, and lower than that of infrequent, manual releases. Simply put: the release process becomes far more reliable.
* It raises the overall quality of your application. In a traditional development model, code is compiled and packaged infrequently. Manual tests are performed once code is in its final stages, making test results visible only close to the end of a project. When a test fails, it is hard to find the solution, since there is no real 1:1 correlation between what was changed and what needs to be fixed -- which costs a lot of valuable time. As the project needs to go live as soon as possible, the code is eventually promoted to production even though not all the code problems have been solved.
In a continuous delivery model, the process of assembling, compiling and testing is completely automated. Possible quality issues become visible much earlier on in the process and can be fixed on the spot. When the current version reaches “ready for production” it is actually ready.
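As a sketch of that automated assemble-compile-test stage, the shell script below simulates the control flow: build, then test, and fail fast if anything breaks. The `build` and `run_tests` functions stand in for your real build tool and test suite (for example a Maven or Gradle invocation); they are assumptions made for illustration.

```shell
#!/bin/sh
# ci.sh -- sketch of an automated build-and-test stage.
set -eu  # abort on the first failing step, so problems surface early

build()     { echo "compiling sources..."; }         # stand-in for e.g. mvn package
run_tests() { echo "running test suite..."; true; }  # stand-in for e.g. mvn test

build
if run_tests; then
    # Only a version that has passed every automated check is promoted.
    echo "READY FOR PRODUCTION"
else
    # A failure here maps 1:1 to the change that triggered this run,
    # so the fix is easy to locate.
    echo "BUILD FAILED: fix before promoting" >&2
    exit 1
fi
```

Because the stage runs on every change, a red result points directly at the commit that caused it, which is exactly the 1:1 correlation the manual model lacks.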
* Automation is key. To enable the frequent, reliable execution of your delivery pipelines, organizations should at the very least investigate a "standard set" of automation tooling consisting of: pipeline orchestration/release coordination, continuous integration, application release automation and environment provisioning/configuration management. Why is this key? Because without automation, the quality and throughput targets of your pipelines cannot be met.
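One way to picture how those tool categories fit together is as ordered stages driven by an orchestrator. In the shell sketch below, each function stands in for a real tool in one category (a CI server, a provisioning/configuration tool, a release automation tool), while the loop plays the role of pipeline orchestration; the stage names and ordering are illustrative assumptions, not a prescribed toolchain.

```shell
#!/bin/sh
# pipeline.sh -- sketch of pipeline orchestration over the tool categories.
set -eu

continuous_integration() { echo "stage 1: build and test"; }
provision_environment()  { echo "stage 2: provision and configure environment"; }
release_application()    { echo "stage 3: automated application release"; }

# Orchestration/release coordination: run the stages in order,
# stopping immediately if any stage fails.
for stage in continuous_integration provision_environment release_application; do
    "$stage"
done
echo "pipeline complete"
```

The orchestration layer is what turns four separate tools into one repeatable pipeline: it enforces the order, stops on failure, and gives every deployment the same shape.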
Continuous delivery offers many benefits. By automating the process of software delivery you gather crucial customer feedback more quickly, speeding up the whole process of improving quality and reducing time to market while potentially cutting down your costs of development.

XebiaLabs is a provider of application release automation software for enterprises looking to improve the application delivery process.
What’s not to like?