Feb 13

Continuous Delivery Part 1: The Deployment Pipeline

Most of the ideas in this article come from the excellent book “Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation” by Jez Humble and Dave Farley.

Continuous Delivery defines a set of patterns for implementing a rapid, reliable and stress-free software delivery process. This is achieved by following a number of principles:

  • Every Check-in Leads to a Potential Release
    • This is very different from the Maven snapshot/release process of delivering software
  • Create a Repeatable, Reliable Process for Releasing Software
  • Automate Almost Everything
  • Keep Everything in Version Control
    • This includes code, test scripts, configuration, etc.
  • If It Hurts, Do It More Frequently, and Bring the Pain Forward
    • Use the same release process and scripts for each environment
  • Build Quality In
    • Continuous Integration, Automated Functional Testing, Automated Deployment
  • Done Means Released
  • Everyone is Responsible for the Delivery Process
    • DevOps – encourage greater collaboration between everyone involved in software delivery
  • Continuous Improvement
    • Refine and evolve your delivery platform

The most central pattern for achieving the above is the Deployment Pipeline. The pipeline models the steps a change takes from commit through building, testing, promotion and release. The first step usually builds the module and creates the project artifacts; these artifacts then pass along the pipeline, with each step providing more confidence that the release will be successful. Gates between steps can be automated or manually triggered, depending on the desired workflow. If all gates in the process are automated, this is known as Continuous Deployment.
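The pipeline of stages and gates described above can be sketched as a small data model. This is only an illustration; the stage names and the `approve` callback are invented for this sketch, not part of any particular tool:

```python
# Minimal sketch of a deployment pipeline: each stage either runs
# automatically when the previous one succeeds, or waits for a manual trigger.
STAGES = [
    ("Build & publish artifacts",  "automatic"),
    ("Deploy to test",             "automatic"),
    ("Automated functional tests", "automatic"),
    ("Deploy to UAT",              "manual"),
    ("Deploy to live",             "manual"),
]

def run_pipeline(stages, approve):
    """Run stages in order; a manual gate proceeds only if approve(name) is True."""
    completed = []
    for name, gate in stages:
        if gate == "manual" and not approve(name):
            break  # pipeline pauses here until someone triggers this step
        completed.append(name)
    return completed
```

If `approve` always answers yes (i.e. every gate is effectively automatic), the whole pipeline runs through to live on every commit, which is exactly the Continuous Deployment case.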

Implementing Pipelines

Pipelines can be modelled within the Jenkins CI Server using the Build Pipeline plug-in. The plug-in supports both manual and automatic steps.


In the example above, each build is assigned a unique release number of <app version>.<build number>. The artifacts are created as part of the build task and, if successful, deployed to an artifact repository. The next step is automatically triggered to deploy the artifacts to a test environment; if this succeeds, a set of automated functional tests is run to validate that the application functions correctly. The “Deploy to UAT” step is manually triggered by the appropriate user so that User Acceptance Testing can begin. Finally, when UAT is complete, automated deployment to the live environment can be triggered. Permissions can be added so that only certain groups of users may trigger specific steps.
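Deriving that unique release number is straightforward. A minimal sketch, assuming the app version comes from the project and the build number from the BUILD_NUMBER environment variable that Jenkins sets for each job:

```python
import os

def release_version(app_version):
    """Combine the application version with the CI build number,
    e.g. app version '1.2' and build 57 -> '1.2.57'."""
    build_number = os.environ.get("BUILD_NUMBER", "0")  # set by Jenkins per build
    return "{0}.{1}".format(app_version, build_number)
```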

As an alternative, “Go” by ThoughtWorks provides a more integrated solution for creating deployment pipelines, but its community edition is limited to 10 users.

The following Jenkins plug-ins are required to set up a pipeline such as the one above:

  • Build Pipeline
  • Groovy Builder – Required to set parameters for manual downstream jobs
  • Parameterized Trigger – Required to trigger the next step with the same parameters

And the following plug-ins could be useful when building your pipeline:

  • HTML Publisher – Useful for publishing reports such as the Living Documentation produced by functional tests
  • Sonar – Integration with Sonar code analytics
  • Join plugin – Useful when pipeline steps have forked for concurrent processing
  • Performance plugin – Useful for running reporting Load tests
  • Clone Workspace – Copies workspace to be used in another job

Build Process

When using the Maven release plugin we generally build snapshots continuously until we are happy to create a release, at which point we perform one. Performing a release this way requires three builds, two POM transformations and three SCM revisions, and versions are usually hard-coded directly into the POM.

When following Continuous Delivery, every CI build leads to a potential release, meaning there is no concept of snapshots and we must provide a unique version number for each build. This can be achieved either by using the Maven versions:set command or by following the process described in the article
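As a rough sketch, the versions:set approach boils down to stamping the build-specific version into the POMs before building. The helper below only assembles the mvn command line; the app version and the BUILD_NUMBER variable are assumptions about your setup:

```python
import os

def maven_versions_set_cmd(app_version):
    """Build the mvn command that stamps a unique version into the POMs
    before the release build. BUILD_NUMBER is supplied by the CI server."""
    version = "{0}.{1}".format(app_version, os.environ.get("BUILD_NUMBER", "0"))
    return ["mvn", "versions:set",
            "-DnewVersion={0}".format(version),
            "-DgenerateBackupPoms=false"]  # skip the pom.xml.versionsBackup files
```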


Configuration management

For Continuous Delivery and Automated Deployment to succeed it is important that each environment is as similar to live as possible. This way we build up confidence that a release will be successful, because we have run the same process through the pipeline several times before. Using a declarative configuration management tool such as Puppet can help here. By storing the Puppet manifests/modules in version control along with the artifact we can always match up releases with their configuration and test them together.

Artifact management

The artifact should be built only once and then used throughout the pipeline. An artifact repository such as Nexus or Artifactory should take care of this. It is often preferable to set up a number of repositories, one for each environment (e.g. test, UAT, staging, live). This way we can grant one set of users permission to promote an artifact to an environment (e.g. to signify UAT is complete), while a different set can initiate the release. We can also regularly clear out repositories used earlier in the pipeline (e.g. remove test artifacts older than 2 weeks and UAT artifacts older than 2 months).

Automated Functional Testing

Functional testing tools such as JBehave, Fitnesse and Cucumber allow tests written in human-readable form to be automated and run against a deployment. Tests should be organized by user story and are often written in a Given-When-Then format. When following Behaviour-Driven Development, tests are written before the functionality is implemented, either directly before coding the change or earlier in the process. If the tests themselves form the user requirements, as in the “Specification by Example” approach, then the functional tests actually become acceptance tests.

Once the tests are run they can produce Living Documentation defining exactly what the Application can and can’t do for a particular build.
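To show the Given-When-Then shape, here is a hand-rolled scenario. The Account class and the withdrawal scenario are invented for this sketch; real tools like JBehave or Cucumber parse plain-text scenarios and map each clause to a step implementation:

```python
class Account:
    """Toy domain object for the scenario below."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70
```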

Automated Deployment

I won’t go into Automated Deployment in too much detail, but it generally requires some kind of orchestration server (possibly a Jenkins slave). The responsibilities of this server may include updating Puppet configs, deploying multiple artifacts to multiple servers in order, running database migrations (Liquibase is often useful for this task), running smoke tests to validate the deployment, and performing automated rollback if something goes wrong. Extra consideration is needed if zero-downtime deployments are required; for example, data migrations will need to be forward- and backward-compatible.
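The deploy / smoke-test / rollback flow can be sketched as a short sequence. Every callable here is a placeholder for whatever scripts your orchestration server actually runs:

```python
def deploy_release(version, deploy, smoke_test, rollback):
    """Deploy a release and verify it; roll back on failure.
    `deploy`, `smoke_test` and `rollback` are callables supplied by the
    orchestration layer (e.g. scripts run against each target server)."""
    deploy(version)
    if smoke_test(version):
        return "released"
    rollback(version)
    return "rolled back"
```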

Tools such as ControlTier, Rundeck and Capistrano can help with this process.


