Archive for February, 2013

12 Feb 13

Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins

In this post we will build a very basic deployment pipeline using the Jenkins Continuous Integration Server and the Build Pipeline Jenkins plugin.

For an overview on deployment pipelines see my previous post Building deployment pipelines.

This basic pipeline will consist of a number of jobs:

  1. Build the artifact
  2. Acceptance testing
  3. Deploy to Staging
  4. Deploy to Live

Some of the gates between jobs will be triggered automatically and some manually using the Build Pipeline View. Please note the purpose of this post is to illustrate the process of creating a pipeline; the details of building artifacts, performing acceptance testing, automated deployment, etc. will be saved for future posts.

Prerequisites

  • Install Jenkins
  • Install the Build Pipeline plugin
  • Install the Groovy plugin
  • Install the Parameterized Trigger plugin

Build Job

The first job in your pipeline is usually responsible for setting the version number, building the artifact and deploying it to an artifact repository. The build job may also run unit tests, report code coverage and gather code analytics.

The first task is to create a new job as below:

[Screenshot: creating the Build job]

After clicking OK you are forwarded to the job configuration page. Add an “Execute system Groovy script” pre-build step to create a release number and add it as a parameter of the job. In this example we use the Jenkins build number. You may want to use the Version Control revision number instead (assuming you are not using a distributed revision control system such as Git, which uses hashes instead of revision numbers).

This is achieved with the following script:

import hudson.model.ParametersAction
import hudson.model.StringParameterValue

// Get the build that is currently executing this system Groovy script.
def build = Thread.currentThread().executable
// Derive the release number from the Jenkins build number.
String release = "1.0." + build.number
// Expose the release number to the rest of the job as a build parameter.
build.addAction(new ParametersAction(new StringParameterValue("RELEASE_NO", release)))

To set the release version of the artifact add an “Invoke top-level Maven targets” pre step with the command

versions:set -DnewVersion=$RELEASE_NO

Finally we add our “clean deploy” Maven step.

[Screenshot: Build job configuration]

Acceptance Test Job

The next job is responsible for deploying the artifact from the artifact repo into a test environment and then running a suite of automated acceptance tests against the deployment.

The first task is to create the job.

[Screenshot: creating the Acceptance Test job]

I will save the details of automated deployment and automated acceptance testing for a later post.

We will now add a trigger from the Build job to the Acceptance Test job. From within the Build job configuration add a “Trigger parameterized build on other projects” post-build action. In “Projects to Build” specify “Acceptance Test”. Finally, click on “Add Parameters” and select “Current build parameters” to pass our release number to the next job.

[Screenshot: automatic trigger configuration]

Deploy to Staging Job

Create a new free-style project job named “Deploy to Staging”. This job will be responsible for running the automated deployment and smoke test scripts against the staging environment. Again, I will not cover these details in this post. This job should be a manually triggered gate in the pipeline. To do this open the Acceptance Test job configuration and create a new “Build Pipeline Plugin -> Manually Execute Downstream Project” post-build action.

[Screenshot: manual trigger configuration]

You can repeat the process with the “Deploy to Live” Job.

Create the Pipeline

The final task is to create the pipeline. From the Jenkins homepage create a new view by clicking on the “+” tab next to “All”. Give the view a name and specify a “Build Pipeline View”.

[Screenshot: creating the pipeline view]

Clicking OK will take you to the pipeline configuration page. Make sure you specify the initial build as the “Build” job we created earlier.

[Screenshot: pipeline view configuration]

Once the view is created go to the pipeline view and click the “Run” icon. This should run the “Build” and “Acceptance Test” jobs. To manually trigger the subsequent jobs click the trigger button in the bottom right corner of the respective blue box.

[Screenshot: the running pipeline]

6 Feb 13

Continuous Delivery Part 1: The Deployment Pipeline

Most of the ideas in this article come from the excellent book “Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation” by Jez Humble and Dave Farley.

Continuous Delivery defines a set of Patterns to implement a rapid, reliable and stress-free process of Software delivery. This is achieved by following a number of principles:

  • Every Check-in Leads to a Potential Release
    • This is very different to the Maven Snapshot-Release process of delivering Software
  • Create a Repeatable, Reliable Process for Releasing Software
  • Automate almost Everything
  • Keep Everything in Version Control
    • This includes code, test scripts, configuration, etc
  • If It Hurts, Do It More Frequently, and Bring the Pain Forward
    • Use the same release process and scripts for each environment
  • Build Quality In
    • Continuous Integration, Automated Functional Testing, Automated Deployment
  • Done Means Released
  • Everyone is Responsible for the Delivery Process
    • DevOps – Encourage greater Collaboration between everyone involved in Software Delivery
  • Continuous Improvement
    • Refine and evolve your delivery platform

The most central pattern for achieving the above is creating a Deployment Pipeline. This pipeline models the steps from committing a change, through building, testing and promoting it, to releasing it. The first step usually builds the module and creates the project artifacts; these artifacts then pass along the pipeline, each step providing more confidence that the release will be successful. Gates between steps can be automated or manually triggered depending on the workflow desired. If all gates in the process are automated, this is known as Continuous Deployment.

Implementing Pipelines

Pipelines can be modelled within the Jenkins CI Server using the Build Pipeline plug-in. The plug-in supports both manual and automatic steps.

[Screenshot: an example deployment pipeline]

In the example above, each build is assigned a unique release number of <app version>.<build number>. The artifacts are created as part of the build step and, if successful, deployed to an artifact repository. The next step is automatically triggered to deploy the artifacts onto a test environment; if this succeeds, a set of automated functional tests is run to validate that the application functions correctly. The “Deploy to UAT” step is manually triggered by the appropriate user so that User Acceptance Testing can begin. Finally, when UAT is complete, automated deployment to the live environment can be triggered. Permissions can be added so that only certain groups of users can trigger specific steps.

As an alternative, “Go” by Thoughtworks provides a more integrated solution for creating deployment pipelines, but the community edition is limited to 10 users.

The following Jenkins plug-ins are required to set up a pipeline such as the one above:

  • Build Pipeline
  • Groovy Builder – Required to set parameters for manual downstream jobs
  • Parameterized Trigger – Required to trigger the next step with the same parameters

And the following plug-ins could be useful when building your pipeline:

  • HTML Publisher – Useful for publishing reports such as the Living Documentation produced by functional tests
  • Sonar – Integration with Sonar code analytics
  • Join plugin – Useful when pipeline steps have forked for concurrent processing
  • Performance plugin – Useful for running and reporting on load tests
  • Clone Workspace – Copies workspace to be used in another job

Build Process

When using the Maven release plugin we generally build snapshots continuously until we are ready to cut a release. Using the Maven release plugin this requires 3 builds, 2 POM transformations and 3 SCM revisions. Versions are usually hard-coded directly into the POM.
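
For reference, the conventional flow with the release plugin is driven by two goals, which between them perform the builds, POM transformations and SCM commits mentioned above:

mvn release:prepare   # runs the build, rewrites the POM versions and tags SCM
mvn release:perform   # checks out the tag, builds it and deploys the artifacts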

When following Continuous Delivery every CI build leads to a potential release, meaning there is no concept of snapshots and we must provide a unique version number for each build. This can be achieved either by using the Maven versions:set command or by following the process defined in the article http://www.axelfontaine.com/2011/01/maven-releases-on-steroids-adios.html.
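
A minimal sketch of the versions:set approach, stamping the Jenkins build number into the POM before the build (BUILD_NUMBER is an environment variable Jenkins sets for every build):

mvn versions:set -DnewVersion=1.0.${BUILD_NUMBER}
mvn clean deploy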

Configuration management

For Continuous Delivery and Automated Deployment to succeed it is important that each environment is as similar to live as possible. This way we build up confidence that a release will be successful, as we have run the same process on this pipeline several times before. Using a declarative configuration management tool such as Puppet can ease this concern. By storing the Puppet manifests/modules in version control along with the artifact we can always match up releases with configuration and test them together.

Artifact management

The artifact should be built only once and then used throughout the pipeline. An artifact repository such as Nexus/Artifactory should take care of this. It is often preferable to set up a number of repos, one for each environment (e.g. test, UAT, staging, live). This way we can grant permission to promote an artifact for an environment to one set of users (e.g. to signify UAT is complete) and a different set can then initiate the release. We can also regularly clear out repositories used earlier in the pipeline (e.g. remove test artifacts older than 2 weeks, UAT artifacts older than 2 months).
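
As a sketch, the maven-deploy-plugin can target a specific repository per environment via the altDeploymentRepository property (the repository id and URL here are hypothetical):

mvn deploy -DaltDeploymentRepository=test-releases::default::https://nexus.example.com/content/repositories/test-releases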

Automated Functional Testing

Functional testing tools such as JBehave, FitNesse and Cucumber allow tests written in human-readable form to be automated and run against a deployment. Tests should be organized by User Story and are often written in a Given-When-Then format. When following the process of Behaviour Driven Development, tests are written before the functionality is implemented, either directly before coding the change or earlier in the process. If the tests themselves form the user requirements, as defined by the “Specification by Example” process, then the functional tests actually become acceptance tests.
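
For example, a hypothetical scenario in the Given-When-Then format might read:

Given a registered user with an empty basket
When the user adds a product costing £10 to the basket
Then the basket total is £10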

Once the tests are run they can produce Living Documentation defining exactly what the Application can and can’t do for a particular build.

Automated Deployment

I won’t go into Automated Deployment in too much detail, but this generally requires some kind of orchestration server (possibly a Jenkins slave). The responsibilities of this server may include updating Puppet configs, deploying multiple artifacts to multiple servers in order, running database migrations (Liquibase is often useful for this task), running smoke tests to validate the deployment and performing automated rollback if something goes wrong. Extra consideration is needed if zero-downtime deployments are required; for example, data migrations will need to be forward/backward compatible.
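
As a rough sketch, the orchestration steps for a single server might look like the following (the host names, paths and changelog file are hypothetical):

# apply the database migrations for this release
liquibase --changeLogFile=db.changelog.xml --url=jdbc:postgresql://db.example.com/myapp update
# push the versioned artifact to the application server
scp myapp-$RELEASE_NO.war deploy@app1.example.com:/opt/tomcat/webapps/myapp.war
# smoke test: a non-zero exit fails the deployment job
curl -f http://app1.example.com:8080/myapp/status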

Tools such as Control Tier, Rundeck and Capistrano can be used to help with this process.