Archive for the ‘Continuous Delivery’ Category

May 13

Functional Testing Using JBehave, Jetty and Maven

JBehave is a Java-based functional testing framework which allows tests written in human-readable text to be executed automatically.
Functionality is defined at the user story level and broken down into a number of scenarios. Each scenario should capture a unique piece of functionality or behavior.
The format of a JBehave story is given below:

Narrative: In order to ….. As a ….. I want to …..

Scenario: …..

Given …..
When …..
Then …..

The Narrative defines the functionality we are trying to test, in user story form.
Following the narrative are a number of scenarios.
The Given clause defines a set of pre-conditions that set up the context of the test.
The When clause defines one or more actions to be performed to trigger the functionality that we are testing.
The Then clause is then used to verify that the state is as expected following the action.
A concrete example for a “happy path” scenario is given below:

In order to buy a product
As a customer
I want to register an account

Scenario: register an account

Given no account exists with email
And no account exists with username tester
When I create an account with email, username tester and date of birth 11/12/1981
Then I receive a successful response
And the account is created in the database

Each line of the scenario can then be mapped to a Java “Step” using annotations:

@Given("no account exists with email $email")
public void checkEmailDoesNotExist( String email ) {
    // illustrative: accountRepository is assumed to query the application database
    assertThat( accountRepository.findByEmail( email ), is( nullValue() ) );
}

@When("I create an account with email $email, username $user and date of birth $dob")
public void createAccount( String email, String username, Date dob ) {
    Account account = new Account( email, username, dob );
    response = restTemplate.postForEntity( url, account, Account.class );
}

@Then("I receive a successful response")
public void checkSuccessfulResponse() {
    assertThat( response.getStatusCode().value(), is( 200 ) );
}

Tests can be run in a number of ways. The easiest is through JBehave's JUnit integration, which allows annotation-driven configuration. It also integrates nicely with Spring. For example:

@RunWith(AnnotatedEmbedderRunner.class)
@Configure
@UsingEmbedder
@UsingSteps( instances = { RegistrationSteps.class } )
public class RegistrationTests extends InjectableEmbedder {

    @Test
    public void run() {
        // story path and steps class are illustrative
        injectedEmbedder().runStoriesAsPaths( asList( "stories/registration.story" ) );
    }
}

I recommend downloading the JBehave Eclipse plugin, which performs syntax highlighting and links from the textual steps to the code.

Behaviour Driven Development

Behavior-driven development is a specialized version of test-driven development which focuses on behavioral specification of software units.
The tests are defined first. JBehave will mark these tests as “Pending” until the implementation is complete. Preferably, these tests are defined collaboratively between the developers, testers and business, in which case they form the acceptance criteria for the user story.
The developer then implements the functionality required so that the tests pass. At this stage the developer may discover alternative scenarios that may form further acceptance criteria. For example:

Scenario: I must use a unique email

Given an account already exists with email
And no account exists with username tester
When I create an account with email, username tester and date of birth 11/12/1981
Then I receive a bad request error with message “Email must be unique”

Scenario: My Date of Birth Must be in the past

Given no account exists with email
And no account exists with username tester
When I create an account with email, username tester and date of birth 11/12/2081
Then I receive a bad request error with message “Date of birth must be in the past”

Again, these stories should be the product of collaboration between developers, testers and project stakeholders. In this way the stories not only provide acceptance criteria, forming a functional contract regarding what functionality the application will provide, but also provide regularly validated “Living Documentation” specifying exactly what the application does at any point in time.

Integrating into the build process using Maven and Jetty

Your functional tests could be run against your application at various levels:

Directly against the code
Against a test container running the code (for example using spring mvc integration tests)
Against a deployed application at the http level (below the GUI)
Over the GUI (using a tool such as Selenium).

I prefer the third option as I want to test as much as possible without the fragility that GUI testing often brings. When testing REST services this level is particularly appropriate.
A lightweight container such as Jetty provides a simple way to expose your application as part of the build lifecycle as follows:

pre-integration-test: Start Jetty containing deployed application
integration-test: Run JBehave tests using the JBehave Maven plugin
post-integration-test: Shutdown Jetty

This lifecycle can easily be set up in the plugin section of your POM:
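A sketch of the relevant plugin configuration is shown below (it goes inside the build/plugins section; group IDs, ports, versions and goal names are assumptions and should be checked against the jetty-maven-plugin and jbehave-maven-plugin documentation for your versions):

```xml
<!-- Illustrative sketch: start Jetty before, and stop it after, the integration-test phase -->
<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <configuration>
    <stopKey>STOP</stopKey>
    <stopPort>8005</stopPort>
  </configuration>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>pre-integration-test</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <!-- return control to Maven so the tests can run against the started server -->
        <daemon>true</daemon>
      </configuration>
    </execution>
    <execution>
      <id>stop-jetty</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
<!-- Run the JBehave stories during the integration-test phase -->
<plugin>
  <groupId>org.jbehave</groupId>
  <artifactId>jbehave-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>run-stories</id>
      <phase>integration-test</phase>
      <goals><goal>run-stories-as-embeddables</goal></goals>
    </execution>
  </executions>
</plugin>
```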



JBehave as part of a continuous delivery pipeline

When using continuous delivery it is often preferable to run your JBehave tests directly against the application deployed in a test environment; this way you are testing the configuration and deployment process in addition to the code.
A typical deployment pipeline will consist of the following steps:

  • Build binaries
  • Deploy to test environment
  • Run acceptance tests against test environment
  • Promote to UAT
  • ….

As the tests run directly against an app server we no longer strictly need Jetty; however, it is often worth keeping it in to run a restricted set of smoke tests in the build phase to validate the build. JBehave supports meta tags so that you can mark a restricted set of tests as smoke tests:

Scenario: Simple smoke test

Meta: @theme smoke


When you instruct JBehave to run the tests, you can inform it to run only the smoke tests, or to exclude the smoke tests.

@UsingEmbedder(metaFilters = {"+theme smoke"})


@UsingEmbedder(metaFilters = {"-theme smoke"})

If you configure your acceptance tests so that endpoint and database connection details can be specified externally, you can point your acceptance tests at the test environment.
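One lightweight way to externalise these details is to resolve them from system properties, falling back to the local Jetty instance used in the build. This is a minimal sketch; the property names and defaults are assumptions:

```java
// Minimal sketch: resolve environment-specific endpoints from system properties.
// Property names and default values are illustrative assumptions.
public class TestConfig {

    public static String baseUrl() {
        return System.getProperty("acceptance.baseUrl", "http://localhost:8080/app");
    }

    public static String dbUrl() {
        return System.getProperty("acceptance.dbUrl", "jdbc:h2:mem:test");
    }

    public static void main(String[] args) {
        // e.g. mvn verify -Dacceptance.baseUrl=http://test-env:8080/app
        System.out.println(baseUrl());
    }
}
```

The same build can then run against the embedded Jetty by default, or against any deployed environment by passing -D properties on the command line.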

Feb 13

Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins

In this post we will build a very basic deployment pipeline using the Jenkins Continuous Integration Server and the Build Pipeline Jenkins plugin.

For an overview on deployment pipelines see my previous post Building deployment pipelines.

This basic pipeline will consist of a number of jobs:

  1. Build the artifact
  2. Acceptance testing
  3. Deploy to Staging
  4. Deploy to live

Some of the gates between jobs will be automatically triggered and some will be triggered manually using the Build Pipeline view. Please note that the purpose of this post is to illustrate the process of creating a pipeline; the details of building artifacts, performing acceptance testing, automated deployment, etc. will be saved for future posts.


  • Install Jenkins
  • Install the Build Pipeline plugin
  • Install the Groovy plugin
  • Install the Parameterized Trigger plugin

Build Job

The first job in your pipeline is usually responsible for setting the version number, building the artifact and deploying it to an artifact repository. The build may also include running unit tests, reporting code coverage and performing code analysis.

The first task is to create a new job as below:


After clicking OK you are forwarded to the job configuration page. Add an “Execute system Groovy script” pre-build step to create a release number and add it as a parameter of the job. In this example we use the Jenkins build number. You may want to use the version control revision number instead (assuming you are not using a distributed version control system such as Git, which uses hashes instead of revision numbers).

This is achieved with the following script:

import hudson.model.AbstractBuild
import hudson.model.ParametersAction
import hudson.model.StringParameterValue

def build = Thread.currentThread().executable
String release = "1.0." + build.number
def newParamAction = new ParametersAction(new StringParameterValue("RELEASE_NO", release))
// make RELEASE_NO available to this build (and to downstream jobs via "Current build parameters")
build.addAction(newParamAction)

To set the release version of the artifact, add an “Invoke top-level Maven targets” pre-build step with the command

versions:set -DnewVersion=$RELEASE_NO

Finally we add our “clean deploy” Maven goals.


Acceptance Test Job

The next job is responsible for deploying the artifact from the artifact repo into a test environment and then running a suite of automated acceptance tests against the deployment.

The first task is to create the job.


I will save the details of automated deployment and automated acceptance testing for a later post.

We will now add a trigger from the Build job to the Acceptance Test job. From within the Build job configuration add a “Trigger parameterized build on other projects” post-build action. In “Projects to Build” specify “Acceptance Test”. Finally, click on “Add Parameters” and select “Current build parameters” to pass our release number to the next job.


Deploy to Staging Job

Create a new free-style project job named “Deploy to Staging”. This job will be responsible for running the automated deployment and smoke test scripts against the staging environment. Again, I will not cover these details in this post. This job should be a manually triggered gate in the pipeline. To do this, open the Acceptance Test job configuration and create a new “Build Pipeline Plugin -> Manually Execute Downstream Project” post-build action.


You can repeat the process with the “Deploy to Live” Job.

Create the Pipeline

The final task is to create the pipeline. From the Jenkins homepage create a new view by clicking on the “+” tab next to “All”. Give the view a name and specify a “Build Pipeline View”.


Clicking OK will take you to the pipeline configuration page. Make sure you specify the initial build as the “Build” job we created earlier.


Once the view is created go to the pipeline view and click the “Run” icon. This should run the “Build” and “Acceptance Test” jobs. To manually trigger the subsequent stages, click the trigger button in the bottom right corner of the respective blue box.


Feb 13

Continuous Delivery Part 1: The Deployment Pipeline

Most of the ideas in this article come from the excellent book “Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation” by Jez Humble and Dave Farley.

Continuous Delivery defines a set of Patterns to implement a rapid, reliable and stress-free process of Software delivery. This is achieved by following a number of principles:

  • Every Check-in Leads to a Potential Release
    • This is very different to the Maven Snapshot-Release process of delivering Software
  • Create a Repeatable, Reliable Process for Releasing Software
  • Automate almost Everything
  • Keep Everything in Version Control
    • This includes code, test scripts, configuration, etc
  • If It Hurts, Do It More Frequently, and Bring the Pain Forward
    • Use the same release process and scripts for each environment
  • Build Quality In
    • Continuous Integration, Automated Functional Testing, Automated Deployment
  • Done Means Released
  • Everyone is Responsible for the Delivery Process
    • DevOps – Encourage greater Collaboration between everyone involved in Software Delivery
  • Continuous Improvement
    • Refine and evolve your delivery platform

The most central pattern for achieving the above is the Deployment Pipeline. This pipeline models the steps from committing a change, through building, testing and promoting it, to releasing it. The first step usually builds the module and creates the project artifacts; these artifacts then pass along the pipeline, each step providing more confidence that the release will be successful. Gates between steps can be automated or manually triggered depending on the workflow desired. If all gates in the process are automated, this is known as Continuous Deployment.

Implementing Pipelines

Pipelines can be modelled within the Jenkins CI Server using the Build Pipeline plug-in. The plug-in supports both manual and automatic steps.


In the example above, each build is assigned a unique release number of <app version>.<build number>. The artifacts are created as part of the build task and, if successful, deployed to an artifact repository. The next step is automatically triggered to deploy the artifacts onto a test environment; if this succeeds, a set of automated functional tests is run to validate that the application functions correctly. The “Deploy to UAT” step is manually triggered by the appropriate user so that User Acceptance Testing can begin. Finally, when UAT is complete, automated deployment to the live environment can be triggered. Permissions could be added so that only certain groups of users can trigger specific steps.

As an alternative, “Go” by ThoughtWorks provides a more integrated solution for creating deployment pipelines, but the community edition is limited to 10 users.

The following Jenkins plug-ins are required to set up a pipeline such as the one above:

  • Build Pipeline
  • Groovy Builder – Required to set parameters for manual downstream jobs
  • Parameterized Trigger – Required to trigger the next step with the same parameters

And the following plug-ins could be useful when building your pipeline:

  • HTML Publisher – Useful for publishing reports such as the Living Documentation produced by functional tests
  • Sonar – Integration with Sonar code analytics
  • Join plugin – Useful when pipeline steps have forked for concurrent processing
  • Performance plugin – Useful for running and reporting on load tests
  • Clone Workspace – Copies workspace to be used in another job

Build Process

When using the Maven release plugin we generally build snapshots continuously until we are happy to perform a release. The release plugin then requires 3 builds, 2 POM transformations and 3 SCM revisions. Versions are usually hard-coded directly into the POM.

When following Continuous Delivery every CI build leads to a potential release, meaning there is no concept of snapshots and we must provide a unique version number for each build. This can be achieved either by using the Maven command versions:set or by following the process as defined in the article.
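The difference can be sketched as a small build step: every CI run stamps a unique version into the POM before packaging. The 1.0.<build> scheme and the BUILD_NUMBER variable are illustrative assumptions:

```shell
# Derive a unique, releasable version number for every CI build.
BUILD_NUMBER=${BUILD_NUMBER:-42}        # provided by Jenkins in a real build
RELEASE_NO="1.0.${BUILD_NUMBER}"
echo "${RELEASE_NO}"
# mvn versions:set -DnewVersion="${RELEASE_NO}" && mvn clean deploy
```

Because the version is computed rather than stored in the POM, no SCM commits or extra builds are needed to cut a release.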

Configuration management

For Continuous Delivery and Automated Deployment to succeed it is important that each environment is as similar to live as possible. This way we build up confidence that a release will be successful, because we have run the same process through the pipeline several times before. Using a declarative configuration management tool such as Puppet can ease this concern. By storing the Puppet manifests/modules in version control along with the artifact we can always match up releases with configuration and test them together.

Artifact management

The artifact should be built only once and then used throughout the pipeline. An artifact repository such as Nexus or Artifactory should take care of this. It is often preferable to set up a number of repos, one for each environment (e.g. test, UAT, staging, live). This way we can grant permission to promote an artifact for an environment to one set of users (e.g. to signify that UAT is complete) while a different set can initiate the release. We can also regularly clear out repositories used earlier in the pipeline (e.g. remove test artifacts older than 2 weeks, UAT artifacts older than 2 months).

Automated Functional Testing

Functional testing tools such as JBehave, FitNesse and Cucumber allow tests written in human-readable form to be automated and run against a deployment. Tests should be organized by user story and are often written in a Given-When-Then format. When following Behaviour Driven Development, tests are written before the functionality is implemented, either directly before coding the change or earlier in the process. If the tests themselves form the user requirements, as defined by the “Specification by Example” process, then the functional tests actually become acceptance tests.

Once the tests are run they can produce Living Documentation defining exactly what the application can and can’t do for a particular build.

Automated Deployment

I won’t go into Automated Deployment in too much detail, but it generally requires some kind of orchestration server (possibly a Jenkins slave). The responsibilities of this server may include updating Puppet configs, deploying multiple artifacts to multiple servers in order, running database migrations (Liquibase is often useful for this task), running smoke tests to validate the deployment, and performing automated rollback if something goes wrong. Extra consideration is needed if zero-downtime deployments are required; for example, data migrations will need to be forward/backward compatible.
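The orchestration responsibilities above can be sketched as a simple deploy-then-verify script. Everything here is illustrative; each function stands in for a real step (ssh, Puppet runs, Liquibase migrations, and so on):

```shell
#!/bin/sh
# Illustrative orchestration sketch: deploy, smoke test, roll back on failure.
deploy()     { echo "deploying release $1"; }      # e.g. push artifacts, apply Puppet config
smoke_test() { echo "running smoke tests"; true; } # e.g. curl a health-check endpoint
rollback()   { echo "rolling back to $1"; }        # e.g. redeploy the previous artifact

RELEASE="1.0.42"
PREVIOUS="1.0.41"

deploy "$RELEASE"
if smoke_test; then
    RESULT="released $RELEASE"
else
    rollback "$PREVIOUS"
    RESULT="rolled back to $PREVIOUS"
fi
echo "$RESULT"
```

The key design point is that the smoke test acts as the gate: the pipeline only considers the deployment complete once the deployed application has been verified.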

Tools such as ControlTier, Rundeck and Capistrano can help with this process.