Continuous Delivery: a practical guide for developers

What is the most stressful part of writing and shipping software? Most of us would probably say: deployments. In the old days, when agile was not yet a buzzword in the IT world, each deployment was preceded by weeks or even months of analysis, development and testing. On Deployment Day everything finally had to become clear: either it works or it doesn’t. It was the day that revealed whether each team member had done their job honestly, and whether your project was a success. That’s a lot riding on a single day, isn’t it?

What is meant by Continuous Delivery?

Continuous Delivery is all about making this process less stressful and more reliable. The principle is very simple: if it hurts, do it more often. This rule is the foundation of the way we think about Continuous Delivery. You should ship your project only through your pipeline, and change the pipeline whenever you find a missing part. This will certainly help you keep the quality of your product high and prevent you from losing time tracking down bugs caused by manual deployments.

I have to warn you, though: do not think of Continuous Delivery as a silver bullet for all software-related problems. For me, it’s more about choosing the proper toolset and structuring its usage. You’ll still need decent engineering to build your software; Continuous Delivery is just a way to help you with it.

At Sparkbit, we use Continuous Delivery in all of our projects, and I felt it was time to share our experience with applying these principles in our day-to-day work. We try to stick to the general rules, but we also have some ideas of our own on how to improve the pipeline which you may find useful.


There are many resources providing great and extensive descriptions of Continuous Delivery. My favourite is the book Continuous Delivery by Jez Humble and David Farley, which we have in our library. Not only does it describe the theory, but it also adds many practical examples and real-life stories of applying these rules in projects.


In our projects, we define the Continuous Delivery process as a list of consecutive stages:

  • Pre-commit stage. The CI server runs automated tests on every proposed change (perhaps not all of them; keep this step quick) and team members review the code.
  • Commit stage. Triggered automatically after each commit to the target branch. It runs all automated tests, builds the artifacts and saves them in the repository for future use.
  • Acceptance test stage. The CI server runs the suite of automated acceptance tests.
  • Manual test stage. Testers check the application manually.
  • Performance test stage. The CI server runs automated performance and capacity tests.
  • Release stage. An operations engineer releases the application.

This is a general overview of the process; it can (and should) be adapted to the specific requirements of each project. Note that only the pre-commit, manual test and release stages require human intervention. When designing your pipeline, try to keep the number of such steps to a minimum.
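The automated part of the stage chain can be sketched as a simple script. This is only an illustration, not our actual pipeline configuration: each placeholder `run_stage` call would be a separate CI job in practice, and the manual stages (pre-commit review, manual testing, release) that gate the transitions are left out.

```shell
#!/bin/sh
# Sketch of the automated stage chain. Each stage is a placeholder here;
# in a real pipeline these would be separate CI jobs triggered in sequence.
set -e                      # abort the pipeline on the first failing stage
log=pipeline.log
: > "$log"                  # start with an empty log

run_stage() {               # run one stage and record it in the log
    echo "stage: $1" | tee -a "$log"
    # real work (tests, builds, deploys) would happen here
}

run_stage "commit"            # all automated tests + artifact build
run_stage "acceptance-test"   # automated acceptance suite
run_stage "performance"       # performance and capacity tests
echo "pipeline finished" | tee -a "$log"
```

With `set -e`, a failing stage stops everything after it, which mirrors how a CI server refuses to promote a build past a red job.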

Our toolset

Continuous Delivery can be seen as a set of guidelines for delivering software. It is technology-agnostic, so you’re not tied to specific solutions when implementing its principles in your infrastructure. What I’m presenting here is our choice of tools, based on our experience, which has proved sufficient for us.

Continuous Delivery with Jenkins workflow

The heart of every CD pipeline is the Continuous Integration server, responsible for running all the automated jobs in the pipeline. We chose Jenkins as our CI server. As your CD pipeline grows over the course of the project, the number of automated tasks grows with it. In our configurations the number of jobs grows quickly, since we prefer creating simple jobs with a single responsibility. Notice that this is the moment when the actual pipeline gets created: by chaining jobs to one another. For example, you could have one job for building an artifact, one for deploying that artifact and another for executing end-to-end tests against it, executed sequentially.
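The build/deploy/test chain from the example can be sketched as three single-responsibility steps, each feeding the next. The function names and file contents below are invented for illustration; in Jenkins each function would be a separate job, with the next one triggered only when the previous succeeds.

```shell
#!/bin/sh
# Three hypothetical single-responsibility jobs chained into a pipeline.
set -e

build_artifact()  { echo "app.jar built from the latest commit" > artifact.txt; }
deploy_artifact() { cp artifact.txt deployed.txt; }               # ship the build
run_e2e_tests()   { grep -q "app.jar" deployed.txt && echo "e2e: ok"; }

# chained with &&: a failure in any job stops the ones after it
build_artifact && deploy_artifact && run_e2e_tests
```

Keeping each job this small makes failures easy to localize: when the chain goes red, the failing link names the broken responsibility.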

These days, no software project can be completed without a version control system, and it should not surprise anyone that I recommend git. If you’re not familiar with it, there are lots of resources out there; I’m sure you’ll find something useful.

Another essential part of every modern CD pipeline is a code review tool. We prefer Gerrit, a tool from Google which comes with its own git repository. You’ll read more about code review below, but keep in mind for now that choosing the right code review tool is very important: we use it every day to handle every change in our projects’ codebases.

Last but not least, it is crucial to choose a good set of testing tools. You need a proper mix of them to be able to test different aspects of the application. It is difficult to recommend solid choices in this area because they often depend on the technologies used and on the type of the application itself. Here is our choice of testing technologies for web applications written in Java (backend) and AngularJS (frontend):

  • Quality checks: TSLint, CheckStyle, FindBugs, Sonar
  • Unit testing: karma, JUnit, Spring mocks, Spock
  • Integration testing: docker-compose
  • API testing: Apache Camel
  • UI testing: Protractor

Version control

Using version control these days is standard practice. I would like to encourage you to extend its usage to things less obvious than source code. We keep in version control the full configuration of our application (for the different environments) and everything else related to the project. This practice has three main advantages:

  • When doing an automatic deployment, the configuration can be retrieved together with the source code, so no manual changes in the config files are necessary.
  • When somebody has to go back to an older version of the application, they can run it the usual way; the VCS stores all the relevant data.
  • You can always browse the history of changes in your config files.
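One way to arrange this, sketched below with invented directory names and property keys, is to keep one config directory per environment in the repository and let the deployment job copy the right one into place:

```shell
#!/bin/sh
# Sketch: per-environment configuration kept in the repository next to
# the code. Directory names and property keys are invented for illustration.
set -e
mkdir -p config/dev config/production

echo "db.host=localhost"   > config/dev/app.properties
echo "db.host=db.internal" > config/production/app.properties

ENV=production                                # chosen by the deployment job
cp "config/$ENV/app.properties" app.properties
echo "deployed with $ENV configuration"
```

Since the config files live in the same repository as the code, checking out any tagged release also checks out the configuration that release was meant to run with.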

Code review

In my experience, code review is the part of the delivery process where most of the quality checks are enforced. Although I like to automate as much as possible, I believe there must always be room for a decent discussion over the code and for human checks of readability and maintainability.

You should not avoid constructive criticism in your team: there will always be bugs that your teammates can spot at a glance, even if you think your code is well polished (you’ll be surprised!).

The code review process can be tough to introduce in teams that are not used to it. Handling conflicting (and sometimes, let’s face it, harsh) opinions can be difficult for some of us. But once your team gets used to code review, it becomes second nature. You start to feel the need to ask for someone else’s opinion before sharing your work publicly, and not only for code (the same happened with this post).

Code review guidelines

To make sure we cover everything needed during a review, we’ve introduced a set of guidelines for the reviewer. Our main points include:

  • Does the change cover one single use case, and is it impossible to split it further? (It’s better to make lots of small commits than one big one, which is much harder to review.)
  • Code readability (checked before running the code): is the code self-explanatory? Can you understand what it does without looking at the requirements for this particular change?
  • Manual testing: the reviewer builds and runs the code, checking both the happy path and the corner cases.
  • Automated testing: does the change add a new test case or modify an existing one? Do all tests pass?
  • Code quality: does the change introduce commented-out code? Is the code properly divided into modules/functions?
  • Maintainability: does the change introduce new dependencies? Does it modify the database schema?

Making manual testing easier

I would like to emphasize the importance of the manual testing step: you should never neglect it. To support it, our pipelines automatically build and deploy every change proposed in the code review system. This way we can check two things at the same time:

  • Are the application artifacts built properly?
  • Can the application be started successfully?

It’s worth noting that all of this happens automatically, before a reviewer even starts looking at the change. If you decide to apply the same strategy in your pipeline, you’ll end up with many instances of your application on a single server. That is why it is so important to deploy all your configuration together with the artifacts and to give each application instance its own set of configuration files. If you do the opposite (one global configuration per server), at some point you’ll face the problem of the configuration not matching all application instances.
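A minimal sketch of that isolation, with made-up change IDs, ports and paths: each proposed change gets its own directory holding both its artifact and its own copy of the configuration, so instances on the same server never fight over a shared config file.

```shell
#!/bin/sh
# Sketch: one directory per proposed change, each with its own artifact
# and its own configuration. Change IDs, ports and paths are invented.
set -e
for change in 1042 1043; do
    dir="instances/change-$change"
    mkdir -p "$dir"
    echo "artifact for change $change"    > "$dir/app.jar"
    echo "server.port=$((7000 + change))" > "$dir/app.properties"   # unique port
done
ls instances
```

Deriving a unique port per instance (here naively from the change ID) is one simple way to let all the review deployments run side by side.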

Most of the remaining steps in the code review process can be supported by automated tasks as well. For example, you can measure code quality with static analysis tools (in the Java world: Sonar, FindBugs, CheckStyle).

When introducing Continuous Delivery, you need to start thinking of each bug as a potential flaw in your process. If you often find bugs late in the process, you probably need to fix the code review step; for instance, consider adding a new tip to your guidelines to make reviewers more careful in the future.


Testing

Testing is obviously a hugely important element of delivering reliable software. Automation is certainly the first thing that comes to my mind when thinking about testing in Continuous Delivery. There are a number of ways in which testing can be done by machines, and you should use them as much as possible.

If your pipeline is already in place and you find a bug in your master codebase, you should start fixing it by analyzing why your delivery process didn’t catch it. Your set of automated tests is like a shield defending you from breaking things when introducing new code. So before writing a patch, create a test or change an existing one so that you can check precisely the case you’re about to fix. This way of thinking moves you towards Test Driven Development and its key cycle: Test – Code – Refactor.

Efficient testing

A good testing strategy involves setting up tests on different levels: unit tests, integration tests (different sets for different layers), interface tests and performance tests. You should adapt these to your application, but try to cover all layers and look at your application from different angles through tests. This effort will definitely pay off when you have to fix a bug several months after deployment.

Based on my own experience, there are two rules that make testing efficient:

  • Execute your tests as soon as new code is introduced to the codebase, but within reasonable time limits. In practice, you don’t want to wait a couple of hours for a full test suite to finish when you’re fixing a typo in the documentation. Divide your tests into quick ones (like unit tests) and longer ones (like integration tests). Sometimes it is sensible to run the full suite only after integrating changes with the master codebase.
  • Don’t chase 100% code coverage. There is a tendency to claim that only covering every single line of code makes an application fully tested. In my opinion this leads to an over-tested system whose codebase becomes really difficult to change. It’s better to focus on tests that check specific user stories that must not break over time. Striking the right balance is really difficult and can’t simply be taught; it takes a lot of experience.
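The first rule can be sketched as a tiny gate in the pipeline: the quick suite runs on every change, while the long suite runs only after the change reaches master. The suite commands and branch name below are placeholders, not our actual job definitions.

```shell
#!/bin/sh
# Sketch: quick tests on every change, the long suite only on master.
set -e
BRANCH="${BRANCH:-feature/fix-typo}"       # set by the CI server in reality

run_unit_tests()        { echo "unit tests: ok"; }         # finishes in seconds
run_integration_tests() { echo "integration tests: ok"; }  # takes much longer

run_unit_tests > test-report.txt
if [ "$BRANCH" = "master" ]; then          # slow suite gated behind the merge
    run_integration_tests >> test-report.txt
fi
cat test-report.txt
```

The point of the gate is feedback time: a typo fix gets its verdict in seconds, while the expensive checks still run before anything ships.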


Deployment

The deployment part of the process is actually where Continuous Delivery has its roots: the willingness to automate it paved the way to a fully automatic process for delivering software.

Modern CD pipelines perform deployments after each commit, so when the time comes for the production deployment, the operations team can rely on a procedure they have already tested heavily. Another advantage of having such a deployment script is that, just by forcing your team to create one, you gain pretty good documentation of your environment: the script describes very explicitly how to operate specific servers, machines, etc. And another great thing is that you can be 100% sure this documentation is always up to date; otherwise, it simply wouldn’t work.

In order to use this approach, you need consistent environments. In other words, the deployment procedure should be written once and used for every deployment, with environment specifics expressed as variables. If that is not possible, you can hide the infrastructure behind an abstraction such as containers. The goal of container technology (like Docker) is to build the container(s) with your application once and then run them on different servers in exactly the same way.

Either way, remember: always deploy to every environment with the same procedure, and always automate this task. The longer your project lasts, the more it pays off.
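A minimal sketch of such a parameterized procedure, with hostnames invented for illustration: the same function serves every environment, and only the environment-specific values change.

```shell
#!/bin/sh
# Sketch: one deploy procedure for all environments; specifics are
# resolved from the argument. Hostnames are made up.
set -e
deploy() {
    case "$1" in
        staging)    host="staging.example.com" ;;
        production) host="prod.example.com" ;;
        *) echo "unknown environment: $1" >&2; return 1 ;;
    esac
    # a real script would copy the artifact and restart the service here
    echo "deploying app.jar to $host" >> deploy.log
}

deploy staging
deploy production
cat deploy.log
```

Because staging runs exactly the same code path as production, every staging deploy is also a rehearsal of the production one.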


Establishing a decent Continuous Delivery process is a long-term effort; it actually lasts as long as the project itself, evolving with the project and changing constantly along with the requirements. But you’ll appreciate the time invested in an automated pipeline of tests and delivery when you’re trying to fix a difficult bug under time pressure. You’ll be more confident in all your actions when you can lean on automated, well-tested tasks. In situations like this, you’ll see clearly that a delivery pipeline is essential for shipping reliable, high-quality software.