Strangling pipelines

Supported by Red Hat

This practical example is about the strangler pattern, as explained by Martin Fowler here, applied to pipelines.

 

The situation

Right after ditching the old, manually managed Jenkins jobs, we were left with ‘simple’ but very lengthy procedural pipeline scripts. These scripts were then duplicated and slightly modified for each type of pipeline (because there is always a need for another ‘standard’ way of doing things).

Maintenance meant keeping changes synchronized between the scripts while preserving subtle but intentional differences. This becomes increasingly difficult as complexity grows with every feature added to the continuous delivery setup.

To give an example of the structure, and of how lengthy these scripts could get, take a look at the following.
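
As a rough illustration (the stage names and commands here are made up, and the real versions ran far longer), a sketch of the shape these scripts took:

```groovy
// Hypothetical, heavily abbreviated sketch of one procedural pipeline
// script; the real ones were far longer, with near-identical copies
// per pipeline type.
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh './gradlew clean assemble'
    }
    stage('Test') {
        sh './gradlew test'
        junit 'build/test-results/**/*.xml'
    }
    stage('Publish') {
        // repository URLs and credentials were hardcoded, and differed
        // subtly between the copies of this script
        sh './gradlew publish -PrepoUrl=https://nexus.example.com/releases'
    }
    stage('Deploy') {
        sh './deploy.sh acceptance'
    }
}
```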

 

The target

An elegant library of components that helps set up a pipeline as a simple flow calling out to well-defined, interchangeable units. These components are documented, typed, and do proper error handling.

For brevity, this considers only the trunk part; building branches is left out.
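
A sketch of what the target trunk flow could look like; the library name, the Context class and the component names are illustrative, not an actual API:

```groovy
// Hypothetical target Jenkinsfile: a thin flow calling well-defined,
// interchangeable components. The library, Context class and component
// names are illustrative only.
@Library('pipeline-components') import com.example.pipeline.Context

node {
    def ctx = Context.instance()
    ctx.get('scm').checkout()
    ctx.get('builder').build()
    ctx.get('tester').test()
    ctx.get('publisher').publish()
    ctx.get('deployer').deploy('acceptance')
}
```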

You may be able to imagine that switching containers, repositories, or VCS systems can be managed far more easily with code like this. Of course, there are questions to be answered, like:

  • how does the component code get into the project code base?
  • how do the (correct) components get instantiated?
  • how to do effective configuration for the various components?
  • can we get around limitations like CPS transformation, lack of dependency injection, etc?

 

On how to get there

Getting from the old situation to the new is, by itself, actually not that hard. The catch is that the world does not stop while the refactoring work is being done. New teams, new features, bug fixes, production deployments, and so on all happen during the refactoring.

Those changes need to happen across all pipelines. To make sure there’s just one relevant codebase, we apply ‘strangulation’.

At a minimum, the following steps are involved:

  1. Create a shared library repository (and configure it in Jenkins)
  2. Move the old scripts into this library, as resources (just text files)
  3. Change each project’s ‘Jenkinsfile’ to load the shared library
  4. Load the old script as a resource (see the sketch after this list)
  5. Set up a ‘context’ that can be accessed in a static way, like a poor man’s DI solution
  6. Create new components (use this pattern)
  7. Replace parts of the script with calls to new components retrieved via the context (use branches!)
  8. Clean out the remaining script code
  9. Replace the loading script with just a few simple statements calling the components
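
As a sketch of steps 3 and 4 (the library name and resource path are assumptions), a project’s ‘Jenkinsfile’ can shrink to loading the library and evaluating the old script unchanged:

```groovy
// Sketch of steps 3 and 4: the Jenkinsfile loads the shared library and
// runs the old script, now stored under resources/, without changes.
@Library('pipeline-components') _

// libraryResource is a standard shared-library step that returns the
// resource file's contents as a String.
def legacyScript = libraryResource 'legacy/standard-pipeline.groovy'

// evaluate() executes the old script as-is, so behaviour stays identical
// while the strangling starts behind the scenes.
evaluate(legacyScript)
```

From here, step 7 gradually replaces pieces of that script with component calls until step 9 leaves only a thin flow.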

 

About having a poor man’s DI solution

Jenkins itself actually uses Guice internally to offer DI. In pipeline scripting code this is not available (at least not by default).

At some point it might become available in some form, so it would be nice to not have to do a complete rewrite once that moment comes.

Having a static way to access some kind of context is the simplest way to always have access to the needed dependencies. Only one hardwired link to the context mechanism is needed; all the logic behind it is interchangeable. Statics are bad for tests, so it becomes important to be strict about having just this one instance; as said, it is a poor man’s solution.
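
A minimal sketch of such a context, assuming a class in the shared library’s src directory; all names are illustrative:

```groovy
package com.example.pipeline

// Minimal sketch of a poor man's DI context (all names are assumptions).
// The static accessor is the single hardwired link; everything behind it
// stays interchangeable. Serializable matters because Jenkins may
// checkpoint pipeline state.
class Context implements Serializable {
    private static final Context INSTANCE = new Context()
    private final Map<String, Object> components = [:]

    private Context() {}

    static Context instance() { INSTANCE }

    void register(String name, Object component) {
        components[name] = component
    }

    Object get(String name) {
        def component = components[name]
        if (component == null) {
            throw new IllegalStateException("No component registered as '${name}'")
        }
        component
    }
}
```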

However, with a context in place:

  • You can control whether to use the new components by leveraging the ‘context’.
  • One of the components may, for instance, be a central configuration.
  • There may be multiple implementations of a single component contract (interface).
  • Components may depend on each other without the script needing to know.

Getting and creating components can be made dynamic to allow switching ‘branches’ when doing branch by abstraction. The context is the main abstraction.
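
A sketch of that switching, reusing the hypothetical context above; the flag and publisher classes are made up, and the call site stays the same whichever implementation is registered:

```groovy
// Branch by abstraction through the context: registration decides which
// implementation sits behind the 'publisher' contract. The flag and the
// publisher classes are assumptions. Passing the script object ('this')
// is a common way to give library classes access to pipeline steps.
def ctx = Context.instance()

if (params.USE_NEW_PUBLISHER) {
    ctx.register('publisher', new ContainerRegistryPublisher(this))
} else {
    ctx.register('publisher', new LegacyScriptPublisher(this))
}

// Call sites never change, whichever branch is active:
ctx.get('publisher').publish()
```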

 

About Jenkins’ Groovy limitations

CPS transformation (continuation passing style) imposes restrictions on pipeline code.

Read the Jenkins documentation on this, including the warnings.
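
One escape hatch worth knowing: methods annotated with @NonCPS are executed without the CPS transformation, so they tolerate constructs the transformer handles poorly, but they must not call pipeline steps. A small sketch:

```groovy
// @NonCPS methods run without CPS transformation; they may use
// constructs such as closure-based sorting that CPS code handles poorly,
// but they must not call pipeline steps (sh, echo, ...) and should
// return quickly.
@NonCPS
def highestTag(List<String> tags) {
    tags.toSorted { a, b -> a <=> b }.last()
}
```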

 

Wrapping things up

By using branch by abstraction, replacing old parts of any code becomes a series of manageable changes while keeping things running.

This mechanism translates quite well to pipelines.

Once you get to having an awesome shared library, you might actually want to build, validate and test it (and let Jenkins do the heavy lifting)! Spoilers here. More on this subject soon!