
Strangling pipelines

29 August 2017


Supported by Red Hat

This practical example is about the strangulation pattern, as described by Martin Fowler, applied to pipelines.

The situation

Right after ditching the old manually managed Jenkins jobs, we were left with ‘simple’ but very lengthy procedural pipeline scripts. These scripts were then duplicated and slightly modified for each type of pipeline (because there’s always a need for another ‘standard’ way of doing things).

Maintenance meant keeping changes synchronized between the scripts while preserving the subtle but intentional differences. This becomes increasingly difficult as more complexity is added by new continuous delivery features.

To give an idea of the structure, and of how lengthy these scripts can get, take a look at the following.

#!/usr/bin/env groovy

/**
 * Read this first: <a href="https://github.com/jenkinsci/pipeline-examples/blob/master/docs/BEST_PRACTICES.md">Best Practices</a><br>
 *
 * Example usage of this script:
 *
 <pre>
 def sharedBuild
 node {
   sharedBuild = load 'DefaultJenkinsfile'
 }

 sharedBuild.buildAny([DEV: ['host1']], "/context", "my.email@mydomain.com")
 </pre>
 */

def buildAny(Map<String,List<String>> hosts, String context, String recipients) {

    def branchName = env.BRANCH_NAME

    if (branchName in ['trunk']) { // yes svn still lives.
        buildTrunk(hosts, context, recipients)
    } else {
        buildBranch(recipients)
    }
}

/**
 * Trunk builds are built, and then deployed throughout the various DTAP servers.
 * @see #buildAny
 */
def buildTrunk(Map<String,List<String>> hostNameMap, String context, String recipients) {

    def mavenRepoUrl

    node {
        stage("prepare-build") {
            echo "hosts: ${hostNameMap}"
            echo "working dir: ${pwd()}"

            cleanCheckout()
        }

        String war = build()

        stage("validate") {
            mavenRepoUrl = storeSnapshot(war)
        }
    }

    node {
        echo "starting automatic deployments"
        String downloadedWar = downloadAppWar(mavenRepoUrl)

        if (hostNameMap.DEV) { // not all pipelines have the 'D' in DTAP
            stage("deploy to DEV") {
                deploy(downloadedWar, hostNameMap.DEV, context)
            }
        }

    }

    askForAccApproval(recipients)

    node {
        stage("deploy to ACC") {
            String downloadedWar = downloadAppWar(mavenRepoUrl)

            deploy(downloadedWar, hostNameMap.ACC, context)

            storeRelease(downloadedWar)
        }

    }

    // Not shown here: when things fail, mail recipients etc.
}

/**
 * By default, branches for now only create an artifact; no deployments are done.
 * @see #buildAny
 */
def buildBranch(String recipients) {

    node {
        stage("prepare-build") {
            echo "working dir: ${pwd()}"
            checkout scm
        }

        String war = build()
        stage("validate") {
            checkWar(war)
        }
    }

    // Not shown here: when things fail, mail recipients etc.
}

def askForAccApproval(String recipients) { /* ... */}
def deploy(String warFile, List<String> hosts, String context) { /* ... */}
def cleanCheckout()  { /* ... */ }
def build()  { /* ... */ }
def storeSnapshot(String warFile) { /* ... */ }
def storeRelease(String warFile) { /* ... */ }
def downloadAppWar(String repoUrl) { /* ... */ }
def checkWar(String warFile) { /* ... */ }

// make script reusable:
return this

The target

An elegant library of components that helps set up a pipeline as a simple flow calling out to well-defined, interchangeable units. These components are documented, typed, and do proper error handling.

For brevity, only the trunk part is considered here; building branches is left out.

// let's say these are 'components', a.k.a. 'little machines'
def context
def vcs
def builder
def repo
def container
def buildPromoter
def recipients // e.g. provided by project configuration


def warUrl
node {
    stage("prepare-build") {
        context.printEnvironment(this)

        vcs.cleanCheckout()
    }

    String war

    stage("build") {

        war = builder.build(this)
    }

    stage("validate") {
        warUrl = repo.storeSnapshot(this, war)
    }
}

node {
    String downloadedWar = repo.download(this, warUrl)

    // this is the project's Jenkinsfile, so commenting out stuff is easy
    stage("deploy to DEV") {
        container.deploy(this, downloadedWar, "dev")
    }

}

buildPromoter.askForAccApproval(recipients)

node {
    stage("deploy to ACC") {
        String downloadedWar = repo.download(this, warUrl)

        container.deploy(this, downloadedWar, "acc")

        repo.storeRelease(this, downloadedWar)
    }

}

You can probably imagine that switching containers, repositories, or VCS systems becomes a lot easier to manage with this code. Of course, there are questions to be answered, like:

  • how does the component code get into the project’s code base?
  • how do the (correct) components get instantiated?
  • how do we configure the various components effectively?
  • can we get around limitations like CPS transformation, lack of dependency injection, etc.?

On how to get there

Getting from the old situation to the new one is actually, by itself, not that hard. The catch is that the world does not stop while you do the refactoring work: new teams, new features, bug fixes, production deployments, and so on all happen during the refactoring.

Those changes need to happen across all pipelines. To make sure there’s just one relevant codebase, we apply ‘strangulation’.

At a minimum, the following steps are involved:

  1. Create a shared library repository (and configure it in Jenkins)
  2. Move the old scripts into this library, as resources (plain text files)
  3. Change each project’s Jenkinsfile to load the shared library
  4. Load the old script as a resource
  5. Set up a ‘context‘ that can be accessed in a static way, as a poor man’s DI solution
  6. Create new components (use this pattern)
  7. Replace parts of the script with calls to new components retrieved through the context (use branches!)
  8. Clean out the remaining script code
  9. Replace the loading script with just a few simple calls to components
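
Steps 1 through 4 might look roughly like this in a project’s Jenkinsfile. This is a sketch: the library name ‘pipeline-lib‘ and the resource path are assumptions, not actual names, but `@Library`, `libraryResource`, `writeFile`, and `load` are standard Jenkins pipeline steps.

```groovy
// project Jenkinsfile (sketch): load the shared library configured in Jenkins;
// 'pipeline-lib' and the resource path below are made-up names
@Library('pipeline-lib') _

def sharedBuild
node {
    // fetch the old procedural script, stored as a plain-text resource
    // inside the shared library, and load it as a pipeline script
    String script = libraryResource 'com/example/DefaultJenkinsfile'
    writeFile file: 'DefaultJenkinsfile', text: script
    sharedBuild = load 'DefaultJenkinsfile'
}

sharedBuild.buildAny([DEV: ['host1']], "/context", "my.email@mydomain.com")
```

From here on, the old script keeps running unchanged while its parts are strangled one by one.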

 

About having a poor man’s DI solution

Jenkins actually uses Guice internally to offer DI. In pipeline script code this is not available (at least not by default).

At some point it might become available in some form, so it would be nice not to need a complete rewrite once that moment comes.

Having a static way to access some kind of context is the simplest way to always have the needed dependencies at hand. Only one hardwired link is needed, to the context mechanism itself; all logic behind it is interchangeable. Statics are bad for tests, though, so it becomes important to be strict about having just this one static instance: as said, a poor man’s solution.

However, once there is a context:

  • You can control whether to use new components by leveraging the ‘context‘.
  • One of the components may be a central configuration, for instance.
  • There may be multiple implementations of a single component contract (interface).
  • Components may depend on each other without the script needing to know.

Getting and creating components can be made dynamic to allow switching ‘branches’ when doing branch by abstraction. The context is the main abstraction.

class Context {

  private static Context INSTANCE

  // lazily create the single instance (not thread-safe, which is
  // acceptable for single-threaded pipeline script loading)
  static Context get() {
    if (INSTANCE == null) {
      INSTANCE = new Context()
    }
    return INSTANCE
  }

  // get components from the instance (not static); a fresh component
  // is created per call here, but caching or switching is possible
  MyBean getMyBean() {
    new SomeExtendedMyBean()
  }

  // for use with tests
  static void stub(Context fake) {
    Context.INSTANCE = fake
  }

}

// Example call:
Context.get().getMyBean().doSomeThing()
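
For tests, the stub hook can then be used with a fake context. Everything below (`FakeMyBean`, `FakeContext`) is hypothetical and assumes `MyBean` is an interface, i.e. a component contract:

```groovy
// hypothetical test double for the MyBean component contract
class FakeMyBean implements MyBean {
    def doSomeThing() { /* record the call instead of doing real work */ }
}

// a fake context that hands out the test double
class FakeContext extends Context {
    @Override
    MyBean getMyBean() { new FakeMyBean() }
}

// in the test setup:
Context.stub(new FakeContext())
Context.get().getMyBean().doSomeThing() // now hits the fake
```

The same override mechanism can serve as the branch-by-abstraction switch: a `Context` subclass (or a configuration flag inside `getMyBean()`) decides whether the old or the new component implementation is returned.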

 

About Jenkins’ Groovy limitations

CPS transformation (continuation passing style) imposes restrictions on pipeline code.

Read the Jenkins documentation on pipeline CPS limitations and on shared libraries, including the warnings.
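
One concrete escape hatch worth knowing is the `@NonCPS` annotation: a method marked with it runs outside the CPS interpreter, so it may use Groovy idioms (iterators, certain closures) that break under CPS transformation, as long as it does not call pipeline steps. A sketch, reusing the host map shape from the earlier script:

```groovy
// runs outside the CPS interpreter: regular Groovy semantics apply here,
// but pipeline steps (sh, node, stage, ...) must not be called
@NonCPS
List<String> allHostNames(Map<String, List<String>> hosts) {
    hosts.values().flatten().sort()
}
```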

 

Wrapping things up

By using branch by abstraction, replacing old parts of any code becomes a series of manageable changes while keeping things running.

This mechanism translates quite well to pipelines.

Now when you get to having an awesome shared library, you might actually want to build, validate, and test it (and let Jenkins do the heavy lifting)! More on this subject soon!