First8 Java Consultancy
https://technology.first8.nl
First8 specialises in the pragmatic development of business-critical Java applications.


Functional Java by Example | Part 1 – From Imperative to Declarative
https://technology.first8.nl/functional-java-by-example-part-1-from-imperative-to-declarative/
Tue, 07 Nov 2017

Functional Programming (FP) is about avoiding reassigning variables, avoiding mutable data structures, avoiding state and favoring functions all-the-way. What can we learn from FP if we apply functional techniques to our everyday Java code?

In this series called “Functional Java by Example” I will refactor an existing piece of code across 8 installments to see if I can reach Functional Nirvana in Java.

I don’t have much experience in a “real” functional language such as Haskell or F#, but I hope to demonstrate in each article, by example, what it means to apply some of these practices to your everyday Java code.

Hopefully at the end you’ve gained some insight and know how to pick some techniques which would benefit your own codebase.

These are all the parts:

  • Part 1 – From Imperative to Declarative
  • Part 2 – Naming Things
  • Part 3 – Don’t Use Exceptions to Control Flow
  • Part 4 – Prefer Immutability
  • Part 5 – Move I/O to the Outside
  • Part 6 – Functions as Parameters
  • Part 7 – Treat Failures as Data Too
  • Part 8 – More Pure Functions

I will update the links as each article is published. If you are reading this article through content syndication, please check the original articles on my blog.

The code is also pushed to this GitHub project with each installment.

Disclaimer: the code is written in Apache Groovy, primarily for conciseness, so I don’t have to type stuff (you know: typing) where it doesn’t matter for the examples. Secondarily, this language Just Makes Me Happy.

Why should you care about Functional Programming (FP)?

If you’re not doing Haskell, F# or Scala on a hip real-time, streaming data event processing framework you might as well pack your bags. Even the JavaScript guys are spinning functions around your methods these days — and that language has been around for some time already.

There are a lot of articles and videos out there which make you believe that if you don’t hop on the Functional bandwagon these days, you’re left behind with your old OOP contraptions and, frankly, are obsolete within a couple of years.

Well, I’m here to tell you that’s not entirely true, but FP does come with some promises, such as readability, testability and maintainability — values which we also strive to achieve in our (enterprise) Java code, right?

As you’re reading this, you might already have had an outspoken opinion about FP for years — a step forwards or a step backwards — or, anno 2017-2018, you’re simply open to new ideas 🙂

You can level up your skills in every language by learning FP.

Determine for yourself what you can learn from it and how your own programming can benefit from it.

If you’re up to the task, let’s start this series with…

Some existing code

A word about the example code: it’s pretty tricky to come up with contrived examples for blogs like these: it should be easy enough to appeal to a broad audience, simple enough to be understood without too much context, but still interesting enough to result in the desired learning effects.

Moving forward, each installment in this series will build on the previous one. Below is the code we’re going to take as a starting point.

So, put on your glasses and see if you’re familiar with the coding style below.

class FeedHandler {

  Webservice webservice
  DocumentDb documentDb

  void handle(List<Doc> changes) {

    for (int i = 0; i < changes.size(); i++) {
      def doc = changes[i]
      if (doc.type == 'important') {

        try {
          def resource = webservice.create(doc)
          doc.apiId = resource.id
          doc.status = 'processed'
        } catch (e) {
          doc.status = 'failed'
          doc.error = e.message
        }
        documentDb.update(doc)
      }
    }
  }
}

  • It’s some sort of FeedHandler.
  • It has two properties, some Webservice class and a DocumentDb class.
  • There’s a handle method which does something with a list of Doc objects. Documents?

Try to figure out what’s going on here 🙂

..

..

..

Done?

Reading stuff like this can make you feel like a human parser sometimes.

Scanning the class name (FeedHandler?) and the one method (void handle) can give you, next to some eyesore, a “feel” for the purpose of everything.

However, figuring out what exactly gets “handled” inside the handle method is much harder.

  • There’s a for-loop there — but what exactly is being iterated? How many times?
  • This variable webservice is called, returning something called resource.
  • If webservice returns successfully, the doc (a document?) being iterated over is updated with a status.
  • Seems webservice can also throw an Exception, which is caught and the document is updated with another status.
  • Ultimately, the document is “updated” by this documentDb instance. Looks like a database.
  • Oh wait, this happens only for the “important” docs — a doc.type is checked first before doing all of the above.

Perhaps, you have heard of the phrase:

Code is read more than it is written.

Check out this piece of beauty:

for (int i = 0; i < changes.size(); i++) {

The above code is written in an imperative style, which means that the concrete statements — which manipulate state and behaviour — are written out explicitly.

  • Initialize an int i with zero
  • Loop while int i is less than the size of the changes list
  • Increment int i by 1 each iteration

In this imperative (procedural) style of coding — which most of the mainstream languages, including object-oriented programming (OOP) languages such as Java, C++ and C#, were primarily designed to support — a developer writes the exact statements a computer needs to perform to accomplish a certain task.

A few signals of very imperative (procedural) code:

  1. Focus on how to perform the task
  2. State changes and order of execution is important
  3. Many loops and conditionals

The code clearly focuses on the “How” — which makes the “What” hard to determine.

Focus on the What

Our first step, as the title of this article already gave away, is to move away from the imperative style of coding and refactor to a more declarative style — of which FP is a form.

The loop is bugging me the most.

Here’s the new version of the code.

class FeedHandler {

  Webservice webservice
  DocumentDb documentDb

  void handle(List<Doc> changes) {

    // for (int i = 0; i < changes.size(); i++) {
    //    def doc = changes[i]
    changes
      .findAll { doc -> doc.type == 'important' }
      .each { doc ->

      try {
        def resource = webservice.create(doc)
        doc.apiId = resource.id
        doc.status = 'processed'
      } catch (e) {
        doc.status = 'failed'
        doc.error = e.message
      }
      documentDb.update(doc)
    }
  }
}

What’s changed?

  • The if (doc.type == 'important') part has been replaced with findAll { doc -> doc.type == 'important' } on the document collection itself — meaning “find all documents which are important and return a new collection with only those important documents”
  • The imperative for-loop (with the intermediate i variable) has been replaced by the declarative each method on the documents collection itself — meaning “execute the piece of code for each doc in the list and I don’t care how you do it” 🙂

Don’t worry about each and findAll: these methods are added by Groovy, which I use happily together with Java in the same code base, to any Collection, e.g. Set, List, Map. Vanilla Java 8 has equivalent mechanisms, such as forEach to iterate a collection more declaratively.
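
For comparison, a rough vanilla Java 8 equivalent of the refactored handle method could look like the sketch below, using the Stream API (my assumption of how it would translate, reusing the same Doc, Webservice and DocumentDb collaborators with regular getters and setters):

void handle(List<Doc> changes) {
    changes.stream()
        .filter(doc -> "important".equals(doc.getType()))   // findAll
        .forEach(doc -> {                                   // each
            try {
                Resource resource = webservice.create(doc);
                doc.setApiId(resource.getId());
                doc.setStatus("processed");
            } catch (Exception e) {
                doc.setStatus("failed");
                doc.setError(e.getMessage());
            }
            documentDb.update(doc);
        });
}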

What leads to readable software is:

Describe “What” and not “How”.

I can easily see what’s going on if I write my code in a more functional style, which saves me time (because yes, I do read code 90% of the time instead of writing it), and writing it like this is less error-prone, because fewer lines give bugs fewer opportunities to hide.

This is it for now 🙂

In part 2, we will name things properly, paving the way for more functional programming, such as “Either” or “Try” even later in the series.

This is a cross-post from my personal blog. Follow me on @tvinke if you like what you’re reading or subscribe to my blog on https://tedvinke.wordpress.com.

Why is Spring’s Health Down, Down, Up, Up, Up and Down again?
https://technology.first8.nl/why-is-springs-health-down-down-up-up-up-and-down-again/
Wed, 25 Oct 2017

Why

Our new JavaScript client application regularly calls the /health endpoint of our Grails backend to determine on- or offline state. Things started to become “funny” with it.

This endpoint we get for free, since Grails is based on Spring Boot, which comes with a sub-project called Spring Boot Actuator.

This gives us a bunch of endpoints which allow us to monitor and interact with our application, including /health, which returns health information.

So, our JS client checks every few seconds whether or not it can reach this /health endpoint, to determine if the user is on- or offline. Nothing fancy, and we might later switch to just using the Google homepage or something, but for now this works.

Failing health check

On localhost everything always seems fine, but as soon as I finally got our Jenkins pipeline to deploy the app to our test servers after each build, and we started verifying the app there, things became funny.

Usually we had a streak of perfectly good calls.

GET https://tst.example.com/health 200 ()
GET https://tst.example.com/health 200 ()
GET https://tst.example.com/health 200 ()
etc

Other times, every few seconds, we saw errors accumulating in the Chrome Inspector. Health checks would fail with an HTTP status code of 503 Service Unavailable for a long time.

GET https://tst.example.com/health 503 ()
GET https://tst.example.com/health 503 ()
GET https://tst.example.com/health 503 ()
etc

Then after a while we would get good calls again!

GET https://tst.example.com/health 200 ()
GET https://tst.example.com/health 200 ()
etc

The response of these failed requests just said

{"status":"DOWN"}

This is — by design — not very descriptive.

I certainly did not write any health indicators myself, so why would it be “down”?

Experienced Spring Booters know it will pick up any health indicator on the classpath and comes with a few by default. Which ones are actually in use can be a mystery, because by default this endpoint is classified by Spring Boot as “sensitive” — and thus doesn’t expose too much information to the outside world.
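
For reference, a custom indicator is just a bean implementing Spring Boot Actuator’s HealthIndicator interface — a minimal, hypothetical sketch (this class is not part of our application):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Any bean like this on the classpath gets picked up automatically
// and contributes to the aggregated /health result.
@Component
public class BackendHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingBackend(); // imagine a real check here
        if (reachable) {
            return Health.up().build();
        }
        return Health.down().withDetail("backend", "not reachable").build();
    }

    private boolean pingBackend() {
        return true; // stubbed for the example
    }
}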

I had to make the health check a bit more “chatty” with the following setting:

endpoints.health.sensitive: false

Now, calling the endpoint manually revealed the contenders!

{
  "status":"DOWN",
  "diskSpace":{
    "status":"DOWN",
    "total":8579448832,
    "free":20480,
    "threshold":10485760
  },
  "db":{
    "status":"UP",
    "database":"H2",
    "hello":1
  }
}

The general status of “down” is an aggregate result of the (in this case 2) auto-configured health indicators, now listed explicitly.

What immediately came to mind when I saw this:

  • Why didn’t I remove H2 yet 🙂
  • Hey, disk space is running out on the test server already?!

The H2 database comes as a default dependency in any Grails application, but our app doesn’t use it — not in production and not for testing — so we will definitely remove it from the dependencies. That’s one worry less.

With regard to disk space, it’s the good ol’ DiskSpaceHealthIndicator (indeed part of the auto-configured indicators) telling me things are unhealthy.

It has a default threshold of 10485760 bytes or 10 MB — the minimum disk space that should be available.

And…there’s only 20 kb free space? Of 8 gigs in total.

That’s a pretty low number 🙂

In the first 0.7 seconds I didn’t believe the health indicator, can you imagine?

So I SSH’ed into the test server to check the available disk space with the df utility:

[Ted@server-01t ~]$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  8.0G  8.0G   20K 100% /
...

Right, at least the health check speaks the truth there: there’s actually only a tiny bit of space left.

I relayed this to my IT colleague who provisioned this machine, to investigate. It seemed that some Java heap dumps from earlier experiments were already taking up the space — which, I was told, would be removed ASAP.

Better check the other node too.

[Ted@server-02t ~]$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  8.0G  5.3G  2.8G  66% /

Enough room there.

Wait a minute? “Other node?” Yes, we have 2 test servers, 01t and 02t.

At that point, I realized: the behaviour I was seeing was caused by the loadbalancer forwarding a request for tst.example.com to either server-01t or the other, server-02t. One of them was low on disk space, which explains why the health indicator of the Grails app on that server said “down” — resulting in an HTTP 503.

When observing these health calls (requests which are continuously made by our JS client) through the Chrome Inspector, one small question was left: why do we have a streak of (sometimes 50x) “ups” (200) and then a bunch of “downs” (503), in a seemingly random order?

The load balancer should keep us “fixed” on the node where a JS client makes its first request, as that’s how we configured our servers.

If the loadbalancer sent every request (to tst.example.com) round robin to server 1 or 2, I would expect a more random pattern of e.g. “up”, “down”, “down”, “up”, “down”, “up”, “up”, “down”, “up”.

Well, it seemed that during the window in which I was observing this behaviour, the rest of the team was still developing features and…pushing to Git, which Jenkins picks up and deploys to both servers. Because the app is redeployed to each server serially, the loadbalancer “sees” the unavailability of the application on the one server (with enough disk space: “up”, “up”, “up”, “up”, “up”) for the duration of the deployment and redirects traffic to the other server (with almost no disk space: “down”, “down”, “down”)…

…which gets updated with a new WAR pretty soon after, and requests end up on the other server again (with enough disk space: “up”, “up”, “up”, “up”, “up”).

🙂

That cost me another 3 hours of my life. Including some time noting down this stuff here (but I think that’s worth it) 🙂

Lesson learned

Know your process

Knowing that there’s a loadbalancer and multiple nodes (and how they work) helps. A CI server that continuously deploys new versions to the environment under investigation does not help. But altogether, knowing all this did help to clarify the observed behaviour.

Learn the “sensible” defaults of your framework.

In the case of Grails 3 and Spring Boot, know the stuff which gets “auto-configured” from the classpath, inspect it and make sure it’s going to be what you actually want.

We will get rid of H2 and review the health indicators we actually need, maybe disabling the auto-configuration altogether. We cleaned up the Java heap dumps which caused the full disk. We’ve re-confirmed that the Unix team will monitor the OS, including disk space, so that we at least don’t need the DiskSpaceHealthIndicator anymore 🙂
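
Assuming the Spring Boot 1.x property names, the disk space indicator can also be tuned or switched off via configuration, for example:

management.health.diskspace.threshold: 10485760
management.health.diskspace.enabled: false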

This article has been crossposted from my personal blog.

Jenkins shared libraries: tested
https://technology.first8.nl/jenkins-shared-libraries-tested/
Sun, 15 Oct 2017

Jenkins is a very neat tool to implement a continuous delivery process, mainly due to its flexibility. Sometimes it can be hard though to keep complexity low, and when that happens, (automated) tests become far more important.

Jenkins should in fact be running tests that verify the scripts running tests, proving they actually work. Warning: this dog will chase its own tail.

 

Jenkins

Jenkins pipeline scripts

Once, not so long ago, the way to go was to manage jobs manually using the GUI. At the moment Cloudbees is proposing pipelines as scripts, either procedural (scripted) or declarative.

Declarative pipelines are still in development. Their primary intent is to be editable from a simple GUI.

 

Extending Jenkins scripting

In the world of open source, integration is becoming the only major thing a company needs to implement itself.

Using just a single script however will get you only so far. At a certain point you will want to introduce abstractions.

 

The Jenkins Shared library

Cloudbees recently introduced a way of having more than just a script, while keeping things relatively easy and secure to use.

The ‘shared library’ is a Jenkins concept that allows projects to declaratively import a full repository of classes and scripts.

In short, any shared library will have a few common folders:

/ -- Jenkins will load the complete repository, the following places are important:

/src -- classes, optionally using @Grab to get dependencies
/vars -- scripts, closures, dsl building
/resources -- anything else, stuff

 

It comes with tests

As with all new areas of applying code, once maturity is reached, tests become relevant. Fortunately there is a way to run tests for the pipeline code, be it scripts or (typed) classes.

These tests will lean towards unit tests, even though the unit boundaries for pipelines are not that clear. Interaction with external systems is of course not something best covered by unit tests. The following examples are aimed primarily at testing Groovy compilation, execution and (emulated) interpretation by Jenkins. When defining clear units (classes), these can become proper targets for unit tests once more.

 

JUnit example

Note that the code is Groovy, but it could just as well be Java — just add semicolons 😉 (okay, maybe a few anonymous classes etc., but it should be possible).

package com.first8

import org.junit.Before
import org.junit.Test

import com.lesfurets.jenkins.unit.BasePipelineTest

class BasicPipelineTest extends BasePipelineTest {

    @Before
    void before() {
        super.setUp()

        helper.registerAllowedMethod("pwd", []) { "/tmp/testworld/doesnotexist" }
        helper.registerAllowedMethod("stash", [Map.class]) { println "mock stash called." }
    }

    @Test
    void happyFlowLoading() throws Exception {
        Script script = loadScript("resources/com/first8/grails3/DefaultJenkinsfile")

        printCallStack()
    }

}

Spock example

Since the pipeline scripts use Groovy, one might as well write the tests in Groovy too.

// imports for jenkins-pipeline-unit (package names as of version 1.0) and Spock
import com.lesfurets.jenkins.unit.RegressionTest
import com.lesfurets.jenkins.unit.cps.BasePipelineTestCPS

import spock.lang.Specification

abstract class JenkinsSpecification extends Specification implements RegressionTest {

    /**
     * Delegate to the junit cps transforming base test
     */
    @Delegate
    BasePipelineTestCPS baseTest

    /**
     * Do the common setup
     */
    def setup() {

        // Set callstacks path for RegressionTest
        callStackPath = 'jenkinsSpec/callstacks/'

        baseTest = new BasePipelineTestCPS()
        baseTest.setUp()
    }
}

class ExampleSpec extends JenkinsSpecification {

    def setup() {
        // like with junit, the helper is available here
    }

    void cleanup() {
        printCallStack()
    }

    def "example spec"() {
        when:

            println "nothing happens in this spock specification, success!"
        
        then:
            assert true
    }

}

 

Project layout

To be able to work with a shared library, a project setup would be convenient — so why not use Groovy through Gradle? The layout of a Jenkins shared library is not standard for Gradle, which means a little config is needed.

Once the configuration is in place, the project will build quite easily. There are some snags though, which show up in the Gradle config file below. Together, these little problems make the build.gradle file not quite trivial.

 

 

Gradle config

Check the following build.gradle config; there are hints on possible difficulties in the comments.

// Apply the java plugin to add support for Java
apply plugin: 'java'
apply plugin: 'groovy'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'project-report'

targetCompatibility = '1.8'
sourceCompatibility = '1.8'

model {
    components {
        main(JvmLibrarySpec) {
            targetPlatform 'java8'
        }
    }
}

// follow the structure as dictated by Jenkins:
sourceSets {
    main {
        groovy {
            srcDirs = ['src','vars']
        }
        resources {
            srcDirs = ['resources']
        }
    }
    test {
        groovy {
            srcDirs = ['test']
        }
    }
}

// allow for the (pipeline code) Ivy Grab/grape system to do some setup...
String home = System.getProperty("user.home")
task initializeGrapeConfig(type: Copy) {
    doFirst {        
        println "installing grape config in: ${home}/.groovy"
    }    
    from '.'
    include "grapeConfig.xml" // assuming that this file is in the shared library!
    into "${home}/.groovy/"
}
initializeGrapeConfig.group = "build setup"

// In this section you declare where to find the dependencies of your project
repositories {
    // Use 'jcenter' for resolving your dependencies.
    // You can declare any Maven/Ivy/file repository here.
    mavenCentral()
    jcenter()
    maven {
        url "http://repo.jenkins-ci.org/releases/"
    }
}

// In this section you declare the dependencies for your production and test code
dependencies {
    // to be sure @Grab compile time dependency downloading works in scripts, have the goods ready
    compile group: 'org.codehaus.groovy', name: 'groovy-all', version: '2.4.9'
    compile group: 'org.apache.ivy', name:'ivy', version:'2.4.0'

    // The production code uses the SLF4J logging API at compile time
    compile 'org.slf4j:slf4j-api:1.7.21'

    // Declare the dependency for your favourite test framework you want to use in your tests.
    // TestNG is also supported by the Gradle Test task. Just change the
    // testCompile dependency to testCompile 'org.testng:testng:6.8.1' and add
    // 'test.useTestNG()' to your build script.
    compile 'junit:junit:4.12'
    
    // ======= jenkins pipeline unit testing framework ====== //
    compile group:'com.lesfurets', name:'jenkins-pipeline-unit', version:'1.0'
    
    /**
     * For Spock unit tests (not needed when just using JUnit)
     */
    testCompile 'org.spockframework:spock-core:1.1-groovy-2.4'
    testCompile 'cglib:cglib-nodep:3.2.2'
    testCompile 'org.objenesis:objenesis:1.2'
    testCompile 'org.assertj:assertj-core:3.7.0'

    // ============== base of jenkins =============== //
    // (this list may grow, just try to minimize libs coming in):
    compile('org.jenkins-ci.main:jenkins-core:2.69') {
        transitive = false
    }
    compile('org.kohsuke.stapler:stapler:1.251') {
        transitive = false
    }
    
    // ========= parts of jenkins plugins ============ //
    // might by used in pipeline scripts (note the @jar!):
    compile 'org.jenkins-ci.plugins.workflow:workflow-step-api:2.12@jar'

    // ========= PIPELINE COMPONENT DEPENDENCIES ========//
    // since ivy @Grab is not emulated, we need to include them this way
    compile 'org.jfrog.artifactory.client:artifactory-java-client-services:0.16'
    
}

// make sure groovy compiler has all the dependencies before! starting compiling (@Grab needs ivy for instance)
tasks.withType(GroovyCompile) {
    groovyClasspath += configurations.compile
}

Running the tests

Since Gradle can be started using the brilliant Gradle wrapper, starting the build from Jenkins is easy.

It’s enough to create a typical ‘multi-branch job’ and have a Jenkinsfile with the following:

sh "./gradlew verify"

Now each push to the shared library repository will start a test run producing early warnings if something is wrong.

 

More info

 

Chasing tails

The recently developed option to test pipeline code is great. As with all testing options though it’s important to keep in mind whether the extra testing effort is worth it.

Having Jenkins run tests on test code that runs tests seems kind of pointless; after all, tests will (hopefully) fail both when the production code is broken and when the tests are broken. It turns out that having some trivial unit tests for pipeline code does deliver a lot of value, by creating a very short feedback loop. Knowing whether the pipeline actually does the ‘right’ thing is a different matter — tests that can assert this are far more complicated; true system tests are probably more suitable for that purpose.

 

Some things to think about:

  • what is continuation passing style?
  • what would be a convenient strategy to actually integration-/system-test new pipeline code?

 

Spring WebFlux annotation based
https://technology.first8.nl/spring-webflux-annotation-based/
Thu, 05 Oct 2017

The release of Spring Framework version 5 is just around the corner and it contains a lot of new features. One of those is the WebFlux framework. It adds support for reactive web programming on client and server side. Spring WebFlux uses Reactor to make it all possible. The server side supports two programming models:

  • Annotation based, using @Controller annotations.
  • Functional, using Java 8 lambdas for routing and request handling.

The annotation based model corresponds most closely to what we have been doing for years using Spring MVC. We are able to use all the things we know, but in a reactive manner. On the outside there is little difference; the biggest change is on the inside, where the underlying implementation is based on reactive, non-blocking counterparts of HttpServletRequest and HttpServletResponse. This blog focuses on the annotation based model. A future blog will show the functional model.

Let’s have a look at a simple application that serves the names of the planets in our solar system, optionally applying a filter, as plain text using a line for each planet. Using curl I would like to see the following:

$ curl http://localhost:8080/planets
Mercury
Venus
Earth
Mars
Jupiter
Saturn
Uranus
Neptune
$ curl http://localhost:8080/planets/s
Venus
Mars
Saturn
Uranus

Not too spectacular, but it will suit the purpose of this blog. A Spring MVC controller doing the job will look something like this:

@RestController
@AllArgsConstructor
public class PlanetController {

    private PlanetRepository planetRepository;

    @RequestMapping(path={"/planets/{substring}","/planets"})
    public String getMatchingPlanets(@PathVariable(name = "substring", required=false) Optional<String> substring) {
        return planetRepository.getPlanets()
                .filter( p -> p.toLowerCase().contains(substring.orElse("").toLowerCase()))
                .reduce("",  (l,r) -> l + r + "\n");
    }
}

Nothing strange here. We have a PlanetRepository that gives us a Stream<String> of planets, and a controller with a @RequestMapping to handle the request and produce a suitable response. Switching to Spring WebFlux requires some changes. First, change the dependency on Spring MVC to Spring WebFlux. If you are like me and use Spring Boot, you can swap spring-boot-starter-web for spring-boot-starter-webflux. Next we need to update the response type of the request mapping methods to either Flux<String> or Mono<String>; WebFlux requires the request handling methods to return one of these types. The updated controller looks like:

@RestController
@AllArgsConstructor
public class PlanetController {

    private PlanetRepository planetRepository;

    @RequestMapping(path={"/planets/{substring}","/planets"})
    public Flux<String> getMatchingPlanets(@PathVariable(name = "substring", required=false) Optional<String> substring) {
        return Flux.fromStream(planetRepository.getPlanets()
                .filter( p -> p.toLowerCase().contains(substring.orElse("").toLowerCase()))
                .map( p -> p + "\n"));
    }
}

As you can see, we now return a response of Flux<String>. We could have used Mono<String>, which is used for non-collection types, but as we are showing one planet per line, we can treat it as a stream of planets. Therefore a Flux<String> is the better option.
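
For contrast, an endpoint producing a single aggregate value would naturally return a Mono. A hypothetical counting endpoint (not part of the original example) could look like this:

@RequestMapping(path = "/planets/count")
public Mono<Long> getPlanetCount() {
    // count() collapses the Flux into a single value, hence Mono<Long>
    return Flux.fromStream(planetRepository.getPlanets()).count();
}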

As mentioned before, we are using a PlanetRepository that supplies us with a Stream<String> of planets, and to convert it to a Flux<String> we use the static factory method Flux.fromStream(Stream<T>). As a flux is quite similar to a stream, we can make our code a bit cleaner by having the PlanetRepository return a Flux<String> instead of a Stream<String>. Our controller will then look like:

@RestController
@AllArgsConstructor
public class PlanetController {

    private PlanetRepository planetRepository;

    @RequestMapping(path={"/planets/{substring}","/planets"})
    public Flux<String> getMatchingPlanets(@PathVariable(name = "substring", required=false) Optional<String> substring) {
        return planetRepository.getPlanets()
                .filter( p -> p.toLowerCase().contains(substring.orElse("").toLowerCase()))
                .map( p -> p + "\n");
    }
}

There you go – a fully reactive web controller. The full source code for the project can be found on GitHub.
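
For completeness, a minimal PlanetRepository along these lines could look like the sketch below (an assumption on my part — the actual implementation is in the GitHub project):

import java.util.Arrays;
import java.util.List;

import org.springframework.stereotype.Repository;

import reactor.core.publisher.Flux;

@Repository
public class PlanetRepository {

    private static final List<String> PLANETS = Arrays.asList(
            "Mercury", "Venus", "Earth", "Mars",
            "Jupiter", "Saturn", "Uranus", "Neptune");

    // Returning a Flux directly keeps the whole pipeline reactive.
    public Flux<String> getPlanets() {
        return Flux.fromIterable(PLANETS);
    }
}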

This is a cross-post from my personal blog. Follow me on @acuriouscoder if you like what you’re reading or subscribe to my blog on https://acuriouscoder.net.

Strangling pipelines
https://technology.first8.nl/strangling-pipelines/
Tue, 29 Aug 2017

This practical example is about the strangulation pattern, as explained by Martin Fowler here, applied to pipelines.

 

The situation

Right after ditching the old manually managed Jenkins jobs, we were left with ‘simple’ but very lengthy procedural pipeline scripts. These scripts were then duplicated and slightly modified for each type of pipeline (because there’s always need for another ‘standard’ way of doing things).

Maintenance meant keeping changes synchronized between the scripts while taking care of subtle but intentional differences. This becomes increasingly difficult as more complexity is added by new continuous delivery features.

To give an example of the structure, and potential for being lengthy, take a look at the following.

#!/usr/bin/env groovy

/**
 * Read this first: <a href="https://github.com/jenkinsci/pipeline-examples/blob/master/docs/BEST_PRACTICES.md">Best Practices</a><br>
 *
 * Example usage of this script:
 *
 <pre>
 def sharedBuild
 node {
   sharedBuild = load 'DefaultJenkinsfile'
 }

 sharedBuild.buildAny([DEV:["host1"]], "/context", "my.email@mydomain.com")
 </pre>
 */

def buildAny(Map<String,List<String>> hosts, String context, String recipients) {

    def branchName = env.BRANCH_NAME

    if (branchName in ['trunk']) { // yes svn still lives.
        buildTrunk(hosts, context, recipients)
    } else {
        buildBranch(recipients)
    }
}

/**
 * Trunk builds are built, and then deployed throughout the various DTAP servers.
 * @see #buildAny
 */
def buildTrunk(Map<String,List<String>> hostNameMap, String context, String recipients) {

    def mavenRepoUrl

    node {
        stage("prepare-build") {
            echo "hosts: ${hostNameMap}"
            echo "working dir: ${pwd()}"

            cleanCheckout()
        }

        String war = build()

        stage("validate") {
            mavenRepoUrl = storeSnapshot(war)
        }
    }

    node {
        echo "starting automatic deployments"
        String downloadedWar = downloadAppWar(mavenRepoUrl)

        if (hostNameMap.DEV) { // not all pipelines have the 'D' in DTAP
            stage("deploy to DEV") {
                deploy(downloadedWar, hostNameMap.DEV, context)
            }
        }

    }

    askForAccApproval(recipients)

    node {
        stage("deploy to ACC") {
            String downloadedWar = downloadAppWar(mavenRepoUrl)

            deploy(downloadedWar, hostNameMap.ACC, context)

            storeRelease(downloadedWar)
        }

    }

    // Not shown here: when things fail, mail recipients etc.
}

/**
 * By default, branches for now only create an artifact, no deployments are done.
 * @see #buildAny
 */
def buildBranch(String recipients) {

    node {
        stage("prepare-build") {
            echo "working dir: ${pwd()}"
            checkout scm
        }

        String war = build()
        stage("validate") {
            checkWar(war)
        }
    }

    // Not shown here: when things fail, mail recipients etc.
}

def askForAccApproval(String recipients) { /* ... */}
def deploy(String warFile, List<String> hosts, String context) { /* ... */}
def cleanCheckout()  { /* ... */ }
def build()  { /* ... */ }
def storeSnapshot(String warFile) { /* ... */ }
def storeRelease(String warFile) { /* ... */ }
def downloadAppWar(String repoUrl) { /* ... */ }
def checkWar(String warFile) { /* ... */ }

// make script reusable:
return this

 

The target

An elegant library of components to help set up a pipeline made of a simple flow that calls out to these well-defined, interchangeable units. These components are documented, have types and do proper error handling.

For brevity, this considers only the trunk part, building branches is left out.

// let's say these are 'components', a.k.a. 'little machines'
def context
def vcs
def builder
def repo
def container
def buildPromoter


def warUrl
node {
    stage("prepare-build") {
        context.printEnvironment(this)

        vcs.cleanCheckout()
    }

    String war

    stage("build") {

        war = builder.build(this)
    }

    stage("validate") {
        warUrl = repo.storeSnapshot(this, war)
    }
}

node {
    String downloadedWar = repo.download(this, warUrl)

    // this is the project's Jenkinsfile, so commenting out stuff is easy
    stage("deploy to DEV") {
        container.deploy(this, downloadedWar, "dev")
    }

}

buildPromoter.askForAccApproval(recipients)

node {
    stage("deploy to ACC") {
        String downloadedWar = repo.downloadAppWar(this, warUrl)

        container.deploy(this, downloadedWar, "acc")

        repo.storeRelease(this, downloadedWar)
    }

}

You may be able to imagine that switching containers, repositories or VCS systems can be managed a lot more easily with this code. Of course there are questions to be answered, like:

  • how does the component code get in the project code base?
  • how do the (correct) components get instantiated?
  • how to do effective configuration for the various components?
  • can we get around limitations like CPS transformation, lack of dependency injection, etc?

 

On how to get there

Getting from the old situation to the new is actually, by itself, not that hard. The thing though is that the world does not stop while doing the refactor work. New teams, new features, bugfixes, production deployments, etc happen during the refactoring.

Those changes need to happen across all pipelines. To make sure there’s just one relevant codebase, we apply ‘strangulation’.

 At a minimum, the following steps are involved:

  1. Create a shared library repository (and configure it in Jenkins)
  2. Move old scripts into this library, as resources (just text file)
  3. Change a project’s  ‘Jenkinsfile‘ files to load the shared library
  4. Load the old script as resource
  5. Setup a ‘context‘ that can be accessed in a static way, like a poor man’s DI solution
  6. Create new components (use this pattern)
  7. Replace parts of the script with calls to new components retrieved using the context (use branches!)
  8. Clean out remaining script code
  9. Replace the loading script with just a few simple statements to components

 

About having a poor man’s DI solution

Jenkins actually uses Guice internally to offer DI. In scripting code this is not available (at least not by default).

At some point it might become available in some form, so it would be nice to not have to do a complete rewrite once that moment comes.

Having a static way to access some kind of context is the simplest way to always have access to the needed dependencies. Only one hardwired link is needed, to the context mechanism; all logic behind it is interchangeable. Statics are bad for tests, so it becomes important to be strict about having just this one instance — as said, a poor man’s solution.

However, when there’s a context:

  • You can control whether to use new components by leveraging the ‘context‘.
  • One of the components may be a central configuration for instance
  • There may be multiple implementations of a single component contract (interface)
  • Components may depend on each other while the script does not need to know

Getting and creating components can be made dynamic to allow switching ‘branches’ when doing branch by abstraction. The context is the main abstraction.

class Context {

  private static Context INSTANCE

  static Context get() {
    if (INSTANCE == null) {
      INSTANCE = new Context()
    }
    return INSTANCE
  }

  // get components from the instance (not static)
  MyBean getMyBean() {
    new SomeExtendedMyBean()
  }

  // for use with tests
  static void stub(Context fake) {
    Context.INSTANCE = fake
  }

}

// Example call:
Context.get().getMyBean().doSomeThing()

 

About Jenkins’ Groovy limitations

CPS transformation (continuation passing style) imposes restrictions on pipeline code.

Read the following, including the warnings:

 

Wrapping things up

By using branch by abstraction, replacing old parts of any code becomes a series of manageable changes while keeping things running.

This mechanism translates quite well to pipelines.

Now when you get to having an awesome shared library, you might actually want to build, validate and test it (and let Jenkins do the heavy lifting)! Spoilers here. More on this subject soon!

 

 

Conflict-free replicated data types
https://technology.first8.nl/conflict-free-replicated-data-types/
Tue, 04 Jul 2017

By now everybody who has ever worked on a distributed system has heard of the CAP theorem. Simply put, it states that you have to choose between being (C)onsistent or being (A)vailable. The P, standing for Partition tolerant, is not really a choice for a distributed system (e.g. see the number 1 fallacy of distributed programming).

So that leaves us in a peculiar state: we can’t deliver what our customers want. It does give us a great excuse for creating inconsistent or unavailable software, but I am not sure if that’s much of an upside. But is the situation really that dire? 

In proving Brewer’s conjecture and thereby making it a theorem, Seth Gilbert and Nancy Lynch had to define what it means to be consistent. And that is a very strict definition indeed. Loosening the definition of ‘being consistent’ can circumvent the CAP triangle. This is exactly what conflict-free replicated data types (or CRDT for short) intend to do. 

 

What’s a CRDT?

A CRDT is one of a family of data structures that allow you to store and retrieve data in a distributed way. The data structures in Java’s Collection Framework are fast and efficient per-JVM structures. But they are not distributed; that is something you have to coordinate yourself. CRDTs are distributed: they’ll work over multiple replicas (that’s what the (R)eplicated in CRDT stands for). And they deliver a form of consistency that is actually quite usable. Maybe not as strict as you would like, but hey, blame CAP’s mathematics for that impossibility.

So what kind of consistency does a CRDT deliver? Well, CRDTs promise strong eventual consistency and monotonicity. Let’s have a look at what that means.

Strong eventual consistency

Eventual consistency is a loosely defined term meaning that replicas of a certain data structure don’t always have to be consistent with each other — but somehow, magically, at some point in time they will be. Strong eventual consistency is better defined: it states that two replicas which have received exactly the same updates are in the same state, whether or not those updates arrived in the same order. That’s actually quite nice and feels ‘natural’: a replica that hasn’t received all updates naturally isn’t, well, up-to-date. And since the updates don’t have to come in order, we can receive updates on all replicas and don’t have to care about timing and synchronisation that much (hence the conflict-free part of CRDT). Of course, we are only “really consistent” if we don’t receive updates for a while. If that never happens, well, you are out of luck. But still, up to some update back in time, the replicas will be consistent.

Monotonicity

This is another nice property of a CRDT. It guarantees that if you read multiple times from a replica, you’ll never go back in time. It may seem a bit obvious but for distributed systems it’s not necessarily guaranteed. For example, if you have a master-slave replication, typically you update the master and read from a slave. If that’s hidden from your client, you could easily go back in time. Imagine that you update the master and immediately read from the slave. The slave may be behind in updates and from your client’s perspective you go back in time.

Ok, cool. Give me an example!

One of the simplest CRDTs is a global tick counter. Imagine that you want to keep track of how often the ads on your website are viewed (a tick). Since that might be a massive amount and it is critical for your business, you want to distribute it. A GCounter data structure can do that for you. Each instance of the GCounter can receive ticks, and the instances distribute their counts amongst each other. Since they’re CRDTs, you get the strong eventual consistency and monotonicity guarantees.

The magic works like this: each instance keeps track of all other instances’ counts. In other words, each instance has a Map<Instance, Integer>. The total count (over all instances) is simply the sum of all values in that map. If an instance receives a new tick, it increases the counter for its own Instance in the map. And each time something changes in that map, it publishes that to all other instances, which then update their own maps.
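
A minimal G-Counter in Java could look like the sketch below (a hypothetical, non-thread-safe illustration, not taken from any particular library):

import java.util.HashMap;
import java.util.Map;

public class GCounter {

    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    // A replica only ever increments its own entry.
    public void tick() {
        counts.merge(replicaId, 1L, Long::sum);
    }

    // Called when another replica publishes its map: take the
    // per-replica maximum, so updates may arrive in any order.
    public void merge(Map<String, Long> other) {
        other.forEach((id, count) -> counts.merge(id, count, Long::max));
    }

    // The total count is simply the sum of all per-replica counts.
    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    public Map<String, Long> state() {
        return new HashMap<>(counts);
    }
}

Note that merge is commutative, associative and idempotent — which is exactly why two replicas that have seen the same updates end up in the same state, regardless of the order in which the updates arrived.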

I want more!

Ok, a global tick counter doesn’t seem too impressive. Even if it has strong eventual consistency and monotonicity, there has to be more! And yes, there is more! For example, if you add a second Map to the implementation, you can do up- and down-ticks. That’s called a PNCounter. But you can also create Sets that guarantee uniqueness of objects. For example, there’s GSet (grow-only set), 2PSet (with adds and removes, remove-wins), Observed Removed Set (with multiple adds and removes), Add-wins Last-Writer-Wins Set (with ‘timestamps’), AWORSet (add-wins observed removed set), RWORSet, MVRegister and SUSet. And probably, by the time you read this, plenty more.

You can find more on CRDTs in this paper, and actual implementations via these links:

 

 

devcon 2017
https://technology.first8.nl/devcon-2017/
Tue, 11 Apr 2017
 

Opening of devcon, just before the start

  

Last Thursday (6 April) I went to devcon. This is an annual conference organised by Luminis, made for and by (Luminis) developers, held every year in the CineMec cinema in Ede. The focus of this conference is on craftsmanship and sharing knowledge.

Opening of devcon by Bert Ertman


After the conference had officially been opened, it was time for the first keynote: AI or Death, Redefining what it means to be human in the age of software by Alix Rübsaam. She gave a philosophical view on Artificial Intelligence from the perspective of humanism.

"Visions of technology reflect the time and space where they originate" by Genevieve Bell & Paul Pourish, Divining a digital future, p. 17
AI or Death

 

Next I attended The story map – new dimensions to your product backlog by Walter Treur. This covered what a story map is, how a story map adds user perspective/context to your project, how you can apply a story map within your project (for small, large and legacy projects alike) and how it relates to a product backlog.

In Digital security for engineers, Angelo van der Sijpt highlighted a number of security breaches that have been in the news recently and gave his view on how well or how poorly the various incidents were handled by those responsible. Above all, he stressed that digital security is everyone’s responsibility. Using the ISO standard and examples, he explained the concepts attack, threat, control and risk. He closed with a number of tools that can help you, such as FireEye and Splunk.

After lunch it was time for the second keynote, IoT and the web of systems by Jens Wellmann. He started his talk in German until the chairman interrupted him and asked him to continue in English (or Dutch), after which the talk went on in Dutch, full of jokes. It turned out to be a mock keynote by Pieter de Rijk (a comedian). He pulled it off very convincingly, and humour right after lunch is a fine way to fill what is normally called the ‘graveyard slot’.

In HTTP/2 – What’s in it for you, Angela Lengkeek covered the most important improvements of HTTP 2.0 over HTTP 1.1. Think of:

    • Multiplexing
    • Binary protocol
    • Header compression
    • Server push
    • Stream prioritisation

She also gave the tip to stop using inline resources with HTTP 2 and to put all JavaScript, CSS, images, etc. in separate files. The reason is that when something changes in one of the files, only a new version of that file has to be fetched, instead of the whole inlined content.

She also used the Chrome developer tools to show the difference in loading web pages over HTTP 1 and HTTP 2.

Modules or Microservices was about whether you should use modules or microservices. Sander Mak’s immediate answer was ‘it depends’. In this presentation he explained his view that this is an OR, not an XOR. In his eyes there is too much focus on the long-term gains that microservices deliver, while forgetting that they come with a high up-front cost. Modules also deliver long-term gains, and their up-front costs are much lower, as can be seen in the photo below (from Sander Mak’s presentation). His conclusion was that for products at the scale of, say, Netflix it certainly makes sense to work with microservices, and that for products at a somewhat smaller scale (which is the case for most developers) you are better off choosing modules.

Monolith vs. modules vs. microservices

 

In closing

All in all, I found it a very instructive and inspiring day, and I have new things to get started with again.

 

First8Friday edition 4 2017 – Ansible
https://technology.first8.nl/first8friday-editie-4-2017-ansible/
Fri, 07 Apr 2017

Welcome to the First8Friday of April 2017, your recurring dose of Open Source inspiration that we share with you every first Friday of the month. This video blog is about Ansible. Ansible is a ‘Configuration Management Tool’ with which you can roll out your infrastructure in an automated way.

 

 


 

 

Want to know more about Ansible? On Tuesday 30 May we are organising a First8University on this topic.

Groovy power at the Greach conference
https://technology.first8.nl/groovy-en-grails-in-madrid/
Wed, 05 Apr 2017

At the beginning of April, Ted Vinke and I (Koen Aben) were at the Spanish conference Greach to present our open source software. Greach is the annual conference in Madrid for the Groovy community — an active, international community with conferences all over the world to meet each other. Last year we also attended the Greach conference (see last year’s blog). Because we went as speakers this year, we got to know the Groovy community even better, through e.g. the speakers dinner and informal conversations. And of course we also dug deep into Groovy technology.

What is Groovy again? Why is it cool?

Groovy has been a JVM language since 2003, created to make developing Java applications much more fun (the Dutch word “gaaf” translates into English as “groovy”). Groovy makes Java development smoother and more powerful (among other things: less redundant code, less strict rules and more supporting features). Groovy has a growing ecosystem of frameworks (such as Ratpack, Grails and GORM), which gives a developer a solid basis to grow into a full-stack developer. In recent years the use of the Groovy language has been increasing in the DevOps world, thanks to adoption by build tools such as Gradle and Jenkins. Groovy is an Apache project, with many active committers.

Very important developments at this Groovy conference turned out to be the extensions of the ecosystem, in particular GORM (see the material from Graeme Rocher’s keynote).

Grails founder Graeme Rocher in action during the keynote

Our own session was a short game quiz with Groovy puzzles. The session was a success: our software turns out to work well for short coding battles at conferences. We extended the Masters Of Java software from 2004, which the NLJUG used for their annual competition, with modern Java tools (such as Maven, Spring Boot, FreeMarker and Groovy). In a way, with Groovy we built a Ferrari engine into the old legacy product! With the new software we could run a coding battle of Groovy puzzles at Greach. Our session was in the early morning after the conference party, so the turnout was a bit disappointing. However, the participants had a lot of fun with the assignments, and afterwards the organisers said we could hold a more extensive coding battle next year. Below an action photo:

In action during our session “the Masters of Groovy Challenge”

Looking ahead

All in all, a good preparation for our First8 Groovy University on 13 June (for the Dutch Java User Group). We both learned a lot in Madrid: back in the Netherlands, we will be actively working on fun open source Groovy projects again. We are already looking forward to our next Groovy conference, in Denmark in May, which we will attend with a team of 5 First8 colleagues!

After the Greach conference, Ted and I went to Madrid’s city park to relax. There we recorded this light-hearted Groovy blog, in which we discuss some highlights of the conference!

 

Grails Anti-Pattern: Everything is a Service https://technology.first8.nl/grails-anti-pattern-everything-is-a-service/ Tue, 04 Apr 2017 07:55:42 +0000 https://technology.first8.nl/?p=5635 The context Grails makes it very easy to put any logic of your application in a service. Just grails create-service and you're good to go. There's a single instance by default, injectable anywhere. Powerful stuff and makes it easy to get up 'n running very fast! Creating a new application, following so-called "best practices" from blogs like these 🙂 and … Read more Grails Anti-Pattern: Everything is a Service

The context

Grails makes it very easy to put any logic of your application in a service. Just run grails create-service and you're good to go. There's a single instance by default, injectable anywhere. Powerful stuff that makes it easy to get up and running very fast!
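For example, a freshly generated service is nothing more than a class under grails-app/services, injectable by naming convention. The GreetingService below is a made-up sketch, not from an actual project:

// grails-app/services/example/GreetingService.groovy
// as generated by: grails create-service example.Greeting
class GreetingService {

    String greet(String name) {
        "Hello, ${name}!"
    }
}

// grails-app/controllers/example/GreetingController.groovy
class GreetingController {
    GreetingService greetingService // injected by Spring, matched on property name

    def index() {
        render greetingService.greet(params.name ?: 'world')
    }
}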

Creating a new application, following so-called "best practices" from blogs like these 🙂 and the "idiomatic Grails way" described in the docs and in tutorials, works in the beginning, but there's always a tipping point, where the application has grown to a reasonable size, at which one should start following a different, maybe less-Grailsey, strategy.

So what can go wrong when you create services in your application?

In a previous anti-pattern post about dynamic finders I tried to explain what could happen from Day 1 of your project onwards.

A team really took this advice to heart and started centralising their Album queries in an AlbumService, their Product queries in a ProductService, and so on.

Here’s what I saw happening.

Sprint 1: Life is beautiful

This team started out really sharp: now and then they almost implemented business logic in controllers, but managed to pull it into services just in time. The grails create-service command would even immediately create an empty unit test, ready to implement.

The productivity was unparalleled. The team never had to create a class manually in their IDEs anymore, and for the next sprints the team burned through the backlog like a hot knife through butter.

Fast-forward 6 sprints.

Sprint 6

Looking at their code, their services folder now consists of hundreds of classes:

grails-app/services/
└── example
    ├── AnotherProductService.groovy
    ├── ...
    ├── OrderService.groovy
    ├── OrderingService.groovy
    ├── ...
    ├── Product2Service.groovy
    ├── ProductAccountingService.groovy
    ├── ProductBuilderService.groovy
    ├── ProductCatalogService.groovy
    ├── ProductCreateService.groovy
    ├── ProductFinderService.groovy
    ├── ProductLineFileConverterDoodleService.groovy
    ├── ProductLineMakerService.groovy
    ├── ProductLineReaderService.groovy
    ├── ProductLineService.groovy
    ├── ProductLineTaglibHelperService.groovy
    ├── ProductLineUtilService.groovy
    ├── ProductManagementService.groovy
    ├── ProductManagerService.groovy
    ├── ProductMapperService.groovy
    ├── ProductOrderService.groovy
    ├── ProductReadService.groovy
    ├── ProductSaverService.groovy
    ├── ProductService.groovy
    ├── ProductTemplateOrderBuilderService.groovy
    ├── ProductUtilService.groovy
    ├── ProductWriterService.groovy
    ├── ProductsReadService.groovy
    ├── ProductsService.groovy
    └── ...
1 directory, 532 files

The pattern

This happened to me a gazillion times before. The team and I value the simplicity and power of Grails. Hence, it's pretty easy to start using the Grails commands to the fullest, such as all the create-* commands:

grails> create-
create-command                create-controller             
create-domain-class           create-functional-test        
create-integration-test       create-interceptor            
create-scaffold-controller    create-script                 
create-service                create-taglib                 
create-unit-test

In many Grails projects, similar to the fictional one 🙂 above, the create-service command has been over-used, because it seems to be the idiomatic way of creating "business logic in the service layer".

Yes, this command does create a nice, empty unit test, and the resulting service is automatically a handy Spring bean, injectable into controllers, tag libraries and other Grails artefacts.

Yes, using a *Service works well in blog posts, demos and tutorials.

However, we seem to have forgotten that basically everything is a "service" to someone else, and that we do not need to postfix every class with "Service" to make that so.

It seems that people usually understand when something needs to be a controller ("let's do create-controller") or a tag library ("let's do create-taglib") and so forth, but for everything else: boom, "let's do create-service".

In any other, non-Grails project we're used to calling a builder simply "PersonBuilder"; in a Grails project it's suddenly a "PersonBuilderService". In any other project a factory is a "PersonFactory"; in a Grails project it's a weird "PersonFactoryService".

What if a "PersonReadService" is responsible for getting or finding persons? For years people have been using the Repository pattern for this, which can be reflected with a "Repository" postfix, e.g. "PersonRepository".

Even in Grails a builder can be a Builder, a factory a Factory, a mapper a Mapper, a repository a Repository, a doodle a Doodle and whatever can end in Whatever — you can name every class the way you want.
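For instance, nothing stops you from writing a builder as a plain old Groovy class. The ProductBuilder below is a purely hypothetical sketch, assuming a Product class with name and price properties:

// src/main/groovy/example/product/ProductBuilder.groovy
// hypothetical example: no Grails artefact type needed at all
class ProductBuilder {
    private String name
    private BigDecimal price = BigDecimal.ZERO

    ProductBuilder named(String name) {
        this.name = name
        this // return the builder itself for chaining
    }

    ProductBuilder priced(BigDecimal price) {
        this.price = price
        this
    }

    Product build() {
        new Product(name: name, price: price)
    }
}

Usage then reads fluently: new ProductBuilder().named("Album").priced(9.99).build().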

What can we do about it?

Stop calling everything a Service

First, next time you're about to create a class following one of the Famous Design Patterns, such as Builder, Factory, Strategy, Template, Adapter or Decorator (see sourcemaking.com for a nice overview), or another "well-known" Java (EE) pattern, such as Producer or Mapper, ask yourself one question:

Can it be a regular class in src/main/groovy?

Move and choose a better name

  • Yes, then just create the class in src/main/groovy. Maybe choose a nice package, such as example. If you don't want 532 classes in one package, you can always introduce sub-packages such as example.accounting. Give it a name which does not end in *Service. Don't forget to manually add an associated *Spec in src/test/groovy (see the sketch below).
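A minimal sketch of such a hand-written spec, reusing the hypothetical ProductBuilder from above:

// src/test/groovy/example/product/ProductBuilderSpec.groovy
import spock.lang.Specification

class ProductBuilderSpec extends Specification {

    def "builds a product with the given name"() {
        when:
        def product = new ProductBuilder().named("Album").build()

        then:
        product.name == "Album"
    }
}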

Do you still want to have the benefit of Spring and Dependency Injection?

In other words, do you need an instance of your class to be injectable into other Grails classes, such as a controller, as you are used to with a service, like the ProductReadService below?

// grails-app/controllers/example/HomepageController.groovy
class HomepageController {
    ProductReadService productReadService

    def index() { ... }
}

// grails-app/services/example/ProductReadService.groovy
class ProductReadService {
    SecurityService securityService

    Product findByName(String name) {
        assert securityService.isLoggedIn()
        Product.findByName(name)
    }
}

The underlying container is created by the Spring Framework.

  • There’s a great chapter in the docs about Grails and Spring. It’s this framework that will instantiate for example one SecurityService in the application, inject it in the “securityService” property when it creates one instance of ProductReadService which it injects in the HomepageController and so forth.

  • The SecurityService in the example (which might come from a security plugin) and the *Service classes in your own application sources are all automatically picked up and managed by the Spring container, and injected into every other managed class that needs them.

  • It’s not so much the move of grails-app/services/example to the src/main/groovy/example folder, but by renaming a class to something which doesn’t end in “Service” anymore, that’s when you lose the automatic management by Spring. This happens when we, as suggested, after the move, rename the class ProductReadService to a ProductRepository class.

Yes, I want them to be Spring beans!

Declaring Spring beans the Grails way

Sure, we can do this ourselves. The Grails idiomatic way is to specify beans in resources.groovy.

// grails-app/conf/spring/resources.groovy
import example.*
beans = {
    productRepository(ProductRepository) {
        securityService = ref('securityService')
    }
}

We’ve defined a a bean named “productRepository” of class ProductRepository and we’ve indicated the SecurityService needs to be injected.

This is how the original code has changed, but the behaviour has not: it’s now using ProductRepository instead.

// grails-app/controllers/example/HomepageController.groovy
class HomepageController {
    ProductRepository productRepository

    def index() { ... }
}

// src/main/groovy/example/ProductRepository.groovy
class ProductRepository {
    SecurityService securityService

    Product findByName(String name) {
        assert securityService.isLoggedIn()
        Product.findByName(name)
    }
}

This is not the only way to declare Spring beans.

Declaring Spring beans the Spring way

We declared Spring beans the Grails way, but we can also declare beans the Spring way.

Ok, there’s not just “a Spring way”, there are many ways, from the old XML declarations, classpath scanning to Java-style configuration.

Having (a subset of) your 532 classes in resources.groovy might be considered not much better than the XML configuration Spring used in the early days.

Even though the Beans DSL is a lot more powerful here than XML ever was (because: Groovy), we're not migrating our automatically picked-up service beans just to get manual labour back, in my opinion. 😉

This is how it would look:

beans = {
    anotherProductRepository(AnotherProductRepository) {
        securityService = ref('securityService')
        orderingService = ref('orderingService')
    }
    productLineReader(ProductLineReader)
    productFinder(ProductFinder) {
        productRepository = ref('productRepository')
        productLineService = ref('productLineService')
    }
    productRepository(ProductRepository) {
        securityService = ref('securityService')
        productReader = ref('productReader')
        productWriter = ref('productWriter')
    }
    orderingService(OrderingService) {
        securityService = ref('securityService')
        productRepository = ref('productRepository')
    }
    ...

There are use cases where resources.groovy is perfectly fine, but why not get rid of the boiler-plate and leverage the modern features of Spring at our disposal?

Try component scanning instead.

  • Just once, set Spring's @ComponentScan annotation on our Application.groovy class:

// grails-app/init/example/Application.groovy
package example

import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.context.annotation.ComponentScan

@ComponentScan
class Application extends GrailsAutoConfiguration {
    static void main(String[] args) {
        GrailsApp.run(Application, args)
    }
}

This makes Spring, at application startup, scan all components on the classpath under the package “example” and register them as Spring beans. Or specify @ComponentScan("example") to be more explicit.

What are these "components", you say? All classes annotated with Spring's stereotype annotation @Component, or with @Service or @Repository, which are just specializations of it.

  • Annotate our classes to make them candidates for auto-detection.

import org.springframework.stereotype.Component

@Component
// or @Repository in this particular case
class ProductRepository {
    SecurityService securityService

    Product findByName(String name) { .. }
}

  • Right now, if we restarted our app, we'd get a NullPointerException as soon as we invoke anything on securityService. Spring no longer recognizes that it should do something with the property; it's just a mere property now. To make the SecurityService be injected by Spring we need to annotate the property with Spring's @Autowired (see the sketch below), but that's basically the same as the setter-injection we already had in the beginning. And @Autowired is boiler-plate we don't need.
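For completeness, that property-injection variant would look roughly like this; it is only an illustration, since we are about to replace it with constructor injection anyway:

import org.springframework.beans.factory.annotation.Autowired
import org.springframework.stereotype.Component

@Component
class ProductRepository {
    @Autowired
    SecurityService securityService

    Product findByName(String name) { .. }
}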

While we’re at it, I recommend to use constructor-injection, which means we create (or let the IDE create) a constructor.
* We make the dependencies of ProductRepository explicit.
* Spring will automatically “autowire” our constructor as long as we have exactly one, and inject all parameters of the constructor

import org.springframework.stereotype.Component

@Component
class ProductRepository {
    final SecurityService securityService

    ProductRepository(SecurityService securityService) {
        this.securityService = securityService
    }
}

This is it.

BTW, having an explicit constructor with all the mandatory dependencies is always a good practice, whether the class is a Spring bean or not.
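It also makes the class trivial to unit test without booting any Spring context. A minimal sketch, stubbing the dependency with Spock:

// src/test/groovy/example/ProductRepositorySpec.groovy
import spock.lang.Specification

class ProductRepositorySpec extends Specification {

    def "can be wired by hand with a stubbed dependency"() {
        given: "no Spring container, just a Spock stub"
        def securityService = Stub(SecurityService) {
            isLoggedIn() >> true
        }

        when:
        def repository = new ProductRepository(securityService)

        then:
        repository.securityService.isLoggedIn()
    }
}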

  • Finally, revert resources.groovy to its initial, empty state – we’re not using it anymore.

Naming is important

Now, if we had done this to the original 532 classes, we might end up with a more enjoyable tree of files, e.g.

grails-app/services/
└── example
    ├── OrderService.groovy
    ├── ProductService.groovy
    └── SecurityService.groovy
src/main/groovy/
└── example
    ├── order
    │   ├── OrderBuilder.groovy
    │   └── OrderRepository.groovy
    └── product
        ├── ProductBuilder.groovy
        ├── ProductFinder.groovy
        ├── ProductLineReader.groovy
        ├── ProductLineTaglibHelper.groovy
        ├── ProductMapper.groovy
        ├── ProductRepository.groovy
        ├── ProductUtils.groovy
        └── ProductWriter.groovy

Some actual *Service classes can still reside in grails-app/services, and the rest can become clearly named classes, neatly placed in src/main/groovy, while you still enjoy the benefit of using them as Spring beans.

If you and the team decide early on in the process on proper naming conventions (packages, class suffixes and such), you don't have to reorganise everything like we just did. Together with the team, create and name your classes in an organized place.

Cross-posted from my personal blog
