By Bruno Souza and Edson Yanaga
This talk was about continuous delivery tools in Java development. There are good open source options for every stage of the continuous delivery pipeline. The talk was illustrated with the story of Charlie and the Chocolate Factory.
More often than not, deployment is postponed because it scares developers: it is considered the final stage of development, after which our work stops. In fact, deployment is important, and developers should be deeply involved in it. Everybody has a Golden Ticket to (the tools of) the deployment factory, because the tools in this factory are free and open source. You don’t even have to buy chocolate for it.
There are several “rooms” in the deployment factory:
- The artifact: a WAR file, or any other binary produced by your build.
- Dependency management, often tackled with Maven, though Gradle or even Ant with Ivy are fine options.
- Version stability is important, so no “let’s quickly upgrade that library before going to production”. Make sure you test the same physical binary in each and every environment, so no “quickly run a production build”.
- Version stability also makes reuse easier: no need to rebuild other projects from HEAD when integrating them into other applications. Always deliver your artifacts through a repository manager; don’t use e-mail for this, it gets ugly quickly.
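The “same physical binary everywhere” rule can be enforced mechanically: record a checksum when the artifact is published to the repository manager, and verify it before each deployment. A minimal Java sketch of that check (the file and class names here are illustrative, not from any specific tool):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ArtifactChecksum {
    // Compute the SHA-256 of an artifact so every environment can verify
    // it received the exact same physical binary that was tested.
    static String sha256(Path artifact) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(Files.readAllBytes(artifact));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path war = Files.createTempFile("app", ".war"); // stand-in for a real WAR file
        Files.write(war, "binary-contents".getBytes());
        String expected = sha256(war); // recorded when the artifact is published
        // Before deploying to any environment, recompute and compare:
        if (!expected.equals(sha256(war))) {
            throw new IllegalStateException("Artifact differs from the tested binary!");
        }
        System.out.println("Checksum verified: " + expected);
    }
}
```

If the checksum recorded at publish time ever differs from what an environment is about to deploy, somebody rebuilt the artifact, which is exactly what version stability forbids.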
This is where new builds are installed. Environments can be development, testing, QA and production; keep them as similar as possible. Several tools help you get a testing environment that closely resembles production.
Tools like Chef let you define your environment’s configuration in code, which can be version-controlled alongside your sources. Define the architecture of your environment and use that definition to set up development and testing environments with tools like Vagrant; you can even attach your debugger to the resulting virtual machine. Jenkins can use the same definition to create an exact copy of the environment for testing. The production definition can be tweaked to use, for example, different security mechanisms; Vagrant then creates a virtual environment based on it.
Automating the delivery process forces the team to think the necessary steps through. This increases stability and, at the same time, team confidence, and it obviously eliminates a lot of human error. Delivery becomes a more formal process without adding documentation; such documents tend to get outdated, and after a while nobody reads them anyway.
Do the risky things early and often; automating them makes these processes stable. Many releases with few code changes, rather than a single release with many, means smaller steps, more confidence, and fewer errors and risks. Don’t fall into the trap of quickly doing a small change by hand, “just this time, very quickly”.
Keep an eye on the tool called GO as an alternative to Jenkins. It integrates with:
- Flyway: this tool automates database migrations, much like Ruby on Rails and Grails do. Rollbacks to previous versions are supported as well, and it can also be used to load test data. It integrates well with both Ant and Maven.
- Packer: a command-line tool that takes the definition from Chef, Vagrant, etc. and builds a VM image from it, ready to be started at a cloud provider like Amazon. It can also inject a new version of your software directly into the VM. If your main cloud provider has temporary issues or downtime, you can quickly create an identical image for another provider.
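Flyway’s core idea is simple enough to sketch in plain Java: migration scripts carry a version in their file name (Flyway uses names like `V2__add_users.sql`), the tool tracks which versions have already been applied, and pending migrations run in version order. This sketch only illustrates the ordering logic; the method names are mine, not Flyway’s API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeMap;

public class MigrationSketch {
    // Parse the version out of a Flyway-style file name like "V2__add_users.sql".
    static int version(String fileName) {
        return Integer.parseInt(fileName.substring(1, fileName.indexOf("__")));
    }

    // Return, in version order, every migration not yet recorded as applied.
    // The real tool records applied versions in a schema history table.
    static List<String> pendingInOrder(List<String> available, Set<Integer> applied) {
        TreeMap<Integer, String> ordered = new TreeMap<>();
        for (String file : available) {
            ordered.put(version(file), file);
        }
        List<String> pending = new ArrayList<>();
        ordered.forEach((v, file) -> {
            if (!applied.contains(v)) {
                pending.add(file);
            }
        });
        return pending;
    }

    public static void main(String[] args) {
        List<String> onDisk = List.of("V3__add_index.sql", "V1__init.sql", "V2__add_users.sql");
        Set<Integer> alreadyApplied = Set.of(1);
        System.out.println(pendingInOrder(onDisk, alreadyApplied));
        // [V2__add_users.sql, V3__add_users.sql] is NOT printed; the pending list is
        // [V2__add_users.sql, V3__add_index.sql]
    }
}
```

Because the applied versions are tracked, running the migration step is idempotent: every environment converges on the same schema version no matter how often you deploy.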
Pursue team integration and don’t look at the “production guys” as the other guys. “Done” is when your application is running in production, not when development is finished and delivered. The tool Rundeck is similar to Jenkins, but aimed at production environments: it automates tasks like adding a user account or setting up a machine. This way, operations can provide tools that automate these tasks without handing development the production passwords.
After the talk, I asked Bruno about continuous deployment. I was interested in his thoughts on deployments triggered by code commits. Would you always configure this, and if not, on which environments would you auto-deploy and where not? His answer wasn’t an exact answer to the question, but it did clear up some things. First, there’s a difference between continuous delivery and continuous deployment. Delivery is the act of automatically delivering the artifacts that make up your build. That is easily done and there’s no reason you shouldn’t: it makes life easier and saves a lot of time. Continuous deployment is a whole different story. For it to be successful, you not only have to automate the actual deployment (which you really want to anyway, even if you don’t do continuous deployment), but you also have to automate validation of a correct deployment (integration tests) AND automate a rollback if applicable. Most organizations implement continuous delivery and use that for some time before switching to continuous deployment, if they ever do.