
JavaOne 2014 – Day 2 – Future of development and the Cloud

1 October 2014

By Ken Sipe

The world of Continuous Delivery and DevOps is in flux. In the client-server era we had small apps running on big servers; in the cloud era, we instead have big apps running on multiple small servers.

[Image: client-server vs. cloud]

An example of a big app running on multiple small servers is Twitter. This big app used to be down from time to time, showing the Twitter "Fail Whale". But for a while now, Twitter has been running without interruption. This was not (only?) because of the migration from a dynamic language back to Java, but also because of a migration to Mesosphere.

Currently we are in a perfect storm around datacenters. Legacy datacenters have everything "defined": we install a datacenter and define a number of VM instances. Scaling is done manually and is limited by the actual hardware. Humans are involved, which makes them a single point of failure. We also address a resource by a known IP and port, which is odd. Compare this to starting an application on your PC: your PC never asks "Which CPU do you want to run it on?". A cloud solution should be similar: resources should be a commodity, simply there when you need them. You don't care on which server something runs, or even how many; you care about uptime and efficient use of resources. But how efficiently are resources currently being used?

[Image: sharing resources]

In the picture above, even elastic sharing is not 100% efficient. How can we achieve 100% efficiency 24/7? Think of it this way: what if we give all important tasks immediate access to resources, and let less important tasks use resources whenever they are available? We could even split resources over multiple priorities, or guarantee minimum resource levels at given times. Google already does resource planning like this. Tier-0 processes are guaranteed resources. Tier-1 processes run when able, since they are first in line when resources become available. Lastly, tier-2 processes run at low priority, although some can be given a minimum resource level. Analytics, for example, can run in low-activity hours, but during high activity we can still allocate a minimum of 5% to it, so we have at least something. This gives very flexible resource allocation, which fits almost any usage scenario.
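The tiered allocation described above can be sketched in a few lines of Python. This is a hypothetical model, not Mesosphere's or Google's actual scheduler; the task names, demands, and the 5% analytics floor are illustrative, following the example in the text:

```python
def allocate(total, tasks):
    """Split `total` resource units over tasks by tier.

    tasks: list of (name, tier, demand, floor) where `floor` is a
    guaranteed minimum fraction of `total` (e.g. 0.05 for analytics).
    Tier 0 is served first, then tier 1, then tier 2; guaranteed
    minimums are reserved before best-effort demand is filled.
    """
    alloc = {name: 0.0 for name, *_ in tasks}
    remaining = total
    # Reserve guaranteed minimums first (e.g. 5% for analytics).
    for name, tier, demand, floor in tasks:
        give = min(total * floor, demand, remaining)
        alloc[name] += give
        remaining -= give
    # Then serve the tiers in priority order with whatever is left.
    for current_tier in (0, 1, 2):
        for name, tier, demand, floor in tasks:
            if tier == current_tier:
                give = min(demand - alloc[name], remaining)
                alloc[name] += give
                remaining -= give
    return alloc

tasks = [
    ("web-frontend", 0, 60, 0.0),   # tier 0: always gets resources
    ("batch-jobs",   1, 50, 0.0),   # tier 1: runs when able
    ("analytics",    2, 30, 0.05),  # tier 2: at least 5% guaranteed
]
print(allocate(100, tasks))
# {'web-frontend': 60.0, 'batch-jobs': 35.0, 'analytics': 5.0}
```

Note how the tier-0 frontend gets its full demand, the tier-1 batch jobs soak up the slack, and analytics keeps its 5% floor even under full load.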

Mesosphere supplies that flexibility and ease of use. It allows an admin to start tasks not on specific CPUs or even specific VMs, but on any CPU; a process can be allocated 0.1 CPU, for instance, and it doesn't matter which CPU that is. Mesosphere is built on Apache Mesos, an open-source Apache project, and runs software in Docker containers. Docker builds on Linux containers, which have been around since 2008. Back then, they were rocket science; Docker adds simplicity. VMs are not context-aware: they don't know what an admin may have changed. Therefore, each change requires a new rollout of the changed VM in full (often GBs in size). With Docker, changes are just that: changes. You DON'T need to roll out an entire new VM image, but instead apply the (small, often KB-sized) change to all Docker images.
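The layering that makes those KB-sized changes possible can be illustrated with a Dockerfile (a hypothetical example; the base image, package, and file names are illustrative):

```dockerfile
# Base layer: pulled once and cached on every host (the GB-sized part).
FROM ubuntu:14.04

# Each instruction adds a thin layer on top of the cached base.
RUN apt-get update && apt-get install -y openjdk-7-jre-headless

# A change to the application only rebuilds from this line down, so a
# rollout ships the small changed layers, not a full machine image.
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

This is the contrast with VM images the talk draws: the base layers stay cached, and only the layers that actually changed are distributed.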

What's even easier is how a developer can use Docker images. If a developer needs an external component, like a database, he can just say "docker run couch-db" or something and it's ready to go; there's no need to install anything. This could evolve into even more simplified scenarios. What if Eclipse just started a container (e.g. Docker) with an isolated runtime environment containing Ruby, a JVM, JBoss or whatever? It would start in tenths of a second, be fully isolated, and be easy to share across teams.