We have sat on the river bank and caught catfish with pin hooks. The time has come to harpoon a whale.
– John Hope
The hallmark of a healthy software ecosystem is the thrum of older technologies being displaced by the newer. But from time to time, a fascinating phenomenon occurs whereby the older becomes enveloped by the newer and the result is so compelling that they are rarely seen apart.
In this multi-part series, we’d like to share our experiences in transforming from our legacy, single-point-of-failure integration server to modern Continuous Integration (CI) services backed by an infrastructure-as-code (IaC) philosophy.
For building our open-source, OpenDaylight-based Lumina SDN Controller, our Lumina Networks developers use a common build tool stack of Jenkins, Maven, Java, Python, and NodeJS, with Robot Framework and Nightwatch for testing. Our build system is centralized, spanning about three dozen repositories. As with most projects that grow over time, friction developed in our build workflow, including:
- An older Jenkins with a risky upgrade path (and a disruption to team productivity)
- Inability to upgrade plugins/tools (which would risk incompatibility and broken builds)
- Accumulation of ad hoc snippets of shell scripts embedded in Maven POM files
- Developers using our CI build system to do compiles and test code (instead of doing that locally)
- Spurious trouble with local builds passing and CI builds failing
Virtually every development team of any size, in every company, has suffered from these issues, particularly teams whose projects are more than a year or two old. Developers rely on doing builds dozens of times per day, so any slowdown or blockage in this workflow becomes very expensive; worse, these costs have low visibility outside of development and are hard to quantify.
To address these and other issues, we started looking at containerization to solve our replication, scaling, and dependency-management problems.
Along Came a Whale
Docker was first released to the public in 2013. Today, it runs in production across very diverse ecosystems, from tier-one telcos to the largest tech companies. Docker has been downloaded billions of times and has enjoyed large-scale growth year over year.
Docker and containers are a natural fit for DevOps, and there are some compelling reasons to consider using containerized builds. Here at Lumina Networks, we have just completed our conversion to containerized builds and want to enumerate the advantages we saw in this solution.
Advantages Of Containerized Builds
So what does containerizing the builds achieve? It means:
- we can deploy onto a cloud with minimal work – this can address scaling issues effectively. Note that some builds will still depend on lab access to local devices, and these dependencies may not scale.
- efficient resource management – instead of spinning up a VM per build, we can run 15–20 builds in a single VM, all securely isolated from each other.
- easier upgrading – for example, running a component in its own container isolates it so other containers that depend on it are forced through a standard, explicit interface.
- better partitioning – instead of building environments that contain every possible tool and library, a container carries only those needed for its specific purpose. This has the side effect of reducing defects caused by interacting third-party components.
- a clean reset state – instead of writing reset scripts, the container is discarded and resurrected from a fixed image. This is a phoenix (burn-it-down), immutable view of containers, and it forces all build steps and configuration to be explicit rather than accumulating in snowflake instances.
- 100% consistency between local development and the build system, which should eliminate the “but-it-works-on-my-machine” problem.
- effective postmortems for build failures, potentially leaving failed runs in the exact state of failure, rather than relying solely on extracted log files.
- building and installing an intermediate, shared artifact once, instead of 52 times, which can speed up the build cycle.
- some tests can make temporary checkpoints via a temporary image/container and roll back to that state rather than tearing down and rebuilding, affording a quicker build.
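Several of the points above (resource-limited isolation, the phoenix reset, and checkpoint/rollback) can be sketched with standard Docker commands. This is a sketch only: the image and container names (`build-env`, `test-run-42`) and the build commands are hypothetical placeholders, not our actual configuration, and running it requires a Docker daemon.

```shell
# Sketch only: 'build-env' and 'test-run-42' are hypothetical names.

# Resource-limited, isolated build: CPU and memory caps let many
# builds safely share a single VM.
docker run --rm --cpus=2 --memory=4g build-env mvn -B clean package

# Phoenix reset: '--rm' discards the container when it exits, so every
# run starts fresh from the same fixed image; no reset script needed.

# Checkpoint and rollback: snapshot a prepared test environment as a
# temporary image, then restart from it instead of rebuilding.
docker commit test-run-42 build-env:checkpoint
docker run --rm build-env:checkpoint ./run-remaining-tests.sh
```

The same mechanism supports the postmortem point: run a build without `--rm`, and on failure `docker commit` the container (or `docker exec` into it) to inspect the exact state of failure rather than relying solely on extracted logs.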
Judicious use of containers might also help with diagnosing hard-to-reproduce issues in the field. We have seen instances of technical support sending and receiving whole VM images to and from customers; container images would be both simpler to exchange and a lot smaller.
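As a sketch of that exchange, an image can be moved without a registry using the standard `docker save` and `docker load` commands. The image name `controller-debug:case-1234` is a hypothetical placeholder, and the commands assume a Docker daemon on both ends.

```shell
# Support side: export the image as a compressed tarball.
docker save controller-debug:case-1234 | gzip > case-1234.tar.gz

# Customer side: import the image and run it.
gunzip -c case-1234.tar.gz | docker load
docker run --rm -it controller-debug:case-1234
```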
Containerizing the build is considered a modern best practice and affords access to many kinds of build workflow and management tools. If you are a customer of ours with your own in-house software development, maybe this list will help you convince your management to do the same.
Use of containers is not limited to build contexts. Containers are used in production environments too. Delivering software components in orchestrated containers has been under discussion for some time here.
This is Part 1 in our Build-Cycle Diaries series. Look here for future blog articles giving more details about our experiences.