
The case for container-based builds

by Allan Clarke, Vasu Srinivasan

We have sat on the river bank and caught catfish with pin hooks. The time has come to harpoon a whale.
– John Hope

The hallmark of a healthy software ecosystem is the thrum of older technologies being displaced by the newer. But from time to time, a fascinating phenomenon occurs whereby the older becomes enveloped by the newer and the result is so compelling that they are rarely seen apart.

In this multi-part series, we’d like to share our experiences in transforming from our legacy, single-point-of-failure integration server to modern Continuous Integration (CI) services backed by an infrastructure-as-code (IaC) philosophy.

For building our open-source, OpenDaylight-based Lumina SDN Controller, our Lumina Networks developers use a common build tool stack of Jenkins, Maven, Java, Python, NodeJS, and a testing framework of Robot Framework and Nightwatch. Our build system is centralized, with about three dozen repositories. As with most projects that grow over time, friction developed in our build workflow, including:

  • An older Jenkins with a risky upgrade path (and a disruption to team productivity)
  • Inability to upgrade plugins/tools (which would risk incompatibility and broken builds)
  • Accumulation of ad hoc snippets of shell scripts embedded in Maven POM files
  • Developers using our CI build system to do compiles and test code (instead of doing that locally)
  • Spurious failures where local builds pass but CI builds fail

Virtually every development team of any size, in every company, has struggled with these issues, particularly on projects more than a year or two old. Developers run builds dozens of times per day, so any slowdown or blockage in this workflow quickly becomes expensive, yet the cost has low visibility outside of development and is hard to quantify.

To address these and other issues, we started looking at containerization to solve our replication, scaling, and dependency-management problems.

Along Came a Whale

Docker was first released for public use in 2013. Today it is used in production across very diverse ecosystems, from tier-one telcos to the largest tech companies. Docker has been downloaded billions of times and has enjoyed large-scale growth year over year.

Docker and containers are a natural fit for DevOps, and there are compelling reasons to consider containerized builds. Here at Lumina Networks, we have just completed our conversion to containerized builds and want to enumerate the advantages we saw in this solution.

Advantages Of Containerized Builds

So what does containerizing the builds achieve? It means

  • we can deploy onto a cloud with minimal work – this addresses scaling issues effectively. Note that some builds will still depend on lab access to local devices, and those dependencies may not scale.
  • efficient resource management – instead of spinning up a VM per build, we can run 15-20 builds on a single VM, all securely isolated from each other.
  • easier upgrading – for example, running a component in its own container isolates it, so containers that depend on it must go through a standard, explicit interface.
  • better partitioning – instead of building environments that contain every possible tool and library, a container includes only what is needed for its specific purpose. This has the side effect of reducing defects caused by interacting third-party components.
  • a clean reset state – instead of writing reset scripts, the container is discarded and resurrected from a fixed image. This is a phoenix (burn-it-down), immutable view of containers, and it forces all build and configuration steps to be explicit rather than accumulating in snowflake instances.
  • 100% consistency between local development and the build system, which should eliminate the “but-it-works-on-my-machine” problem.
  • effective postmortems for build failures, potentially leaving failed runs in the exact state of failure rather than relying solely on extracted log files.
  • building and installing an intermediate, shared artifact once instead of 52 times, which can speed up the build cycle.
  • checkpointing – some tests can make temporary checkpoints via a temporary image/container and roll back to that state rather than tearing down and rebuilding, affording a quicker build.

Judicious use of containers might also help with diagnosing hard-to-reproduce issues in the field. We have seen technical support send and receive whole VM images to and from customers; containers would be both simpler to exchange and considerably smaller.

Containerizing the build is considered a modern best practice and affords access to many kinds of build workflow and management tools. If you are a customer of ours and have your own in-house software development, maybe this list will help you convince your management to do the same.

Use of containers is not limited to build contexts. Containers are used in production environments too. Delivering software components in orchestrated containers has been under discussion for some time here.

This is Part 1 in our Build-Cycle Diaries series. Look here for future blog articles with more details about our experiences.

Dependency Injection and Default Visibility Constructors

It has been a few years since I’d done more than review or read Java code. I’ve been using Java on and off over the years, but my last few years have been spent writing UI/JavaScript in browsers and Node.js (along with Python, Perl, Groovy, and Bash scripting…but no Java). So as I approached the task of tackling an ODL ‘application’ for the first time, I doubted that my previous Java stint with GWT years ago was going to pay off…

TDD and DI

The task was to create a new set of functional components that collaborate with the code layer immediately responsible for communicating with MDSAL and other ODL features. For this I used Spock and a TDD (test-driven development) approach, writing tests that exemplify the required functionality based on a hastily sketched interface and the acceptance criteria in the JIRA item. Typically, all that is needed for the initial implementation, especially when using dependency-injection-friendly patterns, are the bits that wire the components together: variables to hold references, and functional methods with no implementation beyond what satisfies the compiler. Spock's lenient Mock and Stub capabilities, combined with Java interfaces, substitute whatever results the functions need to return. Once I was satisfied that the needed behaviors and the inter-relationships between the components were complete, the set of unit tests became the acceptance criteria for the class implementations.
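
For illustration, a hastily sketched interface of the sort this workflow starts from can be as minimal as the following; the Service name and its single op() method are simply the shape assumed by the ServiceImpl example later in this post:

package com.example.java;

// A deliberately minimal interface sketch: just enough shape for Spock Mocks and
// Stubs to stand in for the eventual implementation while the tests are written.
public interface Service {
  void op();
}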

Constructor Visibility

To play well with unit testing practices and the use of DI (Blueprint in ODL), Java's visibility modifiers are used to define at least two constructors: a default-visibility (“package-private”) constructor and a public constructor. Unit tests reside in the same package as the ‘subject under test’ and use the default-visibility constructor, which accepts the full set of dependency injection parameters. This includes things such as the Logger and all components with which the class collaborates. The public constructor exposes only a subset of those parameters, limiting DI exposure to just the desired injection properties (e.g. the Logger is omitted, since that flexibility is unnecessary for normal use).

DI Constructor Pattern


package com.example.java;

import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ServiceImpl implements Service {
  private final Logger logger;

  // Public constructor used by DI (Blueprint): exposes only the desired injection
  // properties and supplies defaults for the rest (here, the Logger).
  public ServiceImpl() {
    this(LoggerFactory.getLogger(Service.class));
  }

  // Package-private constructor used by unit tests in the same package: accepts
  // the full set of collaborators so they can be mocked or stubbed.
  ServiceImpl(Logger logger) {
    Preconditions.checkNotNull(logger);
    this.logger = logger;
  }

  @Override
  public void op() {}
}

The test starts simply as:
Test Specification


package com.example.java

import spock.lang.Specification
import org.slf4j.Logger

class ServiceImplSpec extends Specification {
  def logger = Mock(Logger)

  def "construction succeeds"() {
    when:
      new ServiceImpl(logger)

    then:
      notThrown Exception
  }

  def "construction fails"() {
    when:
      new ServiceImpl(null)

    then:
      thrown NullPointerException
  }

  def "op called successfully"() {
    given:
      def sut = new ServiceImpl(logger)

    when:
      sut.op()

    then:
      notThrown Exception
  }
}

Along with providing a simple regression test and the beginnings of the unit tests for the class, this approach has additional architectural benefits:

  • the default-visibility constructor keeps parameter checking and construction code DRY
  • the public constructors are responsible only for constructing default injection properties for the class
  • public access to injection properties can be increased, or the defaults altered, by adding new public constructors, maintaining backwards compatibility where necessary (see the sketch after this list)
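
As a rough sketch of that last point, suppose we later wanted callers to supply their own Executor (an injection property invented purely for illustration). A new public constructor could expose it while the existing constructors keep working:

package com.example.java;

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical evolution of ServiceImpl: a new public constructor exposes an
// additional injection property (Executor) without breaking existing callers.
public class ServiceImpl implements Service {
  private final Logger logger;
  private final Executor executor;

  // Existing public constructor keeps working: everything is defaulted.
  public ServiceImpl() {
    this(Executors.newSingleThreadExecutor());
  }

  // New public constructor: exposes the Executor to DI, still defaults the Logger.
  public ServiceImpl(Executor executor) {
    this(LoggerFactory.getLogger(Service.class), executor);
  }

  // The package-private constructor remains the single place where parameters are
  // checked and assigned, so it stays DRY and the unit tests keep using it.
  ServiceImpl(Logger logger, Executor executor) {
    Preconditions.checkNotNull(logger);
    Preconditions.checkNotNull(executor);
    this.logger = logger;
    this.executor = executor;
  }

  @Override
  public void op() {}
}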

In Action

One of the last challenges of this task turned out to be an integration issue. A component I was ‘hiding’ behind the public constructor actually required a reference to an instance found through Blueprint. This meant adding a DI parameter to some public constructors so that the needed Blueprint reference could be injected into the component. Since the default-visibility constructors already accepted the full set of dependency injection properties, the unit tests and implementation remained unchanged in all other respects and the system functioned as desired.
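
A minimal sketch of that kind of change, using an invented RegistryService type to stand in for whatever reference Blueprint actually provides, might look like this:

package com.example.java;

import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Stand-in for the Blueprint-provided dependency; the real component depended on
// an actual instance wired in through Blueprint.
interface RegistryService {}

public class ConsumerImpl {
  private final Logger logger;
  private final RegistryService registry;

  // The public constructor gains a DI parameter so Blueprint can pass the reference in.
  public ConsumerImpl(RegistryService registry) {
    this(LoggerFactory.getLogger(ConsumerImpl.class), registry);
  }

  // The package-private constructor already accepted the full set of dependencies,
  // so the unit tests that use it did not need to change.
  ConsumerImpl(Logger logger, RegistryService registry) {
    Preconditions.checkNotNull(logger);
    Preconditions.checkNotNull(registry);
    this.logger = logger;
    this.registry = registry;
  }
}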
