Don’t Be Afraid of Open Source

Open source is arguably one of the hottest trends in tech these days and in the networking space specifically. This week we heard about IBM’s huge acquisition of Red Hat, and recently we’ve seen deals around Microsoft and GitHub, Salesforce and MuleSoft, and Cloudera and Hortonworks. In the networking space specifically, we’ve seen the initiation of ONAP (the Open Network Automation Platform), OSM (Open Source MANO), OpenConfig and the maturation of OpenDaylight.

Amidst all this positive activity, I continue to find that trepidation around open source persists, especially among people in the networking space, where the open source trend is relatively recent. To help alleviate some confusion, I’d like to take on three common myths about open source.

Myth #1 – Open source is a “do it yourself” approach

While open source platforms put control in the hands of the user, this doesn’t have to mean “DIY”. There is a long list of vendors that can provide support and work upstream in the open source community on behalf of their customers. Lumina Networks, along with our partners Cloudify, Amdocs and many others, offers supported distributions.

However, open source communities are public efforts under public licenses, so DIY remains an option for those who want to keep more of their engineering in-house and work directly in the community. Bottom line: you are in control of how you balance vendor support with home-grown engineering.

Myth #2 – Open source means giving away your intellectual property

Quite the opposite is true. In most cases, open source allows you to leverage a common platform and spend more of your engineering effort on proprietary software that is specific to your business. Being multi-vendor by nature, open source platforms are architected to support proprietary extensions and vendor- or service-specific interfaces.

Everybody involved in open source has some type of business interest in either selling software or services that can be differentiated. So, the open source community tends to work on common platform-oriented software that does not contain proprietary code. This is where an understanding of open source licensing can be helpful. Most open source licenses are oriented around balancing common platform development while allowing for service-specific extensions.

Myth #3 – Open source is a difficult business model

When people say this, they are usually referring to the over-simplified notion that open source software is “free”, so it is reasonable to ask: “how can anybody make any money?” On the vendor side, there are plenty of profitable business models that include services and support programs, value-added software development, and vendor-specific interfaces and testing for system integration. The recent headline exits for Red Hat and others testify to the potential value of the open source business model.

But I like to look at open source business models more from the engineering-cost side of the balance sheet. In other words: how much would it cost your organization, in engineering effort, to develop the open source platform that you are using? Looking at open source in this way sheds a different light on the myth.

In Summary

Open source has a lot to offer network providers, vendors and the industry in general in terms of speeding up the development and deployment of new technologies such as 5G. In fact, open source may be the only good mechanism we now have to develop the platforms required for these next-generation networks.

So we at Lumina Networks encourage looking at open source from the developer’s point of view and at the benefits it can bring from that angle. Whether you are a developer or a consumer of networking software, get involved in the open source community and become a contributor to the emerging platforms that are going to carry us all into the future. And most definitely, don’t be afraid!

Getting Started Upstream in ODL

By Allan Clarke and Anil Vishnoi



Getting started as an upstream contributor to OpenDaylight is not easy. The controller ecosystem is big, there are many projects, and there are millions of lines of code. What is a new ODL developer to do? Here is some pragmatic advice on where to begin to become an active contributor.

Fix Bugs

One of the easiest ways to get to know a code base is to start fixing bugs. Peruse the ODL bugs list on Bugzilla, say for the NETCONF project. You want to find bugs that aren’t likely being worked on and that are of limited scope (to match your limited understanding of the project). Ideally, an assigned owner indicates that a bug is actively being worked on, but in practice it is not always a reliable signal. In particular, someone may run across a bug, file a report, then jump straight into fixing it, forgetting to assign it to themselves. This happens most often with regular project contributors, so figure out who the contributors are and look at the date of the report. If the reporter is a project contributor and the report is recent, that bug might already be in progress. Read through the report and try to gauge how much domain knowledge is needed; as a newbie, smaller is better.

Once you have selected a bug to work on, click on the “take” link. Also add a comment to the bug. If someone already is working on it, they should get a notice and respond. You can also try the ODL mailing lists and give notice there. You mainly want to avoid duplicate work, of course.

Review Patches

Reviewing patches is a great way to contribute. You can access patches via Gerrit, and we’ll use the NETCONF patches as an example. Doing code reviews is a great way to not only see existing code but also to interact with other developers.

  • If you have some domain expertise and know the code, you can review the functionality that is being pushed.
  • If you have neither of these, you can do the review based on Java best practices and good software engineering practice.

Address Technical Debt

ODL uses Sonar for analytics of the upstream project. Here is an example for the NETCONF issues. Note that the ODL project has coding conventions, and SonarQube has some best practices. This list shows violations that should be addressed. As a newbie, you can work on these with little domain knowledge required. You can also see that code coverage varies across the NETCONF project, so adding NETCONF unit tests to boost coverage in the weakest areas would be very helpful.

Sonar has a lot of interesting metrics; you can explore some of them starting here, including coverage, tech debt, etc. If you look at the Sonar dashboard, it will point out a lot of available work that does not require a large investment of time. Doing some of this work is a great step towards getting your first patch submitted.

Follow Best Practices

With well over a million lines of code and many contributors from many companies, the ODL code base is quite large. To manage the code entropy, ODL has some best practices that you should become familiar with. These cover a diverse set of topics, including coding practices, debugging, project setup and workflow. We strongly recommend that you read them carefully; they will save you a lot of time, pay back your investment quickly, and help you skate through code reviews. These practices are time-tested advice from all the ODL developers, so don’t ignore them.

Support Attribution

Attribution provides an important insight into most, if not all, open source projects. It allows stakeholders to see who is contributing what, from the individual up through sponsoring companies, and it gives both a historical and a current view of the project. You can see an example of why attribution is illuminating here. You will need to sign up for an ODL account, and part of that process is to associate yourself with a company (if applicable). You can also see breakdowns by author on the ODL Spectrometer.

That’s all for now. Happy trails, newbie.

Watch for Allan’s blog next week where he will share his Top 10 learnings as a new developer contributing to ODL.

Service Providers Are Placing Big Bets on Open Source Software Networking – Should You?

The service provider market is undergoing earth-shaking changes. These changes impact the way that applications for consumers and business users are deployed in the network and cloud as well as the way that the underlying data transport networks are built.

At Lumina, we’ve had the chance to work with several large service providers on their software transformation initiatives and get an up-close look at what works and what doesn’t. Three factors are particularly favorable in setting up successful projects for frictionless progress from the lab through prototype and proof of concept and into the production network.

Top-Down Advantage

Our first observation is that top-down initiatives and leadership work better than bottom-up or “grass roots” approaches. The experience of AT&T strongly illustrates the advantage. While a few of the hyperscale cloud providers had already launched open source initiatives and projects, the first big move among the established service providers was AT&T’s Domain 2.0, led by John Donovan in 2013. Domain 2.0 was not a precise description of everything that AT&T wanted to do, but through that initiative, the leadership created an environment where transformative projects are embraced and resistance to change is pushed aside.

While a lack of top-down support is not a showstopper, having it is extremely helpful for getting past obstacles and overcoming organizational resistance to change. If top-down support in your organization is lacking or weak, it is worth the effort to influence and educate your executives. In engaging executives, focus on the business value of open software networking. The advantages of open source in software networks include eliminating lock-in and spurring innovation. As our CEO, Andrew Coward, wrote in his recent blog, Why Lumina Networks? Why Now?: “Those who build their own solutions—using off-the-shelf components married to unique in-house developed functionality—build in the agility and options for difference that are necessary to stay ahead.”

Although it may make for a slower start, from what we have seen, taking the time to run early PoCs to build executive support, so that executives deeply attach to the value, is time well spent. Sometimes a slow start is just what is needed to move fast.

Collaboration through Open Source

The second observation is that industry collaboration can work. I read an interesting comment by Radhika Venkatraman, senior vice president and CIO of network and technology at Verizon, in her interview with SDxCentral. She said, “We are trying to learn from the Facebooks and Googles about how they did this.” One of the best ways to collaborate with other thought leaders in the industry is to join forces within the developer community at open source projects. The Linux Foundation’s OpenDaylight Project includes strong participation from both the vendor community and global service providers including AT&T, Alibaba Group, Baidu, China Mobile, Comcast and Tencent. Tencent, for one, has over 500 million subscribers that utilize their OpenDaylight infrastructure, and they are contributing back to the community as are many others.

A great recent example of industry collaboration is the newly announced ONAP (Open Network Automation Platform) project, which has roots in work done by AT&T, China Mobile and others. And now we have a thriving open source developer community of engineers and innovators who may not necessarily have collaborated in the past.

These participants see the benefits of collaboration not only in accelerating innovation but also in running the software across many different types of environments and use cases, which increases reliability. Providers recognize that in their transformation to software networks there is much they can do together to drive the technology, while standing out from each other through how they define and deliver services and the experiences they create for customers.

What about your organization? Do you engage in the OpenDaylight community? Have you explored how ONAP can help you? Do you use OpenStack in your production network? And importantly, do you engage in the discussions and share back what you learn and develop?

Pursuit of Innovation

A third observation is the growing ability of service providers to create and innovate at levels not seen before. A prime example is the work done by CenturyLink to develop the Central Office Re-architected as a Datacenter (CORD) platform to deliver DSL services running on OpenDaylight. CenturyLink used internal software development teams along with open source and Agile approaches to create and deploy CORD as part of a broad software transformation initiative.

One might have thought that you would only see this level of innovation at Google, Facebook or AWS, but at Lumina we are seeing this as an industry-wide trend. The customer base, business model, and operations of service providers vary widely from one to another based on their historical strengths and legacy investment. All have an opportunity to innovate in a way that advances their particular differences and competitive advantages.

Closing Thoughts

So we encourage you to get on the bandwagon! Don’t stand on the sidelines. Leadership, collaboration and innovation are the ingredients you need to help your organization drive the software transformation required to stay competitive. There is no other choice.

Stay tuned for my next blog where we will discuss some of the specifics of the advantages, development and innovation using open source.

Why Lumina Networks? Why Now?

The elusive promise of open software networking

Software promised to eat the world, but networking has proved a little harder to digest. While applications and data centers have experienced dramatic and visible change, the network itself has remained remarkably stubborn to virtualization and automation.

Network vendors sold magic pills that promised superpowers: matching the hyperscale providers in skills, technology and speed. Placebos, of course, take time to be discovered, and the promised agility remained elusive.

Meanwhile, a quiet revolution brewed at providers. In backrooms and labs, on weekends and in free moments, passionate, committed renegades have been working on the technology to bring transformational change to their networks. These teams realize that change has to come from within and cannot be outsourced. No magic pill can replace the hard work that all lasting change requires. Their call-to-arms is open source, virtualization, and automation. They take inspiration from the methods of hyperscale providers but must apply technology pragmatically to both their new and old infrastructure.

Escape velocity

In these labs, all of the right ingredients—open source software, agile technologies, white boxes, abstraction models and programming skills—are now proving themselves in performance and in features.  So what is stopping these solutions from escaping the lab and being deployed in production networks? I believe the missing ingredient is a catalyst.

By definition, a catalyst causes ingredients to react, to bond, and to change the form of elements around them. Such a catalyst is needed to bring the new into the existing network, to integrate with what is there, and to bond the benefits from the greenfield to the brownfield. And perhaps most importantly, to furnish expertise, support and enablement to the internal teams charged with making it all happen.

This change from within has reached a critical moment. Providers must now choose technology differentiation from within, or wait for the market to deliver turn-key solutions. If differentiating now carries risk, waiting to be usurped by others surely carries more. Those who think competitively realize that turn-key is a zero-sum game: everyone gets essentially the same solution at the same time, in a world where time is now the greatest competitive force setting apart the winners from the losers. Those who build their own solutions—using off-the-shelf components married to unique in-house developed functionality—build in the agility and options for difference that are necessary to stay ahead. Few would argue that differentiation in the digital world demands control of your own destiny in how you develop and deploy technology.

Changing together with each other

At the center of every transformational change are heroes. These heroes form one part of the change, their network and their organization the other. Yet, these heroes often have to work in conflict with their organizations that have ingrained ways of working.

It’s not as if these heroes have not been offered ‘help’ along the way. Traditional networking vendors and integrators have fallen over themselves to sign providers up for transformational change. But the results have been lackluster, at best.

When you put the fox in charge of the hen house, it eats time and money, then leaves a trail of legacy software and hardware behind. No, the challenge is that the change must come from within—meaning that providers must be responsible for their own journey and cannot simply outsource the work to the cheapest or shiniest bid that comes in from the outside. Maybe I’m being too hard on the fox. The reality in many cases is that the fox does want to change, does want to adapt, but isn’t really in a position to effect change because, well, it still loves to eat chickens.

As an alternative, many in the networking community have spent the last three years working out how to build software networks out of open components. The thinking is that if you can build your software network with open source, where the possibility to change vendors is always available, these vendors will continue to work extremely hard on your behalf, to stay at the top of their game, and to do what is best with you.

And so, I’ve come to believe that what heroes need is a catalyst that will work with them and their organization. The big idea is to deliver projects with them, not to them or for them. Working together, from within, the change is genuine and sustaining, shaped to the specifics of the organization. I believe that with a catalyst, teams can achieve self-sufficiency and resolve the indigestion now afflicting their networks.

Being a part of the change

I am most fortunate to have formed, with a mighty team of similarly-minded challengers, our own company, which we called Lumina Networks. At Lumina Networks, we are embracing the ethos of the catalyst: one that works with our customers, our partners and our developer communities. We are focused on aiding transformational change in providers, without reservation or conflict from other considerations (e.g. hardware), working with our customers to make change possible, one small advance in capability, one shift in mindset, one step at a time.

For the past three years, as part of Brocade, we’ve been able to learn much about executing this new way of business. Our development engineers and NetDev Services team have been working closely with the world’s largest providers, building open technologies and integrating them into large, complex production networks. We’ve been there, in the room, working through the gnarly challenges of what it really takes to change networks. We’ve been there as customers consider what it means not only for their architectures and technology, but also for those who must operate the networks reliably, securely and at carrier scale, and for those who must figure out how all of this works with the back-office delivery systems.

We see the industry evolving quickly in how it collaborates, through the vehicle of the open source community, to build the best of what’s possible. Providers know there is much they must do together in developing the technology, while standing out through how they define and deliver services and the experiences they create.

Lumina Networks Principles

As we set out, we wanted a set of principles for Lumina Networks to guide us and represent our “true north”. These principles are simple, yet they govern difficult decisions, balance tradeoffs and help us avoid pitfalls.

  1. Open source first. Work with engineering communities to develop platforms that solve industry-wide problems.
  2. Pure open source network products. No forks of community code and no proprietary extensions that create lock-in; products are packaged for specific use cases and ease of use, and commercially supported for reliability. To ensure 100% compatibility, we upstream our enhancements and fixes.
  3. Network products as swappable components. Open interfaces and software elements focused on a small, well-defined set of clear functions to provide ease-of-integration, based on standard models.
  4. Integration with third parties. Products readily combine with those from other companies, including the lengthy list of legacy technologies and interfaces now embedded deeply in provider networks. Meet the needs of today and tomorrow without needing to rip out everything that providers have today.
  5. Platform approach. Components combined to create extensible solutions for infrastructure and customer services. Not only does a platform approach ease change in the future, it enables an incremental approach today to deliver immediate, achievable improvement.
  6. Designed for serviceability. Embracing the DevOps and Site Reliability Engineering movements, operations and support teams and their requirements are brought to the front of the design and development process. Networks shake off their past and deliver the holy grail: increased reliability in conjunction with increased agility.
  7. NetDev Services. Engineers leading the Agile development and deployment of new software technologies, both networking and operations software, must be willing and able to work in joint teams to pass on knowledge and skills, tools and practices that lead providers to self-sufficiency.
  8. Open business relationships. Licenses and contracts that define how vendors and providers work together must change so that developers have full ownership of their innovation, customers have full control over their priorities for change, and well-defined outcomes leave the flexibility to adjust details as teams learn.

We see our job to be the catalyst in bringing open software out of the lab and into the live network. These are the principles that guide us in working with providers and in building the open software networks of the future.

If these principles resonate with you, and if you have a software networking project in the lab that you are ready to move to the live network, reach out. Join our movement and get out of the lab today!

The Robot Uprising – A Tale of Automated Testing

Ah, testing. It’s one of those love/hate relationships. I see writing tests like brushing your teeth: it’s just something you need to do, because if you don’t, you’ll regret it in the long term when everything goes wrong and it costs you big time to fix. And just as maintaining your dental hygiene can be tedious, so can test automation. But alas, there are tools out there to make life a bit easier, and choosing the right one can save you a lot of time, money and effort.

Now, as you, my astute readers, might know, there are many different types of testing. In fact, this website lists over 100 different types! That’s a few too many for my liking, but hey, it’ll hopefully keep us employed during the robot uprising, given that the robots will need to be tested as well. Speaking of robots, this blog is primarily about the Robot Framework and how it has done wonders for at least one of those 100 types of testing: acceptance testing. And really, what else is more important than your customer accepting your solution and handing over your hard-earned money?

Here comes the obligatory “what is the Robot Framework?” section. The Robot Framework is used for acceptance test driven development (ATDD). What that basically means is that you have promised your solution can do X, Y and Z, but you may or may not have developed the functionality yet. Using Robot’s keyword- and data-driven test automation capabilities, you write test suites and test cases that invoke REST APIs, run CLI commands over SSH, click things on a web page, and basically prove that your solution does what you promised it would do. So when you run the entire test suite and it comes out all green, you can comfortably look your customer in the eye and say “show me the money!”.

Okay great, now that we know what the Robot Framework is and why it’s useful, how do we use it? The best place to start is the quick start article, but we will cover the basic concepts here. First things first, we need to install it. Installing Robot is simple if you have Python and pip installed: simply run pip install robotframework. You will then be able to run robot --help and see all the wonderful different ways you can run your tests (see executing test cases for more information).

I find examples helpful when trying to get concepts across, so for the remainder of this blog, let’s pretend our company Asimov General Appliances is creating a REST API-driven toaster. This toaster is very much like your printer: you load a loaf of bread in at the beginning of the week and use an app on your smartphone to create toast on demand. Heck, let’s add Siri and scheduling integration as well. Now we as testers have been commissioned to write automated tests using the Robot Framework to ensure this amazing new toaster does what the marketing and sales departments say it does, and it will be up to the developers to implement it. Voila, you have acceptance test driven development!

The bulk of the action happens within a test case, which can look a little like this:

*** Test Cases ***
Toaster can make toast    # Test case name; it represents a use case we are trying to prove works
    [Documentation]    Tests if toaster can make eatable toast, assumes fresh bread is in spooler
    Feed Bread In To Toaster    # A keyword with no argument that does some action
    Start Toasting
    Wait For Toast    30    # Another keyword, this time with an argument, saying let's wait 30 seconds
    Stop Toasting
    ${toast}=    Eject Toast    # A keyword that returns a value we can assert against some criteria
    Should Be Toasted And Eatable    ${toast}    # A keyword doing an assertion, using the toast variable we received from the previous keyword

This is an example of a workflow-driven test, the workflow being a particular way your solution might be used that meets some acceptance criteria. You might be thinking this is some sort of witchcraft, writing such plain English and having some toast actually be made. So let me explain what is happening under the hood.

Ultimately, the keywords are where the magic takes place. Keywords come from two places: libraries, or the keywords we as testers write ourselves, also known as user keywords. Library keywords are implemented in standard programming languages, typically Python. User keywords are higher-level keywords that encapsulate one or more other keywords; they are typically defined within the test suite (a collection of test cases) or in a resource file (a collection of keywords). In our example, all of the keywords are user keywords that in turn use library keywords to make REST API calls to the toaster to perform the tasks. If any keyword fails, the test case fails, and in the resulting report and log file this can be inspected and understood by both non-technical and technical personnel, allowing bugs to be quickly identified.
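For a concrete (and entirely hypothetical) illustration, a Python library backing our toaster keywords might look something like the sketch below. The class, method names and endpoint are invented to mirror the test case above; a real library would issue actual REST calls (for example with the requests package) rather than tracking state locally.

```python
# ToasterLibrary.py -- a minimal sketch of a Robot Framework keyword
# library. Everything here (names, endpoint, payloads) is hypothetical.

class ToasterLibrary:
    """Each public method becomes a Robot keyword: the method
    feed_bread_in_to_toaster is callable from a test case as
    `Feed Bread In To Toaster`."""

    ROBOT_LIBRARY_SCOPE = "SUITE"  # one instance shared across the suite

    def __init__(self, base_url="http://toaster.local/api/v1"):
        self.base_url = base_url  # hypothetical toaster API endpoint
        self._bread_loaded = 0
        self._toasting = False

    def feed_bread_in_to_toaster(self):
        # A real implementation might POST to f"{self.base_url}/bread";
        # here we just track state so the sketch stays self-contained.
        self._bread_loaded += 1

    def start_toasting(self):
        if self._bread_loaded == 0:
            raise AssertionError("Cannot toast: no bread loaded")
        self._toasting = True

    def stop_toasting(self):
        self._toasting = False

    def eject_toast(self):
        # Keywords can return values; Robot assigns this one to ${toast}.
        if self._bread_loaded == 0:
            raise AssertionError("Nothing to eject")
        self._bread_loaded -= 1
        return {"toasted": True, "eatable": True}

    def should_be_toasted_and_eatable(self, toast):
        # Robot treats a raised AssertionError as a keyword failure,
        # which fails the test case.
        assert toast["toasted"] and toast["eatable"], "Toast failed inspection"
```

A test suite would then load it with Library    ToasterLibrary.py in its *** Settings *** table, and each public method becomes available as a keyword.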

The other important aspect of the Robot Framework to grok is the use of variables. Any test case will be subject to change, such as the time we want to wait for toast to be completed, or the number of concurrent toasts we want to make. By using variables, we can write a test once and make it flexible enough to cover a range of scenarios. More information on variables can be found here.
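As a quick sketch (the variable names here are invented for our toaster example), a *** Variables *** table lets values like the wait time be defined once and overridden at run time, for example with robot --variable TOAST_WAIT:60 tests/:

```
*** Variables ***
${TOAST_WAIT}     30    # default seconds to wait for toast
${TOAST_COUNT}    2     # number of concurrent toasts to request

*** Test Cases ***
Toaster can make toast with a custom wait
    Feed Bread In To Toaster
    Start Toasting
    Wait For Toast    ${TOAST_WAIT}
```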

Now to sprinkle in some words of wisdom I have garnered from using the Robot Framework.

  • Follow a style guide – A style guide gives your tests structure, and you can even enforce a style with a linter such as robotframework-lint. This guide is a good starting point for forming your own.
  • Prefix all keywords with the library or resource they originate from – This helps your team members know where user keywords come from (they could be in the test case file, a library or a resource), making them easier to understand and troubleshoot.
  • Be consistent with naming and delimiters – Robot uses a tabular syntax separated by two or more spaces, or the pipe character. There’s nothing worse than seeing both mixed, or two spaces in some places and four or more in others. Stick to one and enforce it.
  • Simple folder structure – There is no single best way to organise your Robot Framework test suite, but I would follow something similar to this:

├── libraries
│   └──
├── resources
│   └── toaster.robot
├── tests
│   └── api
│       ├── 00_authentication.robot
│       ├── 01_temperature_control.robot
│       └── __init__.robot
└── variables

I hope that this short blog on the Robot Framework and ATDD has made getting started with the framework a little less daunting. It really is an easy-to-use and valuable tool. The existing libraries help you automate many of the tasks you would otherwise do manually, and if you don’t find one that does what you need, it is easy to create your own using Python or Java. The main thing is to stay organised: use a convention and stick to it, as it makes collaboration and troubleshooting a lot easier.
Well that’s all folks, happy testing!

Originally Published on the [] on 5/20/17

My Experience at OpenStack Summit Barcelona 2016

When my colleague Jon Castro told me early this year that the next OpenStack summit was going to be in Barcelona and that we should go, I thought: yeah right, keep dreaming! How could we possibly land a valid excuse to travel halfway across the world to eat paella and drink sangria (oh, and attend some talks, of course 😃)? So we submitted an abstract for the work we had done with NTT West around the OpenDaylight controller and OpenStack, and thankfully we were selected!

This being my first OpenStack summit, I had no preconceived notions as to what to expect (so hopefully this recap is bias-free). If you’re a fan of TL;DRs like I am, here is mine: lots of NFV, SFC and orchestration talk; OpenStack is no longer a science project but an enterprise solution; and Barcelona is good fun. Now into the details.

My Talk
As I was at the summit to present, I thought it best to go over this topic first. My talk focused on a project we did with NTT West around providing programmatic/API access to a traditional firewall that could only be configured via CLI, and then driving this API through a new Horizon dashboard. The end goal was that an operator (tenant) could be assigned a firewall and administer the device without being experienced in the specifics of its command line interface. If this sounds interesting, you can watch the presentation on YouTube to get a better understanding of how we achieved this.

NFV Stuff
Just by looking at the summit schedule it is clear that NFV is a key use case for many companies using and contributing to OpenStack. There were 24 presentations and/or demos focused on NFV, diving into particular aspects such as orchestration and service function chaining (SFC), and use cases such as the virtual evolved packet core (vEPC). I can’t talk about NFV without mentioning OPNFV (Open Platform for NFV), a reference NFV platform built by the open source community under the Linux Foundation. OPNFV is primarily a testing and integration project that brings together all the bits and pieces (OpenStack, OpenDaylight, OVS, DPDK, etc.) you would need to launch an NFV-ready cloud platform, thoroughly tested, automated and put through a CI/CD pipeline, all packaged in a nice installer (or several) with documentation. You can learn more about OPNFV by visiting the project site. One OPNFV presentation at the summit highlighted the advancements in Neutron that benefit NFV use cases, such as VLAN-aware VMs, and also showed the current shortfalls of Neutron and how other networking solutions such as OpenDaylight can play nicely with Neutron to address them.

SFC Stuff
SFC was another hot topic; a couple of talks showcased the benefits of using SFC and how it is being implemented in OpenStack. One of those presentations showed how traffic can be redirected based on a classification, for example to optimise IPSec or video traffic. This is achieved using Network Service Headers (NSH), an SFC encapsulation protocol that is still an IETF draft but quickly becoming reality; some aspects can already be used in the Mitaka release of OpenStack.

OVN Stuff
Other great network-centric talks included the OVN presentation, which showed some powerful features if you're operating an OpenStack cloud using OVS, with OVN performing the L2 and L3 functions (the networking-ovn plugin). OVN allows for extended OVS scaling (distributed DHCP on the OVS agent), and L3 performance becomes on par with L2 performance through flow caching and pre-calculation of network hops. A new debugging tool called ovn-trace allows for "what-if" analysis on packet classifications, so you can see how a packet will traverse the network and the flow table(s). The presenters also spoke about the BPF datapath, which provides a sandboxed environment in the Linux kernel that lets new functionality be inserted at runtime without writing new kernel modules, which are a headache not only to write but to maintain and support across various Linux distributions. This means new network and tunnelling protocols developed for a particular use case can be created and potentially be portable across Linux distributions.

Orchestration Stuff
Orchestration was also a big topic at the summit, with many presentations from the Tacker team, Cloudify and others, all of whom seem to be converging on TOSCA as the modelling language for NFV services. I recommend the following presentations for those interested in all things orchestration:

Other Stuff and Conclusion
Stepping back from the technical aspects of the summit and reading between the lines, it's clear that OpenStack has really matured. Distributors such as Mirantis, Ubuntu and Red Hat have gone to great lengths to ease the pain of installation through projects such as Ironic, OpenStack Ansible, Fuel and Packstack. The CI/CD system in place for OpenStack also means that code pushed upstream is properly reviewed (through Gerrit), and that tests are written (unit and integration) and run automatically by the CI system (Jenkins). The result of this process is a stable system made up of many different components with hundreds of contributors from around the world, quite an achievement in itself. I believe the maturity and stability of the product is driving more adoption by telcos, who generally set a high bar for production-ready software, and that can only be a good sign for the project.

Last but not least, the city of Barcelona is an amazing place, and the perfect setting to catch up with colleagues and meet new ones over some fantastic food, drinks and laughs. 10/10, would do it again.

Originally Published on the [] website on 4/27/17



Structify Text

Structures semi-structured text, useful when parsing command line output from networking devices.

What is it

If you're reading this, you've probably been tasked with programmatically retrieving information from a CLI-driven device, and you've got to the point where you have a nice string of text and say to yourself, "wow, I wish it just returned something structured that I could deal with, like JSON or some other key/value format".

Well, that's where structifytext tries to help. It lets you define the payload you wish came back to you, and with a sprinkle of the right regular expressions, it does!


Installation

With pip:

pip install structifytext

From source:

make install


Usage

Pass your text and a "structure" (a Python dictionary) to the parser module's parse method.

from structifytext import parser

output = """
eth0      Link encap:Ethernet  HWaddr 00:11:22:3a:c4:ac
          inet addr:  Bcast:  Mask:
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:147142475 errors:0 dropped:293854 overruns:0 frame:0
          TX packets:136237118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:17793317674 (17.7 GB)  TX bytes:46525697959 (46.5 GB)

eth1      Link encap:Ethernet  HWaddr 00:11:33:4a:c8:ad
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::225:90ff:fe4a:c8ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:51085118 errors:0 dropped:251 overruns:0 frame:0
          TX packets:3447162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4999277179 (4.9 GB)  TX bytes:657283496 (657.2 MB)
"""

struct = {
    'interfaces': [{
        'id': '(eth\d{1,2})',
        'ipv4_address': 'inet addr:(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})',
        'mac_address': 'HWaddr\s((?:[a-fA-F0-9]{2}[:|\-]?){6})'
    }]
}

parsed = parser.parse(output, struct)
print(parsed)
This will return the Python dictionary:

{
    'interfaces': [{
        'id': 'eth0',
        'ipv4_address': '',
        'mac_address': '00:11:22:3a:c4:ac'
    }, {
        'id': 'eth1',
        'ipv4_address': '',
        'mac_address': '00:11:33:4a:c8:ad'
    }]
}

Which you can then do with as you please, maybe return as JSON as part of a REST service…
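For instance, serialising the result for a REST response is a one-liner with the standard library. This sketch hard-codes a `parsed` dictionary in the shape shown above rather than calling the library:

```python
import json

# Hypothetical parsed result, shaped like the example output above
parsed = {
    'interfaces': [
        {'id': 'eth0', 'mac_address': '00:11:22:3a:c4:ac'},
        {'id': 'eth1', 'mac_address': '00:11:33:4a:c8:ad'},
    ]
}

# Serialise to JSON, e.g. as a REST response body
body = json.dumps(parsed, indent=2)
print(body)
```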

The Struct

A struct (or structure, or payload, or whatever you like to call it) is just a dictionary that resembles what you wish to get back.

The values are either a dictionary { }, a list [ ], or a regular expression string such as [a-z](\d) with one capture group (used to populate the value).
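The one-capture-group rule can be illustrated with plain re (this is just the underlying regex idea, not the library itself; the sample line is taken from the ifconfig output above):

```python
import re

# A struct's regex value has one capture group; the group's match
# becomes the value in the output dictionary.
line = 'UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1'
match = re.search(r'MTU:(\d+)', line)
value = match.group(1) if match else None
print(value)  # '1500'
```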

The structure is recursively parsed, populating the dictionary/structure that was provided with values from the input text.

Quite often, similar sections of semi-structured text are repeated in the text you are trying to parse.

To parse these sections of text, we define a dictionary with a key of either id or block_start; the difference is that the block_start key/value is dropped from the resulting output.

This id or block_start marks the beginning and end for each “chunk” that you’d like parsed.

You can forcefully mark the end of a “chunk” by specifying a block_end key and regex value.

An example is useful here. The following structure:

{
    'tables': [{
        'id': '\[TABLE (\d{1,2})\]',
        'flows': [{
            'id': '\[FLOW_ID(\d+)\]',
            'info': 'info\s+=\s+(.*)'
        }]
    }]
}

Will create a "chunk/block" from the following output:

[TABLE 0] Total entries: 3
[FLOW_ID1]
    info = related to table 0 flow 1
[TABLE 1] Total entries: 31
[FLOW_ID1]
    info = related to table 1 flow 1

That will be parsed as:

{
    'tables': [{
        'id': '0',
        'flows': [{ 'id': '1', 'info': 'related to table 0 flow 1' }]
    }, {
        'id': '1',
        'flows': [{ 'id': '1', 'info': 'related to table 1 flow 1' }]
    }]
}
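The chunking idea behind the id/block_start marker can be sketched in a few lines of plain Python. This is illustrative only (an assumed helper name, not the library's actual internals): each line matching the marker regex starts a new chunk, and the next match implicitly ends it.

```python
import re

def split_into_chunks(text, marker_regex):
    """Split text into chunks, each beginning at a line that matches marker_regex."""
    chunks, current = [], None
    for line in text.splitlines():
        if re.search(marker_regex, line):
            if current is not None:
                chunks.append(current)   # previous chunk ends here
            current = [line]             # new chunk starts at the marker line
        elif current is not None:
            current.append(line)         # lines before the first marker are ignored
    if current is not None:
        chunks.append(current)
    return ['\n'.join(c) for c in chunks]

sample = """[TABLE 0] Total entries: 3
    info = related to table 0 flow 1
[TABLE 1] Total entries: 31
    info = related to table 1 flow 1"""

chunks = split_into_chunks(sample, r'\[TABLE (\d{1,2})\]')
print(len(chunks))  # 2 chunks, one per [TABLE n] marker
```

Each chunk is then matched against the nested part of the struct, which is how repeated sections of semi-structured text end up as repeated dictionaries in the output.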

From Open Source to Product; A Look Inside the Sausage Making Factory

I’ve spent the last few months working closely with the OpenDaylight and OpenStack developer teams here at Brocade and I’ve gained a heightened appreciation for how hard it is to turn a giant pile of source code from an open source project into something that customers can deploy and rely on.

Kevin Woods

Not to criticize open source in any way – it's a great thing. These new open source projects in the networking industry, such as OpenDaylight, OPNFV and OpenStack, are going to do great things to advance networking technology.

No, it’s just the day to day grind of delivering a real product that challenges our team every day.

On any given day, when we try to build the code, we'll get new random errors, and in many cases it's not immediately obvious where the problem is. In another test we'll hit unexpected compatibility problems between different controller elements. Again, somebody made a change and you can't trace the problem. On some days, certain features stop working for no known reason. Because of all this, we need to continuously update and revise test automation and test plans – and that, too, is done daily.

When it comes to debugging a problem, unless you're working with the source code and regularly navigating it to find problems, diagnosis is difficult. Some of the controller internals are extremely complex, for example the MD-SAL. Digging into that to make either enhancements or fixes is not for the faint of heart.

The OpenDaylight controller is actually several projects that must be built separately and then loaded via Karaf.  This can be non-intuitive.

Another area of complexity is managing your own development train. If you're going to ship a non-forked controller that stays very close to the open source, you cannot risk being totally dependent on the project (for the above reasons and others), so you basically have to manage a parallel development thread. At the same time, you find problems, or want to make minor enhancements that you need in service but cannot contribute back to the project immediately (that takes some review and time). So you're left with this problem of branching and re-converging all the time. Balancing the pace of this with the project's pace is a challenge every day.

Then there's all the maintenance associated with managing our own development thread: supporting internal teams, maintaining and fixing the documentation, and so on. Contributing or committing code back to the project, when needed, is not a slam dunk either. There is a commit-and-review process for that, and it takes some time and effort.

I think we'll find the quality of the new Helium release to be significantly better than Hydrogen's. Lithium will no doubt be an improvement over Helium, and so on. The number of features and capabilities of the controller will also increase rapidly.

But after going through this product development effort over the last few months, I have a real appreciation for the value that a commercial distribution can bring. And that's just for the controller itself – what about support, training and so on? Well, I'll leave those things for another blog.

Originally Published on the Brocade Community on 10/9/2014
