Investing in Open Source Business Models

Interest in evaluating and investing in open source startups is on the rise again after a dip in the past couple of years. There is a huge shift toward using open source platforms, particularly in the networking space. New open source initiatives such as ONAP (Open Network Automation Platform), OSM (Open Source MANO) and OpenConfig are being driven by the service provider community and large enterprises. As a young growing company focused on open source for networking with a unique approach (that tier-1 service providers appreciate), we are invited to many network transformation discussions with industry giants. Because of this role in the ecosystem, we can see relevant market trends before most. Based on what we’re hearing from across the aisle, we believe now is the time that investors should look closely at opportunities to drive and benefit from the emerging wave in this space.

It would be easy to conclude that this trend is just the industry’s way to get cheaper software and shift profits from the large vendors (like Cisco and Juniper) to the network buyers. But that would be an oversimplification of the opportunity. Lowering CAPEX and OPEX is surely in play. However, equally important are:

  • The need for better network optimization and automation tools
  • The requirement to move faster than the traditional hardware-based networking vendors can, and
  • The desire to co-innovate and take more control of the tools

These factors are driving the open source trend just as much, if not more than the cost factors. Wise investors should look at these drivers as opportunities. Companies that can deliver these benefits using open source software stand to gain significantly.

All of this raises questions about the business model of open source. Conventional wisdom goes something like: “Red Hat is the only company that has been successful at monetizing open source.” Nothing could be further from the truth. Many companies have been successful using open source as the centerpiece of their product strategies. Elastic, GitHub, Cloudera and MuleSoft are recent examples of open source-based businesses. In the past several years, MySQL, XenSource, SpringSource, Zimbra, JBoss and SUSE have all had very successful exits while building a business on open source platforms.

There are many different types of open source products and services that have proven to be valuable to customers and revenue-generating for vendors. Vendors that provide these types of services on an unchanged open source base are referred to as “pure-play” vendors. The intention of pure-play vendors is to ensure that customers reap the full benefit of utilizing open source software, including no vendor lock-in. Let’s take a look at these pure-play offerings:

  1. Product support for the open source code base – Examples of this include defect resolution, testing, continuous integration, extended support, upgrade and migration support and open source community advocacy.  All of these are valuable services not typically provided by open source communities.
  2. Applications running on an open source platform – These applications perform specific use cases or user-facing functions. Typical applications perform zero-touch configuration, overlay/underlay network control, traffic engineering, planning, configuration management, inventory, analytics, service assurance, fault remediation or policy control. Narrowly-focused, these applications can be of great value to service providers and of even greater benefit when combined with other open source platforms.
  3. Device or systems integration – There’s a lot of value and ongoing benefit in assuring that the open source platform’s interfaces to various devices continue to operate across revisions, as well as in support for new network devices. It’s also no secret that deploying open source platforms is complicated. Working with experts in the space and in your chosen platform can not only accelerate integration and deployment times but can also be part of a knowledge transfer initiative.
  4. Custom software development –  All network operators have functions or services that they want to be specific to their environment or service. Vendors are in a very good position to provide custom software development because of their deep understanding of the open source platform.

Are there open questions about the open source business model for the networking space? Yes, absolutely there are yet-to-be-answered questions. But by the time we have all the answers, the best investment opportunities will have gone. So, now is the time to look at investing in open source networking – call us if you want to understand more about this space.

Want to learn more?

  • Hear what some of the industry’s thought leaders are saying on this topic – this Telecom TV video with VMware, Amdocs and Lumina Networks highlights the power and relevance of open source in a transformational network.
  • Learn more about relevant open source projects with Linux Foundation.
Neglected Factors in 5G: Network Slicing

It may seem odd to long-time networkers that “slicing” is discussed extensively in relation to 5G deployment. After all, haven’t we been using VLANs, VPNs, VRFs and a whole host of other ways to slice and dice networks for many years? Keep in mind that for established 4G networks, there has been no easy way to create logical segments or divide the network into granular slices, even in cases where the equipment may be capable of it. While legacy services hosted MBB, voice, SMS and even MVNOs on the same infrastructure, it was built in a way that was either physically discrete, channelized or rigid – not the way a packet network, and thus 5G, would do things. This approach is monolithic and will need to be updated for successful 5G deployments.

With packet networking, software-defined networking and network function virtualization coming into the network buildouts for 5G, network slicing is becoming an important part of service agility. The power of 5G is not just in higher-data rates, higher subscriber capacity and lower latencies – it is in the fact that services and logical networks can be orchestrated together. This is critical for deployment of connected car, IOT (Internet of Things), big data and sensor networks, emergency broadcast networks and all the amazing things that 5G will be able to support.

But there’s an often-overlooked element of the 5G rollout: slicing deployment over existing equipment. It is something that Lumina Networks is uniquely equipped to enable.

Most presentations that you will see on 5G (especially from the vendors) just assume that the provider will be buying all-new hardware and software for the 5G buildout. In reality, a lot of existing equipment will need to be used, especially components that are physically distributed and expensive or impossible to access. How will the new network slices traverse these legacy devices?

Network slicing will traverse from the mobile edge, continue through the mobile transport, including fronthaul (FH) and backhaul (BH) segments, and the slices will terminate within the packet core, probably within a data center.  Naturally, this will involve transport and packet networking equipment in both the backhaul and fronthaul network. The packet core will also likely involve existing equipment. These systems will rely on the BGP protocol at the L3 transport networking layer, even when they are newer platforms.

The 3GPP organization’s definition of a slice is “a composition of adequately configured network functions, network applications, and the underlying cloud infrastructure (physical, virtual or even emulated resources, RAN resources etc.), that are bundled together to meet the requirements of a specific use case, e.g., bandwidth, latency, processing, and resiliency, coupled with a business purpose.”

Given the number of elements involved in a slice, sophisticated cloud-based orchestration tools will be required. It’s noteworthy that many of the optical transport vendors have acquired orchestration tools companies to build these functions for their platforms. However, since the start of the Open Network Automation Project (ONAP) at the Linux Foundation, it is clear that the service providers will demand open-source based platforms for their orchestration tools. Rightfully so, an open solution to this problem reinforces operators’ desire to end vendor lock-in and enable more flexible, service-creation enabled networks.

The creation of a “slice” in a 5G network will often involve the instantiation of relevant and dedicated virtual network functions (VNFs) for the slices and this is a key aspect of the work going on in the ONAP project. VNFs, in addition to participating as connectivity elements within the slice, will provide important functions such as security, policies, analytics and many other capabilities.


The good news here is that established open source projects such as OpenDaylight have the control protocols that will be used for legacy equipment, such as NETCONF, CLI-CONF, BGP-LS and PCEP, as well as the newer protocols that will be used for virtual L3 slicing such as COE and OVSDB.

Some of the network slicing capabilities that these protocols enable are:

  • Supporting end-to-end QoS, including latency and throughput guarantees
  • Isolation from both the data plane and orchestration/management plane
  • Policies to assure service intent
  • Failure detection and remediation
  • Autonomic slice management and operation

ONAP utilizes the OpenDaylight controller as its “SDN-C”. More recently, ONAP has started a new project to develop an OpenDaylight-based SDN-R to configure radio transport services.

This blog series, “Neglected 5G Factors”, will address SDN-R in more depth in our next post. For now, be sure you’ve read the first of the series, “How SDN will Enable Brownfield Deployments.”


Top 10 Things For New OpenDaylight Developers

I recently started contributing upstream to the OpenDaylight project (ODL) as a developer. I mostly followed the ODL developer documentation on how to get started and how to use the tools. Through ambiguous documentation or through hubris (every developer always thinks they know what they are doing), I made some mistakes and had to learn some things the hard way. This article is an enumeration of some of those hard knocks, just in case it can help the next person following this path.

First, this list is more than ten items. It is just that “top 10” has a catchy sound to it that just isn’t there for “top 17”.

Gerrit Workflow

Gerrit has a different workflow than you are likely used to from downstream coding with other tools. There is a general tutorial and an ODL coding introduction to working with Gerrit that are very helpful.

Coding Conventions

We followed our own coding guidelines, and those did not match the Google coding conventions used upstream. Differences we ran into were:

  • Lower-case method names, even for JUnit test methods
  • Using the most restrictive visibility for access qualifiers (I won’t push back the philosophy of library design)
  • Using Java 8 lambdas everywhere possible

Gerrit Commands

Gerrit (plugin) has some non-obvious controls, namely that you can kick off another build by putting “recheck” as your comment. Others are “rerun” and “verify”, as documented here.


Work in Progress

Upstream coders usually add the prefix “WIP: ” to their commit message to let other developers know things are not ready for prime-time review yet. I have been putting the “WIP:” prefix as a new comment right after my new patch set.


Reviewing Diffs

Mechanically, you can review the issues by using the “Diff against:” drop list to pick a revision to start at, then go into the first file and use the upper-right arrows to move between files.

Broken Master

The master branch on ODL can and does break, causing lots of downtime with broken builds (days, even). Full builds take 10 hours, and verifying-before-pushing is an evolving process. Have patience.

Git Review

If you are using “git review” and you forgot the “--amend” option on your commit, you will be prompted to squash and will end up with a new branch. You can recover using “git rebase -i HEAD~2”, then pushing with the force option and abandoning the new branch.
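The recovery can be sketched end to end in a throwaway repo (the file names and commit messages below are invented for illustration; a non-interactive soft reset achieves the same squash as “git rebase -i HEAD~2”):

```shell
# Reproduce the situation: a second commit that should have been an --amend,
# then squash it back into the original commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo one > file.txt && git add file.txt
git commit -qm "BUG-1234: original patch"
echo two > file.txt
git commit -aqm "accidental second commit (forgot --amend)"
# Squash the accidental commit into the first, keeping the original message:
git reset --soft HEAD~1
git commit -q --amend --no-edit
git log --pretty=%s   # a single commit remains
```

After this, a force push (git review with the force option) replaces the stray revision upstream.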


Stale Artifacts

Along with master breaking, master can also produce stale artifacts, so don’t assume that clearing your m2 cache, creating a subproject repo and building will give you up-to-date code.

Searching Jars

You can search jar files using find . -type f -name '*.jar' -print0 | xargs -0 -I '{}' sh -c 'jar tf {} | grep "YourClassHere" && echo {}' to help verify your m2 has the latest.

Patch Workflow

The patch workflow is not very good for having multiple people working on an item; traditional git branches and management are superior in this respect, so expect some pain.

Broken Master Again

If your patch won’t build because master is broken, you can rebase it to pick up a fix temporarily. When you do a “git review” to push it, you will be prompted because there are two commits (your new patch revision and the fix). You do not want to merge that fix into your patch.

Skip Tests

There may be a bug here or there in dependencies, so you should always do a full build within a subproject the first time after you create the repo. In my case, I was in netconf and saw a difference between starting off with “-DskipTests” or not. The former led to a failed build, while starting with a full build (and then doing any number of skip-test builds) seemed to work.


Check Style

If you are a developer who works with meaningful coding standards, you will find yourself clashing with the pedantic nature of ODL’s use of Checkstyle. Although it probably varies from project to project, your code reviewer might decide that your code is the perfect opportunity to enforce Checkstyle.

Bug Report

Put the URL of the Gerrit change in “external references” on the bug report, and put the bug ID at the start of the commit message, like “BUG-8118: ….”
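For example, a commit created this way (the summary text is invented; BUG-8118 is just the ID used above):

```shell
# Create a commit whose subject starts with the bug ID so Gerrit,
# Bugzilla and the release notes can all cross-reference it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo fix > Fix.java && git add Fix.java
git commit -qm "BUG-8118: Fix session teardown in call-home SSH server"
git log -1 --pretty=%s
```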

Gerrit Patch Set

Any replies that you make to the patch set discussion are annotated with the current patch set. Be sure to move your “current patch set” by using the upper-right drop list, “Patch sets 8/8” (for example).

Easy Debug

You can do development on a project like netconf and start a controller with your code: git clone, mvn clean install, go to netconf/karaf/target/assembly/bin, run ./karaf, and at the console install your feature, like feature:install odl-netconf-callhome-ssh (and maybe odl-mdsal-apidocs). Voila!
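Spelled out step by step (the clone URL follows the usual ODL Gerrit layout and is an assumption; adjust for your project):

```shell
# Build netconf from source and run it inside its own karaf distribution.
git clone https://git.opendaylight.org/gerrit/netconf   # assumed URL
cd netconf
mvn clean install
cd karaf/target/assembly/bin
./karaf
# ...then, at the karaf console:
#   feature:install odl-netconf-callhome-ssh
#   feature:install odl-mdsal-apidocs
```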


Docs

If you work on the docs, then you need to know there are three kinds: contributor (on wiki), developer, and user (both on

Getting Started Upstream in ODL

By Allan Clarke and Anil Vishnoi



Getting started as an upstream contributor to OpenDaylight is not easy. The controller ecosystem is big, there are many projects, and there are millions of lines of code. What is a new ODL developer to do? Here is some pragmatic advice on where to begin to become an active contributor.

Fix Bugs

One of the easiest ways to get to know a code base is to start fixing bugs. Peruse the ODL bugs list on Bugzilla, say with the NETCONF project. You want to find bugs that aren’t likely being worked on and are of limited scope (to match your limited understanding of the project). Ideally bugs will have an owner assigned to indicate that they are actively being worked on, but that is not always a great indicator. In particular, someone may run across a bug, file a report, then jump into fixing it—and forget to assign it to themselves. This is most likely with the project contributors, so figure out who the project contributors are and look at the date of the report. If it was a project contributor and a newish date, then that bug might be being worked on. You should read through the report and try to decide how much domain knowledge is needed—as a newbie, smaller is better.

Once you have selected a bug to work on, click on the “take” link. Also add a comment to the bug. If someone already is working on it, they should get a notice and respond. You can also try the ODL mailing lists and give notice there. You mainly want to avoid duplicate work, of course.

Review Patches

Reviewing patches is a great way to contribute. You can access patches via Gerrit, and we’ll use the NETCONF patches as an example. Doing code reviews is a great way to not only see existing code but also to interact with other developers.

  • If you have some domain expertise and know the code, you can review the functionality that is being pushed.
  • If you have neither of these, you can do the review based on Java best practices and good software engineering practice.

Address Technical Debt

ODL uses Sonar for analytics of the upstream project. Here is an example for the NETCONF issues. Note that the ODL project has coding conventions, and SonarQube has some best practices. This list shows violations that should be addressed. As a newbie, you can work on these with little domain knowledge required. You can also see that the code coverage varies across NETCONF, so adding NETCONF unit tests to boost the coverage in the weakest areas would be very helpful.

Sonar has a lot of interesting metrics. You can explore some of them starting here including coverage, tech debt, etc. If you look at the Sonar dashboard, it will point out a lot of available work that does not require a large span of time to invest. Doing some of this work is a great step towards getting your first patch submitted.

Follow Best Practices

With well over a million lines of code and many contributors from many companies, the ODL project is quite large. To manage the code entropy, ODL has some best practices that you should become familiar with. These cover a diverse set of topics, including coding practices, debugging, project setup and workflow. We strongly recommend that you carefully read these. They will save you a lot of time and will pay back your investment quickly. They will help you skate through code reviews. These practices are really time-tested advice from all the ODL developers, so don’t ignore them.

Support Attribution

Attribution is an important insight into most if not all open source projects. Attribution allows stakeholders to see who is contributing what, from the individual up through sponsoring companies. It allows both a historical and current view of the project. You can see an example of why attribution is illuminating here. You need to sign up for an ODL account, and a part of that process will be to associate yourself with a company (if applicable). You can also see breakdowns by authors on the ODL Spectrometer.

That’s all for now. Happy trails, newbie.

Watch for Allan’s blog next week where he will share his Top 10 learnings as a new developer contributing to ODL.

Service Providers Are Placing Big Bets on Open Source Software Networking – Should You?

The service provider market is undergoing earth-shaking changes. These changes impact the way that applications for consumers and business users are deployed in the network and cloud as well as the way that the underlying data transport networks are built.

At Lumina, we’ve had the chance to work with several large service providers on their software transformation initiatives and get an up-close look at what works and what doesn’t. Three factors are particularly favorable in setting up successful projects for frictionless progress from the lab through prototype and proof of concept and into the production network.

Top-Down Advantage

Our first observation is that top-down initiatives and leadership work better than bottom-up or “grass roots” approaches. The experience of AT&T strongly illustrates the advantage. While a few of the hyperscale cloud providers had already launched open source initiatives and projects, the first big move among the established service providers was AT&T’s Domain 2.0, led by John Donovan in 2013. Domain 2.0 was not a precise description of everything that AT&T wanted to do, but through that initiative, the leadership created an environment where transformative projects are embraced and resistance to change is pushed aside.

While lack of top-down support is not a showstopper, it is extremely helpful to get past obstacles and overcome organizational resistance to change. If top-down support in your organization is lacking or weak, it is worth your effort to influence and educate your executives. In engaging executives, focus on the business value of open software networking. The advantages of open source in software networks include eliminating lock-in and spurring innovation. As our CEO, Andrew Coward, wrote in his recent blog, Why Lumina Networks? Why Now?: “Those who build their own solutions—using off-the-shelf components married to unique in-house developed functionality—build-in the agility and options for difference that are necessary to stay ahead.”

Although it may create a slower start, from what we have seen, taking the time to do early PoCs to onboard executive support, so that they deeply attach to the value, is time well spent. Sometimes a slow start is just what is needed to move fast.

Collaboration through Open Source

The second observation is that industry collaboration can work. I read an interesting comment by Radhika Venkatraman, senior vice president and CIO of network and technology at Verizon, in her interview with SDxCentral. She said, “We are trying to learn from the Facebooks and Googles about how they did this.” One of the best ways to collaborate with other thought leaders in the industry is to join forces within the developer community at open source projects. The Linux Foundation’s OpenDaylight Project includes strong participation from both the vendor community and global service providers including AT&T, Alibaba Group, Baidu, China Mobile, Comcast and Tencent. Tencent, for one, has over 500 million subscribers that utilize their OpenDaylight infrastructure, and they are contributing back to the community as are many others.

A great recent example of industry collaboration is the newly announced ONAP (Open Network Automation Platform) project. Here, the origins of the project have roots in work done by AT&T, China Mobile and others. And now, we have a thriving open source developer community consisting of engineers and innovators who may not have necessarily collaborated in the past.

These participants see benefits of collaboration not only to accelerate innovation but also to harden the software by running it across many different types of environments and use cases, so as to increase reliability. Providers recognize that in their transformation to software networks there’s much they can do together to drive technology, while differentiating through how they define and deliver services and the experiences they create for customers.

What about your organization? Do you engage in the OpenDaylight community? Have you explored how ONAP can help you? Do you use OpenStack in your production network? And importantly, do you engage in the discussions and share back what you learn and develop?

Pursuit of Innovation

A third observation is the growing ability of service providers to create and innovate at levels not seen before. A prime example here is the work done by CenturyLink to develop a Central Office Re-architected as a Datacenter (CORD) platform to deliver DSL services running on OpenDaylight. CenturyLink used internal software development teams along with open source and Agile approaches to create and deploy CORD as part of a broad software transformation initiative.

One might have thought that you would only see this level of innovation at Google, Facebook or AWS, but at Lumina we are seeing this as an industry-wide trend. The customer base, business model, and operations of service providers vary widely from one to another based on their historical strengths and legacy investment. All have an opportunity to innovate in a way that advances their particular differences and competitive advantages.

Closing Thoughts

So we encourage you to get on the bandwagon! Don’t stand on the sidelines. Leadership, collaboration and innovation are the ingredients you need to help your organization drive the software transformation needed to stay competitive. There is no other choice.

Stay tuned for my next blog where we will discuss some of the specifics of the advantages, development and innovation using open source.

NetDevEMEA: OpenStack, OpenDaylight and VTN Feature

  1. Introduction




OpenDaylight Virtual Tenant Network (VTN) is an OpenDaylight feature that provides multi-tenant virtual networks. It allows aggregating multiple ports from many switches, both physical and virtual, to form a single isolated virtual network called a Virtual Tenant Network. Each tenant network has the capability to function as an individual switch.

The objective of this tutorial is to walk through a configuration/integration of OpenStack Mitaka with OpenDaylight that permits exploring the VTN feature.




The figure above shows the logical architecture you will end up with after following this guide.

  1. VirtualBox configuration:

    1. Download and install Virtualbox

      Install VirtualBox (version 5.0 and up), and the VirtualBox Extension Pack (follow the instructions for Extension Packs here).

    2. Download CentOS 7.2
      Main download page for CentOS 7
    3. Run VirtualBox and create 2 x Host-Only Networks

      To create a host-only connection in VirtualBox, start by opening the preferences in VirtualBox. Go to the Network tab, and click on add a new Host-only Network.

      Host-Only networks configuration
      # This will be used for data i.e. vxlan tunnels
      #VirtualBox Host-Only Ethernet Adapter 1
      IPv4 Address
      DHCP Server Disabled
      # This will be used for mgmt, i.e. I connect from Windows to the VMs, or for VM to VM communication
      #VirtualBox Host-Only Ethernet Adapter 2
      IPv4 Address
      DHCP Server Disabled
    4. Create 3 x VirtualBox machines running CentOS 7.2 (installed from ISO), each set up as follows:
      RAM 4096
      Processors 2
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured enp0s9)
      RAM 4096
      Processors 2
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured enp0s9)
      RAM 4096
      Processors 2
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured enp0s9)
    5. Interface Configuration Files

      The first thing to do after starting each VM is to edit all the interface configuration files. On a CentOS system they can be found in /etc/sysconfig/network-scripts/. Here is a sample of how the interface files must look for the openstack-controller. Make sure they look similar on all 3 of your machines.

      IPADDR= #Modify this value in case you are configuring openstack-compute or Opendaylight controller
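For reference, a filled-in data-interface file might look like the following; the addresses are placeholders (anything on your host-only subnets works), not values prescribed by the guide:

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s8 -- example values only
TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s8
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.10    # use a different address on each of the 3 VMs
NETMASK=255.255.255.0
```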
  2. Openstack Install

    1. Pre-Configuration

      Edit /etc/hosts on all three VMs so it looks like this:

      /etc/hosts compute opendaylight

      Disable Firewall and NetworkManager

      systemctl disable firewalld
      systemctl stop firewalld
      systemctl disable NetworkManager 
      systemctl stop NetworkManager
      systemctl enable network 
      systemctl start network

      Disable SE Linux and install Openstack packstack RDO

      setenforce 0
      sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
      yum install -y centos-release-openstack-mitaka.noarch
      yum install -y openstack-packstack
      yum update -y

      Install OpenStack. Run the following command on the openstack-controller (the openstack-compute VM must be running):

    2. Test the environment

      On the same directory as the previous command was run:

      source keystonerc_admin
      curl -O
      glance image-create --file ./cirros-0.3.4-x86_64-disk.img --visibility public --container-format bare --disk-format qcow2 --name cirros

      Login to the UI (User: admin, pass: helsinki)

      Create a network and router. Create a VM and check that it gets an IP.
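A minimal CLI smoke test might look like the following; the network, router and VM names (and the CIDR) are arbitrary examples, not part of the guide:

```shell
source keystonerc_admin
# Create a tenant network, subnet and router:
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
# Boot a VM from the cirros image uploaded earlier:
net_id=$(neutron net-show -f value -c id demo-net)
nova boot --flavor m1.tiny --image cirros --nic net-id=$net_id demo-vm
nova list   # the demo-vm entry should show an IP from 10.0.0.0/24
```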

  3. Install OpenDaylight


    1. Install Opendaylight Boron SR2 and enable the necessary features:
      There are two OpenStack plugins: odl-ovsdb-openstack and odl-netvirt-openstack. The ovsdb one is the older plugin and will be deprecated shortly, but it is what is used within BSC. The next release of BSC should switch to the netvirt plugin.
      curl -O --insecure
      ln -s distribution-karaf-0.5.2-Boron-SR2 odl
      cd odl/bin
      ./client -u karaf
      feature:install odl-ovsdb-openstack odl-dlux-core odl-ovsdb-ui

      Verify the install by browsing to the DLUX UI (admin : admin)

      Also check that the REST API works and is returning an empty set of networks

      curl -u admin:admin
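As a sketch (the host and port are assumptions: 8181 is karaf’s default HTTP port, though the Neutron northbound has been served on other ports in some releases):

```shell
# Should return an empty set of networks before the integration is done.
curl -u admin:admin http://localhost:8181/controller/nb/v2/neutron/networks
```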

  4. Openstack Integration

    1. Erase all VMs, networks, routers and ports in the Controller Node

      Start by deleting any VMs, networks and routers that you have already created during testing. Before integrating OpenStack with OpenDaylight, you must clean up all the unwanted data from the OpenStack database. When using OpenDaylight as the Neutron back-end, ODL expects to be the only source of Open vSwitch configuration. Because of this, it is necessary to remove existing OpenStack and Open vSwitch settings to give OpenDaylight a clean slate. The following steps will guide you through the cleaning process!

      # Delete instances
      $ nova list
      $ nova delete <instance names>
      # Remove link from subnets to routers
      $ neutron subnet-list
      $ neutron router-list
      $ neutron router-port-list <router name>
      $ neutron router-interface-delete <router name> <subnet ID or name>
      # Delete subnets, nets, routers
      $ neutron subnet-delete <subnet name>
      $ neutron net-list
      $ neutron net-delete <net name>
      $ neutron router-delete <router name>
      # Check that all ports have been cleared – at this point, this should be an empty list
      $ neutron port-list
      # Stop the neutron service
      $ service neutron-server stop

      While Neutron is managing the OVS instances on compute and control nodes, OpenDaylight and Neutron can be in conflict. To prevent issues, we turn off Neutron server on the network controller and Neutron’s OpenvSwitch agents on all hosts.

    2. Add an external bridge port

      Create a new interface configuration file /etc/sysconfig/network-scripts/ifcfg-br-ex

      It should look something like this (change the IPs to match your system – this should be the IP previously assigned to enp0s3)

      IPADDR= # Previous IP associate to your enp0s3
      # Previous IP mask
      GATEWAY= # Previous gateway
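Filled in, the file might look like the following; the addresses are placeholders, and the OVS-specific keys (DEVICETYPE/TYPE) are an assumption based on the standard RDO bridge setup:

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-ex -- example values only
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10     # previous IP of enp0s3
NETMASK=255.255.255.0   # previous mask
GATEWAY=192.168.1.1     # previous gateway
```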

      Update enp0s3 – you can comment out the original settings, and add the new lines below

       vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

      #DEVICE=enp0s3 #ONBOOT=yes
  5. Connect Openstack Controller and Compute  OVS to ODL

    Run the next commands on both OpenStack nodes:

    1. Set ODL Management IP
      export ODL_IP=
      export OS_DATA_INTERFACE=enp0s8

      Stop Neutron

      systemctl stop neutron-server
      systemctl stop neutron-openvswitch-agent
      systemctl stop neutron-l3-agent.service
      systemctl stop neutron-dhcp-agent.service
      systemctl stop neutron-metadata-agent
      systemctl stop neutron-metering-agent

      Stop the Neutron OVS processes. You must also remove the openstack-neutron-openvswitch package; otherwise, when Open vSwitch restarts, the agent will start again and corrupt your OVSDB.

      systemctl stop neutron-openvswitch-agent
      systemctl disable neutron-openvswitch-agent
      yum remove -y openstack-neutron-openvswitch.noarch

      Clean the switches on the controller

      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      ovs-dpctl del-if ovs-system enp0s3
      ovs-dpctl del-if ovs-system vxlan_sys_4789
      ovs-dpctl show
      data_interface=$(facter ipaddress_${OS_DATA_INTERFACE})
      read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
      ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=${data_interface}
      ovs-vsctl set-manager tcp:${ODL_IP}:6640
      ovs-vsctl list Manager
      ovs-vsctl list Open_vSwitch
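      Note that `facter` ships with Puppet and may be absent on your hosts. If so, the same data-plane address can be derived with iproute2 — a sketch (loopback is used in the demo line only because it exists on any Linux host):

```shell
# Equivalent of: facter ipaddress_${OS_DATA_INTERFACE}
# Resolves the first IPv4 address on a given interface via iproute2.
iface_ip() {
  ip -4 -o addr show dev "$1" | awk '{split($4, a, "/"); print a[1]; exit}'
}

# On a real node: data_interface=$(iface_ip "${OS_DATA_INTERFACE}")
iface_ip lo   # demo on loopback; prints 127.0.0.1
```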

      Bring br-ex and its associated interface down and back up

      ifdown br-ex
      ifdown enp0s3
      ifup enp0s3
      ifup br-ex
    2. Check the OVS configuration:

      [user@openstackController ~]$ sudo ovs-vsctl show
          Manager "tcp:"
          Bridge br-int
              Controller "tcp:"
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-ex
              Port br-ex
                  Interface br-ex
                      type: internal
              Port "enp0s3"
                  Interface "enp0s3"
          ovs_version: "2.5.0"


    3. Verify that external connectivity still works

      At this point, you can also check the DLUX UI to ensure that both switches show up.


  6. Connect OpenStack Neutron to ODL

    1. Install ODL Plugin for Neutron
      yum install -y python-networking-odl.noarch
    2. Configure Neutron ml2 to connect to ODL
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat
      cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2_odl]
      password = admin
      username = admin
      url = http://${ODL_IP}:8181/controller/nb/v2/neutron
      EOF
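      Before restarting services, it is worth sanity-checking the appended section. The sketch below writes to a temp file so it can be tried anywhere; on the controller, point CONF at /etc/neutron/plugins/ml2/ml2_conf.ini instead (ODL_IP is an example address, not one from this environment):

```shell
# Reproduce the [ml2_odl] append against a scratch file and confirm the
# url line carries the controller address and the NB REST path.
CONF=$(mktemp)
ODL_IP=192.168.56.10   # example; use your ODL management IP
cat >> "$CONF" <<EOF
[ml2_odl]
password = admin
username = admin
url = http://${ODL_IP}:8181/controller/nb/v2/neutron
EOF

grep '^url' "$CONF"   # prints the composed url line
```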
    3. Clean the Neutron database

      mysql -e "drop database if exists neutron;"
      mysql -e "create database neutron character set utf8;"
      mysql -e "grant all on neutron.* to 'neutron'@'%';" 
      neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
      systemctl start neutron-server
      systemctl start neutron-l3-agent.service
      systemctl start neutron-dhcp-agent.service
      systemctl start neutron-metadata-agent
      systemctl start neutron-metering-agent
  7. Virtual Tenant Network Feature

    From OpenDaylight’s console

    1. Install the required features for VTN.

      feature:install odl-vtn-manager-rest

      feature:install odl-vtn-manager-neutron

    2. Test the REST API

      VTN Manager provides a REST API for virtual network functions.

      Create a virtual tenant network:
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X POST \
      -d '{"input":{"tenant-name":"vtn1"}}'

      Check whether it was created:

      Get the VTN info:
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X GET

      More examples can be found in [1].
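      The endpoint URLs were elided above. In recent VTN Manager releases the API is exposed over RESTCONF; the `vtn:update-vtn` path below is an assumption you should adapt to your ODL release, so this sketch only composes and prints the request instead of sending it:

```shell
# Compose (but do not send) the VTN create request.
ODL_IP=192.168.56.10   # example ODL management IP
VTN_URL="http://${ODL_IP}:8181/restconf/operations/vtn:update-vtn"  # assumed path
PAYLOAD='{"input":{"tenant-name":"vtn1"}}'

echo curl --user admin:admin \
  -H "Content-type: application/json" \
  -X POST "$VTN_URL" -d "$PAYLOAD"
```

      Once the printed command matches your controller's API, drop the leading `echo` to execute it.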

  8. Mininet

    1. Download Mininet.
    2. Launch the Mininet VM with VirtualBox:
      RAM: 1024
      Processors: 1
      NIC 2: Host-Only Adapter – VirtualBox Host-Only Ethernet Adapter 1 (statically configured eth0)
      NIC 1: Bridged Adapter (provides internet connectivity) (eth1)
    3. Log on to the Mininet VM with the following credentials:
      • user: mininet
      • password: mininet
    4. Edit the interface configuration file
      vi /etc/network/interfaces

      This configuration matches the actual environment:

      # The loopback network interface
      auto lo
      iface lo inet loopback
      # The primary network interface
      auto eth0
      iface eth0 inet static
      auto eth1
      iface eth1 inet dhcp


    5. Start a virtual network:

      sudo mn --controller=

      More info in [3].

  9. References

    [1] DevStack and different versions of OpenStack

    [2] Old OpenStack version

