Investing in Open Source Business Models

Interest in evaluating and investing in open source startups is on the rise again after a dip in the past couple of years. There is a huge shift toward using open source platforms, particularly in the networking space. New open source initiatives such as ONAP (Open Network Automation Platform), OSM (Open Source MANO) and OpenConfig are being driven by the service provider community and large enterprises. As a young, growing company focused on open source for networking, with a unique approach that tier-1 service providers appreciate, we are invited to many network transformation discussions with industry giants. Because of this role in the ecosystem, we can see relevant market trends before most others do. Based on what we’re hearing from across the aisle, we believe now is the time for investors to look closely at opportunities to drive and benefit from the emerging wave in this space.

It would be easy to conclude that this trend is just the industry’s way to get cheaper software and shift profits from the large vendors (like Cisco and Juniper) to the network buyers. But that would be an oversimplification of the opportunity. Lowering CAPEX and OPEX is surely in play. However, equally important are:

  • The need for better network optimization and automation tools
  • The requirement to move faster than the traditional hardware-based networking vendors can, and
  • The desire to co-innovate and take more control of the tools

These factors are driving the open source trend just as much as, if not more than, the cost factors. Wise investors should look at these drivers as opportunities. Companies that can deliver these benefits using open source software stand to gain significantly.

All of this raises questions about the business model of open source. Conventional wisdom goes something like: “Red Hat is the only company that has been successful at monetizing open source”. Nothing could be further from the truth. Many companies have been successful using open source as the centerpiece of their product strategies. Elastic, GitHub, Cloudera and MuleSoft are recent examples of open source-based businesses. In the past several years, MySQL, XenSource, SpringSource, Zimbra, JBoss and SUSE have all had very successful exits while building a business on open source platforms.

Many different types of open source products and services have proven to be valuable to customers and revenue-generating for vendors. Vendors that provide these types of services on an unchanged open source base are referred to as “pure-play” vendors. The intention of pure-play vendors is to ensure that customers reap the full benefit of utilizing open source software, including no vendor lock-in. Let’s take a look at these types of pure-play offerings:

  1. Product support for the open source code base – Examples of this include defect resolution, testing, continuous integration, extended support, upgrade and migration support and open source community advocacy.  All of these are valuable services not typically provided by open source communities.
  2. Applications running on an open source platform – These applications perform specific use cases or user-facing functions. Typical applications perform zero-touch configuration, overlay/underlay network control, traffic engineering, planning, configuration management, inventory, analytics, service assurance, fault remediation or policy control. Narrowly-focused, these applications can be of great value to service providers and of even greater benefit when combined with other open source platforms.
  3. Device or systems integration – There’s a lot of value and ongoing benefit in assuring that the open source platform’s interfaces to various devices continue to operate across revisions, as well as in adding support for new network devices. It’s also no secret that deploying open source platforms is complicated. Working with experts in the space and in your chosen platform can not only accelerate integration and deployment times but can also be part of a knowledge transfer initiative.
  4. Custom software development –  All network operators have functions or services that they want to be specific to their environment or service. Vendors are in a very good position to provide custom software development because of their deep understanding of the open source platform.

Are there open questions about the open source business model for the networking space? Yes, absolutely there are yet-to-be-answered questions. But by the time we have all the answers, the best investment opportunities will have gone. So, now is the time to look at investing in open source networking – call us if you want to understand more about this space.

Want to learn more?

  • Hear what some of the industry’s thought leaders are saying on this topic – this Telecom TV video with VMware, Amdocs and Lumina Networks highlights the power and relevance of open source in a transformational network.
  • Learn more about relevant open source projects with the Linux Foundation.
Neglected Factors in 5G: SDN-R & Wireless Transport SDN

Software Defined Networking (SDN) concepts are alive and well in the indomitable march to 5G. Control plane to data plane separation, network programmability and closed-loop automation techniques are being defined, developed and tested in numerous PoCs and early trials. The specification bodies and open source organizations are cooperating to define the architectures and protocols required for next-generation wireless. One of the more prominent open source projects in the 5G space is ONAP (the Linux Foundation’s Open Network Automation Platform), which is developing the SDN-C and APP-C controllers, based on OpenDaylight. More recently, ONAP is working to develop the OpenDaylight-based SDN-R controller, which is the focus of this article.

The SDN-R project is aimed specifically at wireless transport SDN. “Transport SDN” is the concept of multiplexing and carrying multiple services or network slices across the backhaul network, usually over a geographic distance – a fundamental concept in 5G networking. The “R” stands for radio. SDN-R is concerned specifically with coordinating the functionality needed to support services traversing optical or microwave networks. Although it is a new group within ONAP, SDN-R builds on several years of work and earlier trials within the ONF (Open Networking Foundation). This is welcome cooperation between two different standardization groups, something we need to see more of in the orchestration space.

[Figure: SDN-R diagram]

The original proof of concept trial focused on the ONF’s OpenFlow interface as the controller protocol. However, more recent PoCs have also utilized NETCONF and have focused on the information models, expressed in YANG. The concept of a mediator has been introduced, recognizing the reality that there will be a wide variety of vendor equipment, both PNFs and VNFs, with a wide variety of interfaces and APIs. Lumina’s SDN Controller architecture is ideally suited to creating mediators and having them function as model translators for OpenDaylight.

As the SDN-R project proposal explains very clearly, the objective is to port the models and controller of the ONF Wireless Transport project into the ONAP framework. Starting in 2015, the Wireless Transport Project within the Open Transport group of the ONF has pursued the goals of defining a shared data model for wireless network elements and developing a Software Defined Network (SDN) controller to manage a network made up of equipment from several manufacturers. The model is defined in the ONF Technical Reference TR-532, and the SDN controller is based on OpenDaylight. Because the controller is based on OpenDaylight, it is consistent with the ONAP architecture, and the majority of the software for the applications can be ported into ONAP with only minor modifications.

Lumina Networks looks forward to continuing our contributions to SDN-R and to ONAP more broadly. The development of 5G networking is complex, as it will involve a combination of new and old equipment, VNFs and PNFs, virtual machines and containers, and vendor equipment based on new standards as well as equipment based on proprietary APIs. 5G services will need to be applied consistently end-to-end across all of these, with closed-loop automation to provide service assurance.

SDN-R leverages Lumina’s development and deployment experience with NETCONF. NETCONF interfaces are the most commonly deployed among Lumina’s customers, and we have experience dealing with voluminous YANG models in production; as always, we have contributed our efforts back into the open source community. Specifically, for SDN-R, the NETCONF/YANG constructs give us the ability to manage the transport service end-to-end in addition to device-by-device. And, as for devices, we have developed the tools and software to extend control to non-NETCONF devices as well, so that legacy equipment can be used for SDN transport.
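
To make that workflow concrete, here is a rough sketch (not SDN-R-specific code) of how a NETCONF-managed transport element is typically mounted onto an OpenDaylight-based controller over RESTCONF; the controller address, node name, device address and credentials below are placeholders, not values from a real deployment.

# mount a NETCONF device (placeholder values) into the controller's topology-netconf
curl -X PUT http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/microwave-node-1 \
  -H 'Content-Type: application/json' --user admin:admin \
  -d '{"node":[{"node-id":"microwave-node-1","netconf-node-topology:host":"10.0.0.5","netconf-node-topology:port":830,"netconf-node-topology:username":"admin","netconf-node-topology:password":"admin","netconf-node-topology:tcp-only":false}]}'

Once the mount is established, the device’s YANG-modelled configuration (TR-532 in the wireless transport case) becomes readable and writable under that mount point, which is what allows the controller to manage a transport service end-to-end rather than box-by-box.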

As we have established in this blog series, the only way to achieve this type of architecture is with an open-source control plane, most often based on OpenDaylight. Therein lie the neglected factors in 5G – that is, while much of the 5G network will be new, the only way to achieve cost-effective deployment in a reasonable timeframe will be by using existing protocols and already-installed systems wherever possible. Building on what we already have, and bringing open-source-based control and orchestration software to it, will be the formula that makes 5G possible for most service providers.

Learn more about the neglected 5G factors:

  1. Neglected Factors in 5G: Network Slicing
  2. Neglected 5G Factors: How SDN will Enable Brownfield Deployments


Neglected Factors in 5G: Network Slicing

It may seem odd to long-time networkers that “slicing” is discussed extensively in relation to 5G deployment. After all, haven’t we been using VLANs, VPNs, VRFs and a whole host of other ways to slice and dice networks for many years? Keep in mind that for established 4G networks, there has been no easy way to create logical segments or divide the network into granular slices, even in cases where the equipment may be capable of it. While legacy services hosted MBB, voice, SMS and even MVNOs on the same infrastructure, it was built in a way that was either physically discrete, channelized or rigid – not the way a packet network, and thus 5G, would do things. This approach is monolithic and will need to be updated for successful 5G deployments.

With packet networking, software-defined networking and network function virtualization coming into the network buildouts for 5G, network slicing is becoming an important part of service agility. The power of 5G is not just in higher data rates, higher subscriber capacity and lower latencies – it is in the fact that services and logical networks can be orchestrated together. This is critical for the deployment of connected car, IoT (Internet of Things), big data and sensor networks, emergency broadcast networks and all the amazing things that 5G will be able to support.

But there’s an often-overlooked element of the 5G rollout – slicing deployment over existing equipment – and it is something that Lumina Networks is uniquely equipped to enable.

Most presentations that you will see on 5G (especially from the vendors) just assume that the provider will be buying all-new hardware and software for the 5G buildout. In reality, a lot of existing equipment will need to be used, especially components that are physically distributed and expensive or impossible to access. How will the new network slices traverse these legacy devices?

Network slices will traverse from the mobile edge, continue through the mobile transport, including fronthaul (FH) and backhaul (BH) segments, and terminate within the packet core, probably within a data center. Naturally, this will involve transport and packet networking equipment in both the backhaul and fronthaul networks. The packet core will also likely involve existing equipment. These systems will rely on the BGP protocol at the L3 transport networking layer, even when they are newer platforms.

The 3GPP organization’s definition of a slice is “a composition of adequately configured network functions, network applications, and the underlying cloud infrastructure (physical, virtual or even emulated resources, RAN resources etc.), that are bundled together to meet the requirements of a specific use case, e.g., bandwidth, latency, processing, and resiliency, coupled with a business purpose”.

Given the number of elements involved in a slice, sophisticated cloud-based orchestration tools will be required. It’s noteworthy that many of the optical transport vendors have acquired orchestration tools companies to build these functions for their platforms. However, since the start of the Open Network Automation Platform (ONAP) project at the Linux Foundation, it has been clear that service providers will demand open source-based platforms for their orchestration tools. Rightfully so; an open solution to this problem reinforces operators’ desire to end vendor lock-in and enable more flexible, service-creation-enabled networks.

The creation of a “slice” in a 5G network will often involve the instantiation of relevant and dedicated virtual network functions (VNFs) for the slices and this is a key aspect of the work going on in the ONAP project. VNFs, in addition to participating as connectivity elements within the slice, will provide important functions such as security, policies, analytics and many other capabilities.

[Figure: 5G network slicing]

The good news here is that established open source projects such as OpenDaylight have the control protocols that will be used for legacy equipment, such as NETCONF, CLI-CONF, BGP-LS and PCEP, as well as the newer protocols that will be used for virtual L3 slicing such as COE and OVSDB.

Some of the network slicing capabilities that these protocols enable are:

  • Supporting end-to-end QoS, including latency and throughput guarantees
  • Isolation from both the data plane and orchestration/management plane
  • Policies to assure service intent
  • Failure detection and remediation
  • Autonomic slice management and operation
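
As a concrete illustration, an OpenDaylight-based controller that peers with the existing routers over BGP-LS exposes the link-state topology it learns through RESTCONF. The sketch below assumes a stock OpenDaylight setup, where that topology is typically named example-linkstate-topology; the controller address and credentials are placeholders.

# read the link-state topology the controller has learned from the legacy network via BGP-LS
curl -X GET http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/example-linkstate-topology \
  -H 'Accept: application/json' --user admin:admin

Path computation and slicing applications can consume this view of the existing transport network when deciding where a slice’s paths should run.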

ONAP utilizes the OpenDaylight controller as its “SDN-C”. And, more recently, ONAP has started a new project to develop an OpenDaylight-based SDN-R to configure radio transport services.

This blog series, “Neglected 5G Factors”, will address SDN-R in more detail in our next post. For now, be sure you’ve read the first of the series, “How SDN will Enable Brownfield Deployments.”


NFV and SDN Nuts & Bolts: Cloudify Lumina Flow Manager Plug-in Demonstration

Last month in The Hague, we announced a strategic alliance with Cloudify Platform Ltd. to bring agility to service providers with legacy and virtualized networks. Leveraging open source-based solutions, we are working together to automate service delivery without vendor lock-in.

With a successful tier-one service provider production deployment in the books, our new open source plug-in, Cloudify Lumina Flow Manager, enables service orchestration for brownfield networks that want to deploy NFV. The plug-in enables the Cloudify cloud-native orchestration platform to provision network connectivity with unified control of underlay equipment via a network path-based application, the Lumina Flow Manager, on the Lumina SDN Controller.

Plugin Demonstration

In the demonstration below, the Cloudify LFM (Lumina Flow Manager) Plug-in deploys and manages network services. It enables the creation of Paths and ELines for both VLAN and Ethernet – a great starting point for extension to other services and customization.

Our goal in this demonstration is to allow and send ARP and IP packets between host 101 and host 303, and I’ll demonstrate how to:

  • Package and install the plugin
  • Deploy an ELine service to the LFM controller
  • Verify it’s working on Mininet

 

Tutorial – How to Use the Cloudify Lumina Flow Manager

Required Environment

We used Cloudify, Mininet and LFM Vagrant VMs. Tutorials on how to set up these environments on VirtualBox or libvirt will follow in another article soon, so stay tuned.

LFM – 192.168.50.21
LFM is set up without SSL, using plain HTTP for all requests (but you can also use HTTPS).

Mininet – 192.168.50.40
The topology used includes a controller, 8 servers, 9 switches and the links connecting the devices, and is provided in the examples folder: https://github.com/lumina-networks/cloudify-lfm-plugin/blob/master/examples/mininet/topo.yml

The Mininet server also runs Nginx and Ansible for any supporting files and deployments needed by LFM or Cloudify.

Cloudify – 192.168.50.30
The Cloudify image is provided in QCOW2 format on their downloads page: https://cloudify.co/download/

Upload Plugin to Cloudify Manager

SSH into the Cloudify vm.

Create a virtual environment for Python
virtualenv .venv
source .venv/bin/activate

Clone the plugin repository https://github.com/lumina-networks/cloudify-lfm-plugin
git clone https://github.com/lumina-networks/cloudify-lfm-plugin
cd cloudify-lfm-plugin

Package and install the plugin on Cloudify
make wagon
make upload

or package and upload manually:
wagon create -r dev-requirements.txt -f .
cfy plugin upload -y plugin.yaml cloudify_lfm_plugin-*.wgn

If the plugin already exists, you can automatically delete it with the replace script:
make replace

or manually:
cfy plugin list
cfy plugin delete plugin_id

Next, the blueprints provided in the examples/blueprint folder reference plugin.yaml.
You have a few options for how to reference it. By default they point to:
http://file-server/spec/cloudify-lfm-plugin/0.1.0/plugin.yaml

If you change the references, make sure to delete the blueprint.zip file in the examples folder and compress the blueprint folder again.
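
For example (a sketch only – the exact file names inside the blueprint folder may differ), switching the references to the Nginx host and rebuilding the archive from the cloned repository could look like this:

# from the cloudify-lfm-plugin repository root
cd examples
# point the blueprint imports at the Nginx host instead of the file-server hostname
sed -i 's|http://file-server|http://192.168.50.40|g' blueprint/*.yaml
# rebuild the archive that gets uploaded to Cloudify
rm -f blueprint.zip
zip -r blueprint.zip blueprint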

Mininet Nginx
To host the plugin.yaml locally on Nginx, you will either need to update the IP address in the reference, e.g.
http://192.168.50.40/spec/cloudify-lfm-plugin/0.1.0/plugin.yaml

Or add a hosts entry on the Cloudify server:
vi /etc/hosts
append: 192.168.50.40 file-server

The topology is started with the following command: sudo topology-yaml mininet start topo.yml.

Github
If your servers have access to the internet, you can also reference it directly from GitHub:
https://raw.githubusercontent.com/lumina-networks/cloudify-lfm-plugin/master/plugin.yaml

Now that we have the plugin installed and the plugin.yaml hosted, we can create blueprints and deploy the services.

Create and deploy an ELine Blueprint service

Log in to Cloudify at http://192.168.50.30/stage/login

Go to Local Blueprints in the left-hand sidebar: http://192.168.50.30/stage/page/local_blueprints

Click the upload button at the top right and select the blueprint.zip file provided in the examples. (Make sure the archive is up to date with the plugin.yaml reference update above.)

If you’re using Vagrant VMs you’ll need to use the eline-ethernet blueprint.

Set a name for the blueprint, e.g. eline-ethernet.

Once the blueprint is uploaded it should show in the Blueprints list.

* If uploads don’t seem to work, it is most often an environment setup issue; debugging frequently points to RabbitMQ failing, which usually comes down to a network or IP address problem on the Cloudify Vagrant machine.

Go to the Deployments page on the left menu bar or go to: http://192.168.50.30/stage/page/deployments

Set a name for the deployment, e.g. my-eline-test1, and select the blueprint uploaded above, e.g. eline-ethernet.

Then set the fields manually, or autofill the inputs using the example file examples/inputs/eline-ethernet-test.yaml.

Then click deploy.

Once the deployment is set up, you can go into the deployment and see the topology and blueprint information.

To install the blueprint and deploy the service to LFM, click the hamburger icon on the right and click install. Once the installation is successful, the topology should show green ticks next to each node.
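
If you prefer the command line over the web UI, the same upload, deploy and install steps can usually be scripted with the Cloudify CLI from the Cloudify VM; this is a sketch using the names from this example, with paths relative to the cloned plugin repository.

# upload the blueprint archive (add -n <file>.yaml if the entry file inside the archive is not blueprint.yaml)
cfy blueprints upload -b eline-ethernet examples/blueprint.zip
# create the deployment from the example inputs
cfy deployments create my-eline-test1 -b eline-ethernet -i examples/inputs/eline-ethernet-test.yaml
# run the install workflow, which pushes the ELine service to LFM
cfy executions start install -d my-eline-test1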

Verify Services Created

To verify the Path and ELines were created, go to the controller UI at http://192.168.50.21:9001/apps/lsc-app-flowmanager/paths
and you should see them listed under the Paths and Services tabs.

Or you can use the controller API with Curl or Postman:
curl -k -X GET http://192.168.50.21:8181/restconf/config/lumina-flowmanager-path:paths/ -H 'Accept: application/json' -H 'Content-Type: application/json' --user admin:admin -i

curl -k -X GET http://192.168.50.21:8181/restconf/config/lumina-flowmanager-eline:elines/ -H 'Accept: application/json' -H 'Content-Type: application/json' --user admin:admin -i

Testing with Mininet

We will use two terminal windows.

Before installing an ELine

terminal 1 – Mininet ssh tcpdump
Run a tcpdump in the Mininet server for switch 303 so we can capture packets: sudo tcpdump -i s303-eth1

the output looks like this:
$ sudo tcpdump -i s303-eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on s303-eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:35:25.381007 LLDP, length 75: openflow:303
13:35:30.797614 LLDP, length 75: openflow:303
...

terminal 2 – Mininet console ping/arp
In the Mininet terminal
mininet>h101 ping s303
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.030 ms
mininet>h101 arp -a
? (10.0.2.3) at on h101-eth0

After installing an ELine

Starting tcpdump again, once we try to ping or arp as described below, we’ll see it coming through in the output.

terminal 1 – mininet ssh tcpdump
$ sudo tcpdump -i s303-eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on s303-eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:44:10.797842 LLDP, length 75: openflow:303
13:44:14.614855 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28
13:44:15.619532 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28
13:44:16.214305 LLDP, length 75: openflow:303
13:44:16.622273 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28
13:44:19.624919 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28
13:44:20.629490 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28
13:44:21.631191 LLDP, length 75: openflow:303
13:44:21.633951 ARP, Request who-has 10.0.2.3 tell 10.0.1.1, length 28

terminal 2 – Mininet console ping/arp

mininet> h101 ping s303
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.034 ms
^C
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.023/0.028/0.034/0.007 ms
mininet> h101 arp -a
? (10.0.2.3) at on h101-eth0

Uninstall an ELine Blueprint service

To uninstall the services, click the hamburger button again, and click uninstall.

Additional information

You can find more information on how to work with plugins and extend them on the Cloudify documentation page: https://docs.cloudify.co/4.4.0/

or more technical information on Github on the plugin page: https://github.com/lumina-networks/cloudify-lfm-plugin

If you have any questions or would like to see a demo please contact us.

Lumina SDN Controller and Lumina Topology Manager 7.3.0 Release Notes

Lumina Adds Support for Nitrogen SR3

Lumina is proud to announce the release of version 7.3.0 of our SDN Controller and Topology Manager products.

This release adds support for Nitrogen SR3, the last OpenDaylight Nitrogen release.

New features and enhancements include:

 

Full release notes and documentation can be found here: https://www.luminanetworks.com/doc/lsc/7.3.0/

If you wish to download a free trial of the Lumina SDN Controller with these updates, you can do so here: https://www.luminanetworks.com/softwares/sdn-controller-trial/

Productizing open-source integration

Andrew Coward, CEO, Lumina Networks

We asked Lumina Networks’ CEO Andrew Coward how companies can make the best use of open source. “Open source is not a spectator sport,” says Andrew. “Sitting around and waiting for somebody to show up and deliver the equivalent of your existing vendor’s offering is not the right approach. So we work best when our customers are very engaged. And really, it’s all about how you automate things.”

“It’s only been nine months, but what’s next from Lumina?” asked Guy.

“Productization. Up to recently we’ve been about individual customers and bringing them on this journey. So we’re now taking the best bits we’ve done and productizing them so they can be more easily adopted by mainstream, not just carriers, but large enterprises.”

 

 

Read the original article here.

The Future of SDN and NFV

This post contains excerpts from Lumina Networks CEO Andrew Coward’s interview with TelecomTV.

The original source can be found here.

Network functions virtualization is on the rise, but not in the way that many thought. TelecomTV sat down with several industry leaders to discuss the future of NFV and SDN, and the role they will play in business technology and transformation in the years to come.

Lumina doesn’t believe in selling turnkey solutions, but also doesn’t believe in leaving the introduction and integration of its products to the CSP. We believe that we can serve as the catalyst for a company’s digital transformation initiatives, helping out on the heavy lifting while teaching our customers how to manage their network from the core to the edge and think outside the (hardware) box. By working closely with a CSP’s internal NetDev team to give them the tools they need to succeed, we set them up to win the long-term process of transformation without sacrificing short-term gains.

“[We] soon came to realize that our market could be divided into early adopters and laggards. CSPs’ likely willingness (or not) to engage properly in this way could be gauged by how diligently they approached things like a [request for proposal],” he says. “We found this created a self-selection process for us because the ones that asked the right questions were more receptive to us and more willing to “play catch” with some of the open source projects.”

However, some went the other way, saying, “We don’t need any help, we’re going to do everything ourselves and manage everything.” But inevitably some of those customers found it was a Herculean task to do all the integration, manage the new open source code, compile it, keep it reliable and keep up with the changes.

So some of those companies that had originally struck out on their own subsequently had a change of strategy and came back saying, “You know what, it doesn’t make sense for us to manage the relationship with open source or adding new features when you guys can do that.”

That turned out to be a viable business model for Lumina. “On one level we help with the integration, but what we really do is provide abstraction,” claims Andrew. “With SDN we’re trying to separate the business logic of the carrier – which defines the services – from the underlying hardware and from the vendors […].

“The great thing is that everything that gets built gets put back into the community and makes the job much easier the next time around.”

The abstraction layer also hopefully avoids the CSP customer accruing what’s known as ‘technical debt’. That occurs when devices are integrated directly or tactically (without an abstraction layer) creating a debt that will have to be paid back with interest in integration difficulties later.

“Five years ago we didn’t comprehend the need for CSP culture change to enable transformation,” says Andrew. “But things have changed greatly with SDNFV over the past four years especially. The industry has had to move from a science project through to ‘available in the lab’ and then to something that could be deployable. In the great scheme of things I think we’ve moved remarkably quickly on the open source side of things to make that happen.”

Most importantly, it’s turned out that the industry wasn’t – as it perhaps at first thought – introducing a new technical framework and, ‘Oh by the way, you might have to change how you do things a little’. It now looks as though we’re introducing new ways of engaging with customers, software, services and suppliers, with some necessary and useful technology coming along for the ride. Culture change, in other words, has become the prize, not the price.

There’s no doubt the process has been slower than thought. Why?

Andrew thinks “a lot of stuff got stuck in the labs and there was a feeling that everything had to be new.” In too many cases that appeared to mean white boxes needed to replace legacy hardware and there was a feeling that “before we can adopt this technology we need to put data centres in,” Andrew maintains.

“Actually, on the SDN side it’s predominantly all about the existing equipment. So not about replacing, but making the ‘physical’ equipment work with the new virtual environment,” he says.

Another reason software might stay in the lab might be a pervasive fear of ‘failure’ on the part of many CSPs, somewhat at odds with the IT “fail fast” credo. Allied to this can be a reluctance to upgrade the network – in sharp contrast to the constant upgrading undertaken by the hyperscale players many carriers would like to emulate.

Overcoming the upgrade phobia would help the new software ‘escape the lab’ on a more timely basis says Andrew.

“We’re looking for customers who have captured this technology and understand what it is they want to do. Typically they have stuff in the labs and they now want to get it out and they need a partner to help them do that. They don’t want to hand the task off to an outsourcing company because they’ll lose the learnings that they have and they won’t be in control of the outcomes. So they want to keep doing it but they know they need some expertise to help them with that process.”

 

Lumina Networks is proud to be a partner of the Linux Foundation. We will be exhibiting our industry-leading SDN Controller at the Open Networking Summit next week in Los Angeles, and look forward to meeting with attendees to help them learn how to get the most out of the network and start on the path toward full digital transformation and business digitization.

 

Lumina and NoviFlow demo SD-Core at Mobile World Congress

This week, we were proud to partner with NoviFlow at Mobile World Congress to demo the world’s first SD-Core networking solution in a use case that involved the siphoning of select classes of traffic to the newer, higher performance network.

What is SD-Core? SD-Core defines an architecture and technology set for deploying scalable MPLS/Segment Routing capabilities so that network providers can offer newer and more affordable services to subscribers with carrier-class reliability. The SD-Core approach uses a layered, software-based architecture with adaptable components based on open source rather than vendor-proprietary systems. The use cases are numerous, as SD-Core enables network providers to elevate their core networks to SDN while retaining the protocols, products and reliability of their existing networks. They’re also able to reduce costs by enabling traffic sharing between the switched and routed domains.

Both Lumina and NoviFlow were honored to share this groundbreaking technology with Mobile World Congress attendees!

“We believe in bringing open software networking out of the lab and into the live network, solving real problems for real customers,” said Andrew Coward, CEO of Lumina Networks. “It’s rare that a new solution is simultaneously lower cost, higher performance and ready for production, so working with NoviFlow to evolve the core of our MPLS customers’ networks is a real demonstration of the power of SDN used with white box technology. We’re delighted to showcase this solution at Mobile World Congress.”

“The dynamic partnership provides a compelling new end-to-end solution for MPLS and Segment Routing,” said Dominique Jodoin, President and CEO of NoviFlow. “NoviFlow is honored to join forces with a key player such as Lumina Networks to offer unprecedented feature/performance in commercial SDN solutions.”

The Components of Lumina SD-Control: Building Blocks for Network Transformation

There’s a lot of talk today among the world’s leading ISPs about the need to transform their networks to keep their businesses viable, and to keep them from becoming totally commoditized. Network transformation requires being able to bring new and innovative services to market quickly and ahead of the competition. That, in turn, depends on having a flexible network that not only supports but enables new services that couldn’t reasonably be done with a legacy network alone.

This is why there’s very high interest in software-defined networking (SDN) among carriers and ISPs. They all want to be able to deliver more lucrative services, more quickly, and in a cost-effective manner—and SDN can make this possible.

As exciting as the prospect of a software-defined network is to carriers and ISPs, they can’t just rip and replace the existing legacy network they are heavily invested in. They can’t simply build a new service provider network from scratch and, on day one, be full SDN. They need a strategy to get from legacy proprietary networking infrastructure to SDN without disruption of service to customers, and with an affordable investment plan.

A Bridge to the Future

This is the problem that Lumina Networks solves with our SD-Control solution. We offer a practical strategy to bridge that gap and move gracefully from a legacy to a software-defined network, at a pace determined by the carrier or ISP.

When we first looked at this problem, we realized that the standard components of SDN work well in the lab, but maybe not so much in the real world where carrier and ISP networks need to scale to global levels. We really wanted to put our motto – “Out of the Lab, Into the Network” – to work on this issue.

Lumina SD-Control is an SDN design philosophy to build networks that are capable of global scaling. We’ve taken common building blocks that people might not associate with SDN – some of those building blocks are protocols, others are barely more than a concept – and put them together in conjunction with common SDN network technologies to solve the scalability problem.

For example, our SD-Control solution builds on the solutions of large MPLS networks, while reducing costs and enabling modern SDN and NFV architectures. This is not merely theoretical, and not bound to the lab, but in actual operation today for very large ISP customers.

Let’s have a look at how we build SD-Control.

The Lumina SDN Controller

The first building block of SD-Control is the Lumina SDN Controller, or LSC. It’s based on OpenDaylight (ODL), which gives us some capabilities that other SDN controllers don’t have. We chose ODL as the basis for our controller because of its mature OpenFlow implementation and support for legacy network protocols like BGP. In addition, it’s a stable and fully capable NETCONF server and a NETCONF client. NETCONF and BGP are important when legacy, non-OpenFlow network components are requirements. The ability to manage configuration using the NETCONF protocol, in addition to being able to command the OpenFlow network, is an important distinction between LSC and other OpenFlow controllers. We at Lumina like the possibilities this opens up for us and our customers.

The fact that LSC can speak BGP – and, more specifically, Multi-Protocol BGP (MP-BGP), essentially the protocol that makes the Internet work today – gives LSC its universal translator role in the SD-Control solution. It’s a simple conclusion: bridge the gap between legacy networking and software-defined networks with a network controller that can control both the old network and the new. MP-BGP is key to controlling the legacy network.

The practical path forward is that the services delivered on the old network can be translated into equivalent functions or equivalent instructions for the SDN network. With LSC it’s possible to define, build and manage network services that traverse the legacy network and terminate on the SDN network, interworking between the old and the new. This provides the ability to move customers and data from the old network onto the new network in a largely seamless fashion, driven from a business perspective. That’s the power of the Lumina SDN Controller, and the practical sense of Lumina SD-Control: to create the bridge from legacy networking to SDN.
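
As a simple sanity check of that universal translator role, the routes the LSC learns from the legacy network over MP-BGP can be read back from the controller’s RESTCONF operational datastore. This is a hedged example against an OpenDaylight-based controller, with the address and credentials as placeholders.

# list the BGP RIBs (Adj-RIB-In / Loc-RIB per address family) learned from legacy peers
curl -X GET http://<controller-ip>:8181/restconf/operational/bgp-rib:bgp-rib \
  -H 'Accept: application/json' --user admin:admin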

Segment Routing

Another critical component of SD-Control is segment routing. This is what creates the ultimate scalability in the SDN. Though the concept of segment routing has been kicked around for a while, the technology itself is still cutting-edge. The big networking players, the Junipers and the Ciscos of the world, are still developing their technologies in this space. We just don’t hear a lot about practical implementations of segment routing.

Once again, this is something that Lumina Networks is taking out of the lab and putting into the network. Segment routing uses MPLS labels in a non-traditional way. Basically, segment routing removes any requirement for signaling state in the network, and it also decouples the network paths from the per-flow forwarding entries on the intermediate nodes, creating aggregate flows in the core. We can already do this in a programmable SDN network, and we have demonstrated that it works on a large scale. Segment routing solves the scalability issues inherent in OpenFlow in core networks, allowing SDN-based services to be implemented at global scale.

Path Computation Element Protocol (PCEP)

PCEP is a protocol that we have borrowed from the traditional IP/MPLS network world. It gives us a way to offload the traffic engineering computations from the network data-plane in a given network to an out-of-band engine. This engine runs the algorithms and analyzes how resources are used as data moves through the network in order to best organize MPLS paths based on specific business requirements. Once the optimal paths are decided, the Path Computation Element Protocol is used to give the forwarding solution details to the SDN control-plane, which in turn programs the network data-plane elements that actually move the data packets. This can be Multi-Protocol BGP for the legacy network, OpenFlow for the new network or some combination of both.

The notion of how to do this – the algorithms, the set of problems inside the PCE – is not new, but we’ve applied that technology to an SDN network design and built a new paradigm to solve an SDN problem. Using PCEP in this way is something new, and it’s something that SD-Control and our solution bring to the table.
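
In OpenDaylight-based deployments, the PCEP sessions and the LSPs reported or delegated by the network elements typically appear as a dedicated topology (pcep-topology by default), which can be inspected over RESTCONF; a rough sketch, with the controller address and credentials as placeholders:

# list PCCs and their reported/delegated LSPs as seen over PCEP
curl -X GET http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/pcep-topology \
  -H 'Accept: application/json' --user admin:admin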

The Sum of the Parts

So, to connect the dots here, we have a network controller that’s responsible for programming the forwarding on the commodity data-plane components – the white box switches. The network controller can converse with the legacy network using MP-BGP. Services are abstracted as data models in the controller, so designing, deploying and managing services that span the legacy network and the SDN network is natural and effective.

We have an MPLS segment routing forwarding architecture. SD-Control uses traditional MP-BGP signaling and discovery methods to interwork with legacy MPLS-based services (L3VPN, EVPN, L2VPN, VPLS, etc.) and dynamically stitch them together with OpenFlow-built MPLS-SR paths. No MPLS signaling protocol or session state is maintained in the SDN network data-plane. MPLS transport is common to all services, legacy and SDN.

The LSC uses out-of-band PCE to analyze and decide how MPLS paths use the legacy network and SDN data-plane topology. This enables SD-Control to utilize legacy network and SDN resources for transport based on business policy.

That is SD-Control. It’s not a switch feature or single software product by itself, but rather a philosophy of where decisions are made in the network and how those decisions become instructions for the data-plane. The decisions about technologies and how they are integrated are critical.

In theory, a lot of people agree that this is the way to do it: the network transformation, bridging the gap between legacy networks and SDN. There are people who say it can’t be done, but we can show you that it can be done. We’ve been developing this solution with our clients – global telecom leaders that are transforming their networks using Lumina SD-Control.

 

Linux Foundation Announces New Networking Fund for Open Source

Lumina Networks is proud to announce its Founding Gold-level membership in the Linux Foundation’s new networking fund. This is a major consolidation of LF’s networking projects into a larger umbrella networking group. While it won’t change the individual projects for OpenDaylight, ONAP, OPNFV and others, it will definitely raise the stature of networking as one of the Linux Foundation’s primary areas of focus.

As a key contributor and leader of OpenDaylight, I’d like to comment from Lumina Networks’ experience on how open source is playing a role in the generational change we are seeing in networking.

Open Source is Fast

Open Source is the quickest way to take ideas from concept to testing. In the past, ideas would be argued within the IETF or ETSI, sometimes for years, before vendors would create a compliant derivative. In the Open Source world, the whole approach is to write code first to prove concepts and try things that others can build upon. Bottom line, if you build your PoCs and ultimately production systems on Open Source platforms, you’re going to be moving quicker.

Open Source Changes the Balance of Power

Second, Open Source is a mechanism you can use to influence your vendors. If the vendor is supporting a platform, you should insist that platform be based on or compliant with the Open Source platform. If your vendor creates applications that run on a platform, you can insist that the application run on the Open Source platform and be portable to different distributions. This clarifies the work needed by the vendor and reduces the need for behemoth RFI/RFP documents to specify platform functions.

Open Source Attracts Innovation

Third and perhaps most important, Open Source will beckon a new class of innovators and technologists within your organization. Let’s face it, most of the “movers and shakers” in the industry are now involved in Open Source projects. When an Open Source community thrives, there’s no better way for the thought leaders in your organization to contribute their ideas at an industry-level and sharpen their skills as technologists.

All of these benefits – faster development cycles, increased influence on the vendor community and advancing the technology skills of your organization – are essential in order to compete in the new software-defined world. Think of Open Source first; it’s one of the best ways to get to where you are going quickly.
