
Lumina Networks at CableLabs Summer Conference

Lumina is proud to be attending and exhibiting at CableLabs Summer Conference 2018.

When CableLabs was founded, it strove to live at the crossroads of innovation and technical achievement. Now, 30 years later, the CableLabs Summer Conference has become one of the premier executive conferences for key players in the cable networking industry.

Lumina Networks will be exhibiting our implementation of the CableLabs PCMM initiative. The PacketCable Multimedia (PCMM) specification is a program initiative from CableLabs to support the deployment of general multimedia services by providing a technical definition of several IP-based signaling interfaces that leverage the core QoS and policy management capabilities native to DOCSIS versions 1.1 and greater.

Current PCMM deployments use expensive, legacy applications with limited enhancements, upgrades and support. Lumina’s SDN Controller provides a supported, low-cost alternative for managing PCMM using an open source and open partner solution. Lumina’s extensive work with the Linux Foundation and OpenDaylight has delivered proven SDN Controller projects that manage and control networks for service providers around the globe.

Key Building Blocks of SD-Control: Segment Routing and Path Computation Element

In my previous article earlier this year, I talked about the main components of the Lumina SD-Control (LSC) approach to software-defined networking (SDN). I mentioned that we use segment routing and a path computation element (PCE) in an innovative way to achieve ultimate scalability. This is what enables ISPs to grow their modern networks to a global scale in an efficient, cost-effective manner.

Now let’s go into a bit more depth about these key building blocks of LSC. I’ll start with a basic overview of network traffic engineering.

A basic Internet Protocol (IP) network (i.e., one without any sort of traffic engineering) moves data packets across the network in a very elementary way. Using basic algorithms, the network calculates how to get data from the source node to the destination node using the shortest route possible, “shortest” meaning the smallest number of intermediate devices or autonomous systems to traverse.
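
To make “elementary” concrete, here is a minimal sketch of a hop-count shortest-path computation over a made-up four-node topology. It is illustrative only, not code from any router:

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over hop count: every link costs 1."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        hops, node, path = heapq.heappop(queue)
        if node == dst:
            return hops, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (hops + 1, neighbor, path + [neighbor]))
    return None

# Made-up topology: A reaches D directly, or via B and C.
graph = {"A": ["B", "D"], "B": ["C"], "C": ["D"], "D": []}
print(shortest_path(graph, "A", "D"))  # (1, ['A', 'D']) -- the short, possibly congested route
```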

In theory the shortest path sounds good, but there can be numerous reasons why it’s an undesirable approach. For example, the quality of experience for the data might be poor, the economics of moving data over the shortest path might be too costly, or the shortest path might be congested while other routes sit idle. Therefore, the notion of steering traffic through the network based on some sort of policy was devised. This is traffic engineering (TE).

Steering traffic through a network


A traffic engineering policy can make more qualitative decisions about traffic flows. The theoretical utopia of traffic engineering is that you get to consume all the resources of the network, such as bandwidth. Traffic engineering tries to use all the resources, and all the variety of paths, in your network from point A to point B. Hence, you can get more from your investment in the network infrastructure, irrespective of the specific data flow patterns that would naturally occur.
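
To make the policy idea concrete, here is a sketch that reuses the shortest_path function from above and prunes links that lack a required amount of free bandwidth before searching. The link capacities are invented for illustration:

```python
def constrained_path(links, src, dst, min_free_gbps):
    """Drop links without enough headroom, then search what remains."""
    graph = {}
    for (a, b), free in links.items():
        if free >= min_free_gbps:
            graph.setdefault(a, []).append(b)
    return shortest_path(graph, src, dst)  # reuse the Dijkstra sketch above

# Hypothetical free capacity per directed link, in Gbps.
links = {("A", "D"): 0.2, ("A", "B"): 8, ("B", "C"): 9, ("C", "D"): 7}
print(constrained_path(links, "A", "D", min_free_gbps=5))
# (3, ['A', 'B', 'C', 'D']) -- the longer but uncongested route wins
```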

The predominant method for traffic engineering on IP networks today uses Resource Reservation Protocol, or RSVP, where the router at the source edge of the network runs an algorithm that considers these additional requirements based on the policy. You can tell it, “The data flow from A to B needs to traverse through a particular area of the network,” or “The entire path needs to have a certain amount of resources available.” It is literally a reservation protocol through the network. Every router in the network participates in this reservation request and confirmation system. The specific strategy for this form of traffic engineering is known as RSVP-TE.

RSVP adds a needed layer of intelligence to the network. It facilitates this conversation about which data traffic needs what resources and confirms that those resources are allocated. As it does this, it creates “state” in the network. The network is more aware of what data it’s moving around, but at the same time, the routers and the intermediate systems that comprise this network and the path through the network all have to keep track of what’s going on—kind of like an accounting ledger.
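
A toy model of that ledger (illustrative only; real RSVP-TE soft state carries far more detail) shows how every router along a path accumulates per-flow bookkeeping:

```python
# Each router on the path must remember every reservation that crosses it.
routers = {name: {} for name in ["R1", "R2", "R3"]}

def reserve(path, flow_id, mbps):
    for hop in path:
        routers[hop][flow_id] = mbps  # per-flow state on every hop

reserve(["R1", "R2", "R3"], flow_id="customer-42", mbps=300)
reserve(["R1", "R2"], flow_id="customer-43", mbps=100)
print(routers["R1"])  # {'customer-42': 300, 'customer-43': 100} -- state grows with flows
```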

Keeping track of the ledger consumes a lot of compute power on the network devices, which leads to problems in building very large networks using RSVP-TE. This approach requires expensive routers with the capability and capacity to track the reservations, and even then, there’s a scalability limit to what these devices can handle in a practical sense. The problem is magnified if there is a transmission failure in the middle of the network and all the devices must be notified, and the reservations resolicited. Such changes in the network cause performance issues.

The bottom line is, RSVP-TE – the most dominant form of traffic engineering for IP networks today – accomplishes some important things but it’s problematic at global or very large scale, to say the least. What’s more, it’s actually quite complicated to understand what’s happening at any particular point in the network—what just went wrong, why it’s not working.

Taking a fresh look at traffic engineering


With SDN, there is a different philosophy in building networks. It gives an opportunity to solve the same problems but in a different way. Segment routing provides a built-in answer to traffic engineering, eliminating the downsides but delivering on all the upsides of TE.

Segment routing is essentially the idea that the control plane – in our solution, the Lumina SDN Controller – is separate from the data plane. The LSC makes decisions upfront and then programs the data plane. In this case, the control plane decides the exact way a path should go through every segment of the network. It tells the ingress node to apply a stack, that is, a sequence of MPLS labels, and to output on a particular interface. Therein lies a segment routing path.

The label stack determines how the traffic will move from one node to the next. Every node in the network has a unique label. The network will automatically move the data through the prescribed sequence of nodes to get to the endpoint.
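
The forwarding behavior itself is mechanical. In this sketch, with made-up labels and node names, each node consumes the top label and passes the rest of the stack downstream:

```python
# Hypothetical mapping of node labels to node names.
label_to_node = {100: "edge-east", 200: "core-2", 300: "edge-west"}

def forward(stack):
    """Walk a packet through the network by consuming the label stack."""
    hops = []
    while stack:
        top, *stack = stack             # pop the top label...
        hops.append(label_to_node[top])  # ...and move to the node it names
    return hops

# The ingress pushes [100, 200, 300]; the path follows with no per-flow state.
print(forward([100, 200, 300]))  # ['edge-east', 'core-2', 'edge-west']
```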

This method uses a protocol whereby the nodes announce their labels to each other – “Hey, I am label 100” – and every device in the system hears that message. The next node that chooses a label will know which labels the other devices have chosen and will choose an as-yet-unclaimed label. If there is a conflict, the nodes will resolve it amongst themselves.
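
A toy version of that claim-and-avoid behavior (purely illustrative; the real announcements ride on routing-protocol extensions) might look like this:

```python
claimed = set()

def choose_label(node):
    label = 100
    while label in claimed:  # skip labels other nodes have already announced
        label += 100
    claimed.add(label)       # announce: "Hey, I am label <label>"
    return node, label

print([choose_label(n) for n in ["R1", "R2", "R3"]])
# [('R1', 100), ('R2', 200), ('R3', 300)]
```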

The important aspect here is that in segment routing, there is no state. There are no reservations or commitments that individual routers in the data plane have to keep track of and manage. This is an important characteristic that resolves the scalability problems of strategies like RSVP-TE. All the decisions, and the reasoning behind why decisions were made, are kept in the control plane, not the data plane. Moreover, that control plane can run on regular servers that are optimized for that type of thing. You don’t need specialized routers and switches that are intended to pipe data around.

The benefits of segment routing


Segment routing delivers the upside of traffic engineering with the ability to utilize all of the paths in your network, based on whatever policy the business wants to drive those decisions. You don’t have to buy fancy, expensive routers. In fact, you don’t have to use routers at all; the devices can be OpenFlow switches, the most bargain-basement data plane that you can buy. The story here is a cheap but efficient data plane.

The network can keep growing as business dictates. If you need to add more nodes and you run out of resources on your control plane, you can just buy a bigger server or add more memory to the same server. Upgrade from 64 GB of RAM to 128 GB, and you can double the size of your network. Scalability becomes a simple, predictable exercise.

Then finally there is the performance issue. When an event happens in a network that uses segment routing, there is no signaling protocol that has to reorganize which labels to use or where the nodes are. This eliminates the situation where something goes wrong in the network, you have to cascade a notice throughout the network that there is a problem, and every node has to make a new plan of what to do next. With segment routing, the control plane can understand what happened and decide the next steps on its own. Thus, the total amount of work required to resolve the issue is smaller, and recovery is therefore quicker.

That’s essentially traffic engineering with segment routing.

Making the path decisions


In the context of SD-Control, someone has to decide what these paths should be and what label stack defines each segment routing path. The path computation element, or PCE, determines that. The Lumina SDN Controller (LSC) and the other SD-Control components, specifically Path Manager, tell the network what the PCE has decided. Path Manager does this by sending the specifications to the LSC, and the LSC sends those details down to the forwarding plane, the OpenFlow switch or router.

Let me back up a step and talk about these other components of SD-Control: the Service Manager and the Path Manager. The Service Manager is where the business policy is executed: What are the rules? What are the different ways that we can define how data moves through the network? Service Manager handles all the constraints and the rules, and it makes requests to Path Manager to tell it to build a path with a particular set of constraints from point A to point B.

Path Manager then asks the PCE to determine all the ways that we can get from A to B meeting all these constraints. PCE runs its algorithm and returns a path, or maybe a collection of paths that all satisfy the need. Path Manager will filter it down to exactly one path. Path Manager keeps track of the path and tells the Lumina SDN Controller to program the ingress node and apply a particular label stack to it. Then the network naturally knows what to do from there on out.
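
Here is a rough sketch of that division of labor. All class and method names are hypothetical stand-ins for illustration, not the actual SD-Control APIs:

```python
class Controller:  # stands in for the Lumina SDN Controller (LSC)
    def program_ingress(self, path):
        print(f"program ingress {path[0]}: push label stack for {path}")

class PCE:
    def compute(self, src, dst, constraints):
        # In reality: path algorithms over the learned topology; here, canned answers.
        return [[src, "B", dst], [src, "C", dst]]

class PathManager:
    def __init__(self, pce, controller):
        self.pce, self.controller = pce, controller
        self.active = {}

    def build_path(self, src, dst, constraints):
        candidates = self.pce.compute(src, dst, constraints)
        chosen = candidates[0]                   # filter down to exactly one path
        self.active[(src, dst)] = chosen         # Path Manager keeps track of it
        self.controller.program_ingress(chosen)  # LSC programs the ingress node
        return chosen

class ServiceManager:
    def __init__(self, path_manager):
        self.path_manager = path_manager

    def request_service(self, src, dst, policy):
        # Business policy becomes constraints on the path request.
        return self.path_manager.build_path(src, dst, constraints=policy)

sm = ServiceManager(PathManager(PCE(), Controller()))
sm.request_service("A", "D", policy={"min_free_gbps": 5})
```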

Putting it all together, this is how the stack of components of SD-Control accomplishes traffic engineering.

Using this method, you can scale your control plane to whatever you need. By scale I mean compute power, memory, storage—it boils down to a generic compute problem. The data plane, meanwhile, only requires you to worry about the most basic things: how many links do I have in my network, and are they 1 Gb, 10 Gb or 100 Gb links? From that you can size your data plane and understand how much service you can deliver with it. It’s a pretty easy calculation that helps solve the higher-level business problem of how much to invest in the network to get a certain level of services delivered. It’s all tied to revenue, profitability and the other aspects of building networks as a business. It really simplifies the business dynamics.
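
That “pretty easy calculation” really is simple arithmetic. A sketch with invented link counts and planning figures:

```python
# Hypothetical link inventory: count of links at each speed (Gbps).
inventory = {1: 40, 10: 24, 100: 8}

total_gbps = sum(speed * count for speed, count in inventory.items())
print(total_gbps, "Gbps of raw data-plane capacity")  # 1080

# If services average 2 Gbps and you plan for 60% peak utilization:
print(int(total_gbps * 0.6 / 2), "services deliverable")  # 324
```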

The “interwork” protocol that enables transformation


There’s one more important aspect of SD-Control I should mention here. It’s Multiprotocol Border Gateway Protocol, specifically a mode of operation called link state, otherwise known as multiprotocol BGP-LS. This is the protocol that all the routers and switches use to announce their labels. They all communicate with each other, and in that communication process, the Lumina SDN Controller is speaking multiprotocol BGP-LS.

In order to get segment routing organized, you just have to turn on BGP-LS on all the devices and get them talking. BGP itself is not new either; it’s a mechanism that already exists on legacy networks. We can teach the SDN control plane to speak BGP-LS, and now it can automatically learn all the labeling and all the new information.
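
As an illustration of what that learning looks like from the controller side, OpenDaylight exposes the topology it hears over BGP-LS through its RESTCONF interface. A minimal sketch follows; the URL, topology name and credentials are typical defaults and will differ per deployment:

```python
import requests

# Typical OpenDaylight RESTCONF endpoint for an operational topology;
# port, topology name, and credentials are deployment-specific assumptions.
url = ("http://localhost:8181/restconf/operational/"
       "network-topology:network-topology/topology/example-linkstate-topology")

resp = requests.get(url, auth=("admin", "admin"))
resp.raise_for_status()

topo = resp.json()["topology"][0]
for node in topo.get("node", []):
    print(node["node-id"])  # routers learned via BGP-LS, with no manual inventory
```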

BGP-LS is used to do segment routing in the legacy network, and now it can be used in a software-defined network powered by SD-Control. This is incredibly important, as it’s what enables SD-Control to build services that traverse, or interwork between, the legacy networks and the new SDN networks.

That ability to interwork facilitates the transformation of these carrier networks, taking them into the future with SDN. It’s what we’re all about at Lumina Networks.

Using Open Source to Enable Network Evolution

Kevin Woods, VP, Product Management, Lumina Networks

The network core is ripe for disruption. While the industry migrates to 5G, SD-WAN, IoT, Mobile Edge Computing and other advanced networking technologies with exponentially increasing traffic levels, the core has remained relatively static.

The economics and deployment approach of existing core network technologies will not meet this coming demand. “SD-Core”* is the application of software-defined networking technology and its use cases to meet the scalability and service agility requirements that service providers now have.

Lumina Networks’ unique approach to SD-Core* defines a carrier-grade architecture and technology set for deploying scalable MPLS and segment routing based networks using SDN and white box technologies, enabling network providers to evolve their core networks to SDN while retaining the protocols, products and reliability of their existing networks.


*Note: SD-Core is now known as SD-Control.

Original article posted by TelecomTV. You can read it here.

Verizon Ventures CEO Center Stage: Andrew Coward, President and CEO of Lumina Networks

Original article posted on the Verizon Ventures blog. Read it here.

What was the inspiration for starting Lumina Networks?

Finding disruptive technology is always exciting, and never more so than when disruption has the ability to completely change an industry, as in the case of Software Defined Networking (SDN). Until Lumina Networks came to the market with new OpenDaylight (ODL)-based solutions, the networking equipment business had become staid and boring. Sure, networking vendors were releasing ever bigger and faster products, but the cost points weren’t coming down fast enough, and unlike everything else in the data center, all the equipment was proprietary and vendor lock-in was the norm. The proprietary nature of this networking equipment wasn’t cosmetic. It impeded the ability of large enterprises and telecom operators to digitize and automate their processes and networks, because each hardware vendor had its own exclusive set of tools and management methods that didn’t work with those of other vendors.

To address this, over the last four years the trifecta of SDN, network function virtualization (NFV) and open source has radically disrupted this ecosystem, bringing the same disruption to networking that standardized servers, Linux and virtualization brought to compute.

So when open source arrived in the networking arena, we knew (at Brocade) that there was an opportunity to lead the industry by packaging and productizing the leading SDN open source project, OpenDaylight. Four years ago, Brocade released its first commercial OpenDaylight distribution and built a series of companion applications and services.

Rolling the clock forward three years (to August 2017), we formed Lumina Networks as a spin-off from Brocade to receive these open source SDN controller assets and take advantage of this industry paradigm shift. The opportunity came about in a unique way, following the Brocade/Broadcom acquisition, in which all of Brocade’s business units were sold, leaving Broadcom with Brocade’s Fibre Channel assets.

It was clear from Brocade’s customer base (Verizon included) that there was a passion to make sure that the SDN controller would end up in an independent company and not be swallowed by a traditional networking vendor. As a consequence, we received a huge amount of support from these customers as we went through the Brocade spin-out process.

Your background is focused on building, growing and managing products for a variety of networking organizations. Can you tell us about the problem Lumina Networks aims to solve?

It was clear from the outset that many service providers and large enterprises are committed to using open source in their networks but need support to take projects out of their labs and into live deployments. We chose the OpenDaylight controller as the base for our software because it has the unique ability to bring SDN control to existing optical and IP networks, to virtual network functions, and to white-box deployments.

While most existing SDN deployments have focused on using overlay technology, which assumes the existing network is already provisioned and working (think Contrail or SD-WAN), Lumina set out to bring SDN control to existing services such as E-Line and E-Tree, and enable these existing services to be incorporated with new virtual network functions and white boxes.

Congrats on your recent funding round! How will your new fund help Lumina Networks?

We are very happy to be working with Verizon Ventures, who have been very supportive through the entire divestiture process and through this funding round. We are using these new funds to further package and productize OpenDaylight and our applications so our SDN controller can reach a wider set of customers. We have strong interest (and customers) in Europe and Japan, where we intend to focus our sales and implementation services.

Can you tell us about your growth in the past year?

Since we spun out of Brocade in August 2017, we’ve secured major contracts with two large US operators (including Verizon), and a large operator in Asia. We’ve also worked with a number of web-scale companies including Snapfish to automate their data centers. On the product side, we’re now on our third OpenDaylight release, and have productized around a number of solutions including SD-Core (enabling white-box deployment in MPLS networks), Kubernetes and legacy network integration.

What big trends is Lumina Networks following in open source software and SDN?

OpenDaylight is now part of the Linux Foundation which includes other significant open source projects including ONAP, Kubernetes and OPNFV (to name just a few). As networks move towards virtualization and open source, and away from end-to-end proprietary solutions, it’s important that Lumina fits into this larger ecosystem. To this end, we’re building SDN connectors into many of these projects.

It’s now been five years since the industry started down the SDN and NFV journey, and it’s only now that we’re starting to see real deployments at scale. Lumina is uniquely positioned to take advantage of this shift with an industry-accepted open source controller and a ready team to bring these projects out of the lab.
