Neglected Factors in 5G: Network Slicing

It may seem odd to long-time networkers that “slicing” is discussed extensively in relation to 5G deployment. After all, haven’t we been using VLANs, VPNs, VRFs and a whole host of other ways to slice and dice networks for many years? Keep in mind that for established 4G networks there has been no easy way to create logical segments or divide the network into granular slices, even where the equipment is capable of it. While legacy services hosted MBB, voice, SMS and even MVNOs on the same infrastructure, that infrastructure was built in a way that was physically discrete, channelized or rigid – not the way a packet network, and thus 5G, does things. This monolithic approach will need to change for successful 5G deployments.

With packet networking, software-defined networking and network function virtualization coming into the network buildouts for 5G, network slicing is becoming an important part of service agility. The power of 5G is not just in higher data rates, higher subscriber capacity and lower latencies – it is in the fact that services and logical networks can be orchestrated together. This is critical for deployment of connected cars, IoT (Internet of Things), big data and sensor networks, emergency broadcast networks and all the amazing things that 5G will be able to support.

But there is an often-overlooked element of the 5G rollout: deploying slices over existing equipment. It is something that Lumina Networks is uniquely equipped to enable.

Most presentations that you will see on 5G (especially from the vendors) just assume that the provider will be buying all-new hardware and software for the 5G buildout. In reality, a lot of existing equipment will need to be used, especially components that are physically distributed and expensive or impossible to access. How will the new network slices traverse these legacy devices?

Network slices will extend from the mobile edge, continue through the mobile transport, including the fronthaul (FH) and backhaul (BH) segments, and terminate within the packet core, most likely in a data center. Naturally, this will involve transport and packet networking equipment in both the fronthaul and backhaul networks. The packet core will also likely include existing equipment. These systems will rely on the BGP protocol at the L3 transport networking layer, even when they are newer platforms.

The 3GPP organization’s definition of a slice is “a composition of adequately configured network functions, network applications, and the underlying cloud infrastructure (physical, virtual or even emulated resources, RAN resources etc.), that are bundled together to meet the requirements of a specific use case, e.g., bandwidth, latency, processing, and resiliency, coupled with a business purpose”.

Given the number of elements involved in a slice, sophisticated cloud-based orchestration tools will be required. It’s noteworthy that many of the optical transport vendors have acquired orchestration tool companies to build these functions for their platforms. However, since the start of the Open Network Automation Platform (ONAP) project at the Linux Foundation, it is clear that the service providers will demand open-source-based platforms for their orchestration tools. Rightfully so: an open solution to this problem reinforces operators’ desire to end vendor lock-in and enables more flexible networks built for rapid service creation.

The creation of a “slice” in a 5G network will often involve the instantiation of relevant, dedicated virtual network functions (VNFs) for the slice, and this is a key aspect of the work going on in the ONAP project. VNFs, in addition to participating as connectivity elements within the slice, will provide important functions such as security, policies, analytics and many other capabilities.


The good news here is that established open source projects such as OpenDaylight have the control protocols that will be used for legacy equipment, such as NETCONF, CLI-CONF, BGP-LS and PCEP, as well as the newer protocols that will be used for virtual L3 slicing such as COE and OVSDB.

Some of the network slicing capabilities that these protocols enable are:

  • Supporting end-to-end QoS, including latency and throughput guarantees
  • Isolation from both the data plane and orchestration/management plane
  • Policies to assure service intent
  • Failure detection and remediation
  • Autonomic slice management and operation
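
As a rough illustration of how one of these protocols brings existing equipment into the slicing picture, the sketch below uses OpenDaylight’s RESTCONF API to mount a legacy router into the netconf-topology so the controller can read and push configuration to it. This is a minimal sketch, not a production recipe: the controller URL, credentials and device details are placeholders, and the exact RESTCONF path differs between OpenDaylight releases.

```python
import requests

# Placeholder controller endpoint and credentials -- substitute your own.
ODL_URL = "http://odl.example.com:8181"
AUTH = ("admin", "admin")
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

def mount_netconf_device(node_id, host, port=830, username="admin", password="admin"):
    """Register a NETCONF-managed device in OpenDaylight's netconf-topology so
    the controller can read and write its configuration (for example, the
    transport settings that carry a slice across a legacy backhaul router)."""
    url = (f"{ODL_URL}/restconf/config/network-topology:network-topology"
           f"/topology/topology-netconf/node/{node_id}")
    payload = {
        "node": [{
            "node-id": node_id,
            "netconf-node-topology:host": host,
            "netconf-node-topology:port": port,
            "netconf-node-topology:username": username,
            "netconf-node-topology:password": password,
            "netconf-node-topology:tcp-only": False,
        }]
    }
    resp = requests.put(url, json=payload, auth=AUTH, headers=HEADERS)
    resp.raise_for_status()  # 201 on first mount, 200 if the node already existed
    return resp.status_code

if __name__ == "__main__":
    # Hypothetical legacy backhaul router that a slice must traverse.
    mount_netconf_device("legacy-bh-router-1", "192.0.2.10")
```

Once mounted, the device appears in the controller’s topology, and its configuration can be read and modified through the same northbound interface the orchestrator uses for the rest of the slice.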

ONAP utilizes the OpenDaylight controller as its “SDN-C”. More recently, ONAP has started a new project to develop an OpenDaylight-based SDN-R to configure radio transport services.

This blog series, “Neglected 5G Factors,” will address SDN-R in more depth in our next post. For now, be sure you’ve read the first of the series, “How SDN Will Enable Brownfield Deployments.”


SDN in the Real World

Over the last five years we’ve seen a lot of SDN technologies emerge – from the original OpenFlow switches to the myriad of overlay technologies such as SD-WAN, Contrail, Nuage and more, not to mention a number of open source projects.

One thing we can say about all of them, though, is that they have been divergent network strategies. That is to say, implementing any one of these technologies requires that you deploy a new network, either physical or virtual. They all require that this new domain be managed separately and distinctly from the existing network, and they all require a new data plane to work.

In parallel with the SDN movement has been the move to “digitize” the network – that is to say, to bring the same level of automation to networking that we’ve seen in the compute space, and preferably without having to rebuild the entire network.

SDN can and should play a key role in the move to automation, and yet most of these technologies complicate rather than simplify the network. Along the way, the effort to automate is lost while yet another proprietary system is deployed. For example, debugging a network connectivity problem now involves solving for both overlay and underlay issues – the new virtual network and the underlying network.

When we started Lumina Networks, we had a different vision for SDN – as a convergent, not divergent, technology. By selecting OpenDaylight as our open source controller, we made a conscious choice to enable SDN control of every domain across the network, and we worked with the community on building the necessary protocols and interfaces to allow control of almost anything in the network, even if the product was never designed with SDN in mind.

This focus on inclusion, and with it, multi-domain SDN control, is vital for anyone intent on automating their network and abstracting the business logic from the network control logic.  This is why OpenDaylight is controlling more devices than any other open source networking project and why it is a key part of transformational projects such as ONAP.

As an industry, we need to get over the distraction of divergent SDN solutions and stick to the task at hand, which is to radically reduce the time and cost associated with deploying, provisioning and running networks, through the converged automation of end-to-end services. Our customers know this well, which is why they choose Lumina to open their networks to SDN.

Lumina Networks Demonstrates SD-Core Enhancements Using Newly Released SDN Controller Version 8

Support for Multiple Domains Allows Users Greater Flexibility in Migrating to Segment Routing, NFV or Whitebox Switching

September 25, 2018, AMSTERDAM, Netherlands, Open Networking Summit — Today Lumina Networks introduced version 8 of the company’s SDN Controller, powered by OpenDaylight™. Lumina will also be demonstrating advances in its SD-Core software at the Open Networking Summit in Amsterdam this week.

The Lumina SDN Controller, along with the Flow Manager software, seamlessly defines and implements a single end-to-end service over a multi-domain environment, mixing traditional routers with white box switches and virtual network functions. To illustrate maximized flexibility, the demo leverages Cisco XRv routers via segment routing (BGP-LS and PCEP), White box switches via OpenFlow, Pseudowires and a multi-protocol path computation engine using PCEP. Integrity of the service is maintained through network changes and failures as BGP-LS, PCEP, OpenFlow and NETCONF interfaces are all supported simultaneously. Lumina’s unique capabilities leverage OpenDaylight’s ability to support multiple control protocols at the same time.
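
As a rough sketch of what supporting multiple control protocols looks like from an application’s point of view, the snippet below reads the IGP topology that the controller has learned over BGP-LS from OpenDaylight’s operational datastore; a path-computation application can use this view to select segment-routing paths that are then programmed via PCEP or OpenFlow. The controller address and credentials are placeholders, and “example-linkstate-topology” is OpenDaylight’s default BGP-LS topology name, which may differ in a given deployment.

```python
import requests

# Placeholder controller endpoint and credentials.
ODL_URL = "http://odl.example.com:8181"
AUTH = ("admin", "admin")

def get_linkstate_topology(topology_id="example-linkstate-topology"):
    """Fetch the nodes and links OpenDaylight has learned via BGP-LS.
    This is the topology a path-computation application works from when
    choosing segment-routing paths to program over PCEP."""
    url = (f"{ODL_URL}/restconf/operational/network-topology:network-topology"
           f"/topology/{topology_id}")
    resp = requests.get(url, auth=AUTH, headers={"Accept": "application/json"})
    resp.raise_for_status()
    topology = resp.json()["topology"][0]
    return topology.get("node", []), topology.get("link", [])

if __name__ == "__main__":
    nodes, links = get_linkstate_topology()
    print(f"BGP-LS learned topology: {len(nodes)} nodes, {len(links)} links")
```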

On the heels of the company’s recently closed Series A funding from AT&T, Verizon Ventures and Rahi Systems, Lumina Networks’ eighth platform release focuses on improved network scalability, stability, and tools. The Oxygen-based release includes increased cross-project code quality and an update to Karaf 4.1.3, giving developers improved agility and easier compliance with future upgrades.

“As OpenDaylight continues to mature, our job at Lumina is to deliver a carrier-grade, fully tested implementation of each OpenDaylight release,” says Andrew Coward, CEO of Lumina Networks. “We combine these releases with use cases and additional functionality that enables our customers to bridge the gap and bring SDN to both existing and new network equipment.”

In production with numerous tier one communication service providers around the world, the Lumina SDN Controller is leveraged in common use cases including container networking, service automation, SD-Core, intent-based networking, and 5G application placement. In addition to the SDN Controller and NetDev Services, the company offers specific applications such as Fabric Manager for Kubernetes®, Lumina Flow Manager for path computation and management, and VNF Manager for configuration of virtualized network functions.

Lumina Networks offers free downloads and support for trial at www.luminanetworks.com.

About Lumina Networks

Lumina Networks believes the future is open software networks where service providers are in control of their development. Lumina is the catalyst that brings open software networking out of the lab and into the live network. We develop open source platforms and provide NetDev Services to jointly deliver production systems and to transfer know-how in Agile Software Development methods. For more information, please visit www.luminanetworks.com.

© 2018 Lumina Networks. All Rights Reserved.

Lumina Networks and the Lumina networks logo and symbol are trademarks or registered trademarks of Lumina Networks, Inc. in the United States and in other countries. Other marks may belong to third parties.

Getting Started Upstream in ODL

By Allan Clarke and Anil Vishnoi

Getting started as an upstream contributor to OpenDaylight is not easy. The controller ecosystem is big, there are many projects, and there are millions of lines of code. What is a new ODL developer to do? Here is some pragmatic advice on where to begin to become an active contributor.

Fix Bugs

One of the easiest ways to get to know a code base is to start fixing bugs. Peruse the ODL bugs list on Bugzilla, say with the NETCONF project. You want to find bugs that aren’t likely being worked on and are of limited scope (to match your limited understanding of the project). Ideally, bugs will have an owner assigned to indicate that they are actively being worked on, but the owner field is not always a great indicator. In particular, someone may run across a bug, file a report, then jump into fixing it—and forget to assign it to themselves. This happens most often with the project contributors, so figure out who the project contributors are and look at the date of the report. If the reporter is a project contributor and the date is newish, then that bug might already be being worked on. You should read through the report and try to decide how much domain knowledge is needed—as a newbie, smaller is better.

Once you have selected a bug to work on, click on the “take” link and add a comment to the bug. If someone is already working on it, they should get a notice and respond. You can also post to the ODL mailing lists and give notice there. You mainly want to avoid duplicate work, of course.

Review Patches

Reviewing patches is a great way to contribute. You can access patches via Gerrit, and we’ll use the NETCONF patches as an example. Doing code reviews is a great way to not only see existing code but also to interact with other developers.

  • If you have some domain expertise and know the code, you can review the functionality that is being pushed.
  • If you have neither of these, you can do the review based on Java best practices and good software engineering practice.

Address Technical Debt

ODL uses Sonar for analytics of the upstream project. Here is an example for the NETCONF issues. Note that the ODL project has coding conventions, and SonarQube has some best practices. This list shows violations that should be addressed. As a newbie, you can work on these with little domain knowledge required. You can also see that code coverage varies across the NETCONF modules, so adding NETCONF unit tests to boost coverage in the weakest areas would be very helpful.

Sonar has a lot of interesting metrics. You can explore some of them, including coverage and tech debt, starting here. If you look at the Sonar dashboard, it will point out a lot of available work that does not require a large investment of time. Doing some of this work is a great step towards getting your first patch submitted.
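
If you prefer scripting to clicking around the dashboard, SonarQube also exposes its metrics over a web API. The sketch below assumes a SonarQube server recent enough to provide the /api/measures/component endpoint; the server URL and component keys shown are hypothetical, so check the actual dashboard for the real keys of the ODL modules you care about.

```python
import requests

# Hypothetical Sonar server and component keys -- replace with real values
# taken from the project's Sonar dashboard.
SONAR_URL = "https://sonar.example.org"
COMPONENTS = [
    "org.opendaylight.netconf:netconf-topology",
    "org.opendaylight.netconf:netconf-util",
]

def coverage(component):
    """Return the test-coverage percentage SonarQube reports for a component."""
    resp = requests.get(
        f"{SONAR_URL}/api/measures/component",
        params={"component": component, "metricKeys": "coverage"},
    )
    resp.raise_for_status()
    measures = resp.json()["component"]["measures"]
    return float(measures[0]["value"]) if measures else None

if __name__ == "__main__":
    # Sort components by coverage to see where new unit tests would help most.
    results = {c: coverage(c) for c in COMPONENTS}
    for comp, cov in sorted(results.items(), key=lambda kv: kv[1] or 0.0):
        print(f"{comp}: {cov}% coverage")
```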

Follow Best Practices

With well over a million lines of code and many contributors from many companies, the ODL project has quite a girth. To manage the code entropy, ODL has some best practices that you should become familiar with. These cover a diverse set of topics, including coding practices, debugging, project setup and workflow. We strongly recommend that you carefully read these. They will save you a lot of time and will pay back your investment quickly. They will help you skate through code reviews. These practices are really time-tested advice from all the ODL developers, so don’t ignore them.

Support Attribution

Attribution provides an important insight into most, if not all, open source projects. It allows stakeholders to see who is contributing what, from the individual up through sponsoring companies, and gives both a historical and a current view of the project. You can see an example of why attribution is illuminating here. You need to sign up for an ODL account, and a part of that process will be to associate yourself with a company (if applicable). You can also see breakdowns by authors on the ODL Spectrometer.

That’s all for now. Happy trails, newbie.

Watch for Allan’s blog next week where he will share his Top 10 learnings as a new developer contributing to ODL.

From Open Source to Product: A Look Inside the Sausage-Making Factory

I’ve spent the last few months working closely with the OpenDaylight and OpenStack developer teams here at Brocade and I’ve gained a heightened appreciation for how hard it is to turn a giant pile of source code from an open source project into something that customers can deploy and rely on.

Kevin Woods

Not to criticize open source in any way – it’s a great thing. These new open source projects in the networking industry, such as OpenDaylight, OPNFV and OpenStack, are going to do great things to advance networking technology.

No, it’s just the day-to-day grind of delivering a real product that challenges our team every day.

On any given day, when we are trying to build the code, we’ll get new random errors, and in many cases it’s not immediately obvious where the problem is. In another test we’ll get unexpected compatibility problems between different controller elements. Again, somebody made a change and you can’t trace the problem. On some days, certain features will stop working for no known reason. Because of all this, we need to continuously update and revise test automation and test plans – that is also done daily.

When it comes to debugging a problem, unless you’re working with the source code and regularly navigating it to find problems, diagnosis is difficult. Some of the controller internals are extremely complex, for example the MD-SAL. Digging into that to make either enhancements or fixes is not for the faint of heart.

The OpenDaylight controller is actually several projects that must be built separately and then loaded via Karaf.  This can be non-intuitive.

Another area of complexity is managing your own development train. If you’re going to have a non-forked controller that stays very close to the open source, you cannot risk being totally dependent upon the project (for the above reasons and others), and so you basically have to manage a parallel development thread. At the same time, you find problems or want to make minor enhancements that you need in service but cannot contribute immediately back to the project (that takes some review and time). So you’re left with this problem of branching and re-converging all the time. Balancing the pace of this with the project’s pace is a challenge every day.

Then there’s all the maintenance associated with managing our own development thread, supporting internal teams, maintaining and fixing the documentation, etc. Contributing or committing code back to the project, when needed, is not a slam dunk either. There is a commit and review process for that. It takes some time and effort.

I think we’ll find the quality of the new Helium release to be significantly better than Hydrogen.  Lithium will no doubt be an improvement over Helium and so on.   The number of features and capabilities of the controller will also increase rapidly.

But after going through this product development effort over the last few months, I have a real appreciation for the value that a commercial distribution can bring. And that’s just for the controller itself – what about support, training and so on? Well, I’ll leave those things for another blog.

Originally Published on the Brocade Community on 10/9/2014

Brocade SDN Solutions Help Customers to Move to an Open, Reliable and Scalable Architecture

Brocade today announces the availability of the Brocade SDN Controller based on OpenDaylight’s Boron release, which took place last week. The Brocade SDN Controller is built on an open source architecture and is fully tested, documented, and quality assured. We are the first vendor to commercially distribute OpenDaylight.

In the 4.0 release of the Brocade SDN Controller, the engineers have fine-tuned the upstream OpenDaylight code and provided various scripts and tools that make it easier for customers to use the high-availability, backup, and restore functionality and to keep their environments up and running across geographies. This ensures that customer SLAs are not disrupted.

The engineers have also developed the smarts to let customers export the data file from an older version’s database and import it into the latest version while deploying the software. This eases transitions and upgrades to the latest versions with minimal error, thereby improving operational efficiency and productivity.

One of the greatest attestations to this is Brocade’s win at Arizona State University. Arizona State University, ranked one of the “most innovative schools” by U.S. News & World Report, has continued on its groundbreaking path by using software-defined networking tools developed by Brocade. At any given time at the university, there are 250 research projects that undergraduates, graduate students and post-doctoral students are working on. Many of the roughly 90,000 students on campus are also using mobile classroom tools.

Jay Etchings, Director of Research Computing at ASU, said in one of his interviews that they would like to see “at a moment’s glance” if there is a problem on the network, in real time. “Brocade was able to give us a package, and that package included some highly dense devices. Brocade was able to meet many, if not almost all, of our security requirements.” ASU has deployed Brocade’s MLXe Core Routers, the SDN Controller, and Brocade Flow Optimizer and achieved significant success. “It’s a simple, easy-to-use interface, so that’s very nice for us,” Etchings said. “It requires less maintenance, because we can give folks access to devices they need and not have to manage their accounts.”

Read the Brocade press release to get more details of the ASU win.

Brocade Flow Optimizer (BFO) helps improve business agility by streamlining SDN and existing network operations via policy-driven visibility and control of network flows. It provides distributed attack mitigation by programmatically sensing and clipping DDoS flows at router and switch ports. It provides network-wide visibility into Layer 2 through Layer 4 traffic flows using sFlow and OpenFlow data collected from network devices, and delivers real-time control of flows (drop, meter, remark, mirror, normal forward) through OpenFlow rules pushed to the entire network for policy-driven, deterministic forwarding. Customers can automate the policies applied via an embedded UI or through open APIs.
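
To make the mechanism concrete, here is a rough sketch of how a policy engine could push a drop rule for a suspicious source through OpenDaylight’s OpenFlow plugin RESTCONF API. This is not Brocade Flow Optimizer’s actual implementation: the controller address, OpenFlow node ID, flow details and prefix are placeholders, and the exact RESTCONF path and payload details differ between OpenDaylight releases.

```python
import requests

# Placeholder controller endpoint and credentials.
ODL_URL = "http://odl.example.com:8181"
AUTH = ("admin", "admin")

def push_drop_flow(node_id="openflow:1", flow_id="drop-ddos-1",
                   src_prefix="203.0.113.5/32", priority=1000):
    """Install an OpenFlow rule that drops IPv4 traffic from a given source --
    the kind of rule a DDoS-mitigation policy might push after flow analysis
    flags that source as part of an attack."""
    url = (f"{ODL_URL}/restconf/config/opendaylight-inventory:nodes"
           f"/node/{node_id}/flow-node-inventory:table/0/flow/{flow_id}")
    flow = {
        "flow-node-inventory:flow": [{
            "id": flow_id,
            "table_id": 0,
            "priority": priority,
            "match": {
                "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
                "ipv4-source": src_prefix,
            },
            "instructions": {
                "instruction": [{
                    "order": 0,
                    "apply-actions": {"action": [{"order": 0, "drop-action": {}}]},
                }]
            },
        }]
    }
    resp = requests.put(url, json=flow, auth=AUTH,
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.status_code

if __name__ == "__main__":
    push_drop_flow()
```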

Learn more about Brocade SDN Controller and Brocade Flow Optimizer.
Download free trial bits of Brocade SDN Controller and Brocade Flow Optimizer.

Originally Posted on the Brocade Community on 9/27/2016
