Neglected 5G Factors: How SDN will Enable Brownfield Deployments

There’s a lot to consider in a 5G service roll-out, but a few key areas are often overlooked. Lumina Networks feels a responsibility to the community to illuminate the truth behind 5G transformation and to highlight these overlooked “secrets” that can make or break your 5G strategy. The factors you’ll read about in this blog series have a drastic impact on a service provider’s digital transformation – they are the software-defined networking components that play a critical role in making 5G services possible and accelerating deployments.

As an industry, we agreed long ago that the agility needed to run 5G services will come from a more software-based network. But while virtualized components and functions are powerful tools for enabling flexibility, we need to deploy 5G in brownfield networks, not greenfield ones.

It’s with this in mind that the first factor requiring more consideration is the role that legacy, purpose-built infrastructure needs to play in performing the functions necessary for 5G services. We call this use case SDN Adaption. Adaption involves providing a common control plane for different types of networks. In network services prior to 5G, it was acceptable to provision and configure each part of the network separately, taking days or even weeks. You can understand why this area is often overlooked in the market conversation: large vendors with a lot at stake, and substantial marketing budgets, prefer to keep it that way.

By necessity, 5G networks will be “intent-based” networks where the end-to-end service will be defined in a high-level language such as TOSCA (Topology and Orchestration Specification for Cloud Applications). The instantiation or configuration of the network functions beneath that, whether virtual or physical, will be automated. Closed-loop operational monitoring will help assure that the service meets the pre-defined SLA.
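To make the closed-loop idea concrete, here is a minimal, hypothetical Python sketch of an SLA watch loop. The slice name, latency target, and the telemetry and remediation stand-ins are illustrative assumptions only; in a real deployment the measurement would come from network telemetry and the remediation step would be a call to the orchestrator or SDN controller.

```python
import random
import time

SLA_MAX_LATENCY_MS = 10.0   # hypothetical latency target for one network slice
CHECK_INTERVAL_S = 5        # how often the loop re-evaluates the SLA

def measure_latency_ms(slice_id: str) -> float:
    # Stand-in for real telemetry (probes, streaming telemetry, etc.);
    # here we simply simulate a measurement.
    return random.uniform(5.0, 15.0)

def remediate(slice_id: str) -> None:
    # Stand-in for remediation, e.g. asking the SDN controller to re-route
    # traffic or scale out a virtual network function.
    print(f"SLA violated on slice {slice_id}; requesting reconfiguration")

def closed_loop(slice_id: str, iterations: int = 3) -> None:
    # Compare the observed metric against the intent's SLA and trigger
    # remediation whenever the target is missed.
    for _ in range(iterations):
        if measure_latency_ms(slice_id) > SLA_MAX_LATENCY_MS:
            remediate(slice_id)
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    closed_loop("example-embb-slice")
```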

The common control plane needed to configure and monitor the network elements works in two different ways.

Container Orchestration for Virtualized Applications

The services behind 5G will be “cloud-ready”; that is, they will be container-based and leverage microservices. They will also need to be orchestrated (see our blog on what it means to be cloud ready). OpenDaylight’s Container Orchestration Engine (COE) project is a vital tool in orchestrating the network components for container-based applications. The COE project, led by Lumina’s own Prem Sankar Gopannan, is designed to provide L2 and L3 networking between containers, or between containers and virtual machines, in Kubernetes-managed environments. COE includes interfaces to KVM, containers, and OpenStack (via Kuryr). A northbound CNI plugin is available for bare-metal containers, which is important for micro-datacenters where compute resources are scarce. As such, COE can support a variety of use cases where cloud-based networking is required, such as 5G services networks.
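COE’s own implementation lives in OpenDaylight, so the sketch below is not COE code. It is only a minimal illustration, under stated assumptions, of the CNI contract that any Kubernetes networking plugin must speak: the container runtime executes the plugin with a command and container details in environment variables and the network configuration as JSON on stdin, and the plugin replies with a JSON result describing the interface and address it set up. The hard-coded address and the 0.4.0 version string are placeholders.

```python
#!/usr/bin/env python3
"""Minimal illustration of the CNI plugin contract (not COE's implementation)."""
import json
import os
import sys

def main() -> int:
    command = os.environ.get("CNI_COMMAND", "")
    conf = json.load(sys.stdin)  # network configuration supplied by the runtime

    if command == "VERSION":
        json.dump({"cniVersion": "0.4.0",
                   "supportedVersions": ["0.3.1", "0.4.0"]}, sys.stdout)
        return 0

    if command == "ADD":
        # A real plugin would create a veth pair, move one end into the
        # namespace named by CNI_NETNS, ask IPAM for an address and program
        # the fabric (this is where a controller such as ODL comes in).
        result = {
            "cniVersion": conf.get("cniVersion", "0.4.0"),
            "interfaces": [{"name": os.environ.get("CNI_IFNAME", "eth0")}],
            "ips": [{"version": "4",
                     "address": "10.11.0.2/24",   # placeholder address
                     "gateway": "10.11.0.1"}],
        }
        json.dump(result, sys.stdout)
        return 0

    if command == "DEL":
        # A real plugin would tear down the interface and release the address.
        return 0

    return 1  # unknown command

if __name__ == "__main__":
    sys.exit(main())
```

A plugin of this shape is referenced from a configuration file under /etc/cni/net.d/ on each node, and the kubelet then invokes it for every pod it creates or deletes.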

Tying Existing Routers to 5G Services

While COE takes care of the future, what about the past? That is, what about equipment that is already installed, or VNFs that are based on legacy network operating systems? There’s good news on this front: OpenDaylight, and now COE specifically, supports the NETCONF control interface that is common on today’s hardware and software routers. NETCONF is a stateful control interface designed to allow an external controller to manage the configuration of routers. It is optimized for routers that publish YANG information models and is a commonly supported interface on Cisco, Juniper and other types of switches common in 5G fronthaul and backhaul networks.
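To give a feel for what driving that NETCONF interface looks like from a client, here is a short sketch using the open-source ncclient Python library rather than OpenDaylight’s own NETCONF southbound plugin (which does the equivalent work inside the controller). The device address, credentials and interface description are placeholders, and the payload assumes the device exposes the standard IETF ietf-interfaces YANG model; vendor-specific models would change the XML accordingly.

```python
from ncclient import manager

# Placeholder device details; a real deployment would pull these from inventory.
DEVICE = dict(host="192.0.2.10", port=830, username="admin",
              password="admin", hostkey_verify=False)

# Example edit-config payload following the IETF ietf-interfaces YANG model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0/0</name>
      <description>5G backhaul link (example)</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(**DEVICE) as m:
    # Read back the current running configuration.
    print(m.get_config(source="running").xml[:200])

    # Push the YANG-modeled change; prefer the candidate datastore and an
    # explicit commit when the device advertises that capability.
    if ":candidate" in m.server_capabilities:
        m.edit_config(target="candidate", config=CONFIG)
        m.commit()
    else:
        m.edit_config(target="running", config=CONFIG)
```

When OpenDaylight sits in the middle, the same kind of edit is expressed through the controller’s NETCONF mount points instead of a direct session to the router, which is what lets a single control plane span many device types.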

Putting these two capabilities together, it is possible to orchestrate both the virtual and physical network elements to create the network slices and intent-based services necessary for 5G. OpenDaylight is a particularly well-suited controller for 5G networks because it has both COE and NETCONF support, as well as a variety of other common control interfaces. This type of multi-cloud and multi-protocol support is absolutely essential for leveraging the existing network for 5G services.

Recognizing that this type of virtualized network architecture is the future, our major tier-1 CSP partners are already aligning to it. In fact, AT&T has been working on this for several years.

In my next blog, we will delve into a common 5G use case that depends upon this type of converged network architecture.

The Future of SDN and NFV

This post contains excerpts from Lumina Networks CEO Andrew Coward’s interview with TelecomTV.

The original source can be found here.

Network functions virtualization is on the rise, but not in the way that many thought. TelecomTV sat down with several industry leaders to discuss the future of NFV and SDN, and the role they will play in business technology and transformation in the years to come.

Lumina doesn’t believe in selling turnkey solutions, but it also doesn’t believe in leaving the introduction and integration of its products to the CSP. We believe that we can serve as the catalyst for a company’s digital transformation initiatives, helping with the heavy lifting while teaching our customers how to manage their network from the core to the edge and think outside the (hardware) box. By working closely with a CSP’s internal NetDev team to give them the tools they need to succeed, we set them up to win the long-term process of transformation without sacrificing short-term gains.

“[We] soon came to realize that our market could be divided into early adopters and laggards. CSPs’ likely willingness (or not) to engage properly in this way could be gauged by how diligently they approached things like a [request for proposal],” he says. “We found this created a self-selection process for us, because the ones that asked the right questions were more receptive to us and more willing to ‘play catch’ with some of the open source projects.”

However, some went the other way, saying, “We don’t need any help, we’re going to do everything ourselves and manage everything.” But inevitably, some of those customers found it was a Herculean task to do all the integration, manage the new open source code, compile it, keep it reliable and keep up with the changes.

So some of those companies that had originally struck out on their own subsequently had a change of strategy and came back saying, “You know what, it doesn’t make sense for us to manage the relationship with open source or adding new features when you guys can do that.”

That turned out to be a viable business model for Lumina. “On one level we help with the integration, but what we really do is provide abstraction,” claims Andrew. “With SDN we’re trying to separate the business logic of the carrier – which defines the services – from the underlying hardware and from the vendors […].

“The great thing is that everything that gets built gets put back into the community and makes the job much easier the next time around.”

The abstraction layer also, hopefully, helps the CSP customer avoid accruing what’s known as ‘technical debt’. That occurs when devices are integrated directly or tactically (without an abstraction layer), creating a debt that will have to be paid back, with interest, in later integration difficulties.

“Five years ago we didn’t comprehend the need for CSP culture change to enable transformation,” says Andrew. “But things have changed greatly with SDNFV over the past four years especially. The industry has had to move from a science project through to ‘available in the lab’ and then to something that could be deployable. In the great scheme of things I think we’ve moved remarkably quickly on the open source side of things to make that happen.”

Most importantly, it’s turned out that the industry wasn’t – as it perhaps at first thought – introducing a new technical framework and, ‘Oh by the way, you might have to change how you do things a little’. It now looks as though we’re introducing new ways of engaging with customers, software, services and suppliers, with some necessary and useful technology coming along for the ride. Culture change, in other words, has become the prize, not the price.

There’s no doubt the process has been slower than expected. Why?

Andrew thinks “a lot of stuff got stuck in the labs and there was a feeling that everything had to be new.” In too many cases that appeared to mean white boxes needed to replace legacy hardware and there was a feeling that “before we can adopt this technology we need to put data centres in,” Andrew maintains.

“Actually, on the SDN side it’s predominantly all about the existing equipment. So not about replacing, but making the ‘physical’ equipment work with the new virtual environment,” he says.

Another reason software might stay in the lab might be a pervasive fear of ‘failure’ on the part of many CSPs, somewhat at odds with the IT “fail fast” credo. Allied to this can be a reluctance to upgrade the network – in sharp contrast to the constant upgrading undertaken by the hyperscale players many carriers would like to emulate.

Overcoming the upgrade phobia would help the new software ‘escape the lab’ on a more timely basis says Andrew.

“We’re looking for customers who have captured this technology and understand what it is they want to do. Typically they have stuff in the labs and they now want to get it out and they need a partner to help them do that. They don’t want to hand the task off to an outsourcing company because they’ll lose the learnings that they have and they won’t be in control of the outcomes. So they want to keep doing it but they know they need some expertise to help them with that process.”

Lumina Networks is proud to be a partner of the Linux Foundation. We will be exhibiting our industry-leading SDN Controller at the Open Networking Summit next week in Los Angeles, and we look forward to meeting with attendees to help them learn how to get the most out of their networks and start on the path toward full digital transformation and business digitization.

Getting Started Upstream in ODL

By Allan Clarke and Anil Vishnoi

Getting started as an upstream contributor to OpenDaylight is not easy. The controller ecosystem is big, there are many projects, and there are millions of lines of code. What is a new ODL developer to do? Here is some pragmatic advice on where to begin to become an active contributor.

Fix Bugs

One of the easiest ways to get to know a code base is to start fixing bugs. Peruse the ODL bug list on Bugzilla, say for the NETCONF project. You want to find bugs that aren’t likely being worked on and are of limited scope (to match your limited understanding of the project). Ideally, bugs will have an owner assigned to indicate that they are actively being worked on, but that is not always a reliable indicator. In particular, someone may run across a bug, file a report, then jump into fixing it – and forget to assign it to themselves. This is most likely with the project contributors, so figure out who the project contributors are and look at the date of the report. If the reporter is a project contributor and the report is recent, that bug might already be in progress. You should read through the report and try to decide how much domain knowledge is needed – as a newbie, smaller is better.

Once you have selected a bug to work on, click on the “take” link and add a comment to the bug. If someone is already working on it, they should get a notice and respond. You can also give notice on the ODL mailing lists. You mainly want to avoid duplicate work, of course.

Review Patches

Reviewing patches is a great way to contribute. You can access patches via Gerrit, and we’ll use the NETCONF patches as an example. Doing code reviews is a great way to not only see existing code but also to interact with other developers.

  • If you have some domain expertise and know the code, you can review the functionality that is being pushed.
  • If you have neither of these, you can do the review based on Java best practices and good software engineering practice.

Address Technical Debt

ODL uses Sonar for analytics of the upstream project; here is an example for the NETCONF issues. Note that the ODL project has coding conventions, and SonarQube adds some best practices of its own. This list shows violations that should be addressed, and as a newbie you can work on these with little domain knowledge required. You can also see that code coverage varies across the NETCONF project, so adding NETCONF unit tests to boost the coverage in the weakest areas would be very helpful.

Sonar has a lot of interesting metrics. You can explore some of them starting here, including coverage, tech debt, etc. If you look at the Sonar dashboard, it will point out a lot of available work that does not require a large investment of time. Doing some of this work is a great step towards getting your first patch submitted.

Follow Best Practices

With well over a million lines of code and many contributors from many companies, the ODL project has quite a girth. To manage the code entropy, ODL has some best practices that you should become familiar with. These cover a diverse set of topics, including coding practices, debugging, project setup and workflow. We strongly recommend that you carefully read these. They will save you a lot of time and will pay back your investment quickly. They will help you skate through code reviews. These practices are really time-tested advice from all the ODL developers, so don’t ignore them.

Support Attribution

Attribution provides important insight into most, if not all, open source projects. Attribution allows stakeholders to see who is contributing what, from the individual up through sponsoring companies. It allows both a historical and a current view of the project. You can see an example of why attribution is illuminating here. You need to sign up for an ODL account, and a part of that process will be to associate yourself with a company (if applicable). You can also see breakdowns by author on the ODL Spectrometer.

That’s all for now. Happy trails, newbie.

Watch for Allan’s blog next week where he will share his Top 10 learnings as a new developer contributing to ODL.

Service Providers Are Placing Big Bets on Open Source Software Networking – Should You?

The service provider market is undergoing earth-shaking changes. These changes impact the way that applications for consumers and business users are deployed in the network and cloud as well as the way that the underlying data transport networks are built.

At Lumina, we’ve had the chance to work with several large service providers on their software transformation initiatives and get an up-close look at what works and what doesn’t. Three factors are particularly favorable in setting up successful projects for frictionless progress from the lab through prototype and proof of concept and into the production network.

Top-Down Advantage

Our first observation is that top-down initiatives and leadership work better than bottom-up or “grass roots” approaches. The experience of AT&T strongly illustrates the advantage. While a few of the hyperscale cloud providers had already launched open source initiatives and projects, the first big move among the established service providers was AT&T’s Domain 2.0, led by John Donovan in 2013. Domain 2.0 was not a precise description of everything that AT&T wanted to do, but through that initiative, the leadership created an environment where transformative projects are embraced and resistance to change is pushed aside.

While lack of top-down support is not a showstopper, it is extremely helpful for getting past obstacles and overcoming organizational resistance to change. If top-down support in your organization is lacking or weak, it is worth your effort to influence and educate your executives. When engaging executives, focus on the business value of open software networking. The advantages of open source in software networks include eliminating lock-in and spurring innovation. As our CEO, Andrew Coward, wrote in his recent blog, Why Lumina Networks? Why Now?: “Those who build their own solutions—using off-the-shelf components married to unique in-house developed functionality—build-in the agility and options for difference that are necessary to stay ahead.”

Although it may create a slower start, from what we have seen, taking the time to do early PoCs to onboard executives – so that they deeply attach to the value – is time well spent. Sometimes a slow start is just what is needed to move fast.

Collaboration through Open Source

The second observation is that industry collaboration can work. I read an interesting comment by Radhika Venkatraman, senior vice president and CIO of network and technology at Verizon, in her interview with SDxCentral. She said, “We are trying to learn from the Facebooks and Googles about how they did this.” One of the best ways to collaborate with other thought leaders in the industry is to join forces within the developer community at open source projects. The Linux Foundation’s OpenDaylight Project includes strong participation from both the vendor community and global service providers including AT&T, Alibaba Group, Baidu, China Mobile, Comcast and Tencent. Tencent, for one, has over 500 million subscribers that utilize their OpenDaylight infrastructure, and they are contributing back to the community as are many others.

A great recent example of industry collaboration is the newly announced ONAP (Open Network Automation Platform) project, which has its roots in work done by AT&T, China Mobile and others. And now we have a thriving open source developer community consisting of engineers and innovators who may not necessarily have collaborated in the past.

These participants see the benefits of collaboration not only in accelerating innovation but also in proving the software across many different types of environments and use cases, which increases reliability. Providers recognize that in their transformation to software networks there’s much they can do together to drive the technology, while standing out from each other through how they define and deliver services and through the experiences they create for customers.

What about your organization? Do you engage in the OpenDaylight community? Have you explored how ONAP can help you? Do you use OpenStack in your production network? And importantly, do you engage in the discussions and share back what you learn and develop?

Pursuit of Innovation

A third observation is the growing ability of service providers to create and innovate at levels not seen before. A prime example here is the work done by CenturyLink to develop a Central Office Re-architected as a Datacenter (CORD) platform, running on OpenDaylight, to deliver DSL services. CenturyLink used internal software development teams along with open source and Agile approaches to create and deploy CORD as part of a broad software transformation initiative.

One might have thought that you would only see this level of innovation at Google, Facebook or AWS, but at Lumina we are seeing this as an industry-wide trend. The customer base, business model, and operations of service providers vary widely from one to another based on their historical strengths and legacy investment. All have an opportunity to innovate in a way that advances their particular differences and competitive advantages.

Closing Thoughts

So we encourage you to get on the bandwagon! Don’t stand on the sidelines. Leadership, collaboration and innovation are the ingredients you need to help your organization drive the software transformation required to stay competitive. There is no other choice.

Stay tuned for my next blog where we will discuss some of the specifics of the advantages, development and innovation using open source.

From Open Source to Product: A Look Inside the Sausage-Making Factory

I’ve spent the last few months working closely with the OpenDaylight and OpenStack developer teams here at Brocade and I’ve gained a heightened appreciation for how hard it is to turn a giant pile of source code from an open source project into something that customers can deploy and rely on.

Kevin Woods

Not to criticize open source in any way – it’s a great thing. These new open source projects in the networking industry, such as OpenDaylight, OPNFV and OpenStack, are going to do great things to advance networking technology.

No, it’s just the day-to-day grind of delivering a real product that challenges our team every day.

On any given day, when we are trying to build the code, we’ll get new random errors, and in many cases it’s not immediately obvious where the problem is. In another test we’ll get unexpected compatibility problems between different controller elements. Again, somebody made a change and you can’t trace the problem. On some days, certain features will stop working for no known reason. Because of all this, we need to continuously update and revise test automation and test plans – that, too, is done daily.

When it comes to debugging a problem, unless you’re working with the source code and regularly navigating it to find problems, diagnosis is difficult. Some of the controller internals are extremely complex – for example, the MD-SAL. Digging into that to make either enhancements or fixes is not for the faint of heart.

The OpenDaylight controller is actually several projects that must be built separately and then loaded via Karaf.  This can be non-intuitive.

Another area of complexity is managing your own development train. If you’re going to have a non-forked controller that stays very close to the open source, you cannot risk being totally dependent upon the project (for the above reasons and others), so you basically have to manage a parallel development thread. At the same time, you find problems or want to make minor enhancements that you need in service but cannot contribute back to the project immediately (that takes some review and time). So you’re left with this problem of branching and re-converging all the time. Balancing the pace of this with the project’s pace is a challenge every day.

Then there’s all the maintenance associated with managing our own development thread, supporting internal teams, maintaining and fixing the documentation, and so on. Contributing or committing code back to the project, when needed, is not a slam dunk either. There is a commit-and-review process for that, and it takes some time and effort.

I think we’ll find the quality of the new Helium release to be significantly better than Hydrogen. Lithium will no doubt be an improvement over Helium, and so on. The number of features and capabilities of the controller will also increase rapidly.

But after going through this product development effort over the last few months, I have a real appreciation for the value that a commercial distribution can bring. And that’s just for the controller itself – what about support, training and so on? Well, I’ll leave those things for another blog.

Originally Published on the Brocade Community on 10/9/2014

Brocade SDN Solutions Help Customers to Move to an Open, Reliable and Scalable Architecture

Brocade today announces the availability of the Brocade SDN Controller based on OpenDaylight’s Boron release, which took place last week. The Brocade SDN Controller is built on an open source architecture and is fully tested, documented, and quality assured. We are the first vendor to commercially distribute OpenDaylight.

In the Brocade SDN Controller 4.0 release, the engineers have fine-tuned the upstream OpenDaylight code and provided various scripts and tools that make it easier for customers to use the high availability, backup, and restore functionality and to keep their environments up and running across geographies. This helps ensure that customer SLAs are not disrupted.

The engineers have also developed the smarts that let customers export the data file from an older version’s database and import it into the latest version while deploying the software. This eases transitions and upgrades to the latest versions with minimal error, thereby improving operational efficiency and productivity.

One of the greatest testaments to this is Brocade’s win at Arizona State University. Arizona State University, ranked one of the “most innovative schools” by U.S. News & World Report, has continued on its groundbreaking path by using software-defined networking tools developed by Brocade. At any given time at the university, there are 250 research projects that undergraduates, graduate students and post-doctoral students are working on. Many of the roughly 90,000 students on campus are also using mobile classroom tools.

Jay Etchings, Director of Research Computing at ASU, said in one of his interviews that they would like to see “at a moment’s glance” if there is a problem on the network, in real time. “Brocade was able to give us a package, and that package included some highly dense devices. Brocade was able to meet many, if not almost all, of our security requirements.” ASU has deployed Brocade’s MLXe Core Routers, SDN Controller, and Brocade Flow Optimizer, and has achieved significant success. “It’s a simple, easy-to-use interface, so that’s very nice for us,” Etchings said. “It requires less maintenance, because we can give folks access to devices they need and not have to manage their accounts.”

Read the Brocade press release for more details of the ASU win.

Brocade Flow Optimizer (BFO) helps improve business agility by streamlining SDN and existing network operations via policy-driven visibility and control of network flows. It provides distributed attack mitigation by programmatically sensing and clipping DDoS flows at router and switch ports. It delivers network-wide visibility into Layer 2 through Layer 4 traffic flows using sFlow and OpenFlow data collected from network devices, and it provides real-time control of flows (drop, meter, remark, mirror, forward normally) through OpenFlow rules pushed across the network for policy-driven, deterministic forwarding. Customers can automate the policies applied via an embedded UI or through open APIs.
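BFO’s own APIs are not shown here, but the kind of OpenFlow drop rule the paragraph describes is easy to illustrate. The sketch below uses the open-source Ryu controller framework purely as a stand-in, and the “suspect” source address is a placeholder for whatever an sFlow-based detector has flagged; a real mitigation would install and later remove such rules in response to detection events rather than unconditionally at switch connect.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class DropSuspectFlow(app_manager.RyuApp):
    """Install a drop rule for a placeholder 'suspect' source on every switch
    that connects. Illustration only; not Brocade Flow Optimizer code."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    SUSPECT_SRC = "198.51.100.7"  # placeholder address flagged by detection

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser

        # Match IPv4 traffic from the suspect source...
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=self.SUSPECT_SRC)

        # ...and install a flow entry with no instructions, which causes
        # matching packets to be dropped.
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=[])
        datapath.send_msg(mod)
```

Run with ryu-manager, this app pushes the drop rule to every OpenFlow 1.3 switch as it connects; metering or mirroring variants would swap the empty instruction list for meter or output instructions.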

Learn more about Brocade SDN Controller and Brocade Flow Optimizer.
Download free trial bits of Brocade SDN Controller and Brocade Flow Optimizer.

Originally Posted on the Brocade Community on 9/27/2016
