Neglected Factors in 5G: Network Slicing

It may seem odd to long-time networkers that “slicing” is discussed extensively in relation to 5G deployment. After all, haven’t we been using VLANs, VPNs, VRFs and a whole host of other ways to slice and dice networks for many years? Keep in mind that for established 4G networks, there has been no easy way to create logical segments or divide the network into granular slices, even in cases where the equipment may be capable of it. While legacy services hosted mobile broadband (MBB), voice, SMS and even MVNOs on the same infrastructure, that infrastructure was built in a way that was either physically discrete, channelized or rigid – not the way a packet network, and thus 5G, would do things. This monolithic approach will need to be updated for successful 5G deployments.

With packet networking, software-defined networking and network function virtualization coming into the network buildouts for 5G, network slicing is becoming an important part of service agility. The power of 5G is not just in higher data rates, higher subscriber capacity and lower latencies – it is in the fact that services and logical networks can be orchestrated together. This is critical for deployment of connected car, IoT (Internet of Things), big data and sensor networks, emergency broadcast networks and all the amazing things that 5G will be able to support.

But there’s an often-overlooked element of the 5G rollout: slicing deployment over existing equipment. It is something that Lumina Networks is uniquely equipped to enable.

Most presentations that you will see on 5G (especially from the vendors) just assume that the provider will be buying all-new hardware and software for the 5G buildout. In reality, a lot of existing equipment will need to be used, especially components that are physically distributed and expensive or impossible to access. How will the new network slices traverse these legacy devices?

Network slices will extend from the mobile edge, continue through the mobile transport, including fronthaul (FH) and backhaul (BH) segments, and terminate within the packet core, probably within a data center. Naturally, this will involve transport and packet networking equipment in both the fronthaul and backhaul networks. The packet core will also likely involve existing equipment. These systems will rely on BGP at the L3 transport networking layer, even when they are newer platforms.

The 3GPP organization defines a slice as “a composition of adequately configured network functions, network applications, and the underlying cloud infrastructure (physical, virtual or even emulated resources, RAN resources etc.), that are bundled together to meet the requirements of a specific use case, e.g., bandwidth, latency, processing, and resiliency, coupled with a business purpose”.
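Read as a data model, that definition maps naturally onto a simple structure. Here is a rough Python sketch of a slice as a bundle of functions, resources, requirements and a business purpose; the field names and the example function names are illustrative, not a 3GPP schema:

```python
# Illustrative only: the 3GPP slice definition restated as a data structure.
from dataclasses import dataclass, field

@dataclass
class NetworkSlice:
    name: str
    network_functions: list           # e.g. ["AMF", "SMF", "UPF"]
    cloud_resources: list             # physical, virtual or emulated, RAN, etc.
    requirements: dict = field(default_factory=dict)
    business_purpose: str = ""

iot_slice = NetworkSlice(
    name="massive-iot",
    network_functions=["AMF", "SMF", "UPF"],
    cloud_resources=["edge-dc-1", "ran-cluster-7"],
    requirements={"bandwidth-mbps": 10, "latency-ms": 50, "resiliency": "n+1"},
    business_purpose="city-wide sensor connectivity",
)
print(iot_slice)
```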

Given the number of elements involved in a slice, sophisticated cloud-based orchestration tools will be required. It’s noteworthy that many of the optical transport vendors have acquired orchestration tools companies to build these functions for their platforms. However, since the start of the Open Network Automation Platform (ONAP) project at the Linux Foundation, it is clear that the service providers will demand open-source-based platforms for their orchestration tools. Rightfully so: an open solution to this problem reinforces operators’ desire to end vendor lock-in and enables more flexible networks built for service creation.

The creation of a “slice” in a 5G network will often involve the instantiation of relevant and dedicated virtual network functions (VNFs) for the slices and this is a key aspect of the work going on in the ONAP project. VNFs, in addition to participating as connectivity elements within the slice, will provide important functions such as security, policies, analytics and many other capabilities.


The good news here is that established open source projects such as OpenDaylight have the control protocols that will be used for legacy equipment, such as NETCONF, CLI-CONF, BGP-LS and PCEP, as well as the newer protocols that will be used for virtual L3 slicing such as COE and OVSDB.

Some of the network slicing capabilities that these protocols enable are listed below, followed by a sketch of how one such policy might be pushed through a controller’s northbound API:

  • Supporting end-to-end QoS, including latency and throughput guarantees
  • Isolation at both the data plane and the orchestration/management plane
  • Policies to assure service intent
  • Failure detection and remediation
  • Autonomic slice management and operation
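As a hedged illustration of how such a policy might be driven through a controller, the sketch below POSTs a per-slice QoS policy to a RESTCONF-style northbound API. The endpoint path and the payload model are hypothetical, not a real OpenDaylight YANG model:

```python
# Hypothetical sketch: pushing a per-slice QoS policy to an SDN controller's
# RESTCONF-style northbound API. URL path and payload model are illustrative.
import requests

CONTROLLER = "http://controller.example.net:8181"
SLICE_POLICY = {
    "slice-policy": {
        "slice-id": "iot-sensors-01",
        "max-latency-ms": 20,      # end-to-end latency target
        "guaranteed-mbps": 50,     # throughput floor for the slice
        "isolation": "hard",       # no sharing of queues with other slices
    }
}

resp = requests.post(
    f"{CONTROLLER}/restconf/config/slice-policies",  # hypothetical endpoint
    json=SLICE_POLICY,
    auth=("admin", "admin"),
    timeout=10,
)
resp.raise_for_status()
print("Slice policy accepted:", resp.status_code)
```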

ONAP utilizes the OpenDaylight controller as its “SDN-C”. More recently, ONAP has started a new project to develop an OpenDaylight-based SDN-R to configure radio transport services.

This blog series, “Neglected 5G Factors”, will address SDN-R in more depth in our next blog. For now, be sure you’ve read the first of the series, “How SDN will Enable Brownfield Deployments.”


Neglected 5G Factors: How SDN will Enable Brownfield Deployments

There’s a lot to consider in 5G service roll-out, but there are a few key areas that are often overlooked. Lumina Networks feels a responsibility to the community to illuminate the truth behind 5G transformation and to highlight these overlooked “secrets” which can make or break your 5G strategy. The factors you’ll read about in this blog series have a drastic impact on a service provider’s digital transformation – they are the software-defined networking components that play a critical role in making 5G services possible and accelerating deployments.

As an industry, we all agreed long ago that the agility needed to run 5G services will come from a more software-based network. While virtualized components and functions are powerful tools for enabling flexibility, we need to deploy 5G in brownfields, not greenfields.

It’s with this in mind that the first factor which requires more consideration is the role legacy, purpose-built infrastructure needs to play in performing the functions necessary for 5G services. We call this use case SDN Adaption. Adaption involves providing a common control plane for different types of networks. In network services prior to 5G, it was acceptable to provision and configure each part of the network separately, taking days or even weeks. You can understand why this area is often overlooked in the market conversation: large vendors with a lot at stake, and substantial marketing budgets, prefer to keep it that way.

By necessity, 5G networks will be “intent-based” networks where the end-to-end service will be defined in a high-level language such as TOSCA (Topology and Orchestration Specification for Cloud Applications). The instantiation or configuration of the network functions beneath that, whether virtual or physical, will be automated. Closed-loop operational monitoring will help assure that the service meets the pre-defined SLA.
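As a rough sketch of the intent idea (TOSCA itself is YAML-based; the structure and names here are hypothetical), a single high-level intent can be expanded automatically into per-segment configuration rather than configuring each part by hand:

```python
# Rough sketch of "intent-based" provisioning (names are hypothetical):
# a high-level service intent is declared once, and automation expands it
# into per-segment configuration stubs.

intent = {
    "service": "connected-car-slice",
    "sla": {"latency-ms": 10, "bandwidth-mbps": 100},
    "segments": ["fronthaul", "backhaul", "packet-core"],
}

def render_segment_config(intent, segment):
    """Expand the end-to-end intent into one segment's configuration stub."""
    return {
        "segment": segment,
        "qos-profile": f"{intent['service']}-qos",
        "latency-budget-ms": intent["sla"]["latency-ms"] / len(intent["segments"]),
        "reserved-mbps": intent["sla"]["bandwidth-mbps"],
    }

for segment in intent["segments"]:
    config = render_segment_config(intent, segment)
    print(config)  # in practice, pushed to an orchestrator or controller
```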

The common control plane needed to configure and monitor the network elements works in two different ways.

Container orchestration for virtualized applications


The services behind 5G will be “cloud-ready”; that is, they will be container-based and leverage microservices. They will also need to be orchestrated (see our blog on what it means to be cloud ready). OpenDaylight’s Container Orchestration Engine (COE) project is a vital tool in orchestrating the network components for container-based applications. The COE project, led by Lumina’s own Prem Sankar Gopannan, is designed to provide L2 and L3 networking between containers, or between containers and virtual machines, for Kubernetes-managed environments. COE includes interfaces to KVM, containers and OpenStack (via Kuryr). A northbound CNI interface is available for bare-metal-based containers. This is important for micro-datacenters where compute resources are scarce. As such, COE can support a variety of use cases where cloud-based networking is required, such as in 5G services networks.
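As a minimal, COE-agnostic illustration of the layer COE programs, the sketch below uses the standard Kubernetes Python client to list the pod addresses the cluster’s CNI plugin has assigned; it assumes a reachable cluster and a local kubeconfig:

```python
# Minimal sketch (not COE-specific): list the pod IPs the cluster's CNI
# plugin has assigned. A CNI backed by an SDN controller would be
# programming exactly this L2/L3 connectivity.
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.status.pod_ip}")
```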

Tying Existing Routers to 5G Services

While COE takes care of the future, what about the past? That is, what about equipment that is already installed, or VNFs that are based on legacy network operating systems? There’s good news on this front, as OpenDaylight, and now COE specifically, support the NETCONF control interface that is common on today’s hardware and software routers. NETCONF is a stateful control interface that was designed to allow an external controller to manage the configuration of routers. NETCONF is optimized for routers that publish YANG information models and is a commonly supported interface on Cisco, Juniper and other types of switches common in 5G fronthaul and backhaul networks.
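A minimal sketch of that interaction, using the ncclient Python library: the controller-side code connects over NETCONF, lists the YANG models the device advertises and pulls the running configuration. Host and credentials are placeholders; real payloads depend on the models the device publishes.

```python
# Sketch of a NETCONF session using ncclient. Host and credentials are
# placeholders for a lab device.
from ncclient import manager

with manager.connect(
    host="192.0.2.1", port=830,
    username="admin", password="admin",
    hostkey_verify=False,
) as m:
    # Discover which YANG models the router supports
    for cap in list(m.server_capabilities)[:5]:
        print(cap)

    # Retrieve the current running configuration
    running = m.get_config(source="running")
    print(running.data_xml[:500])
```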

Putting the above two capabilities together, it is possible to orchestrate both the virtual and physical network elements to create the network slices and intent-based services necessary for 5G. OpenDaylight is a particularly well-suited controller for 5G networks because it has both COE and NETCONF support, as well as a variety of other common control interfaces. This type of multi-cloud and multi-protocol support is absolutely essential for being able to leverage the existing network for 5G services.

Recognizing that this type of virtualized network architecture is the future, our major tier-1 CSP partners are already aligning to it. In fact, AT&T has been working on this for several years.

In my next blog, we will delve into a common 5G use case that depends upon this type of converged network architecture.



Don’t Be Afraid of Open Source

Open source is arguably one of the hottest trends in tech these days and in the networking space specifically. This week we heard about IBM’s huge acquisition of Red Hat, and recently we’ve seen deals around Microsoft and GitHub, Salesforce and MuleSoft, and Cloudera and Hortonworks. In the networking space specifically, we’ve seen the initiation of ONAP (the Open Network Automation Platform), OSM (Open Source MANO), OpenConfig and the maturation of OpenDaylight.

Amidst all this positive activity, I continue to find that trepidation around open source persists, especially among people in the networking space, where the open source trend is relatively recent. To help alleviate some confusion, I’d like to take on three common myths about open source.

Myth #1 – Open source is a “do it yourself” approach

While open source platforms put control in the hands of the user, this doesn’t have to mean “DIY”. There is a long list of vendors that can provide support and work upstream in the open source community on behalf of their customers. Lumina Networks, along with partners such as Cloudify and Amdocs, offers supported distributions.

However, for those that do want to keep more of their engineering in-house, open source communities are public efforts under public licenses, and DIY is an option for those that want to work in the community directly. Bottom line: YOU are in control of how you balance vendor support with home-grown engineering.

Myth #2 – Open source means giving away your intellectual property

Quite the opposite is true. Open source, in most cases, allows you to leverage a common platform and spend more of your engineering effort on proprietary software that is specific to your business. Being multi-vendor, open source platforms are naturally architected to support proprietary extensions and vendor- or service-specific interfaces.

Everybody involved in open source has some type of business interest in either selling software or services that can be differentiated. So, the open source community tends to work on common platform-oriented software that does not contain proprietary code. This is where an understanding of open source licensing can be helpful. Most open source licenses are oriented around balancing common platform development while allowing for service-specific extensions.

Myth #3 – Open source is a difficult business model

When people say this, they are usually referring to the over-simplified notion that open source software is “free”, so it is reasonable to ask: “how can anybody make any money?” On the vendor side, there are plenty of profitable business models that include services and support programs, value-added software development, and vendor-specific interfaces and testing for system integration. The recent headline exits for Red Hat and others testify to the potential value of the open source business model.

But I like to look at open source business models more from the engineering cost side of the balance sheet. In other words: how much would it cost your organization, in engineering effort, to develop the open source platform that you are using? Looking at open source in this way sheds a different light on the myth.

In Summary

Open source has a lot to offer network providers, vendors and the industry in general in terms of speeding up the development and deployment of new technologies such as 5G. In fact, open source may be the only good mechanism we now have to develop the platforms required for these next-generation networks.

So we at Lumina Networks encourage looking at open source from the developer’s point of view, and the benefits it can bring from that angle. Whether you are a developer or a consumer of networking software, get involved in the open source community and become a contributor to the emerging platforms that are going to carry us all into the future. And most definitely, don’t be afraid!

Best Startup Award 2018 – SDN NFV World Congress

Last week at the SDN NFV World Congress show we were honoured to receive the Best Startup award for 2018. I’d like to mention some of the reasons why we believe Lumina Networks was selected, and why we are a unique start-up company.


Lumina Networks is dedicated to a broader vision in the advancement of networking than just a set of new features or a new application. Lumina is founded on the idea that the convergence of network virtualization, cloud and DevOps will fundamentally change the way network technologies are envisioned, created and deployed.

In a virtualized or cloud environment, the network itself operates much like an application. The orchestration of the network will be closely related to the setup and arrangement of the applications and tenants. The network must dynamically accommodate application mobility and changes, especially when the applications and virtual network functions are distributed as in 5G and Mobile Edge Computing.

This requires a complete re-thinking of the way the network is architected and even how the network is managed and deployed. The days when IT and Network Engineering could operate autonomously are becoming history. The teams will need to work together closely to make sure that application performance, data integrity and service continuity are all considered as inter-related and co-equal.

A key element in achieving this needed harmony is the use of open source platforms. In particular, Lumina Networks is the industry’s leading supplier of the OpenDaylight Controller, and Lumina is the first networking company to focus on a pure-play open source business model. Verizon and AT&T understand the need for this model to succeed and are investing in Lumina accordingly, in addition to being two of our largest customers.

Open source communities are a mechanism to foster the needed cooperation between vendors, customer network engineering teams and customer IT departments. Furthermore, because open source communities focus on developing the actual code, cooperation results in usable software that can be tested, trialled and ultimately deployed.

Lumina Networks is a unique new company that brings all of these dynamics together into a set of products and services that help our customers begin their journey to next-generation software networking. Lumina’s open-source-based platform provides the basis for the deployment of SDN and NFV with vendor support and without vendor lock-in. Our upstream open source community leadership and contributions ensure that our customers get the benefits of multi-vendor platforms. Finally, Lumina’s NetDev services help our customers integrate the technology into their existing and future network environments while we transfer to them the skills and knowledge that make them self-sufficient.

Thanks again to our friends in the analyst community who served as judges for the contest and our friends at Layer123 for the opportunity and venue.

What Does Cloud-Ready Mean?

I’m sure you’ve heard the question, “Is it cloud-ready?” Is there any real substance to this question? Or is cloud-ready just another marketing term used by the vendors? At Lumina Networks, we believe there’s substance behind the cloud-ready question and a lot to consider. I’d like to propose a longer form of the same question. In other words: can your application be deployed and automated as a virtual network function, independent of the underlying infrastructure?

This is a critical question, not only for the optimization of user applications and services; true cloud-readiness also allows us to evolve or swap out the underlying infrastructure with minimal disruption to the services – an often overlooked sub-point. So, let’s dissect this heavily loaded question into some basic parts.

Getting Cloud Ready

The Cloud Native Computing Foundation has a three-part definition for applications that can operate cloud-native.

  1. Containerized – Each part (applications, processes, etc.) is packaged in its own container. This facilitates reproducibility, transparency and resource isolation.
  2. Dynamically orchestrated – Containers are actively scheduled and managed to optimize resource utilization.
  3. Microservices-oriented – Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.

An infrastructure built in this way is intrinsically agile and can be scaled on demand. And, in this cloud-native world, some of the clear distinctions between cloud applications and networking applications, or virtual network functions (VNFs), disappear. Network elements run like applications and are often deployed on the same compute infrastructure as the applications they serve.

As we consider moving our applications and network functions toward cloud-readiness, there are myriad options in terms of surrounding tools and infrastructure to make cloud-readiness possible. In addition, there is a complex maze of compatibilities and incompatibilities between tools and versions. And all of this is evolving over time, making the task even more daunting. Project and tool names include OpenStack, Kubernetes, Lifecycle Orchestration, ONAP and OSM, just to name a few, and within each of these there are numerous projects and options. It’s no wonder that the SDN and NFV market is nowhere near the size we thought it would be by now.

There are, however, parts of these architectures where the tools are more mature and stable and can be a sensible choice. This week, at the SDN and NFV World Congress show in The Hague, we were happy to announce our solution partnership with Cloudify. Lumina and Cloudify can, together, help you start your cloud-ready journey with relatively low complexity and risk. Even better, buying into the platforms that Lumina and Cloudify together support gets you on the road to what will be the eventual open source standards. OpenDaylight, which Lumina uses as its open source platform, and ONAP, which is in part based on Cloudify’s open source orchestration initiative, include some of the more stable components of a cloud-ready architecture.

At least one tier-one provider has deployed a joint Lumina and Cloudify solution, with the understanding that this step will lead in the right direction toward their long-term architecture. And there’s more to come. In the next few weeks, we will be publishing more information about practical steps that you can take now to move toward cloud-ready applications.

Cloud-readiness is an important question and the answer can be complex. Good news – there are some low-risk moves you can start today that will get you going in the right direction. Lumina and Cloudify can help.

SDN in the Real World

Over the last five years we’ve seen a lot of SDN technologies emerge – from the original OpenFlow switches, to the myriad of overlay technologies such as SD-WAN, Contrail, Nuage and more, not to mention a number of open source projects.

One thing we can say about all of them, though, is that they have been divergent network strategies. That is to say, implementing any one of these technologies requires that you deploy a new network, either physical or virtual. They all require that this new domain be managed separately and distinctly from the existing network, and they all require a new data plane to work.

In parallel with the SDN movement has been the move to “digitize” the network – that is to say, to bring the same level of automation to networking that we’ve seen in the compute space, and preferably without having to rebuild the entire network.

SDN can and should play a key role in the move to automation, and yet most of these technologies complicate, not simplify, the network. Along the way, the effort to automate is lost while yet another proprietary system is deployed. For example, debugging a network connectivity problem now involves solving for both overlay and underlay issues – the new virtual network and the underlying network.

When we started Lumina Networks, we had a different vision for SDN – as a convergent, not divergent, technology. By selecting OpenDaylight as our open source controller, we made a conscious choice to enable SDN control of every domain across the network, and we worked with the community on building the necessary protocols and interfaces to allow control of almost anything in the network, even if the product was never designed with SDN in mind.

This focus on inclusion, and with it, multi-domain SDN control, is vital for anyone intent on automating their network and abstracting the business logic from the network control logic.  This is why OpenDaylight is controlling more devices than any other open source networking project and why it is a key part of transformational projects such as ONAP.

As an industry, we need to get over the distraction of divergent SDN solutions and stick to the task at hand, which is to radically reduce the time and cost associated with deploying, provisioning and running networks, through the converged automation of end-to-end services. Our customers know this well, which is why they choose Lumina to open their networks to SDN.

Lumina Networks at CableLabs Summer Conference

Lumina is proud to be attending and exhibiting at CableLabs Summer Conference 2018.

When CableLabs was founded, it strove to live at the crossroads of innovation and technical achievement. Now, 30 years later, the CableLabs Summer Conference has become one of the premier executive conferences for key players in the cable networking industry.

Lumina Networks will be exhibiting our version of the CableLabs PCMM initiative. The PacketCable Multimedia (PCMM) specification is a CableLabs initiative to support the deployment of general multimedia services by providing a technical definition of several IP-based signaling interfaces that leverage core QoS and policy-management capabilities native to DOCSIS versions 1.1 and greater.

Current PCMM deployments use expensive legacy applications with limited enhancements, upgrades and support. Lumina’s SDN Controller provides a supported, low-cost alternative to manage PCMM using an open source and open-partner solution. Lumina’s extensive work with the Linux Foundation and OpenDaylight has delivered proven SDN Controller projects that manage and control networks for service providers around the globe.

Key Building Blocks of SD-Control: Segment Routing and Path Computation Element

In my previous article earlier this year, I talked about the main components of the Lumina SD-Control (LSC) approach to software-defined networking (SDN). I mentioned that we use segment routing and a path computation element (PCE) in an innovative way to create ultimate scalability. This is what enables ISPs to grow their modern networks to a global scale in an efficient, cost-effective manner.

Now let’s go into a bit more depth about these key building blocks of LSC. I’ll start with a basic overview of network traffic engineering.

A basic Internet Protocol (IP) network (i.e., one without any sort of traffic engineering) moves data packets across the network in a very elementary way. Using basic algorithms, the network calculates how to get data from the source node to the destination node using the shortest route possible, “shortest” meaning the smallest number of intermediate devices or autonomous systems to traverse.

In theory the shortest path sounds good, but there can be numerous reasons why it’s an undesirable approach. For example, the quality of experience over that path might be poor, the economics of moving data over the shortest path might be too costly, or the shortest path can be too congested while other routes sit idle. Therefore, the notion of steering traffic through the network based on some sort of policy was devised. This is traffic engineering (TE).

Steering traffic through a network


A traffic engineering policy can make more qualitative decisions about traffic flows. The theoretical utopia of traffic engineering is that you get to consume all the resources of the network, such as bandwidth. Traffic engineering tries to make use of all the resources, and all the variety of paths, in your network from point A to point B. Hence, you can get more from your investment in the network infrastructure, irrespective of the specific data flow patterns that would naturally occur.
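To make the contrast concrete, here is a toy sketch: plain shortest-path selection over a made-up topology, followed by a crude policy that steers traffic off a congested link by inflating its cost. The topology and costs are invented for illustration.

```python
# Toy contrast: plain shortest-path routing vs. a crude steering policy.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a {node: {neighbor: cost}} adjacency map."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

graph = {
    "A": {"B": 1, "C": 2},
    "B": {"D": 1},          # A-B-D is shortest, but suppose B-D is congested
    "C": {"D": 2},
    "D": {},
}

print(shortest_path(graph, "A", "D"))   # (2, ['A', 'B', 'D'])

# A crude traffic-engineering policy: avoid the congested B-D link by
# inflating its cost, pushing traffic onto the otherwise idle A-C-D path.
graph["B"]["D"] = 100
print(shortest_path(graph, "A", "D"))   # (4, ['A', 'C', 'D'])
```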

The predominant method for traffic engineering on IP networks today uses Resource Reservation Protocol, or RSVP, where the router at the source edge of the network runs an algorithm that considers these additional requirements based on the policy. You can tell it, “The data flow from A to B needs to traverse through a particular area of the network,” or “The entire path needs to have a certain amount of resources available.” It is literally a reservation protocol through the network. Every router in the network participates in this reservation request and confirmation system. The specific strategy for this form of traffic engineering is known as RSVP-TE.

RSVP adds a needed layer of intelligence to the network. It facilitates this conversation about which data traffic needs what resources and confirms that those resources are allocated. As it does this, it creates “state” in the network. The network is more aware of what data it’s moving around, but at the same time, the routers and the intermediate systems that comprise this network and the path through the network all have to keep track of what’s going on—kind of like an accounting ledger.

Keeping track of the ledger consumes a lot of compute power on the network devices, which leads to problems in building very large networks using RSVP-TE. This approach requires expensive routers with the capability and capacity to track the reservations, and even then, there’s a scalability limit to what these devices can handle in a practical sense. The problem is magnified if there is a transmission failure in the middle of the network and all the devices must be notified, and the reservations resolicited. Such changes in the network cause performance issues.

The bottom line is, RSVP-TE – the most dominant form of traffic engineering for IP networks today – accomplishes some important things but it’s problematic at global or very large scale, to say the least. What’s more, it’s actually quite complicated to understand what’s happening at any particular point in the network—what just went wrong, why it’s not working.

Taking a fresh look at traffic engineering


With SDN, there is a different philosophy in building networks. It gives us an opportunity to solve the same problems in a different way. Segment routing provides a built-in answer to traffic engineering, eliminating the downsides while delivering on all the upsides of TE.

Segment routing builds on the idea that the control plane – in our solution, the Lumina SDN Controller – is separate from the data plane. The LSC makes decisions upfront and then programs the data plane. In this case, the control plane will decide the exact way that a path should go through every segment of the network. It tells the ingress node to use a stack – a sequence of MPLS labels – and to output on a particular interface. Therein lies a segment routing path.

The label stack determines how the traffic will move from one node to the next. Every node in the network has a unique label. The network will automatically move the data through the prescribed sequence of nodes to get to the endpoint.
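A small sketch of that idea: given each node’s announced label, the controller expresses a chosen path as the ordered label stack the ingress node must impose. The labels and topology are illustrative.

```python
# Sketch of the label-stack idea: each node has a unique label, and a path
# is expressed as the ordered stack of labels imposed at the ingress node.
node_labels = {"A": 100, "B": 101, "C": 102, "D": 103}

def label_stack_for_path(path, labels):
    """The ingress node imposes the stack, so its own label is not
    included; each remaining label steers one segment of the path."""
    return [labels[node] for node in path[1:]]

path = ["A", "C", "D"]                  # path chosen by the control plane
stack = label_stack_for_path(path, node_labels)
print(stack)                            # [102, 103]: forward via C, then to D
```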

This method uses a protocol whereby the nodes announce their labels to each other – “Hey, I am label 100” – and every device in the system hears that message. The next node that chooses a label will know which labels the other devices have chosen and will choose an as-yet-unclaimed label. If there is a conflict, the nodes will resolve it amongst themselves.

The important aspect here is that in segment routing, there is no state. There are no reservations or commitments that individual routers in the data plane have to keep track of and manage. This is an important characteristic that resolves the scalability problems of strategies like RSVP-TE. All the decisions, and the reasoning behind why decisions were made, are kept in the control plane, not the data plane. Moreover, that control plane can run on regular servers that are optimized for that type of thing. You don’t need specialized routers and switches that are intended to pipe data around.

The benefits of segment routing


Segment routing delivers the upside of traffic engineering with the ability to utilize all of the paths in your network, based on whatever policy the business wants to use to drive those decisions. You don’t have to buy fancy, expensive routers. In fact, you don’t have to use routers at all; the devices can be OpenFlow switches, which comprise the most bargain-basement data plane that you can buy. The story here is a cheap but efficient data plane.

The network can keep growing as business dictates. If you need to add more nodes and you run out of resources on your control plane, you can just buy a bigger server or add more memory to the same server. Upgrade from 64 GB of RAM to 128 GB, and you can double the size of your network. It makes the scalability story refreshingly simple.

Then finally there is the performance issue. When an event happens in a network that uses segment routing, there is no distributed reservation state that has to be renegotiated. This eliminates the situation where something happens to the network and you have to cascade a notice throughout the network that there is a problem and get every node to make a new plan of what to do next. With segment routing, the control plane can understand what happened and decide the next steps on its own. Thus, the total amount of work required to resolve the issue is smaller, and resolution is therefore quicker.

That’s essentially traffic engineering with segment routing.

Making the path decisions


In the context of SD-Control, there’s the aspect of deciding what these paths should be and what label stack is used to define each segment routing path. The path computation element, or PCE, determines that. The Lumina SDN Controller (LSC) and the other SD-Control components, specifically Path Manager, tell the network what the path computation element has decided. Path Manager does this by sending the specifications to the LSC, and the LSC sends those details down to the forwarding plane – the OpenFlow router or switch.

Let me back up a step and talk about these other components of SD-Control: the Service Manager and the Path Manager. The Service Manager is where the business policy is executed: What are the rules? What are the different ways that we can define how data moves through the network? Service Manager handles all the constraints and the rules, and it makes requests to Path Manager to tell it to build a path with a particular set of constraints from point A to point B.

Path Manager then asks the PCE to determine all the ways that we can get from A to B meeting all these constraints. PCE runs its algorithm and returns a path, or maybe a collection of paths that all satisfy the need. Path Manager will filter it down to exactly one path. Path Manager keeps track of the path and tells the Lumina SDN Controller to program the ingress node and apply a particular label stack to it. Then the network naturally knows what to do from there on out.
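Here is a conceptual sketch of that flow in code. The class and method names are mine for illustration, not Lumina’s actual APIs; the PCE is stubbed to return candidate paths.

```python
# Conceptual sketch of the Service Manager -> Path Manager -> PCE -> LSC
# flow described above. Names are illustrative, not Lumina's actual APIs.
class PCE:
    def compute(self, src, dst, constraints):
        # Return every path satisfying the constraints (stubbed here).
        return [["A", "C", "D"], ["A", "B", "D"]]

class PathManager:
    def __init__(self, pce, controller):
        self.pce, self.controller = pce, controller
    def build_path(self, src, dst, constraints):
        candidates = self.pce.compute(src, dst, constraints)
        chosen = candidates[0]           # filter down to exactly one path
        self.controller.program_ingress(chosen)
        return chosen

class ServiceManager:
    def __init__(self, path_manager):
        self.path_manager = path_manager
    def create_service(self, src, dst, policy):
        # Business policy becomes path constraints.
        return self.path_manager.build_path(src, dst, constraints=policy)

class FakeController:
    def program_ingress(self, path):
        print("programming ingress", path[0], "with stack for", path[1:])

svc = ServiceManager(PathManager(PCE(), FakeController()))
svc.create_service("A", "D", policy={"max-latency-ms": 10})
```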

Putting it all together, this is how the stack of components of SD-Control accomplishes traffic engineering.

Using this method, you can scale your control plane to whatever you need. By scale I mean compute power, memory, storage – it boils down to a generic compute problem. The data plane, meanwhile, doesn’t care; you only have to worry about the most basic things, such as how many links you have in your network and whether they are 1 Gb, 10 Gb or 100 Gb links. Now you’re sizing your data plane and understanding how much service you can deliver with it. It’s a pretty easy calculation that helps solve the higher-level business problem of how much to invest in the network to get a certain level of services delivered. It’s all tied to revenue, profitability and the other aspects of building networks as a business. It really simplifies the business dynamics.
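That sizing exercise really is a few lines of arithmetic. A sketch, with an invented link inventory:

```python
# The "pretty easy calculation": aggregate data-plane capacity from a
# simple link inventory (numbers are invented for illustration).
links = {1: 48, 10: 24, 100: 4}   # link speed in Gb/s -> link count

total_gbps = sum(speed * count for speed, count in links.items())
print(f"Aggregate data-plane capacity: {total_gbps} Gb/s")  # 688 Gb/s
```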

The “interwork” protocol that enables transformation


There’s one more important aspect of SD-Control I should mention here: multiprotocol Border Gateway Protocol, and specifically a particular mode of operation called link state – otherwise known as multiprotocol BGP-LS. This is the protocol that all the routers and switches use to announce their labels. They all communicate with each other, and in that communication process, the Lumina SDN Controller is speaking multiprotocol BGP-LS.

In order to get segment routing organized, you just have to turn on BGP-LS on all the devices and get them talking. BGP itself is not new; it’s a mechanism that already exists on legacy networks. We can teach the SDN control plane to speak BGP-LS, and it can then automatically learn all the labeling and topology information.
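Conceptually, what the controller learns from BGP-LS boils down to a label map and a topology graph. The sketch below uses simplified stand-in records for BGP-LS announcements; it is not a real BGP-LS parser.

```python
# Sketch of what the controller learns from BGP-LS: each announcement
# carries a node, its segment label, and its links. These records are a
# simplified stand-in for real BGP-LS NLRI.
announcements = [
    {"node": "A", "label": 100, "links": ["B", "C"]},
    {"node": "B", "label": 101, "links": ["A", "D"]},
    {"node": "C", "label": 102, "links": ["A", "D"]},
    {"node": "D", "label": 103, "links": ["B", "C"]},
]

labels = {a["node"]: a["label"] for a in announcements}
topology = {a["node"]: a["links"] for a in announcements}

print(labels)     # the label map the control plane computes paths against
print(topology)   # the learned graph spanning legacy and SDN domains
```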

BGP-LS is used to do segment routing in the legacy network, and now it can be used in a software-defined network powered by SD-Control. This is incredibly important, as it’s what enables SD-Control to build services that traverse, or interwork between, the legacy networks and the new SDN networks.

That ability to interwork facilitates the transformation of these carrier networks, taking them into the future with SDN. It’s what we’re all about at Lumina Networks.

Using Open Source to Enable Network Evolution

Kevin Woods, VP, Product Management, Lumina Networks

The network core is ripe for disruption. While the industry migrates to 5G, SD-WAN, IoT, Mobile Edge Computing and other advanced networking technologies with exponentially increasing traffic levels, the core has remained relatively static.

The economics and deployment approach of existing core network technologies will not meet this coming demand. “SD-Core”* is the application of software-defined networking technology and use cases to meet the scalability and service agility requirements that service providers now have.

Lumina Networks’ unique approach to SD-Core* defines a carrier-grade architecture and technology set for deploying scalable MPLS and segment-routing-based networks using SDN and white box technologies, enabling network providers to evolve their core networks to SDN while retaining the protocols, products and reliability of their existing networks.


*Note: SD-Core is now known as SD-Control.

Original article posted by TelecomTV. You can read it here.
