
Key Building Blocks of SD-Control: Segment Routing and Path Computation Element

In my previous article earlier this year, I talked about the main components of the Lumina SD-Control (LSC) approach to software-defined networking (SDN). I mentioned that we use segment routing and a path computation element (PCE) in an innovative way to achieve ultimate scalability. This is what enables ISPs to grow their modern networks to a global scale in an efficient, cost-effective manner.

Now let’s go into a bit more depth about these key building blocks of LSC. I’ll start with a basic overview of network traffic engineering.

A basic Internet Protocol (IP) network (i.e., one without any sort of traffic engineering) moves data packets across the network in a very elementary way. Using basic algorithms, the network calculates how to get data from the source node to the destination node over the shortest route possible, with "shortest" meaning the smallest number of intermediate devices or autonomous systems to traverse.
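To make that concrete, here is a minimal sketch of the fewest-hops calculation a plain IP network effectively performs. The five-node topology and node names are invented for illustration; real routers compute this with link-state or distance-vector protocols, but the principle is the same.

```python
from collections import deque

# Illustrative topology: each node maps to its directly connected neighbors.
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "E"],
    "E": ["C", "D"],
}

def shortest_path(src, dst):
    """Breadth-first search: returns the path with the fewest hops."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in topology[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

print(shortest_path("A", "E"))  # ['A', 'C', 'E'] -- fewest hops, no other criteria considered
```

Notice that the only criterion is hop count; nothing about congestion, cost or quality enters the decision.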

In theory the shortest path sounds good, but there are numerous reasons why it can be an undesirable choice. For example, the shortest path might deliver a poor experience for the traffic (high latency or packet loss), the economics of moving data over it might be too costly, or it might be congested while other routes sit idle. Therefore, the notion of steering traffic through the network based on some sort of policy was devised. This is traffic engineering (TE).

Steering traffic through a network

 

A traffic engineering policy can make more qualitative decisions about traffic flows. The theoretical utopia of traffic engineering is that you get to consume all the resources of the network, such as bandwidth. Traffic engineering tries to use all the resources and the full variety of paths in your network from point A to point B. Hence, you can get more from your investment in the network infrastructure, irrespective of the specific data flow patterns that would naturally occur.

The predominant method for traffic engineering on IP networks today uses Resource Reservation Protocol, or RSVP, where the router at the source edge of the network runs an algorithm that considers these additional requirements based on the policy. You can tell it, “The data flow from A to B needs to traverse through a particular area of the network,” or “The entire path needs to have a certain amount of resources available.” It is literally a reservation protocol through the network. Every router in the network participates in this reservation request and confirmation system. The specific strategy for this form of traffic engineering is known as RSVP-TE.

RSVP adds a needed layer of intelligence to the network. It facilitates this conversation about which data traffic needs what resources and confirms that those resources are allocated. As it does this, it creates “state” in the network. The network is more aware of what data it’s moving around, but at the same time, the routers and the intermediate systems that comprise this network and the path through the network all have to keep track of what’s going on—kind of like an accounting ledger.
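To illustrate the "ledger" idea, here is a toy sketch of how every router along an RSVP-TE path ends up holding per-flow state. The router names and bandwidth figures are invented, and real RSVP-TE state is far richer (sessions, refresh timers, error handling), but it shows where the bookkeeping lives.

```python
# Toy model of the per-router state that RSVP-TE reservations create.

class Router:
    def __init__(self, name):
        self.name = name
        self.reservations = {}   # state this router must hold and keep refreshing

    def reserve(self, flow_id, bandwidth_mbps):
        self.reservations[flow_id] = bandwidth_mbps

# A path from ingress to egress touches every router along the way.
path = [Router("PE1"), Router("P1"), Router("P2"), Router("PE2")]

# Signal one traffic-engineered flow: every hop records state for it.
for router in path:
    router.reserve(flow_id="cust-42", bandwidth_mbps=500)

# Each of the four routers now carries an entry it has to track,
# refresh, and tear down if anything changes upstream.
print([(r.name, r.reservations) for r in path])
```

Multiply that by thousands of flows and thousands of routers, and the cost of the ledger becomes clear.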

Keeping track of the ledger consumes a lot of compute power on the network devices, which leads to problems in building very large networks using RSVP-TE. This approach requires expensive routers with the capability and capacity to track the reservations, and even then, there's a scalability limit to what these devices can handle in a practical sense. The problem is magnified if there is a transmission failure in the middle of the network and all the devices must be notified and the reservations re-signaled. Such changes in the network cause performance issues.

The bottom line is that RSVP-TE – the dominant form of traffic engineering for IP networks today – accomplishes some important things, but it's problematic at global or very large scale, to say the least. What's more, it's actually quite complicated to understand what's happening at any particular point in the network: what just went wrong and why it's not working.

Taking a fresh look at traffic engineering

 

With SDN, there is a different philosophy in building networks. It gives an opportunity to solve the same problems but in a different way. Segment routing provides a built-in answer to traffic engineering, eliminating the downsides but delivering on all the upsides of TE.

Segment routing builds on the idea that the control plane – in our solution, the Lumina SDN Controller – is separate from the data plane. The LSC makes decisions upfront and then programs the data plane. In this case, the control plane decides the exact way that a path should go through every segment of the network. It tells the ingress node to apply a stack, that is, a sequence of MPLS labels, and output on a particular interface. Therein is a segment routing path.

The label stack determines how the traffic will move from one node to the next. Every node in the network has a unique label. The network will automatically move the data through the prescribed sequence of nodes to get to the endpoint.
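Here is a minimal sketch of that mechanism, with invented label values: the ingress imposes a stack of node labels, and each hop pops the top label to steer the packet toward the next prescribed node. Real segment routing distinguishes node and adjacency segments and runs over MPLS or IPv6 data planes, but the stack-driven forwarding idea is the same.

```python
# Toy model of a segment routing label stack (label values are illustrative).
# The control plane computes the stack; the data plane just follows it.

node_labels = {"R1": 100, "R2": 200, "R3": 300, "R4": 400}

def build_label_stack(node_sequence):
    """Ingress imposes one label per segment the packet must traverse."""
    return [node_labels[n] for n in node_sequence]

def forward(packet_stack):
    """Each hop pops the top label and forwards toward that segment."""
    hops = []
    stack = list(packet_stack)
    while stack:
        label = stack.pop(0)
        next_node = next(n for n, l in node_labels.items() if l == label)
        hops.append(next_node)
    return hops

stack = build_label_stack(["R2", "R3", "R4"])   # path chosen by the controller
print(stack)           # [200, 300, 400]
print(forward(stack))  # ['R2', 'R3', 'R4'] -- the packet visits the prescribed nodes
```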

This method uses a protocol whereby the nodes announce their labels to each other – “Hey, I am label 100” – and every device in the system hears that message. The next node that chooses a label will know which labels the other devices have chosen and will choose an as-yet-unclaimed label. If there is a conflict, the nodes will resolve it amongst themselves.

The important aspect here is that in segment routing, there is no state. There are no reservations or commitments that individual routers in the data plane have to keep track of and manage. This is an important characteristic that resolves the scalability problems of strategies like RSVP-TE. All the decisions, and the reasoning behind why decisions were made, are kept in the control plane, not the data plane. Moreover, that control plane can run on regular servers that are optimized for that type of thing. You don’t need specialized routers and switches that are intended to pipe data around.

The benefits of segment routing

 

Segment routing delivers the upside of traffic engineering with the ability to utilize all of the paths in your network, based on whatever policy the business wants to use to drive those decisions. You don't have to buy fancy, expensive routers. In fact, you don't have to use routers at all; the devices can be OpenFlow switches, which are the most bargain-basement data plane that you can buy. The story here is a cheap but efficient data plane.

The network can keep growing as business dictates. If you need to add more nodes and you run out of resources on your control plane, you can just buy a bigger server or add more memory to the same server. Upgrade from 64 GB of RAM to 128 GB, and you can double the size of your network. Scaling becomes a straightforward, predictable exercise.

Then finally there is the performance issue. When an event happens in the network and segment routing is in use, there is no distributed protocol that has to reorganize which labels to use or where the nodes are. This eliminates the situation where something happens in the network and you have to cascade a notice throughout the network that there is a problem and get every node to make a new plan. With segment routing, the control plane can understand what happened and decide the next steps on its own. The total amount of work required to resolve the issue is smaller, and therefore resolution is quicker.
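A rough sketch of that recovery model, with invented topology and labels: when a link fails, the controller recomputes and swaps the label stack at the ingress, and no per-router reservation state has to be rebuilt anywhere else.

```python
# Toy illustration of failure handling when all path state lives in the
# control plane. Nodes, labels, and the path logic are made up.

node_labels = {"R1": 100, "R2": 200, "R3": 300, "R4": 400, "R5": 500}

def pick_path(avoid_link):
    """Stand-in for the controller's real path algorithm."""
    primary = ["R2", "R3", "R4"]
    backup = ["R2", "R5", "R4"]
    return backup if avoid_link in [("R2", "R3"), ("R3", "R4")] else primary

# Normal operation: ingress R1 is programmed with the primary stack.
ingress_stack = [node_labels[n] for n in pick_path(avoid_link=None)]
print(ingress_stack)   # [200, 300, 400]

# Link R3-R4 fails: the controller recomputes and reprograms only R1.
ingress_stack = [node_labels[n] for n in pick_path(avoid_link=("R3", "R4"))]
print(ingress_stack)   # [200, 500, 400] -- no network-wide re-signaling needed
```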

That’s essentially traffic engineering with segment routing.

Making the path decisions

 

In the context of SD-Control, there is the question of deciding what these paths should be and what label stack defines each segment routing path. The path computation element, or PCE, determines that. The Lumina SDN Controller (LSC) and the other SD-Control components, specifically Path Manager, tell the network what the PCE has decided. Path Manager does that by sending the path specification to the LSC, and the LSC sends those details down to the forwarding plane, the OpenFlow switch or router.

Let me back up a step and talk about these other components of SD-Control: the Service Manager and the Path Manager. The Service Manager is where the business policy is executed: What are the rules? What are the different ways that we can define how data moves through the network? Service Manager handles all the constraints and the rules, and it makes requests to Path Manager to tell it to build a path with a particular set of constraints from point A to point B.

Path Manager then asks the PCE to determine all the ways that we can get from A to B meeting all these constraints. PCE runs its algorithm and returns a path, or maybe a collection of paths that all satisfy the need. Path Manager will filter it down to exactly one path. Path Manager keeps track of the path and tells the Lumina SDN Controller to program the ingress node and apply a particular label stack to it. Then the network naturally knows what to do from there on out.
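The division of labor can be sketched roughly as follows. The class and method names here are mine, chosen for illustration; they are not the actual SD-Control interfaces, and the PCE result is a placeholder.

```python
# Rough sketch of the request flow: Service Manager -> Path Manager -> PCE -> LSC.

class PCE:
    def compute(self, src, dst, constraints):
        """Return every candidate path satisfying the constraints."""
        return [["R2", "R3", "R4"], ["R2", "R5", "R4"]]  # placeholder results

class LuminaController:
    def program_ingress(self, ingress, hops):
        # In practice this pushes the label stack to the OpenFlow device.
        print(f"Programming {ingress} with label stack for {hops}")

class PathManager:
    def __init__(self, pce, controller):
        self.pce, self.controller = pce, controller

    def build_path(self, src, dst, constraints):
        candidates = self.pce.compute(src, dst, constraints)
        chosen = candidates[0]               # filter down to exactly one path
        self.controller.program_ingress(src, chosen)
        return chosen

class ServiceManager:
    """Holds the business policy and turns it into path requests."""
    def __init__(self, path_manager):
        self.path_manager = path_manager

    def request_service(self, src, dst, constraints):
        return self.path_manager.build_path(src, dst, constraints)

pm = PathManager(PCE(), LuminaController())
ServiceManager(pm).request_service("R1", "R4", {"max_latency_ms": 30})
```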

Putting it all together, this is how the stack of components of SD-Control accomplishes traffic engineering.

Using this method, you can scale your control plane to whatever you need. By scale I mean compute power, memory, storage; it boils down to a generic compute problem. The data plane, meanwhile, only has to answer the most basic questions: how many links do I have in my network, and are they 1 Gb, 10 Gb or 100 Gb links? Now you're sizing your data plane and understanding how much service you can deliver with it. It's a pretty easy calculation that helps solve the higher-level business problem of how much to invest in the network to deliver a certain level of services. It's all tied to revenue, profitability and the other aspects of building networks as a business. It really simplifies the business dynamics.
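As a back-of-the-envelope illustration of how simple that sizing calculation can be (all link counts and the utilization target are hypothetical):

```python
# Hypothetical inventory: link speed in Gb/s -> number of links of that speed.
links = {1: 40, 10: 24, 100: 8}

total_capacity_gbps = sum(speed * count for speed, count in links.items())
usable_gbps = total_capacity_gbps * 0.8   # assume you plan to run links at 80%

print(total_capacity_gbps)  # 1080 Gb/s of raw capacity
print(usable_gbps)          # 864 Gb/s available to sell as services
```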

The “interwork” protocol that enables transformation

 

There’s one more important aspect of SD-Control I should mention here. It’s the multiprotocol Border Gateway Protocol, and specifically a particular mode of operation called link state, otherwise known as multiprotocol BGP-LS. This is the protocol through which the routers and switches announce their labels. They all communicate with each other, and in that communication process, the Lumina SDN Controller speaks multiprotocol BGP-LS as well.

To get segment routing organized, you just have to turn on BGP-LS on all the devices and get them talking. BGP itself is not new; it’s a mechanism that already exists on legacy networks. We can teach the SDN control plane to speak BGP-LS, and it can then automatically learn all the labels and topology information.
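Conceptually, the controller only has to listen. Here is a toy sketch of that learning process; real BGP-LS carries structured link-state NLRI rather than the simple dictionaries used here, and the node names and label values are invented.

```python
# Toy model of a controller learning topology and labels from
# link-state style announcements.

announcements = [
    {"node": "R1", "label": 100, "links": [("R1", "R2"), ("R1", "R3")]},
    {"node": "R2", "label": 200, "links": [("R2", "R4")]},
    {"node": "R3", "label": 300, "links": [("R3", "R4")]},
]

labels, topology = {}, set()
for ann in announcements:            # the controller just listens and learns
    labels[ann["node"]] = ann["label"]
    topology.update(ann["links"])

print(labels)             # {'R1': 100, 'R2': 200, 'R3': 300}
print(sorted(topology))   # learned adjacencies, ready for path computation
```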

BGP-LS is already used for segment routing in the legacy network, and now it can be used in a software-defined network powered by SD-Control. This is incredibly important, as it’s what enables SD-Control to build services that traverse, or interwork between, the legacy networks and the new SDN networks.

That ability to interwork facilitates the transformation of these carrier networks, taking them into the future with SDN. It’s what we’re all about at Lumina Networks.
