
Service Providers Are Placing Big Bets on Open Source Software Networking – Should You?

The service provider market is undergoing earth-shaking changes. These changes impact the way that applications for consumers and business users are deployed in the network and cloud as well as the way that the underlying data transport networks are built.

At Lumina, we’ve had the chance to work with several large service providers on their software transformation initiatives and get an up-close look at what works and what doesn’t. Three factors are particularly favorable in setting up successful projects for frictionless progress from the lab through prototype and proof of concept and into the production network.

Top-Down Advantage

Our first observation is that top-down initiatives and leadership work better than bottom-up or “grass roots” approaches. The experience of AT&T strongly illustrates the advantage. While a few of the hyperscale cloud providers had already launched open source initiatives and projects, the first big move among the established service providers was AT&T’s Domain 2.0, led by John Donovan in 2013. Domain 2.0 was not a precise description of everything that AT&T wanted to do, but through that initiative, the leadership created an environment where transformative projects are embraced and resistance to change is pushed aside.

While a lack of top-down support is not a showstopper, it is extremely helpful in getting past obstacles and overcoming organizational resistance to change. If top-down support in your organization is weak or absent, it is worth your effort to influence and educate your executives. When engaging executives, focus on the business value of open software networking: the advantages of open source in software networks include eliminating lock-in and spurring innovation. As our CEO, Andrew Coward, wrote in his recent blog, Why Lumina Networks? Why Now?: “Those who build their own solutions—using off-the-shelf components married to unique in-house developed functionality—build-in the agility and options for difference that are necessary to stay ahead.”

Although it may mean a slower start, from what we have seen, taking the time to do early PoCs to bring executives on board so that they deeply attach to the value is time well spent. Sometimes a slow start is just what is needed to move fast.

Collaboration through Open Source

The second observation is that industry collaboration can work. I read an interesting comment by Radhika Venkatraman, senior vice president and CIO of network and technology at Verizon, in her interview with SDxCentral. She said, “We are trying to learn from the Facebooks and Googles about how they did this.” One of the best ways to collaborate with other thought leaders in the industry is to join forces within the developer community at open source projects. The Linux Foundation’s OpenDaylight Project includes strong participation from both the vendor community and global service providers including AT&T, Alibaba Group, Baidu, China Mobile, Comcast and Tencent. Tencent, for one, has over 500 million subscribers that utilize their OpenDaylight infrastructure, and they are contributing back to the community as are many others.

A great recent example of industry collaboration is the newly announced ONAP (Open Network Automation Platform) project. Here, the origins of the project have roots in work done by AT&T, China Mobile and others. And now, we have a thriving open source developer community consisting of engineers and innovators who may not have necessarily collaborated in the past.

These participants see benefits of collaboration not only in accelerating innovation but also in hardening the software by running it across many different types of environments and use cases, which increases reliability. Providers recognize that in their transformation to software networks there is much they can do together to drive the technology, while differentiating from each other in how they define and deliver services and in the experiences they create for customers.

What about your organization? Do you engage in the OpenDaylight community? Have you explored how ONAP can help you? Do you use OpenStack in your production network? And importantly, do you engage in the discussions and share back what you learn and develop?

Pursuit of Innovation

A third observation is the growing ability of service providers to create and innovate at levels not seen before. A prime example here is the work done by CenturyLink to develop a Central Office Re-architected as a Datacenter (CORD) platform to deliver DSL services running on OpenDaylight. CenturyLink used internal software development teams along with open source and Agile approaches to create and deploy CORD as part of a broad software transformation initiative.

One might have thought that you would only see this level of innovation at Google, Facebook or AWS, but at Lumina we are seeing this as an industry-wide trend. The customer base, business model, and operations of service providers vary widely from one to another based on their historical strengths and legacy investment. All have an opportunity to innovate in a way that advances their particular differences and competitive advantages.

Closing Thoughts

So we encourage you to get on the bandwagon! Don’t stand on the sidelines. Leadership, collaboration and innovation are the ingredients you need to help your organization drive the software transformation required to stay competitive. There is no other choice.

Stay tuned for my next blog where we will discuss some of the specifics of the advantages, development and innovation using open source.

My Experience at OpenStack Summit Barcelona 2016

When my colleague Jon Castro told me early this year that the next OpenStack Summit was going to be in Barcelona and that we should go, I thought: yeah right, keep dreaming! How could we possibly land a valid excuse to travel halfway across the world to eat paella and drink sangria (oh, and attend some talks of course 😃)? So we submitted an abstract for the work we had done with NTT West around the OpenDaylight controller and OpenStack, and thankfully we were selected!

This being my first OpenStack Summit, I had no preconceived notions about what to expect (so hopefully this recap is bias-free). If you’re a fan of TL;DRs like I am, here is mine: lots of NFV, SFC and orchestration talk; OpenStack is no longer a science project but an enterprise solution; and Barcelona is good fun. Now into the details.

My Talk
Since I was at the summit to present, I thought it best to go over this topic first. My talk focused on a project we did with NTT West around providing programmatic/API access to a traditional firewall that could only be configured via CLI, and then driving this API through a new Horizon dashboard. The end goal was that an operator (tenant) could be assigned a firewall and administer the device without having to know the specifics of its command line interface. If this sounds interesting to you, you can watch the presentation on YouTube to get a better understanding of how we achieved this.

NFV Stuff
Just by looking at the Summit schedule it is clear that NFV is a key use case for many companies using and contributing to OpenStack. There were 24 presentations and/or demos focused on NFV, diving into particular aspects such as orchestration, service function chaining (SFC) and use cases such as the virtual evolved packet core (vEPC). I can’t talk about NFV without mentioning OPNFV (Open Platform for NFV), a reference NFV platform built by the open source community under the Linux Foundation. OPNFV is primarily a testing and integration project that brings together all the bits and pieces (OpenStack, OpenDaylight, OVS, DPDK, etc.) you would need to launch an NFV-ready cloud platform that has been thoroughly tested, automated and put through a CI/CD pipeline, all packaged in a nice installer (or several) with documentation. You can learn more about OPNFV on the project’s website. One OPNFV presentation at the summit highlighted the advancements in Neutron that benefit NFV use cases, such as VLAN-aware VMs, and also showed the current shortfalls of Neutron and how other networking solutions such as OpenDaylight can work alongside Neutron to fill those gaps.

SFC Stuff
SFC was another hot topic; there were a couple of talks that showcased the benefits of using SFC and how it is being implemented in OpenStack. The following presentation was one of those, showing how traffic can be redirected based on a classification, for example to optimise IPSec or video traffic. This is achieved using Network Service Headers (NSH), an SFC encapsulation protocol that is still an IETF draft but quickly becoming reality; some aspects can already be used in the Mitaka release of OpenStack.

OVN Stuff
Other great network-centric talks included the OVN presentation, which showed some powerful features if you’re operating an OpenStack cloud using OVS with OVN performing the L2 and L3 functions (the networking-ovn plugin). OVN allows for extended OVS scaling (distributed DHCP on the OVS agent), and L3 performance becomes on par with L2 performance through flow caching and pre-calculation of network hops. A new debugging feature called ovn-trace allows for “what-if” analysis on packet classifications, so you can see how a packet will traverse the network and flow table(s). They also spoke about the BPF datapath, which provides a sandboxed environment in the Linux kernel allowing new functionality to be inserted at runtime without having to write new kernel modules, which are a headache not only to write but also to maintain and support across various Linux distributions. This means new network and tunnelling protocols developed for a particular use case can be created and potentially be portable across Linux distributions.
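
As a rough illustration of the kind of “what-if” query ovn-trace supports (the logical switch and port names here are invented for the example, not taken from any talk):

# Ask OVN how a packet entering "sw0-port1" toward a given MAC would be handled
ovn-trace --summary sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02'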

Orchestration Stuff
Orchestration was also a big topic at the summit, with many presentations from the Tacker team, Cloudify and others, all of whom seem to be converging on TOSCA as the modelling language for NFV services. I recommend the following presentations for those interested in all things orchestration:

Other Stuff and Conclusion
Stepping back from the technical aspects of the summit and reading between the lines, it is clear that OpenStack has really matured. Distributors such as Mirantis, Ubuntu and Red Hat have gone to great lengths to ease the pain of installation through projects such as Ironic, OpenStack-Ansible, Fuel, Packstack and so on. The CI/CD system in place for OpenStack also means that code pushed upstream is properly reviewed (through Gerrit), and tests are written (unit and integration) and run automatically by the CI system (Jenkins). The result of this process is a stable system made up of many different components with hundreds of contributors from around the world, quite an achievement in itself. I believe the maturity and stability of the product is driving more adoption by telcos, who generally set a high bar when it comes to production-ready software, so this can only be a good sign for the project.

Last but not least, the city of Barcelona is an amazing place, and the perfect setting to catch up with colleagues and meet new ones all over some fantastic food, drinks and laughs. 10/10 would do it again.

Originally published on the [https://netdevservice.atlassian.net] website on 4/27/17

 

From Open Source to Product: A Look Inside the Sausage-Making Factory

I’ve spent the last few months working closely with the OpenDaylight and OpenStack developer teams here at Brocade and I’ve gained a heightened appreciation for how hard it is to turn a giant pile of source code from an open source project into something that customers can deploy and rely on.

Kevin Woods

Not to criticize open source in any way – it’s a great thing. These new open source projects in the networking industry, such as OpenDaylight, OPNFV and OpenStack, are going to do great things to advance networking technology.

No, it’s just the day-to-day grind of delivering a real product that challenges our team every day.

On any given day, when we are trying to build the code, we’ll hit new random errors, and in many cases it’s not immediately obvious where the problem is. In another test we’ll get unexpected compatibility problems between different controller elements. Again, somebody made a change and you can’t trace the problem. On some days, certain features will stop working for no known reason. Because of all this, we need to continuously update and revise test automation and test plans – that too is done daily.

When it comes to debugging a problem, unless you’re working with the source code and regularly navigating it to find problems, diagnosis is difficult. Some of the controller internals are extremely complex, for example the MD-SAL. Digging into that to make either enhancements or fixes is not for the faint of heart.

The OpenDaylight controller is actually several projects that must be built separately and then loaded via Karaf.  This can be non-intuitive.

Another area of complexity is managing your own development train. If you’re going to ship a non-forked controller that stays very close to the open source code, you cannot risk being totally dependent upon the project (for the above reasons and others), so you basically have to manage a parallel development thread. At the same time, you find problems or want to make minor enhancements that you need in your product now but cannot contribute back to the project immediately (that takes review and time). So you’re left with this problem of branching and re-converging all the time. Balancing the pace of this with the project’s pace is a daily challenge.
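
To make the branching problem concrete, here is a minimal sketch of what that parallel thread can look like in git terms (the remote, branch and commit names are illustrative, not our actual repository layout):

# Track upstream OpenDaylight alongside a local product branch (names illustrative)
git remote add upstream https://git.opendaylight.org/gerrit/controller
git fetch upstream
git checkout -b product/stable upstream/stable/helium
git cherry-pick <local-fix-sha>      # a fix we need now but haven't upstreamed yet
# Periodically re-converge with the project and resolve the drift
git fetch upstream && git rebase upstream/stable/helium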

Then there’s all the maintenance associated with managing our own development thread, supporting internal teams, maintaining and fixing the documentation, etc. Contributing or committing code back to the project, when needed, is not a slam dunk either. There is a commit and review process for that, and it takes some time and effort.

I think we’ll find the quality of the new Helium release to be significantly better than Hydrogen.  Lithium will no doubt be an improvement over Helium and so on.   The number of features and capabilities of the controller will also increase rapidly.

But after going through this product development effort over the last few months, I have a real appreciation for the value that a commercial distribution can bring. And that’s just for the controller itself – what about support, training and so on? Well, I’ll leave those for another blog.

Originally Published on the Brocade Community on 10/9/2014

NetDevEMEA : Vyatta router integration with OpenStack

  1. Introduction

    The target objective will be to deliver a demonstration using an architecture consisting of:

 

The figure above shows the functional architecture of a lightweight demonstration. This topology will be able to demonstrate simple deployments of VNFs and applications, and how to programmatically drive configuration for the vRouter, vADC and the VCS fabric itself.

  • Brocade SDN Controller: A commercial wrap of the OpenDaylight controller, stabilised and backed by support. OpenDaylight is the de facto open source SDN controller and delivers OpenFlow, NETCONF and PCEP functionality out of the box.
  • OpenStack: The VIM (infrastructure orchestration) layer in this demo will be a developer-automated build of OpenStack, at a code level that has proven integration with Brocade products. Neutron, the OpenStack networking project, will connect to the SDN controller, the vRouter and the VDX switching architecture.
  • Brocade vRouter: The Brocade vRouter is a fully functional software routing platform; the 5600 platform in particular has DPDK enhancements to packet forwarding as well as a NETCONF interface. In this demo, the vRouter will provide tenant routing, firewalling and edge VPN services for site-to-site or remote access.
  • Brocade VDX Switches (VCS Fabric): The underlay network will be formed of Brocade VDX datacentre switches formed into an Ethernet fabric. Underlay networks and physical network elements should be easy to configure and operate, which the VCS (Ethernet fabric configuration) is. OpenStack can integrate into the VCS fabric using a Neutron plugin.

The figures above show the physical architecture and the logic created after following this guide.

Installing Brocade Software Defined Networking Controller

This guide describes the steps to install the Brocade SDN Controller (BSC).

The BSC is a commercial version of OpenDaylight based on the Boron release.  It includes the Brocade topology manager and a number of pre-installed features that support integration with OpenStack and management of OpenFlow hardware.

  1. Disable SELinux and Firewalls
    BSC Pre-reqs
    # Install Pre-reqs
    sudo yum install createrepo \
     java-1.8.0-openjdk \
     nodejs.x86_64 \
     zip \
     unzip \
     python-psycopg2 \
     psmisc \
     python2-pip
    
    pip install --upgrade pip
    pip install pyhocon
    
    
    # Disable SELinux
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
    
    
    # Disable and stop the firewall
    systemctl  disable firewalld
    systemctl stop firewalld

Install BSC

The BSC software comes packaged for a number of operating systems, including RPM and Deb packages.  Get the latest version and install it using the steps below.

 

The latest version at the time of writing is ‘SDN-Controller-dist-4.1.0-rpm.tar.gz’.

  1. Download the latest release from http://www.mybrocade.com
  2. Copy the file to your host system
  3. untar and install
Install BSC Software
# Untar and run install script
tar -zxvf SDN-Controller-dist-4.1.0-rpm.tar.gz
cd SDN-Controller-dist-4.1.0 && ./install

 

Verify Installation:

The BSC software installs OpenDaylight and the Brocade Topology Manager.  These should start automatically, and you can confirm they are running by doing the following:

  1. Browse to the topology manager http://<hostname/ip>:9001
  2. Login with the username admin, password admin
  3. You should see a screen similar to below

From the CLI you can also confirm that the ODL RESTful API is responding.
The command below should return a JSON block containing an empty array of networks.

Query ODL RESTful API
curl -u admin:admin http://192.168.100.13:8181/controller/nb/v2/neutron/networks
BSC Response
{
   "networks" : [ ]
}

Starting / Stopping Services

The BSC is composed of two processes.

  • brcd-bsc  (opendaylight controller)
  • brcd-ui     (Brocade Topology Manager)

The processes are under System V init control rather than systemd, so you will need to use the service command to stop and start them.

Stopping / Starting Services
# Stop topology manager
service brcd-ui stop


# Stop controller
service brcd-bsc stop

Log into Karaf

Stopping / Starting Services
# Launch client
/opt/brocade/bsc/bin/client -u karaf
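
Once in the Karaf client, a quick sanity check is to list the installed features and confirm the OpenStack/OpenFlow related ones are present (the exact feature names depend on the BSC build):

Check installed features
# List installed features and filter for ODL-related ones
feature:list -i | grep -i odl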

Installation Directory and Log files

The software is installed under /opt/brocade.

You will find the topology manager installed under /opt/brocade/ui

The controller is installed under /opt/brocade/bsc

 

If you are familiar with OpenDaylight, then you will find the usual ODL directory structure under  /opt/brocade/bsc/controller

Log files for the controller can be found under /opt/brocade/bsc/controller/data/log/karaf.log
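
A simple way to watch the controller log for problems while working through the rest of this guide (assuming the path above):

Follow the controller log
tail -f /opt/brocade/bsc/controller/data/log/karaf.log | grep -iE 'error|exception'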

  1. Openstack environment & SDN controller integration

    Integration

    In order to connect the two solutions, ODL exposes a northbound API.  Neutron can call the ODL API by replacing the default openvswitch ML2 mechanism driver with the opendaylight mechanism driver.

    This means that when a user makes a request to create a network in OpenStack, Neutron sends an API call to OpenDaylight.  OpenDaylight can then program the switches with OpenFlow rules.

    The image below is a simplified view of the integration
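
    Once the integration described below is complete, you can observe this flow end to end: create a network with the Neutron CLI and read it back from ODL's northbound API (the same endpoint shown earlier; adjust the IP and credentials for your environment):

    # Create a network via Neutron, then confirm ODL has received it
    neutron net-create odl-test-net
    curl -u admin:admin http://192.168.100.13:8181/controller/nb/v2/neutron/networks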

    Why Integrate these two solutions?

    The basic integration will provide the same functionality as the native OpenStack solution, but there are some benefits.

    Security groups can be implemented with OpenFlow rules instead of iptables. This means complex iptables chains aren’t required to implement the security groups, which should provide better throughput for a system handling a lot of flows.
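
    One quick way to see this difference on a node once the integration is in place is to dump the OpenFlow tables that ODL programs on the integration bridge, rather than inspecting iptables chains (ODL netvirt uses OpenFlow 1.3):

    # Inspect the flows ODL has programmed on br-int
    ovs-ofctl -O OpenFlow13 dump-flows br-int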

    You can also use ODL to manage other network infrastructure under your cloud, such as physical switches and load balancers.  This configuration isn’t automatic, but it provides the potential for a better view of your network than OpenStack Neutron alone can give.

    Warning

    Integrating Openstack with Opendaylight is destructive. You must delete all your existing networks and routers before beginning.

    Compatibility

    The steps in this guide have been tested with the following product versions:

    OpenStack Version    OpenDaylight Version
    Liberty              Boron SR2 (netvirt-openstack)
    Mitaka               Boron SR2 (netvirt-openstack)

    Openstack Deployment with RDO

    systemctl  disable firewalld
    systemctl stop firewalld
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    systemctl enable network
    systemctl start network
    
    
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
    
    
    yum install -y centos-release-openstack-liberty.noarch
    yum install -y openstack-packstack
    yum update -y
    CONTROLLER_01_IP=192.168.0.11
    packstack \
     --install-hosts=${CONTROLLER_01_IP} \
     --novanetwork-pubif=enp6s0 \
     --novacompute-privif=enp2s0f0 \
     --novacompute-privif=enp2s0f1 \
     --provision-demo=n \
     --provision-ovs-bridge=n \
     --os-swift-install=n \
     --os-heat-install=y \
     --neutron-fwaas=y \
     --os-neutron-lbaas-install=y \
     --os-neutron-vpnaas-install=y \
     --nagios-install=n \
     --os-ceilometer-install=n \
     --default-password=brocade101
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat,vlan
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks extnet
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges provider
    cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2_odl]
    password = admin
    username = admin
    url = http://192.168.100.13:8181/controller/nb/v2/neutron
    [ovs]
    local_ip = 192.168.200.100
    bridge_mappings = extnet:br-ex,provider:ens161
    EOF
    systemctl restart neutron-server

    Prerequisites

    At this point you have followed both the OpenStack Deployment with RDO and Installing SDN Controller guides and have two clean deployments.  To integrate the two deployments we must install the networking_odl plugin on OpenStack and configure Neutron to use the opendaylight driver instead.  If you are using OpenStack Mitaka or newer, you can install this from the repos.   If you are using Liberty or older, you will need to fetch the code from git and install the relevant version.

    You should also ensure all instances, networks and routers have been deleted.

    Clean Openstack
    # Delete Instances
    nova list
    nova delete <instance names>
    
    
    # Delete subnet interfaces from routers
    neutron subnet-list
    neutron router-list
    neutron router-port-list <router name>
    neutron router-interface-delete <router name> <subnet ID or name>
    
    
    # Delete subnets, networks and routers
    neutron subnet-delete <subnet name>
    neutron net-list
    neutron net-delete <net name>
    neutron router-delete <router name>
    1. Prepare Openstack Controller

      1. Shut down neutron services
      2. Disable openvswitch agent
      3. Clean Openvswitch configuration
      Prepare Openstack Controller
      # Shutdown Neutron Services
      systemctl stop neutron-server
      systemctl stop neutron-openvswitch-agent
      systemctl stop neutron-l3-agent.service
      systemctl stop neutron-dhcp-agent.service
      systemctl stop neutron-metadata-agent
      systemctl stop neutron-metering-agent
      systemctl stop neutron-lbaas-agent
      systemctl stop neutron-vpn-agent
      
      # Disable Openvswitch Agent
      systemctl disable neutron-openvswitch-agent
      
      
      # Clean Openvswitch Configuration
      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      
      
      # The next command should show there are no ports left
      ovs-dpctl show
      
      

       

      Prepare Openstack Compute Node

      1. Shut down neutron services
      2. Disable openvswitch agent
      3. Clean Openvswitch configuration
      Prepare Openstack Compute Node
      # Shutdown Neutron Services
      systemctl stop neutron-openvswitch-agent
      
      # Disable Openvswitch Agent
      systemctl disable neutron-openvswitch-agent
      
      # Clean Openvswitch Configuration
      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      
      # The next command should show there are no ports left
      ovs-dpctl show
      
      

       

      Disabling neutron-openvswitch should be sufficient, but in some OpenStack distros it is linked to openvswitch: restarting openvswitch triggers the agent to start up again, which will trash the configuration applied by ODL. To prevent this from happening you should remove the openvswitch agent package from both the controller and the compute node.

      yum remove -y openstack-neutron-openvswitch.noarch

       

       

      Connect Openstack vSwitches to Opendaylight

      We will now connect the openvswitches to opendaylight, so that it can manage them.

      1. On the controller configure openvswitch
      2. On the compute node configure openvswitch
        Connect Openstack OVS to ODL
        # On the controller we will specify two provider mappings (br-ex and br-mgmt; these should have been configured during the OpenStack installation)
        # The data interface, i.e. the interface which carries the vxlan tunnel, is called ens256.
        #
        opendaylight_ip=192.168.100.13
        data_interface=$(facter ipaddress_ens256)
        read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
        ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$data_interface
        ovs-vsctl set Open_vSwitch $ovstbl other_config:provider_mappings=extnet:br-ex,provider:ens161
        ovs-vsctl set-manager tcp:${opendaylight_ip}:6640
        ovs-vsctl list Manager
        echo
        ovs-vsctl list Open_vSwitch
        
        
        # On the compute node we will specify the provider mappings (br-mgmt; this should have been configured during the OpenStack installation)
        # The data interface, i.e. the interface which carries the vxlan tunnel, is called ens256.
        #
        opendaylight_ip=192.168.100.13
        data_interface=$(facter ipaddress_ens256)
        read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
        ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$data_interface
        ovs-vsctl set Open_vSwitch $ovstbl other_config:provider_mappings=extnet:br-ex,provider:ens161
        ovs-vsctl set-manager tcp:${opendaylight_ip}:6640
        ovs-vsctl list Manager
        echo
        ovs-vsctl list Open_vSwitch
        
        
      3. Verify that both switches are now connected to ODL http://<ODL IP>:8181/index.html
        You should see a screen similar to the one below:
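
        If you prefer the CLI, you can also query ODL's operational topology for the OVSDB nodes (this RESTCONF path assumes the standard Boron OVSDB southbound; adjust the controller IP for your environment):

        # Both vSwitches should appear as nodes in the ovsdb:1 topology
        curl -u admin:admin http://192.168.100.13:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/ | python -m json.tool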

      Configure Neutron and the Networking ODL Plugin

      If you are using Mitaka or later you can install the networking ODL plugin by executing the following

      Install Networking ODL from packages
      yum install -y python-networking-odl.noarch

      If you are using Liberty or an earlier release, you need to check the plugin out from github, and switch to the relevant branch

      Install Networking ODL from source
      # Clone
      #
      yum install -y git
      git clone https://github.com/openstack/networking-odl.git
      cd networking-odl/
      
      
      # Switch branch to liberty
      #
      git fetch origin
      git checkout -b liberty-test remotes/origin/stable/liberty
      
      # Modify networking_odl/ml2/mech_driver.py
      # Add constants.TYPE_FLAT
      # TODO Add patch here
      
      # Install pip
      yum install -y python-pip
      
      
      # Install the plugin dependencies and the plugin
      #
      pip install -r requirements.txt
      python setup.py install

       

      Configure Neutron

      Configuring Neutron
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat,vlan
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks extnet
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges provider
      
      
      cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2_odl]
      password = admin
      username = admin
      url = http://192.168.100.13:8181/controller/nb/v2/neutron
      [ovs]
      local_ip = 192.168.200.100
      bridge_mappings = extnet:br-ex,provider:ens161
      EOF
      
      mysql -e "drop database if exists neutron;"
      mysql -e "create database neutron character set utf8;"
      mysql -e "grant all on neutron.* to 'neutron'@'%';"
      neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
      
      
      # Startup Neutron Services
      systemctl start neutron-server
      systemctl start neutron-openvswitch-agent
      systemctl start neutron-l3-agent.service
      systemctl start neutron-dhcp-agent.service
      systemctl start neutron-metadata-agent
      systemctl start neutron-metering-agent

       

      At this point your switches should look like this:

      Controller:

      Controller vSwitch
       ovs-vsctl show
      9104e5a8-c31d-4219-bf31-07027eca0ec2
          Manager "tcp:192.168.100.13:6640"
              is_connected: true
          Bridge br-mgmt
              Port br-mgmt
                  Interface br-mgmt
                      type: internal
              Port "ens224"
                  Interface "ens224"
          Bridge br-int
              Controller "tcp:192.168.100.13:6653"
                  is_connected: true
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-ex
              Port br-ex
                  Interface br-ex
                      type: internal
              Port "ens192"
                  Interface "ens192"
          ovs_version: "2.5.0"

      Compute

      Compute vSwitch
      ovs-vsctl show
      c4001475-e37c-43b4-9f65-cc525df5997b
          Manager "tcp:192.168.100.13:6640"
              is_connected: true
          Bridge br-int
              Controller "tcp:192.168.100.13:6653"
                  is_connected: true
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-mgmt
              Port br-mgmt
                  Interface br-mgmt
                      type: internal
              Port "ens224"
                  Interface "ens224"
          ovs_version: "2.5.0"
  2. vRouter Integration

    The vRouter, or Vyatta router, is a Brocade software-based router.  It comes in the form of a virtual machine image.

    source keystonerc_admin
    curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    glance image-create  --file ./cirros-0.3.4-x86_64-disk.img --visibility public --container-format bare  --disk-format qcow2  --name cirros
    glance image-create  --file ./vrouter-5.0R1.qcow2 --visibility public --container-format bare  --disk-format qcow2 --name "Brocade Vyatta vRouter 5600 5.0R1"
    nova flavor-create vrouter auto 4096 4 4 --is-public true
    neutron security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1 --port-range-max 65535 --remote-ip-prefix 0.0.0.0/0 default
    neutron security-group-rule-create --direction ingress --protocol udp --port-range-min 1 --port-range-max 65535 --remote-ip-prefix 0.0.0.0/0 default
    neutron net-create private
    neutron subnet-create --name private_subnet  private 172.16.10.0/24
    network_id=$(openstack network show private -f value -c id)
    neutron router-create router
    neutron router-interface-add router private_subnet
    neutron net-create public -- --router:external --provider:network_type=flat --provider:physical_network=extnet
    neutron subnet-create --allocation-pool start=10.60.0.192,end=10.60.0.207 --gateway 10.60.0.1 --name public_subnet public 10.60.0.0/24 -- --enable_dhcp=False
    neutron router-gateway-set router public

    One drawback of the vRouter is that it requires the management network, so this must be taken into account when creating the OpenStack deployment.

    neutron net-create \
      management \
      --shared \
      --provider:network_type=vlan \
      --provider:physical_network=provider \
      --provider:segmentation_id=121
    neutron subnet-create \
      --allocation-pool start=192.168.121.192,end=192.168.121.207 \
      --no-gateway \
      --name management_subnet \
     --enable-dhcp \
      management \
      192.168.121.0/24
    tenant_id=$(keystone tenant-list | grep admin | awk '{print $2}')
    image_id=$(openstack image show -f value -c id  "Brocade Vyatta vRouter 5600 5.0R1")
    flavor_id=$(openstack flavor show  -f value -c id  "vrouter")
    network_id=$(openstack network show -f value -c id "management")
    
    
    cat <<EOF | tee --append /etc/neutron/conf.d/neutron-server/vrouter.conf
    [vrouter]
    tenant_admin_name = admin
    tenant_admin_password = brocade101
    tenant_id = ${tenant_id}
    image_id = ${image_id}
    flavor = ${flavor_id}
    management_network_id = ${network_id}
    keystone_url=http://192.168.100.100:5000/v2.0
    EOF
    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_dns_servers 10.60.0.1
    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
    crudini --set /etc/neutron/l3_agent.ini DEFAULT enable_metadata_proxy False
    crudini --set /etc/neutron/plugin.ini ml2 extension_drivers port_security
    
    declare -a instances=$(openstack server list -f value -c ID)
    for instance in ${instances[@]}; do openstack server delete $instance; done
    declare -a routers=$(neutron router-list -f value -c id)
    for router in ${routers[@]}; do neutron router-gateway-clear router; declare -a ports=$(neutron router-port-list -f value -c id ${router}); for port in ${ports[@]}; do neutron router-interface-delete ${router} ${port}; done; neutron router-delete ${router}; done
    
    
    crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins lbaas,firewall,vpnaas,neutron.services.l3_router.brocade.vyatta.vrouter_neutron_plugin.VyattaVRouterPlugin
  3. Demo preparation

    1. Create networks:
      Log in as admin on the OpenStack platform.

    Create the external networks through a Heat template.
    In /dashboard/project/stacks/ select Launch Stack and choose the “external_networks.yaml” template.

    external_networks.yaml
    heat_template_version: 2015-10-15
    
    description: |
      The heat template is used for the NetDev demo.
    resources:
     public_vlan_120:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 120
          cidr: 10.10.120.0/24
          network_start_address: 10.10.120.10
          network_end_address: 10.10.120.253
          gateway: 10.10.120.254
    
     public_vlan_122:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 122
          cidr: 10.10.122.0/24
          network_start_address: 10.10.122.10
          network_end_address: 10.10.122.253
          gateway: 10.10.122.254
    
     public_vlan_123:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 123
          cidr: 10.10.123.0/24
          network_start_address: 10.10.123.10
          network_end_address: 10.10.123.253
          gateway: 10.10.123.254

    • Add management network (/dashboard/project/networks/)
  • Create two users (/dashboard/identity/users/create_user)
  •           blue_team_leader
  •           red_team_leader

 

  • Create two projects (/dashboard/identity/)
  •           Blue Team
  •           Red Team
  •      Include users previously created in its corresponding project (Manage members)

 

  • Sign out as admin.
  • The next steps have to be executed in both tenants; here they are only explained for blue_team_leader.
  • Log on as blue_team_leader user:
  • Create tenant’s private networks:
  1. Logged in as that user, launch a new stack with the “private_network.yaml” template. It provides a router with two interfaces attached, directly connected to two subnets: dataplane_subnet and database_subnet. Each of them will contain an instance.
  2. private_network.yaml
    heat_template_version: 2015-10-15
    
    
    description: |
      The heat template is used for the NetDev demo.
    
    resources:
      dataplane_network:
        type: OS::Neutron::Net
        properties:
          name: dataplane_network
    
      dataplane_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: dataplane_network }
          cidr: 172.16.10.0/24
          name: dataplane_subnet
    
      database_network:
        type: OS::Neutron::Net
        properties:
          name: database_network
    
      database_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: database_network }
          cidr: 172.17.10.0/24
          name: database_subnet
    
      router:
            type: OS::Neutron::Router
    
      router_interface_subnet_dataplane:
            type: OS::Neutron::RouterInterface
            properties:
              router_id: { get_resource: router }
              subnet_id: { get_resource: dataplane_subnet }
    
      router_interface_subnet_database:
            type: OS::Neutron::RouterInterface
            properties:
              router_id: { get_resource: router }
              subnet_id: { get_resource: database_subnet }

  • Add router gateway (/dashboard/project/routers/)
  1. Click on Set Gateway and choose an external network. In this case vRouter-blue is connected to public_123. The external fixed IP received (gw-ip-address) will be used in the next step.
  • Plumb interfaces
    • Manually
  1. Option 1:
  2. replumb_gateway.sh ${gw-ip-address}
  3. Option 2:
  4. neutron port-list | grep ${gw-ip-address}
  5. Get the MAC from the output and, using ifconfig, find its corresponding ${tap_interface}
  6. ip link set ${tap_interface} down
    ovs-vsctl del-port br-int ${tap_interface}
    ip link set ${tap_interface} master br${vlan_id}
    ip link set ${tap_interface} up
    brctl show br${vlan_id}
    •  BWC
  1. TODO
  2. Deploy servers

  1. “servers.yaml” creates two instances. The web server is connected to the dataplane subnet; during the creation process it will be configured as a WordPress server and will be able to reach the database server. The database instance is located in the database subnet.
  2. Each of them has pre-configured security groups to allow correct communication between them:
  • The database is totally isolated; it can only be accessed by the web server.
  • The WordPress server exposes its functionality through an assigned external IP address; it can also be reached via SSH and, of course, port 80.
    servers.yaml
    heat_template_version: 2015-10-15
    
    
    description: |
      The heat template is used for the NetDev demo.
    
    
    parameters:
      tenant_name:
        type: string
        default: blue
      image_web_server:
        type: string
        default: web_server
      image_data_base:
        type: string
        default: database
      key:
        type: string
        description: my key.
        default: admin
      flavor:
        type: string
        default: m1.small
      public_network:
        type: string
        description: public network
        default: xxx
    
    resources:
      server_DB:
        type: OS::Nova::Server
        properties:
          name:
              str_replace:
                   template: database-teamname
                   params:
                       teamname: { get_param: [ tenant_name ] }
          image: { get_param: image_data_base }
          flavor: { get_param: flavor }
          key_name: { get_param: key }
          networks:
            - port: { get_resource: server_DB_port }
    
      server_DB_port:
        type: OS::Neutron::Port
        properties:
          network: database_network
          fixed_ips:
            - subnet_id: database_subnet
          security_groups:
            - { get_resource: data_base_security_group }
            - default
    
      server_HTTP:
        type: OS::Nova::Server
        depends_on: server_DB
        properties:
          name:
              str_replace:
                   template: web-server-teamname
                   params:
                       teamname: { get_param: [ tenant_name ] }
          image: { get_param: image_web_server }
          flavor: { get_param: flavor }
          key_name: { get_param: key }
          user_data_format: RAW
          user_data:
            str_replace:
              template: |
                #!/bin/bash -v
                echo -e "wr_ipaddr\tdatabase" > /etc/hosts
                cd /var/www/html
                timeout 300 bash -c 'cat < /dev/null > /dev/tcp/database/3306'
                wp core config --dbname=wordpress --dbuser=admin --dbpass=brocade101 --dbhost=database
                wp core install --url=http://float_ip --title="team_name" --admin_name=wordpress --admin_email=wordpress@brocade.com --admin_password=wordpress
    
              params:
                wr_ipaddr: { get_attr: [server_DB_port, fixed_ips, 0, ip_address] }
                float_ip: { get_attr: [HTTP_server_floating_ip, floating_ip_address] }
                team_name:
                  str_replace:
                     template: Team teamname
                     params:
                       teamname: { get_param: [ tenant_name ] }
          networks:
            - port: { get_resource: server_HTTP_port }
    
    
      server_HTTP_port:
        type: OS::Neutron::Port
        properties:
          network: dataplane_network
          fixed_ips:
            - subnet_id: dataplane_subnet
          security_groups:
            - { get_resource: web_server_security_group }
            - default
      HTTP_server_floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network_id: { get_param: public_network }
          port_id: { get_resource: server_HTTP_port }
    
      data_base_security_group:
          type: OS::Neutron::SecurityGroup
          properties:
            name: data_base_security
            rules:
              - remote_ip_prefix:
                  str_replace:
                     template: web_server_ip/32
                     params:
                       web_server_ip: { get_attr: [server_HTTP_port, fixed_ips, 0, ip_address] }
                protocol: tcp
                port_range_min: 3306
                port_range_max: 3306
      web_server_security_group:
          type: OS::Neutron::SecurityGroup
          properties:
            name: web_server_security
            rules:
              - remote_ip_prefix: 0.0.0.0/0
                protocol: tcp
                port_range_min: 80
                port_range_max: 80


NetDevEMEA : OpenStack, Opendaylight and VTN Feature

  1. Introduction


     

 

 

OpenDaylight Virtual Tenant Network (VTN) is an OpenDaylight feature that provides multi-tenant virtual networks. It allows aggregating multiple ports from many switches, both physical and virtual, to form a single isolated virtual network called a Virtual Tenant Network. Each tenant network has the capability to function as an individual switch.

The objective of this tutorial is to walk through a configuration/integration of OpenStack Mitaka with OpenDaylight that lets you explore the VTN feature.


 

 

 

The figure above shows the logical architecture after following this guide.

  1. Virtualbox configuration:

    1. Download and install Virtualbox

      Install VirtualBox (version 5.0 and up), and VirtualBox Extension packs (follow instructions for Extension packs here).

    2. Download CentOS 7.2
      Main download page for CentOS 7
    3. Run VirtualBox and create 2 x Host-Only Networks

      To create a host-only connection in VirtualBox, start by opening the preferences in VirtualBox. Go to the Network tab, and click on add a new Host-only Network.

      Host-Only networks configuration
      # This will be used for data i.e. vxlan tunnels
      #VirtualBox Host-Only Ethernet Adapter 1
      IPv4 Address 192.168.254.1
      Netmask 255.255.255.0
      DHCP Server Disabled
      # This will be used for mgmt, i.e. I connect from Windows to the VMs, or for VM to VM communication
      #VirtualBox Host-Only Ethernet Adapter 2
      IPv4 Address 192.168.120.1
      Netmask 255.255.255.0
      DHCP Server Disabled
    4. Create 3 x VirtualBox machines running CentOS 7.2 (installed from ISO), set up as follows:
      openstack-compute
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.132 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.132 enp0s9)
      openstack-controller
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.131 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.131 enp0s9)
      OpenDaylight-Controller
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.254 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.254 enp0s9)
    5. Interface Configuration Files

      The first thing to do after starting each VM is to edit all the interface configuration files. On a CentOS system they can be found under /etc/sysconfig/network-scripts/. Here is a sample of how the interface files should look for the openstack-controller.  Make sure they look similar on all 3 of your machines.

      /etc/sysconfig/network-scripts/ifcfg-enp0s3
      TYPE=Ethernet
      BOOTPROTO=dhcp
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s3
      DEVICE=enp0s3
      ONBOOT=yes
      PEERDNS=yes
      PEERROUTES=yes
      /etc/sysconfig/network-scripts/ifcfg-enp0s8
      TYPE=Ethernet
      BOOTPROTO=none
      DEFROUTE=no
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s8
      DEVICE=enp0s8
      ONBOOT=yes
      IPADDR=192.168.254.131 #Modify this value in case you are configuring openstack-compute or Opendaylight controller
      PREFIX=24
      GATEWAY=192.168.254.1
      /etc/sysconfig/network-scripts/ifcfg-enp0s9
      TYPE=Ethernet
      BOOTPROTO=none
      DEFROUTE=no
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s9
      DEVICE=enp0s9
      ONBOOT=yes
      IPADDR=192.168.120.131
      PREFIX=24
      GATEWAY=192.168.120.1
  2. Openstack Install

    1. Pre-Configuration

      Edit /etc/hosts on all three VMs so it looks like this:

      /etc/hosts
      192.168.120.131 controller.netdev.brocade.com
      192.168.120.132 compute-01.netdev.brocade.com compute
      192.168.120.254 opendaylight.brocade.com opendaylight

      Disable Firewall and NetworkManager

      systemctl disable firewalld
      systemctl stop firewalld
      systemctl disable NetworkManager 
      systemctl stop NetworkManager
      systemctl enable network 
      systemctl start network

      Disable SE Linux and install Openstack packstack RDO

      setenforce 0
      sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
      
      
      yum install -y centos-release-openstack-mitaka.noarch
      yum install -y openstack-packstack
      yum update -y

      Install OpenStack. Run the following command on the openstack-controller (the openstack-compute VM must be running):

      packstack \
      --install-hosts=192.168.120.131,192.168.120.132 \
      --novanetwork-pubif=enp0s9 \
      --novacompute-privif=enp0s8 \
      --provision-demo=n \
      --provision-ovs-bridge=n \
      --os-swift-install=n \
      --nagios-install=n \
      --os-ceilometer-install=n \
      --os-aodh-install=n \
      --os-gnocchi-install=n \
      --os-controller-host=192.168.120.131 \
      --os-compute-hosts=192.168.120.132 \
      --default-password=helsinki
    2. Test the environment

      On the same directory as the previous command was run:

      source keystonerc_admin
      curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
      glance image-create --file ./cirros-0.3.4-x86_64-disk.img --visibility public --container-format bare --disk-format qcow2 --name cirros

      Login to the UI http://192.168.120.131/dashboard (User: admin, pass: helsinki)

      Create a network and a router, create a VM, and check that it gets an IP.
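
      If you prefer to run this smoke test from the CLI instead of the dashboard, a minimal sequence looks like this (the network names and the 10.0.0.0/24 range are just examples, and m1.tiny is assumed to exist as a default flavor):

      neutron net-create test-net
      neutron subnet-create --name test-subnet test-net 10.0.0.0/24
      neutron router-create test-router
      neutron router-interface-add test-router test-subnet
      nova boot --image cirros --flavor m1.tiny --nic net-id=$(openstack network show test-net -f value -c id) test-vm
      nova list   # the instance should reach ACTIVE and get an IP from 10.0.0.0/24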

  3. Install OpenDaylight

     

    1. Install Opendaylight Boron SR2 and enable the necessary features:
       odl-ovsdb-openstack
       odl-dlux-core
       odl-ovsdb-ui
      There are two OpenStack plugins: odl-ovsdb-openstack and odl-netvirt-openstack.  The ovsdb plugin is the older one and will be deprecated shortly, but it is what is used within BSC.   The next release of BSC should switch to the netvirt plugin.
      curl -O --insecure https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.5.2-Boron-SR2/distribution-karaf-0.5.2-Boron-SR2.tar.gz
      tar -zxvf distribution-karaf-0.5.2-Boron-SR2.tar.gz
      ln -s distribution-karaf-0.5.2-Boron-SR2 odl
      cd odl/bin
      ./start
      ./client -u karaf
      feature:install odl-ovsdb-openstack odl-dlux-core odl-ovsdb-ui

      Verify the install by browsing to the DLUX UI (admin : admin)

      http://192.168.120.254:8181/index.html#/topology

      Also check that the REST API works and is returning an empty set of networks

      curl -u admin:admin http://192.168.120.254:8181/controller/nb/v2/neutron/networks

  4. Openstack Integration

    1. Erase all VMs, networks, routers and ports in the Controller Node

      Start by deleting any VMs, networks and routers that you have already created during testing. Before integrating OpenStack with OpenDaylight, you must clean up all the unwanted data from the OpenStack database. When using OpenDaylight as the Neutron back-end, ODL expects to be the only source of Open vSwitch configuration. Because of this, it is necessary to remove existing OpenStack and Open vSwitch settings to give OpenDaylight a clean slate. The following steps will guide you through the cleaning process.

      # Delete instances
      $ nova list
      $ nova delete <instance names>
      # Remove link from subnets to routers
      $ neutron subnet-list
      $ neutron router-list
      $ neutron router-port-list <router name>
      $ neutron router-interface-delete <router name> <subnet ID or name>
      # Delete subnets, nets, routers
      $ neutron subnet-delete <subnet name>
      $ neutron net-list
      $ neutron net-delete <net name>
      $ neutron router-delete <router name>
      # Check that all ports have been cleared – at this point, this should be an empty list
      $ neutron port-list
      # Stop the neutron service
      $ service neutron-server stop
      

      While Neutron is managing the OVS instances on the compute and control nodes, OpenDaylight and Neutron can be in conflict. To prevent issues, we turn off the Neutron server on the network controller and Neutron’s Open vSwitch agents on all hosts.

    2. Add an external bridge port

      Create  a new interface configuration file  /etc/sysconfig/network-scripts/ifcfg-br-ex

      It should look something like this (change the IPs to match your system – this should be the IP previously assigned to enp0s3)

      /etc/sysconfig/network-scripts/ifcfg-br-ex
      DEVICE=br-ex
      DEVICETYPE=ovs
      TYPE=OVSBridge
      BOOTPROTO=static
      IPADDR=172.168.0.78 # Previous IP associated with your enp0s3
      NETMASK=255.255.255.0 # Previous netmask
      GATEWAY=172.168.0.1 # Previous gateway
      ONBOOT=yes
      PEERDNS=yes
      PEERROUTES=yes
      

      Update enp0s3 – you can comment out the original settings, and add the new lines below

       vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

      /etc/sysconfig/network-scripts/ifcfg-enp0s3
      #TYPE=Ethernet
      #BOOTPROTO=dhcp
      #DEFROUTE=yes
      #IPV4_FAILURE_FATAL=no
      #IPV6INIT=no
      #IPV6_AUTOCONF=yes
      #IPV6_DEFROUTE=yes
      #IPV6_PEERDNS=yes
      #IPV6_PEERROUTES=yes
      #IPV6_FAILURE_FATAL=no
      #NAME=enp0s3
      #UUID=edcc0443-c780-48a0-bf2f-5de17751db78
      #DEVICE=enp0s3 #ONBOOT=yes
      #PEERDNS=yes
      #PEERROUTES=yes
      DEVICE=enp0s3
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-ex
      ONBOOT=yes
  5. Connect Openstack Controller and Compute  OVS to ODL

    Run the following commands on both OpenStack nodes:

    1. Set ODL Management IP
      export ODL_IP=192.168.120.254
      export OS_DATA_INTERFACE=enp0s8

      Stop Neutron

      systemctl stop neutron-server
      systemctl stop neutron-openvswitch-agent
      systemctl stop neutron-l3-agent.service
      systemctl stop neutron-dhcp-agent.service
      systemctl stop neutron-metadata-agent
      systemctl stop neutron-metering-agent

      Stop the Neutron OVS processes. You must remove the openvswitch agent package, otherwise when you restart openvswitch the agent will be started again and will trash your OVSDB.

      systemctl stop neutron-openvswitch-agent
      systemctl disable neutron-openvswitch-agent
      yum remove -y openstack-neutron-openvswitch.noarch

      Clean Switches on controller

      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      ovs-dpctl del-if ovs-system enp0s3
      ovs-dpctl del-if ovs-system vxlan_sys_4789
      ovs-dpctl show
      data_interface=$(facter ipaddress_${OS_DATA_INTERFACE})
      read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
      ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=${data_interface}
      ovs-vsctl set-manager tcp:${ODL_IP}:6640
      ovs-vsctl list Manager
      echo
      ovs-vsctl list Open_vSwitch

      Bring br-ex and its associated interface down and back up:

      ifdown br-ex
      ifdown enp0s3
      ifup enp0s3
      ifup br-ex
    2. Checking

      OVS configuration. 

      [user@openstackController ~]$ sudo ovs-vsctl show
      72e6274a-7071-4419-9f86-614e28b74d69
          Manager "tcp:192.168.120.254:6640"
          Bridge br-int
              Controller "tcp:192.168.120.254:6653"
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
            Bridge br-ex
              Port br-ex
                  Interface br-ex
                      type: internal
              Port "enp0s3"
                  Interface "enp0s3"
          ovs_version: "2.5.0"

       

    3. External Connectivity still works

      ping 8.8.8.8

      At this point you can now check the dlux UI, to ensure both switches show up

      http://192.168.120.254:8181/index.html#/topology

       

  6. Connect Openstack Neutron to ODL

    1. Install ODL Plugin for Neutron
      yum install -y python-networking-odl.noarch
    2. Configure Neutron ml2 to connect to ODL
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat
      cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2_odl]
      password = admin
      username = admin
      url = http://${ODL_IP}:8181/controller/nb/v2/neutron
      EOF
    3. Clean database

      mysql -e "drop database if exists neutron;"
      mysql -e "create database neutron character set utf8;"
      mysql -e "grant all on neutron.* to 'neutron'@'%';" 
      neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
      
      systemctl start neutron-server
      systemctl start neutron-l3-agent.service
      systemctl start neutron-dhcp-agent.service
      systemctl start neutron-metadata-agent
      systemctl start neutron-metering-agent
      
  7. Virtual Tenant Network Feature

    From OpenDaylight’s console

    1. Install the required features for VTN.

      feature:install odl-vtn-manager-rest

      feature:install odl-vtn-manager-neutron

    2. Test rest API

      VTN Manager provides a REST API for virtual network functions.

      Create a virtual tenant network
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X POST \
      http://192.168.120.254:8181/restconf/operations/vtn:update-vtn \
      -d '{"input":{"tenant-name":"vtn1"}}'

      Check that it was created

      Get info
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X GET \
      http://192.168.120.254:8181/restconf/operational/vtn:vtns

      more examples [1]
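
      When you are finished experimenting, the tenant can be removed with the matching RPC (this assumes the standard vtn:remove-vtn operation of the Boron VTN Manager model; see reference [1] for the exact payloads in your release):

      Remove the virtual tenant network
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X POST \
      http://192.168.120.254:8181/restconf/operations/vtn:remove-vtn \
      -d '{"input":{"tenant-name":"vtn1"}}'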

  8. Mininet

    1. Download Mininet.
    2. Launch the Mininet VM with VirtualBox.
      mininet-vm
      #System
      RAM 1024
      Processors 1
      #Network
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.133 eth0)
      NIC 1 Bridged Adapter (Provides internet connectivity)(eth1)
      
    3. Log on to the Mininet VM with the following credentials:
      • user: mininet
      • password: mininet
    4. Interface configuration file 
      vi /etc/network/interfaces

      This configuration matches the actual environment:

      /etc/network/interfaces
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto eth0
      iface eth0 inet static
      address 192.168.120.133
      netmask 255.255.255.0
      
      auto eth1
      iface eth1 inet dhcp

       

    5. start a virtual network:

      sudo mn --controller=remote,ip=192.168.120.254

      more info [3]
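
      As a minimal sanity check once Mininet is pointed at the controller (the topology size and switch name are just examples; whether pingall succeeds depends on which forwarding features you have installed):

      # Start a small tree topology against the ODL controller, then test connectivity
      sudo mn --controller=remote,ip=192.168.120.254 --topo=tree,2
      mininet> pingall
      mininet> sh ovs-ofctl -O OpenFlow13 dump-flows s1   # flows installed by the controller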

  9. References

    [1] http://docs.opendaylight.org/en/stable-boron/user-guide/virtual-tenant-network-(vtn).html (Using DevStack and different versions of OpenStack)

    [2] http://docs.opendaylight.org/en/stable-boron/opendaylight-with-openstack/openstack-with-vtn.html (Old OpenStack version)
    [3] https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet

