
NetDevEMEA: OpenStack, OpenDaylight and the VTN Feature

  1. Introduction


     

 

 

OpenDaylight Virtual Tenant Network (VTN) is an OpenDaylight feature that provides multi-tenant virtual networks. It allows multiple ports, both physical and virtual, to be aggregated into a single isolated virtual network called a Virtual Tenant Network. Each tenant network can function as an individual switch.

The objective of this tutorial is to walk through a configuration/integration of OpenStack Mitaka with OpenDaylight that allows you to explore the VTN feature.


 

 

 

The figure above shows the logical architecture you will end up with after following this guide.

  1. Virtualbox configuration:

    1. Download and install Virtualbox

      Install VirtualBox (version 5.0 or later) and the VirtualBox Extension Pack (follow the Extension Pack instructions here).

    2. Download CentOS 7.2
      Main download page for CentOS 7
    3. Run VirtualBox and create 2 x Host-Only Networks

      To create a host-only network in VirtualBox, start by opening the VirtualBox preferences, go to the Network tab, and click to add a new Host-Only Network.

      Host-Only networks configuration
      # This will be used for data i.e. vxlan tunnels
      #VirtualBox Host-Only Ethernet Adapter 1
      IPv4 Address 192.168.254.1
      Netmask 255.255.255.0
      DHCP Server Disabled
      # This will be used for mgmt, i.e. connecting from the host to the VMs, or for VM-to-VM communication
      #VirtualBox Host-Only Ethernet Adapter 2
      IPv4 Address 192.168.120.1
      Netmask 255.255.255.0
      DHCP Server Disabled
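The same two networks can be created from the command line instead of the GUI. The sketch below builds the VBoxManage argument lists (suitable for subprocess.run()); it assumes VirtualBox names the first and second host-only interfaces vboxnet0 and vboxnet1, which is its default numbering.

```python
# Illustrative: the VBoxManage equivalents of the GUI steps above, as
# argument lists. VirtualBox assigns interface names itself on create;
# vboxnet0/vboxnet1 are the usual defaults for the first two interfaces.
HOSTONLY_NETS = [
    # (interface name, host IP, netmask) - data network for VXLAN tunnels
    ("vboxnet0", "192.168.254.1", "255.255.255.0"),
    # management network (host-to-VM and VM-to-VM traffic)
    ("vboxnet1", "192.168.120.1", "255.255.255.0"),
]

def hostonly_commands(nets):
    """Return the VBoxManage commands that create and address each network."""
    cmds = []
    for name, ip, mask in nets:
        cmds.append(["VBoxManage", "hostonlyif", "create"])
        cmds.append(["VBoxManage", "hostonlyif", "ipconfig", name,
                     "--ip", ip, "--netmask", mask])
        # DHCP stays disabled: no dhcpserver is ever defined for the interface.
    return cmds
```

Note that no `VBoxManage dhcpserver` command is emitted, which matches the "DHCP Server Disabled" setting above.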
    4. Create 3 x VirtualBox machines running CentOS 7.2 (installed from ISO), set up as follows:
      openstack-compute
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.132 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.132 enp0s9)
      openstack-controller
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.131 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.131 enp0s9)
      OpenDaylight-Controller
      #System
      RAM 4096
      Processors 2
      #Network
      NIC 1 Bridged Adapter (Provides internet connectivity)  (enp0s3)
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.254 enp0s8)
      NIC 3 Host-Only VirtualBox Host-Only Ethernet Adapter 2 (statically configured 192.168.120.254 enp0s9)
    5. Interface Configuration Files

      The first thing to do after starting each VM is to edit all the interface configuration files. On CentOS they can be found under /etc/sysconfig/network-scripts/. Below is a sample of what the interface files should look like for the openstack-controller; make sure they look similar on all three of your machines.

      /etc/sysconfig/network-scripts/ifcfg-enp0s3
      TYPE=Ethernet
      BOOTPROTO=dhcp
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s3
      DEVICE=enp0s3
      ONBOOT=yes
      PEERDNS=yes
      PEERROUTES=yes
      /etc/sysconfig/network-scripts/ifcfg-enp0s8
      TYPE=Ethernet
      BOOTPROTO=none
      DEFROUTE=no
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s8
      DEVICE=enp0s8
      ONBOOT=yes
      IPADDR=192.168.254.131 #Modify this value in case you are configuring openstack-compute or Opendaylight controller
      PREFIX=24
      GATEWAY=192.168.254.1
      /etc/sysconfig/network-scripts/ifcfg-enp0s9
      TYPE=Ethernet
      BOOTPROTO=none
      DEFROUTE=no
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=enp0s9
      DEVICE=enp0s9
      ONBOOT=yes
      IPADDR=192.168.120.131
      PREFIX=24
      GATEWAY=192.168.120.1
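The per-host ifcfg files differ only in the address, so they can also be rendered from the address plan rather than edited by hand on each VM. The sketch below is illustrative; the HOSTS table and render_ifcfg helper are not part of the guide, but the addresses mirror the plan above.

```python
# Illustrative: render the static ifcfg files for enp0s8/enp0s9 from the
# address plan used throughout this guide.
HOSTS = {
    "openstack-controller": {"enp0s8": "192.168.254.131", "enp0s9": "192.168.120.131"},
    "openstack-compute":    {"enp0s8": "192.168.254.132", "enp0s9": "192.168.120.132"},
    "opendaylight":         {"enp0s8": "192.168.254.254", "enp0s9": "192.168.120.254"},
}

def render_ifcfg(device, ipaddr):
    """Return the contents of /etc/sysconfig/network-scripts/ifcfg-<device>."""
    # The gateway is the host-only adapter's address, i.e. .1 on the same /24.
    gateway = ipaddr.rsplit(".", 1)[0] + ".1"
    return "\n".join([
        "TYPE=Ethernet",
        "BOOTPROTO=none",
        "DEFROUTE=no",
        f"NAME={device}",
        f"DEVICE={device}",
        "ONBOOT=yes",
        f"IPADDR={ipaddr}",
        "PREFIX=24",
        f"GATEWAY={gateway}",
    ]) + "\n"
```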
  2. Openstack Install

    1. Pre-Configuration

      Edit /etc/hosts on all three VMs so it looks like this:

      /etc/hosts
      192.168.120.131 controller.netdev.brocade.com controller
      192.168.120.132 compute-01.netdev.brocade.com compute
      192.168.120.254 opendaylight.brocade.com opendaylight

      Disable Firewall and NetworkManager

      systemctl disable firewalld
      systemctl stop firewalld
      systemctl disable NetworkManager 
      systemctl stop NetworkManager
      systemctl enable network 
      systemctl start network

      Disable SE Linux and install Openstack packstack RDO

      setenforce 0
      sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
      
      
      yum install -y centos-release-openstack-mitaka.noarch
      yum install -y openstack-packstack
      yum update -y

      Install OpenStack. Run the following command on the openstack-controller (the openstack-compute VM must be running):

      packstack\
      --install-hosts=192.168.120.131,192.168.120.132\
      --novanetwork-pubif=enp0s9\
      --novacompute-privif=enp0s8\
      --provision-demo=n\
      --provision-ovs-bridge=n\
      --os-swift-install=n\
      --nagios-install=n\
      --os-ceilometer-install=n\
      --os-aodh-install=n\
      --os-gnocchi-install=n\
      --os-controller-host=192.168.120.131\
      --os-compute-hosts=192.168.120.132\
      --default-password=helsinki
    2. Test the environment

      From the same directory where the previous command was run:

      source keystonerc_admin
      curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
      glance image-create --file ./cirros-0.3.4-x86_64-disk.img --visibility public --container-format bare --disk-format qcow2 --name cirros

      Login to the UI http://192.168.120.131/dashboard (User: admin, pass: helsinki)

      Create a network and a router, then create a VM and check that it gets an IP.

  3. Install OpenDaylight

     

    1. Install OpenDaylight Boron SR2 and enable the necessary features:
       odl-ovsdb-openstack
       odl-dlux-core
       odl-ovsdb-ui
      There are two OpenStack plugins: odl-ovsdb-openstack and odl-netvirt-openstack. The OVSDB plugin is the older one and will be deprecated shortly, but it is what is used within BSC. The next release of BSC should switch to the netvirt plugin.
      curl -O --insecure https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.5.2-Boron-SR2/distribution-karaf-0.5.2-Boron-SR2.tar.gz
      tar -xzf distribution-karaf-0.5.2-Boron-SR2.tar.gz
      ln -s distribution-karaf-0.5.2-Boron-SR2 odl
      cd odl/bin
      ./start
      ./client -u karaf
      feature:install odl-ovsdb-openstack odl-dlux-core odl-ovsdb-ui

      Verify the install by browsing to the DLUX UI (admin : admin)

      http://192.168.120.254:8181/index.html#/topology

      Also check that the REST API works and is returning an empty set of networks

      curl -u admin:admin http://192.168.120.254:8181/controller/nb/v2/neutron/networks
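The same check can be scripted. Below is a minimal standard-library sketch; the helper names are illustrative, and the HTTP call is separated from the parsing so the latter can be tested offline.

```python
import base64
import json
import urllib.request

NEUTRON_NB = "http://192.168.120.254:8181/controller/nb/v2/neutron/networks"

def basic_auth(user, password):
    """Build the HTTP Basic auth header that curl -u would send."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": "Basic " + token}

def parse_networks(body):
    """The NB API wraps results as {"networks": [...]}; empty means a clean install."""
    return json.loads(body).get("networks", [])

def fetch_networks():
    """Query the controller and return the current list of Neutron networks."""
    req = urllib.request.Request(NEUTRON_NB, headers=basic_auth("admin", "admin"))
    with urllib.request.urlopen(req) as resp:
        return parse_networks(resp.read().decode())
```

Right after a fresh install, `fetch_networks()` should return an empty list.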

  4. Openstack Integration

    1. Erase all VMs, networks, routers and ports in the Controller Node

      Start by deleting any VMs, networks and routers that you created during testing. Before integrating OpenStack with OpenDaylight, you must clean all of this state out of the OpenStack database: when OpenDaylight is the Neutron back-end, it expects to be the only source of Open vSwitch configuration, so existing OpenStack and Open vSwitch settings must be removed to give OpenDaylight a clean slate. The following steps will guide you through the cleanup process.

      # Delete instances
      $ nova list
      $ nova delete <instance names>
      # Remove link from subnets to routers
      $ neutron subnet-list
      $ neutron router-list
      $ neutron router-port-list <router name>
      $ neutron router-interface-delete <router name> <subnet ID or name>
      # Delete subnets, nets, routers
      $ neutron subnet-delete <subnet name>
      $ neutron net-list
      $ neutron net-delete <net name>
      $ neutron router-delete <router name>
      # Check that all ports have been cleared – at this point, this should be an empty list
      $ neutron port-list
      # Stop the neutron service
      $ service neutron-server stop
      

      As long as Neutron is managing the OVS instances on the compute and control nodes, OpenDaylight and Neutron can conflict. To prevent issues, we turn off the Neutron server on the network controller and Neutron's Open vSwitch agents on all hosts.

    2. Add an external bridge port

      Create  a new interface configuration file  /etc/sysconfig/network-scripts/ifcfg-br-ex

      It should look something like this (change the IPs to match your system; this should be the IP previously assigned to enp0s3):

      /etc/sysconfig/network-scripts/ifcfg-br-ex
      DEVICE=br-ex
      DEVICETYPE=ovs
      TYPE=OVSBridge
      BOOTPROTO=static
      IPADDR=172.168.0.78 # Previously the IP assigned to your enp0s3
      NETMASK=255.255.255.0
      # Previous IP mask
      GATEWAY=172.168.0.1 # Previous gateway
      ONBOOT=yes
      PEERDNS=yes
      PEERROUTES=yes
      

      Update enp0s3: comment out the original settings and add the new lines below.

       vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

      /etc/sysconfig/network-scripts/ifcfg-enp0s3
      #TYPE=Ethernet
      #BOOTPROTO=dhcp
      #DEFROUTE=yes
      #IPV4_FAILURE_FATAL=no
      #IPV6INIT=no
      #IPV6_AUTOCONF=yes
      #IPV6_DEFROUTE=yes
      #IPV6_PEERDNS=yes
      #IPV6_PEERROUTES=yes
      #IPV6_FAILURE_FATAL=no
      #NAME=enp0s3
      #UUID=edcc0443-c780-48a0-bf2f-5de17751db78
      #DEVICE=enp0s3
      #ONBOOT=yes
      #PEERDNS=yes
      #PEERROUTES=yes
      DEVICE=enp0s3
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-ex
      ONBOOT=yes
  5. Connect the Openstack Controller and Compute OVS to ODL

    Run the next commands on both Openstack nodes:

    1. Set ODL Management IP
      export ODL_IP=192.168.120.254
      export OS_DATA_INTERFACE=enp0s8

      Stop Neutron

      systemctl stop neutron-server
      systemctl stop neutron-openvswitch-agent
      systemctl stop neutron-l3-agent.service
      systemctl stop neutron-dhcp-agent.service
      systemctl stop neutron-metadata-agent
      systemctl stop neutron-metering-agent

      Stop the Neutron OVS processes. You must remove this package; otherwise, when you restart Open vSwitch, the agent will be started again and trash your OVSDB.

      systemctl stop neutron-openvswitch-agent
      systemctl disable neutron-openvswitch-agent
      yum remove -y openstack-neutron-openvswitch.noarch

      Clean Switches on controller

      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      ovs-dpctl del-if ovs-system enp0s3
      ovs-dpctl del-if ovs-system vxlan_sys_4789
      ovs-dpctl show
      data_interface=$(facter ipaddress_${OS_DATA_INTERFACE})
      read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
      ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=${data_interface}
      ovs-vsctl set-manager tcp:${ODL_IP}:6640
      ovs-vsctl list Manager
      ovs-vsctl list Open_vSwitch

      Bring br-ex and associated interface up and down

      ifdown br-ex
      ifdown enp0s3
      ifup enp0s3
      ifup br-ex
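If you want to script a sanity check of the OVS settings just applied, the output of `ovs-vsctl get Open_vSwitch . other_config` can be parsed as below. This is a sketch; the helper names are illustrative.

```python
import re

def parse_local_ip(other_config):
    """Extract local_ip from ovs-vsctl output like '{local_ip="192.168.254.131"}'."""
    m = re.search(r'local_ip="([0-9.]+)"', other_config)
    return m.group(1) if m else None

def expected_manager(odl_ip):
    """The manager target that ovs-vsctl set-manager should have configured."""
    return f"tcp:{odl_ip}:6640"
```

On the controller node, `parse_local_ip` applied to the real ovs-vsctl output should return the enp0s8 address, and the manager listed by `ovs-vsctl get-manager` should equal `expected_manager("192.168.120.254")`.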
    2. Checking

      OVS configuration. 

      [user@openstackController ~]$ sudo ovs-vsctl show
      72e6274a-7071-4419-9f86-614e28b74d69
          Manager "tcp:192.168.120.254:6640"
          Bridge br-int
              Controller "tcp:192.168.120.254:6653"
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-ex
              Port br-ex
                  Interface br-ex
                      type: internal
              Port "enp0s3"
                  Interface "enp0s3"
          ovs_version: "2.5.0"

       

    3. External Connectivity still works

      ping 8.8.8.8

      At this point you can check the DLUX UI to ensure both switches show up:

      http://192.168.120.254:8181/index.html#/topology

       

  6. Connect Openstack Neutron to ODL

    1. Install ODL Plugin for Neutron
      yum install -y python-networking-odl.noarch
    2. Configure Neutron ml2 to connect to ODL
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat
      cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2_odl]
      password = admin
      username = admin
      url = http://${ODL_IP}:8181/controller/nb/v2/neutron
      EOF
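The crudini calls plus the appended [ml2_odl] block amount to the settings sketched below; as an illustration (not part of the guide), the same edit can be applied with Python's configparser, using the admin credentials and ODL IP assumed throughout this tutorial.

```python
import configparser
import io

# The same settings that the crudini/heredoc commands produce.
ML2_SETTINGS = {
    "ml2": {"mechanism_drivers": "opendaylight", "type_drivers": "vxlan,flat"},
    "ml2_odl": {
        "username": "admin",
        "password": "admin",
        "url": "http://192.168.120.254:8181/controller/nb/v2/neutron",
    },
}

def apply_ml2_settings(ini_text):
    """Return ini_text with the ODL mechanism-driver settings applied."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    for section, options in ML2_SETTINGS.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in options.items():
            cfg.set(section, key, value)
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()
```

Unlike the heredoc append, this preserves any options already present in the existing sections.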
    3. Clean database

      mysql -e "drop database if exists neutron;"
      mysql -e "create database neutron character set utf8;"
      mysql -e "grant all on neutron.* to 'neutron'@'%';" 
      neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
      
      systemctl start neutron-server
      systemctl start neutron-l3-agent.service
      systemctl start neutron-dhcp-agent.service
      systemctl start neutron-metadata-agent
      systemctl start neutron-metering-agent
      
  7. Virtual Tenant Network Feature

    From OpenDaylight’s console

    1. Install the required features for VTN.

      feature:install odl-vtn-manager-rest

      feature:install odl-vtn-manager-neutron

    2. Test rest API

      VTN Manager provides REST API for virtual network functions.

      Create a virtual tenant network
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X POST \
      http://192.168.120.254:8181/restconf/operations/vtn:update-vtn \
      -d '{"input":{"tenant-name":"vtn1"}}'

      Check that it was created:

      Get info
      curl --user "admin":"admin" -H "Accept: application/json" -H \
      "Content-type: application/json" -X GET \
      http://192.168.120.254:8181/restconf/operational/vtn:vtns
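The two calls above follow RESTCONF conventions: RPCs are POSTed under /operations, and state is read under /operational. A small sketch that builds both requests (the helper names are illustrative):

```python
import json

RESTCONF = "http://192.168.120.254:8181/restconf"

def update_vtn_request(tenant_name):
    """Build (url, json_body) for the vtn:update-vtn RPC that creates a tenant."""
    url = RESTCONF + "/operations/vtn:update-vtn"
    body = json.dumps({"input": {"tenant-name": tenant_name}})
    return url, body

def vtns_url():
    """Operational datastore URL that lists every configured VTN."""
    return RESTCONF + "/operational/vtn:vtns"
```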

      more examples [1]

  8. Mininet

    1. Download Mininet.
    2. Launch the Mininet VM with VirtualBox.
      mininet
      #System
      RAM 1024
      Processors 1
      #Network
      NIC 2 Host-Only VirtualBox Host-Only Ethernet Adapter 1 (statically configured 192.168.254.133 eth0)
      NIC 1 Bridged Adapter (Provides internet connectivity)(eth1)
      
    3. Log on to the Mininet VM with the following credentials:
      • user: mininet
      • password: mininet
    4. Interface configuration file 
      vi /etc/network/interfaces

      This configuration matches the current environment:

      /etc/network/interfaces
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto eth0
      iface eth0 inet static
      address 192.168.120.133
      netmask 255.255.255.0
      
      auto eth1
      iface eth1 inet dhcp

       

    5. Start a virtual network:

      sudo mn --controller=remote,ip=192.168.120.254

      more info [3]

  9. References

    [1] http://docs.opendaylight.org/en/stable-boron/user-guide/virtual-tenant-network-(vtn).html (using DevStack and different versions of OpenStack)

    [2] http://docs.opendaylight.org/en/stable-boron/opendaylight-with-openstack/openstack-with-vtn.html (Old OpenStack version)
    [3] https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet

