
NetDevEMEA : Vyatta router integration with OpenStack

  1. Introduction

    The target objective is to deliver a demonstration using an architecture consisting of:

 

The figure above shows the functional architecture of a lightweight demonstration. This topology can demonstrate simple deployments of VNFs and applications, and how to programmatically drive configuration for the vRouter, the vADC and the VCS fabric itself.

  • Brocade SDN Controller: A commercial wrap of the OpenDaylight controller, stabilised with support. OpenDaylight is the de facto open source SDN controller and delivers OpenFlow, NETCONF and PCEP functionality out of the box.
  • OpenStack: The VIM (infrastructure orchestration) layer in this demo will be OpenStack, at a code level that has proven integration with Brocade products, deployed using the developer automated build of OpenStack. Neutron, the OpenStack networking project, will connect to the SDN controller, the vRouter and the VDX switching architecture.
  • Brocade vRouter: The Brocade vRouter is a fully functional routing platform implemented in software; the 5600 platform in particular adds DPDK enhancements to packet forwarding as well as a NETCONF interface. In this demo, the vRouter will provide tenant routing, firewalling and edge VPN services for site-to-site or remote access.
  • Brocade VDX Switches (VCS Fabric): The underlay network will be formed of Brocade VDX datacentre switches joined into an Ethernet fabric. Underlay networks and physical network elements should be simple to configure and operate, which the VCS (Ethernet fabric) configuration is. OpenStack can integrate into the VCS fabric using a Neutron plugin.

The figures above show the physical architecture and the logic created by following this guide.

Installing Brocade Software Defined Networking Controller

This guide describes the steps to install the Brocade SDN Controller (BSC).

The BSC is a commercial version of OpenDaylight based on the Boron release. It includes the Brocade topology manager and a number of features that are pre-installed to support integration with OpenStack and manage OpenFlow hardware.

  1. Disable SELinux and Firewalls
    BSC Pre-reqs
    # Install Pre-reqs
    sudo yum install createrepo \
     java-1.8.0-openjdk \
     nodejs.x86_64 \
     zip \
     unzip \
     python-psycopg2 \
     psmisc \
     python2-pip
    
    pip install --upgrade pip
    pip install pyhocon
    
    
    # Disable SELinux
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
    
    
    # Disable and stop the firewall
    systemctl  disable firewalld
    systemctl stop firewalld

Install BSC

The BSC software comes packaged for a number of operating systems, including RPM and DEB formats. Get the latest version and install it using the steps below.

 

The latest version available at the time of writing is ‘SDN-Controller-dist-4.1.0-rpm.tar.gz’.

  1. Download the latest release from http://www.mybrocade.com
  2. Copy the file to your host system
  3. untar and install
Install BSC Software
# Untar and run install script
tar -zxvf SDN-Controller-dist-4.1.0-rpm.tar.gz
cd SDN-Controller-dist-4.1.0 && ./install

 

Verify Installation:

The BSC software installs OpenDaylight and the Brocade Topology Manager. These should be started automatically, and you can confirm they are running by doing the following:

  1. Browse to the topology manager http://<hostname/ip>:9001
  2. Login with the username admin, password admin
  3. You should see a screen similar to below

From the CLI you can also confirm that the ODL RESTful API is responding.
The command below should return a JSON block containing an empty array of networks.

Query ODL RESTful API
curl -u admin:admin http://192.168.100.13:8181/controller/nb/v2/neutron/networks
BSC Response
{
   "networks" : [ ]
}

Starting / Stopping Services

The BSC is composed of two processes.

  • brcd-bsc  (opendaylight controller)
  • brcd-ui     (Brocade Topology Manager)

The processes are under System V control rather than native systemd units, so you will need to use the service command to stop and start them.

Stopping / Starting Services
# Stop topology manager
service brcd-ui stop


# Stop controller
service brcd-bsc stop
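
To bring the services back up, the corresponding start commands are (assuming the same System V service names shown above):

# Start controller
service brcd-bsc start

# Start topology manager
service brcd-ui start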

Log into Karaf

Karaf Client
# Launch client
/opt/brocade/bsc/bin/client -u karaf
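
Once inside the Karaf shell you can, for example, list the installed features to confirm the ODL bundles you expect are present (standard Karaf console commands, shown here as a sketch):

# Inside the Karaf shell: list installed features and filter for ODL
feature:list -i | grep odl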

Installation Directory and Log files

The software is installed under /opt/brocade.

You will find the topology manager installed under /opt/brocade/ui

The controller is installed under /opt/brocade/bsc

 

If you are familiar with OpenDaylight, then you will find the usual ODL directory structure under  /opt/brocade/bsc/controller

Log files for the controller can be found under /opt/brocade/bsc/controller/data/log/karaf.log
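
To follow the controller log in real time:

tail -f /opt/brocade/bsc/controller/data/log/karaf.log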

  1. Openstack environment & SDN controller integration

    Integration

    In order to connect the two solutions, ODL exposes a northbound API.  Neutron can call the ODL API by replacing the default openvswitch ML2 mechanism driver with the opendaylight mechanism driver.

    This means that when a user makes a request to create a network in OpenStack, Neutron sends an API call to OpenDaylight. OpenDaylight can then program the switches with OpenFlow rules.
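
    For example, once integrated, a network created through Neutron should also be visible through the ODL northbound API (a minimal check, reusing the controller address and admin:admin credentials shown elsewhere in this guide):

    # Create a test network in OpenStack
    neutron net-create odl-test

    # Confirm OpenDaylight has received it
    curl -u admin:admin http://192.168.100.13:8181/controller/nb/v2/neutron/networks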

    The image below is a simplified view of the integration

    Why Integrate these two solutions?

    The basic integration provides the same functionality as the native OpenStack solution, but there are some benefits.

    Security groups can be implemented with OpenFlow rules instead of iptables. This means complex iptables chains aren’t required to implement the security groups, which should provide better throughput for a system handling a lot of flows.
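
    If you want to see the security groups rendered as flows, you can dump the rules ODL has programmed on the integration bridge (a sketch; ODL-managed bridges use OpenFlow 1.3):

    # Inspect the OpenFlow rules installed on br-int
    ovs-ofctl -O OpenFlow13 dump-flows br-int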

    You can also use ODL to manage other network infrastructure under your cloud such as the physical switches and load balancers.  This configuration isn’t automatic, but provides the potential to get a better view of your network than that provided by Openstack neutron alone.

    Warning

    Integrating Openstack with Opendaylight is destructive. You must delete all your existing networks and routers before beginning.

    Compatibility

    The steps in this guide have been tested with the following product versions:

    OpenStack Version    OpenDaylight Version
    Liberty              Boron SR2 (netvirt-openstack)
    Mitaka               Boron SR2 (netvirt-openstack)

    Openstack Deployment with RDO

    systemctl  disable firewalld
    systemctl stop firewalld
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    systemctl enable network
    systemctl start network
    
    
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
    
    
    yum install -y centos-release-openstack-liberty.noarch
    yum install -y openstack-packstack
    yum update -y
    CONTROLLER_01_IP=192.168.0.11
    packstack \
     --install-hosts=${CONTROLLER_01_IP} \
     --novanetwork-pubif=enp6s0 \
     --novacompute-privif=enp2s0f0 \
     --novacompute-privif=enp2s0f1 \
     --provision-demo=n \
     --provision-ovs-bridge=n \
     --os-swift-install=n \
     --os-heat-install=y \
     --neutron-fwaas=y \
     --os-neutron-lbaas-install=y \
     --os-neutron-vpnaas-install=y \
     --nagios-install=n \
     --os-ceilometer-install=n \
     --default-password=brocade101
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat,vlan
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks extnet
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges provider
    cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2_odl]
    password = admin
    username = admin
    url = http://192.168.100.13:8181/controller/nb/v2/neutron
    [ovs]
    local_ip = 192.168.200.100
    bridge_mappings = extnet:br-ex,provider:ens161
    EOF
    systemctl restart neutron-server
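
    Before moving on, it is worth a quick check that the base deployment is healthy (a minimal sketch; keystonerc_admin is the credentials file generated by packstack):

    source ~/keystonerc_admin
    openstack service list
    neutron agent-list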

    Prerequisites

    At this point you have followed both the OpenStack Deployment with RDO and Installing SDN Controller guides and have two clean deployments. To integrate them we must install the networking_odl plugin on OpenStack and configure Neutron to use the opendaylight driver instead. If you are using OpenStack Mitaka or newer you can install this from the repos. If you are using Liberty or older you will need to fetch the code and install the relevant version from git.

    You should also ensure all instances, networks and routers have been deleted.

    Clean Openstack
    # Delete Instances
    nova list
    nova delete <instance names>
    
    
    # Delete subnet interfaces from routers
    neutron subnet-list
    neutron router-list
    neutron router-port-list <router name>
    neutron router-interface-delete <router name> <subnet ID or name>
    
    
    # Delete subnets, networks and routers
    neutron subnet-delete <subnet name>
    neutron net-list
    neutron net-delete <net name>
    neutron router-delete <router name>
    1. Prepare Openstack Controller

      1. Shut down Neutron services
      2. Disable openvswitch agent
      3. Clean Openvswitch configuration
      Prepare Openstack Controller
      # Shutdown Neutron Services
      systemctl stop neutron-server
      systemctl stop neutron-openvswitch-agent
      systemctl stop neutron-l3-agent.service
      systemctl stop neutron-dhcp-agent.service
      systemctl stop neutron-metadata-agent
      systemctl stop neutron-metering-agent
      systemctl stop neutron-lbaas-agent
      systemctl stop neutron-vpn-agent
      
      # Disable Openvswitch Agent
      systemctl disable neutron-openvswitch-agent
      
      
      # Clean Openvswitch Configuration
      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      
      
      # The next command should show there are no ports left
      ovs-dpctl show
      
      

       

      Prepare Openstack Compute Node

      1. Shut down Neutron services
      2. Disable openvswitch agent
      3. Clean Openvswitch configuration
      Prepare Openstack Compute Node
      # Shutdown Neutron Services
      systemctl stop neutron-openvswitch-agent
      
      # Disable Openvswitch Agent
      systemctl disable neutron-openvswitch-agent
      
      # Clean Openvswitch Configuration
      systemctl stop openvswitch
      rm -rf /var/log/openvswitch/*
      rm -rf /etc/openvswitch/conf.db
      systemctl start openvswitch
      ovs-vsctl show
      ovs-dpctl del-if ovs-system br-ex
      ovs-dpctl del-if ovs-system br-int
      ovs-dpctl del-if ovs-system br-tun
      
      # The next command should show there are no ports left
      ovs-dpctl show
      
      

       

      Disabling the neutron-openvswitch agent should be sufficient, but in some OpenStack distros it is linked to openvswitch: restarting openvswitch triggers the agent to start up again, which will trash the configuration applied by ODL. To prevent this from happening you should remove the openvswitch agent package from both the controller and the compute node.

      yum remove -y openstack-neutron-openvswitch.noarch

       

       

      Connect Openstack vSwitches to Opendaylight

      We will now connect the Open vSwitch instances to OpenDaylight so that it can manage them.

      1. On the controller configure openvswitch
      2. On the compute node configure openvswitch
        Connect Openstack OVS to ODL
        # On the controller we will specify the provider mappings (br-ex and br-mgmt; these should have been configured during the OpenStack installation).
        # The data interface, i.e. the interface which carries the VXLAN tunnel, is called ens256.
        #
        opendaylight_ip=192.168.100.13
        data_interface=$(facter ipaddress_ens256)
        read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
        ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$data_interface
        ovs-vsctl set Open_vSwitch $ovstbl other_config:provider_mappings=extnet:br-ex,provider:ens161
        ovs-vsctl set-manager tcp:${opendaylight_ip}:6640
        ovs-vsctl list Manager
        echo
        ovs-vsctl list Open_vSwitch
        
        
        # On the compute node we will specify the provider mappings (br-mgmt should have been configured during the OpenStack installation).
        # The data interface, i.e. the interface which carries the VXLAN tunnel, is called ens256.
        #
        opendaylight_ip=192.168.100.13
        data_interface=$(facter ipaddress_ens256)
        read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
        ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$data_interface
        ovs-vsctl set Open_vSwitch $ovstbl other_config:provider_mappings=extnet:br-ex,provider:ens161
        ovs-vsctl set-manager tcp:${opendaylight_ip}:6640
        ovs-vsctl list Manager
        echo
        ovs-vsctl list Open_vSwitch
        
        
      3. Verify that both switches are now connected to ODL: http://<ODL IP>:8181/index.html
        You should see a screen similar to the one below:
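
        You can also confirm the OVSDB connections from the controller side over RESTCONF (a sketch; ovsdb:1 is the topology where ODL Boron keeps OVSDB nodes):

        curl -u admin:admin http://192.168.100.13:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1 | python -m json.tool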

      Configure Neutron and the Networking ODL Plugin

      If you are using Mitaka or later you can install the networking ODL plugin by executing the following

      Install Networking ODL from packages
      yum install -y python-networking-odl.noarch

      If you are using Liberty or an earlier release, you need to check the plugin out from github, and switch to the relevant branch

      Install Networking ODL from source
      # Clone
      #
      yum install -y git
      git clone https://github.com/openstack/networking-odl.git
      cd networking-odl/
      
      
      # Switch branch to liberty
      #
      git fetch origin
      git checkout -b liberty-test remotes/origin/stable/liberty
      
      # Modify networking_odl/ml2/mech_driver.py
      # Add constants.TYPE_FLAT
      # TODO Add patch here
      
      # Install pip
      yum install -y python-pip
      
      
      # Install the plugin dependencies and the plugin
      #
      pip install -r requirements.txt
      python setup.py install

       

      Configure Neutron

      Configuring Neutron
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers vxlan,flat,vlan
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks extnet
      crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges provider
      
      
      cat <<EOF | tee --append /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2_odl]
      password = admin
      username = admin
      url = http://192.168.100.13:8181/controller/nb/v2/neutron
      [ovs]
      local_ip = 192.168.200.100
      bridge_mappings = extnet:br-ex,provider:ens161
      EOF
      
      mysql -e "drop database if exists neutron;"
      mysql -e "create database neutron character set utf8;"
      mysql -e "grant all on neutron.* to 'neutron'@'%';"
      neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
      
      
      # Startup Neutron Services
      systemctl start neutron-server
      # Do not start neutron-openvswitch-agent; it was disabled/removed earlier so that it does not overwrite the ODL-managed configuration
      systemctl start neutron-l3-agent.service
      systemctl start neutron-dhcp-agent.service
      systemctl start neutron-metadata-agent
      systemctl start neutron-metering-agent
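
      A quick sanity check that neutron-server came back up with the opendaylight driver loaded (a sketch; the log path is the RDO default):

      systemctl status neutron-server
      grep -i opendaylight /var/log/neutron/server.log | tail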

       

      At this point your switches should look like this:

      Controller:

      Controller vSwitch
       ovs-vsctl show
      9104e5a8-c31d-4219-bf31-07027eca0ec2
          Manager "tcp:192.168.100.13:6640"
              is_connected: true
          Bridge br-mgmt
              Port br-mgmt
                  Interface br-mgmt
                      type: internal
              Port "ens224"
                  Interface "ens224"
          Bridge br-int
              Controller "tcp:192.168.100.13:6653"
                  is_connected: true
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-ex
              Port br-ex
                  Interface br-ex
                      type: internal
              Port "ens192"
                  Interface "ens192"
          ovs_version: "2.5.0"

      Compute

      Compute vSwitch
      ovs-vsctl show
      c4001475-e37c-43b4-9f65-cc525df5997b
          Manager "tcp:192.168.100.13:6640"
              is_connected: true
          Bridge br-int
              Controller "tcp:192.168.100.13:6653"
                  is_connected: true
              fail_mode: secure
              Port br-int
                  Interface br-int
                      type: internal
          Bridge br-mgmt
              Port br-mgmt
                  Interface br-mgmt
                      type: internal
              Port "ens224"
                  Interface "ens224"
          ovs_version: "2.5.0"
  2. vRouter Integration

    The vRouter, or Vyatta router, is a Brocade software-based router. It comes in the form of a virtual machine image.

    source keystonerc_admin
    curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    glance image-create  --file ./cirros-0.3.4-x86_64-disk.img --visibility public --container-format bare  --disk-format qcow2  --name cirros
    glance image-create  --file ./vrouter-5.0R1.qcow2 --visibility public --container-format bare  --disk-format qcow2 --name "Brocade Vyatta vRouter 5600 5.0R1"
    nova flavor-create vrouter auto 4096 4 4 --is-public true
    neutron security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1 --port-range-max 65535 --remote-ip-prefix 0.0.0.0/0 default
    neutron security-group-rule-create --direction ingress --protocol udp --port-range-min 1 --port-range-max 65535 --remote-ip-prefix 0.0.0.0/0 default
    neutron net-create private
    neutron subnet-create --name private_subnet  private 172.16.10.0/24
    network_id=$(openstack network show private -f value -c id)
    neutron router-create router
    neutron router-interface-add router private_subnet
    neutron net-create public -- --router:external --provider:network_type=flat --provider:physical_network=extnet
    neutron subnet-create --allocation-pool start=10.60.0.192,end=10.60.0.207 --gateway 10.60.0.1 --name public_subnet public 10.60.0.0/24 -- --enable_dhcp=False
    neutron router-gateway-set router public

    One drawback of the vRouter is that it requires a management network, which must therefore be taken into account when creating the OpenStack deployment.

    neutron net-create \
      management \
      --shared \
      --provider:network_type=vlan \
      --provider:physical_network=provider \
      --provider:segmentation_id=121
    neutron subnet-create \
      --allocation-pool start=192.168.121.192,end=192.168.121.207 \
      --no-gateway \
      --name management_subnet \
     --enable-dhcp \
      management \
      192.168.121.0/24
    tenant_id=$(keystone tenant-list | grep admin | awk '{print $2}')
    image_id=$(openstack image show -f value -c id  "Brocade Vyatta vRouter 5600 5.0R1")
    flavor_id=$(openstack flavor show  -f value -c id  "vrouter")
    network_id=$(openstack network show -f value -c id "management")
    
    
    cat <<EOF | tee --append /etc/neutron/conf.d/neutron-server/vrouter.conf
    [vrouter]
    tenant_admin_name = admin
    tenant_admin_password = brocade101
    tenant_id = ${tenant_id}
    image_id = ${image_id}
    flavor = ${flavor_id}
    management_network_id = ${network_id}
    keystone_url=http://192.168.100.100:5000/v2.0
    EOF
    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_dns_servers 10.60.0.1
    crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
    crudini --set /etc/neutron/l3_agent.ini DEFAULT enable_metadata_proxy False
    crudini --set /etc/neutron/plugin.ini ml2 extension_drivers port_security
    
    declare -a instances=$(openstack server list -f value -c ID)
    for instance in ${instances[@]}; do openstack server delete $instance; done
    declare -a routers=$(neutron router-list -f value -c id)
    for router in ${routers[@]}; do
      neutron router-gateway-clear ${router}
      declare -a ports=$(neutron router-port-list -f value -c id ${router})
      for port in ${ports[@]}; do neutron router-interface-delete ${router} ${port}; done
      neutron router-delete ${router}
    done
    
    
    crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins lbaas,firewall,vpnaas,neutron.services.l3_router.brocade.vyatta.vrouter_neutron_plugin.VyattaVRouterPlugin
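
    Restart neutron-server so that the vRouter settings and the new service plugins take effect (assumed necessary; not shown in the original steps):

    systemctl restart neutron-server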
  3. Demo preparation

    1. Create networks:
      Log in as admin on the OpenStack platform:

    Create the external networks through a Heat template.
    In /dashboard/project/stacks/ select Launch Stack and choose the “external_networks.yaml” template.

    external_networks.yaml
    heat_template_version: 2015-10-15
    
    description: |
      This heat template is used for the NetDev demo.
    resources:
      public_vlan_120:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 120
          cidr: 10.10.120.0/24
          network_start_address: 10.10.120.10
          network_end_address: 10.10.120.253
          gateway: 10.10.120.254

      public_vlan_122:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 122
          cidr: 10.10.122.0/24
          network_start_address: 10.10.122.10
          network_end_address: 10.10.122.253
          gateway: 10.10.122.254

      public_vlan_123:
        type: http://10.18.255.130/provider_network.yaml
        properties:
          vlantag: 123
          cidr: 10.10.123.0/24
          network_start_address: 10.10.123.10
          network_end_address: 10.10.123.253
          gateway: 10.10.123.254
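
    If you prefer the CLI to the dashboard, the same stack can be launched with the Heat client (a sketch, assuming the template is saved locally as external_networks.yaml):

    source ~/keystonerc_admin
    heat stack-create external_networks -f external_networks.yaml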

  • Add management network (/dashboard/project/networks/)
  • Create two users (/dashboard/identity/users/create_user):
      • blue_team_leader
      • red_team_leader

 

  • Create two projects (/dashboard/identity/):
      • Blue Team
      • Red Team
  • Include the users previously created in their corresponding projects (Manage Members)

 

  • Sign out as admin.
  • The next steps have to be executed in both tenants; here they are only explained for blue_team_leader.
  • Log in as the blue_team_leader user.
  • Create the tenant’s private networks:
  1. Logged in as the user, launch a new stack with the “private_network.yaml” template. It provides a router with two interfaces attached, directly connected to two subnets: dataplane_subnet and database_subnet. Each of them will contain an instance.
  2. private_network.yaml
    heat_template_version: 2015-10-15
    
    
    description: |
      This heat template is used for the NetDev demo.
    
    resources:
      dataplane_network:
        type: OS::Neutron::Net
        properties:
          name: dataplane_network
    
      dataplane_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: dataplane_network }
          cidr: 172.16.10.0/24
          name: dataplane_subnet
    
      database_network:
        type: OS::Neutron::Net
        properties:
          name: database_network
    
      database_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: database_network }
          cidr: 172.17.10.0/24
          name: database_subnet
    
      router:
        type: OS::Neutron::Router

      router_interface_subnet_dataplane:
        type: OS::Neutron::RouterInterface
        properties:
          router_id: { get_resource: router }
          subnet_id: { get_resource: dataplane_subnet }

      router_interface_subnet_database:
        type: OS::Neutron::RouterInterface
        properties:
          router_id: { get_resource: router }
          subnet_id: { get_resource: database_subnet }

  • Add router gateway (/dashboard/project/routers/)
  1. Click on Set Gateway and choose an external network. In this case vRouter-blue is connected to public_123. The external fixed IP received (gw-ip-address) will be used in the next step.
  • Plumb interfaces
    • Manually
  1. Option 1:
  2. replumb_gateway.sh ${gw-ip-address}
  3. Option 2:
  4. neutron port-list | grep ${gw-ip-address}
  5. Get the MAC address from the output and use ifconfig to find the corresponding ${tap_interface}
  6. ip link set ${tap_interface} down
    ovs-vsctl del-port br-int ${tap_interface}
    ip link set ${tap_interface} master br${vlan_id}
    ip link set ${tap_interface} up
    brctl show br${vlan_id}
    •  BWC
  1. TODO
  2. Deploy servers

  1. “servers.yaml” creates two instances. The web server is connected to the dataplane subnet; during the creation process it is configured as a WordPress server and given connectivity to the database server. The database instance is located in the database subnet.
  2. Each of them has pre-configured security groups to allow correct communication between them:
  • The database is totally isolated; it can only be accessed by the web server.
  • The WordPress server exposes its functionality through an assigned external IP address and can also be reached over SSH and on port 80.
    servers.yaml
    heat_template_version: 2015-10-15
    
    
    description: |
      This heat template is used for the NetDev demo.
    
    
    parameters:
      tenant_name:
        type: string
        default: blue
      image_web_server:
        type: string
        default: web_server
      image_data_base:
        type: string
        default: database
      key:
        type: string
        description: my key.
        default: admin
      flavor:
        type: string
        default: m1.small
      public_network:
        type: string
        description: public network
        default: xxx
    
    resources:
      server_DB:
        type: OS::Nova::Server
        properties:
          name:
              str_replace:
                   template: database-teamname
                   params:
                       teamname: { get_param: [ tenant_name ] }
          image: { get_param: image_data_base }
          flavor: { get_param: flavor }
          key_name: { get_param: key }
          networks:
            - port: { get_resource: server_DB_port }
    
      server_DB_port:
        type: OS::Neutron::Port
        properties:
          network: database_network
          fixed_ips:
            - subnet_id: database_subnet
          security_groups:
            - { get_resource: data_base_security_group }
            - default
    
      server_HTTP:
        type: OS::Nova::Server
        depends_on: server_DB
        properties:
          name:
              str_replace:
                   template: web-server-teamname
                   params:
                       teamname: { get_param: [ tenant_name ] }
          image: { get_param: image_web_server }
          flavor: { get_param: flavor }
          key_name: { get_param: key }
          user_data_format: RAW
          user_data:
            str_replace:
              template: |
                #!/bin/bash -v
                echo -e "wr_ipaddr\tdatabase" > /etc/hosts
                cd /var/www/html
                timeout 300 bash -c 'cat < /dev/null > /dev/tcp/database/3306'
                wp core config --dbname=wordpress --dbuser=admin --dbpass=brocade101 --dbhost=database
                wp core install --url=http://float_ip --title="team_name" --admin_name=wordpress --admin_email=wordpress@brocade.com --admin_password=wordpress
    
              params:
                wr_ipaddr: { get_attr: [server_DB_port, fixed_ips, 0, ip_address] }
                float_ip: { get_attr: [HTTP_server_floating_ip, floating_ip_address] }
                team_name:
                  str_replace:
                     template: Team teamname
                     params:
                       teamname: { get_param: [ tenant_name ] }
          networks:
            - port: { get_resource: server_HTTP_port }
    
    
      server_HTTP_port:
        type: OS::Neutron::Port
        properties:
          network: dataplane_network
          fixed_ips:
            - subnet_id: dataplane_subnet
          security_groups:
            - { get_resource: web_server_security_group }
            - default
      HTTP_server_floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network_id: { get_param: public_network }
          port_id: { get_resource: server_HTTP_port }
    
      data_base_security_group:
          type: OS::Neutron::SecurityGroup
          properties:
            name: data_base_security
            rules:
              - remote_ip_prefix:
                  str_replace:
                     template: web_server_ip/32
                     params:
                       web_server_ip: { get_attr: [server_HTTP_port, fixed_ips, 0, ip_address] }
                protocol: tcp
                port_range_min: 3306
                port_range_max: 3306
      web_server_security_group:
          type: OS::Neutron::SecurityGroup
          properties:
            name: web_server_security
            rules:
              - remote_ip_prefix: 0.0.0.0/0
                protocol: tcp
                port_range_min: 80
                port_range_max: 80
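
  As with the other templates, the stack can also be launched from the CLI, passing the tenant-specific parameters (a sketch; use whichever external network the team router is attached to, e.g. public_123):

  public_net_id=$(openstack network show public_123 -f value -c id)
  heat stack-create servers-blue -f servers.yaml -P "tenant_name=blue;public_network=${public_net_id}"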
