Installing Cisco VTS

The following sections provide details about installing VTS in a Linux-OpenStack environment or a VMware-based environment. Ensure that you review the Prerequisites chapter before you begin installing VTS.

Installing Cisco VTS in a Linux - OpenStack Environment

Installing Cisco VTS in an OpenStack environment involves:

Installing the VTC VM

The VTC VM is installed on the controller node. The VM is provided as a vtc.qcow2 file.


Note


Ensure that the following packages are installed:

  • Red Hat: yum install qemu-kvm libvirt virt-manager telnet


You will need root permission to execute the following steps.
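Before beginning, you can confirm that the prerequisite tools are on the PATH. A minimal sketch (the check_bin helper is illustrative, not part of VTS):

```shell
#!/bin/sh
# Report whether each prerequisite binary is available on this host.
check_bin() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: missing (install the packages listed above)"
    fi
}

# virsh comes from the libvirt package; qemu-img from qemu-kvm.
for b in virsh virt-manager telnet qemu-img; do
    check_bin "$b"
done
```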


    Step 1   Connect to the controller node via SSH, and copy the vtc.qcow2 file to /var/lib/libvirt/images/ folder.
    Step 2   Copy the vtc.xml file to your controller (a sample file is provided). Modify it to match your setup.
    Step 3   Create the VTC VM using following command:
    virsh create vtc.xml
    Step 4   Run the command:
    virsh list --all

    It should display:

    Id    Name    State
    --------------------------------------------------
    2     VTC     running
    Step 5   Install virt-manager. Run yum install virt-manager.
    Step 6   Install display-related packages. Run yum groupinstall "X Window System" "KDE Desktop". (The group names contain spaces and must be quoted.)
    Step 7   Log out, and SSH into the controller again, with X11 forwarding enabled (for example, ssh -X), so that virt-manager can display.

    While logging in, if you get the error message /usr/bin/xauth: file /root/.Xauthority does not exist, log out once more and log in again.

    Step 8   Start virt-manager. Run virt-manager.
    Step 9   Once the virt-manager window opens, click the VTC VM to open the VTC VM console.

    The console displays an installation wizard that takes you through the steps to configure the VTC VM for the first time.

    Step 10   Enter the VTC VM hostname, IP address, subnet mask, gateway address, DNS nameserver, domain search (for example, cisco.com), and NTP server (this can be the same as the gateway IP address) when prompted. Separate the values with spaces.
    Step 11   Enter the default vts-admin user password. The vts-admin user serves as a root user for password recovery. If you log in to the VTC VM with the vts-admin username and password again, the same setup dialog is displayed.
    Step 12   Enter the administrative username and password. Use this username and password to operate the VTC VM.

    The VTC VM reboots at this point. Wait about two minutes for the VTC VM to come up. You can ping the IP address given during setup to check whether the VTC VM is up.

    Step 13   SSH into the VTC VM using its IP address and the administrative username/password given in the setup process (not the vts-admin user).
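    The virsh list --all check in Step 4 can be scripted when automating this procedure. The helper below (hypothetical, not part of VTS) extracts the State column for a named VM:

```shell
#!/bin/sh
# Extract the State column for a named VM from `virsh list --all` output.
# Usage: virsh list --all | vm_state VTC
vm_state() {
    awk -v vm="$1" '$2 == vm {
        $1 = ""; $2 = ""          # drop Id and Name columns
        sub(/^ +/, "")            # strip the leading separators
        print                     # what remains is the State ("running", "shut off", ...)
    }'
}
```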

    Modifying the Credentials File

    Before logging in to the VTS GUI, modify the credentials file in VTC, available at /usr/lib/python2.7/dist-packages/vtsHostAgent/credentials. The HostAgent installer uses this file to place the right credentials, and the correct versions of the plugin and host agents, on the plugin and host agent nodes. This step is necessary so that you do not have to copy the plugin and host agents into the respective directories manually.

    When you change the password using the GUI, it runs the host agent installer, modifies the files, and restarts the OpenStack Neutron services.

    Note


    In High Availability mode, modify the file in both the VTCs.


    An example is given below. The IP address for NCS is the VIP address.

    [ncs]
    username = admin
    password = admin
    ip = 172.20.98.246

    [compute1]
    username = admin
    password = cisco123
    ip = 172.20.98.199

    [compute2]
    username = admin
    password = cisco123
    ip = 172.20.98.200

    [compute3]
    username = admin
    password = cisco123
    ip = 172.20.98.208

    [compute4]
    username = admin
    password = cisco123
    ip = 172.20.98.209

    [controller]
    username = admin
    password = cisco123
    ip = 172.20.98.197

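    Because a malformed credentials file causes the HostAgent installer to push wrong credentials, it can be worth sanity-checking the file before use. A minimal sketch (the checker script is illustrative, not part of VTS):

```shell
#!/bin/sh
# Verify that every [section] in a VTS credentials file carries the three
# expected keys: username, password, ip. Exits nonzero if any section is short.
check_credentials() {
    awk '
        /^\[/ { if (sec != "" && n != 3) { print sec " incomplete"; bad = 1 }
                sec = $0; n = 0; next }
        /^(username|password|ip)[ \t]*=/ { n++ }
        END   { if (sec != "" && n != 3) { print sec " incomplete"; bad = 1 }
                exit bad }
    ' "$1"
}
```

Running check_credentials /usr/lib/python2.7/dist-packages/vtsHostAgent/credentials before invoking the installer flags any section that is missing a key.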
    Installing OpenStack Plugin

    Installation of the OpenStack plugin for VTC involves replacing two files, mechanism_ncs.py and cisco_vts_plugin.py. These files are provided as part of the package.

      Step 1   Connect to your OpenStack controller node using SSH.
      Step 2   Copy cisco-vts-agent, available under the host-agent build directory, to /usr/bin/cisco-vts-agent. Run:
      cp cisco-vts-agent  /usr/bin/cisco-vts-agent
      Step 3   Copy the L2 plugin mechanism_ncs.py, available under the openstack-plugin directory, to /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mechanism_ncs.py. Run:
      cp mechanism_ncs.py /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mechanism_ncs.py
      Step 4   Create the directory service_plugins using the following command:
      mkdir /usr/lib/python2.7/site-packages/neutron/plugins/cisco/service_plugins
      Step 5   Copy cisco_vts_plugin.py, available under the openstack-plugin directory, to the service_plugins directory. Run:
      cp cisco_vts_plugin.py /usr/lib/python2.7/site-packages/neutron/plugins/cisco/service_plugins/cisco_vts_plugin.py
      Step 6   Create __init__.py file in /usr/lib/python2.7/site-packages/neutron/plugins/cisco/service_plugins/.
      touch /usr/lib/python2.7/site-packages/neutron/plugins/cisco/service_plugins/__init__.py
      Step 7   Open /etc/neutron/neutron.conf. Replace the service_plugins = line with:
      service_plugins=neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.plugins.cisco.service_plugins.cisco_vts_plugin.VtsL3Plugin
      Step 8   Open /etc/neutron/plugin.ini. Replace the mechanism_drivers = line with:
      mechanism_drivers = ncs,openvswitch
      Step 9   In the same file, add the following content at the bottom, replacing <VTC_IP> with your VTC (NCS) IP address:
      [ml2_ncs]
      url = https://<VTC_IP>:8888/api/running/openstack
      username = admin
      password = admin
      timeout = 360
      Step 10   Add the following content at the bottom of the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
      [ml2_ncs]
      url = https://<VTC_IP>:8888/api/running/openstack
      username = admin
      password = admin
      timeout = 360
      Step 11   Restart the Neutron server and the host agent. Run:
      sudo service neutron-server restart
      
      sudo service neutron-vts-agent restart
      Step 12   On all compute nodes, copy the host agent to the appropriate directory as in step 2 above.
      Step 13   On all compute nodes, open /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file, and repeat the procedure given in step 10.
      Step 14   Restart the host agent on all compute nodes. Run:
      sudo service neutron-vts-agent restart
      Note   

      In Git, mechanism_ncs.py and cisco_vts_plugin.py are at vts/vmm-plugins/openstack/mechanism_ncs.py and vts/vmm-plugins/openstack/cisco_vts_plugin.py, respectively. cisco-vts-agent is available under vts/openstack.
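      The neutron.conf edits in Steps 7 and 9 can be scripted for repeatability. A sketch, assuming the file layout shown above (run it against a copy of the file first):

```shell
#!/bin/sh
# Rewrite the service_plugins line and append the [ml2_ncs] section.
# $1 = path to neutron.conf, $2 = VTC (NCS) IP address.
patch_neutron_conf() {
    conf="$1"; vtc_ip="$2"
    # Step 7: replace the existing service_plugins = line.
    sed -i \
        -e 's|^service_plugins *=.*|service_plugins=neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.plugins.cisco.service_plugins.cisco_vts_plugin.VtsL3Plugin|' \
        "$conf"
    # Step 9: append the [ml2_ncs] block with the VTC IP filled in.
    cat >> "$conf" <<EOF
[ml2_ncs]
url = https://$vtc_ip:8888/api/running/openstack
username = admin
password = admin
timeout = 360
EOF
}
```

The same [ml2_ncs] block can be appended to ovs_neutron_plugin.ini (Step 10) with the cat portion alone.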


      Installing the Host Agent

      The host agent installer can be used to install or upgrade the VTS host agent on your compute/controller nodes. To install the host agent:


        Step 1   Copy the host agent installer to your VTC.
        Step 2   Create a credentials file that contains your compute, NCS, and controller credentials. A sample file is at vts/install/hostagent/credentials.
        Step 3   You can run the installer standalone, or as a REST server that accepts REST APIs from the northbound GUI.
        • Usage 1: python vts-installer.py -a <Host agent File> -I <VTC IP> -p <NCS PORT> -n -c <CREDENTIALS FILE>
          1. This installs/upgrades the host agent on all the controller/compute nodes configured in NCS, and starts a web server so the installation can be monitored through northbound REST APIs.

        • Usage 2: python vts-installer.py -a <Host agent File> -I <VTC IP> -p <NCS PORT> -n <Hostname> -c <CREDENTIALS FILE>
          1. This installs/upgrades the host agent on the specified hosts, and starts a web server so the installation can be monitored through northbound REST APIs.

        • Usage 3: python vts-installer.py -a <Host agent File> -I <VTC IP> -p <NCS PORT> -c <CREDENTIALS FILE>
          1. This starts a web server through which you can start or monitor an installation or upgrade via northbound APIs.
          2. Northbound API to install on all nodes: curl -X POST http://<VTC_IP>:8100/install -v
          3. Northbound API to install on a particular node: curl -X POST http://<VTC_IP>:8100/install/compute1 -v
          4. Northbound API to get install status for all nodes: curl -X GET http://<VTC_IP>:8100/install -v
          5. Northbound API to get install status for a particular node: curl -X GET http://<VTC_IP>:8100/install/compute1 -v
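        The northbound calls above share one URL pattern, so a small wrapper keeps them consistent. The helper below is a dry-run sketch (it only prints the curl command it would execute; the endpoint paths come from the list above):

```shell
#!/bin/sh
# Build the curl command for the installer's northbound API.
# $1 = VTC IP, $2 = HTTP method (POST to install, GET for status),
# $3 = optional node name (omit to target all nodes).
vts_install_api() {
    url="http://$1:8100/install${3:+/$3}"
    echo "curl -X $2 $url -v"
}
```

For example, vts_install_api 172.20.98.246 POST compute1 prints the command that starts an install on compute1; pipe the output to sh to actually run it.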

        Installing Cisco VTS on a VMware Environment

        Installing Cisco VTS on a VMware environment includes:

        Installing VTC on ESXi

        To install VTC on an ESXi host:


          Step 1   Open the Browse Datastore window on the ESXi host where VTC needs to be installed.
          Step 2   Create a folder in which to upload the VTC image, for example, VTC-A.
          Step 3   Upload the vtc.vmdk and vtc.vmx files to this folder.
          Step 4   Right-click vtc.vmx, and select Add to Inventory.

          This will create the VM for VTC.

          Note   

          We recommend that you configure 8 vCPUs and 16 GB of virtual memory for the VTC VM, for better performance.

          Step 5   Power ON the VM, and open the console for it.

          The console window automatically starts a script that prompts you to configure the following:

          • VTS Hostname

          • DHCP or static IP configuration (for static IP, provide the following)

          • Underlay IP address for VTC

          • Underlay Netmask

          • Underlay Gateway address

          • DNS Address

          • DNS Search domain

          • NTP address

          • Password change for user vts-admin

          • Administrator User

          • Password for administrator user


          Installing vCenter VTC Plugin

          Do the following to install the vCenter plugin:


            Step 1   Log in to your VTC server, and change directory to:
            /opt/cisco/packages/vtc/bin
            Step 2   As root, run:
            vwcregister.py -s <vCenter IP> -u <vCenter administrative user> -p <vCenter administrative user password>

            We recommend that you use the administrator@vsphere.local user when registering the plugin.

            To verify that the plugin has been installed, go to https://<vcenter-ip-address>/mob/?moid=ExtensionManager. The table should list the following:

            extensionList["com.cisco.vts.vwcplugin"]

            Initializing vCenter Plugin

            Microsoft SQL Server is the supported database. The plugin can connect to an external database or to a database on the vCenter server machine. You need to configure the port and the username/password on the VTS Configuration tab to create the database for VTS. The vCenter plugin also needs to communicate with VTC to push configuration and receive the events for provisioning the overlay.


              Step 1   Click one of the virtual distributed switches (VDS) you created.

              The VTS plugin adds three tabs to every VDS: VTS Configuration, VTS Networks, and VTS Router. These tabs can be accessed by clicking the Manage tab.

              Step 2   Click the Manage tab, and select VTS Configuration.
              Step 3   Enter the following details:
              • Database IP
              • Database Port - The port that you have configured for the SQL server.
              • Database Username
              • Database Password
              Step 4   Click Submit.

              After you click Submit, the GUI displays a Set button. This indicates that the database configuration has been set, and shows the current status of the database. Copying database entries from a previous database to a new database is not supported. Updating the database can be a disruptive operation.

              After you save the database information, a new box appears in which you enter the VTS IP address, username, and password.

              Step 5   Enter the VTS IP address, username, and password. Make sure that the configured user has admin privileges. If both connections are successful, the plugin is ready to create networks, subnets, and routers. With each new network, a new port group is created for every VDS in vCenter.

              After you click Submit, the GUI displays a Set button. Updating the parameters can be a disruptive operation.