Installing the Cisco Nexus 1000V

This chapter contains the following sections:

Guidelines and Limitations

  • In VXLAN multicast mode, the VTEP ports on the VEM must respond to incoming IGMP query traffic for the multicast group to which they belong. However, the default firewall rules (iptables) drop incoming IGMP query traffic before it reaches the VTEP interfaces. To allow this traffic, configure a firewall rule on the respective compute and network hosts, as follows (to persist the rule, see the sketch after this list):

    # iptables -I INPUT 1 -p igmp -j ACCEPT
  • If no interfaces other than the management interface come up after you reboot, you can either bring up the interfaces manually by entering ifconfig interface_name up, or change the ONBOOT parameter to yes in the /etc/sysconfig/network and /etc/sysconfig/network-scripts/ifcfg-<interface_name> files (see the sketch after this list).
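
  To persist the IGMP rule across reboots, one approach is to save the running rules, assuming the iptables-services package manages the firewall (firewalld, the RHEL 7 default, would need an equivalent rule instead):

    # iptables -I INPUT 1 -p igmp -j ACCEPT
    # service iptables save

  And a minimal sketch of an ifcfg file with ONBOOT enabled, using a hypothetical interface name (eth1) and illustrative values:

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=dhcp
    ONBOOT=yes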

Installing the RHEL-OSP Installer Host

Procedure
    Step 1   Bring up a VM or bare-metal server with a base installation of RHEL 7.1.
    Step 2   Configure the name servers and the management IP address.
    Step 3   If you are deploying the Cisco Nexus 1000V for KVM behind a firewall, configure a proxy host.
    1. Identify the values of the hostname and port in the /etc/yum.conf file. The line in the file that specifies the hostname and port is as follows:
      proxy=http://hostname:port
    2. In the /etc/rhsm/rhsm.conf file, modify the hostname and port variables to match the values configured in the /etc/yum.conf file.
      proxy_hostname=hostname
      proxy_port=port
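
    For example, with hypothetical values (proxy.example.com, port 8080):
      # /etc/yum.conf
      proxy=http://proxy.example.com:8080

      # /etc/rhsm/rhsm.conf
      proxy_hostname=proxy.example.com
      proxy_port=8080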
    Step 4   Assign an IP address to the provisioning interface. The RHEL-OSP Installer provides DHCP to its clients from the subnet that you enter.
    Step 5   Register the RHEL server.
    subscription-manager register

    For more information, see the following URL:

    https://access.redhat.com/documentation/en-US/Red_Hat_Subscription_Management/

    Step 6   Attach the RHEL server to a repository pool.
    1. List the subscription pools that are available.
      subscription-manager list --available
    2. Enable the repository pool.
      subscription-manager attach --pool pool-id
      Note    Make sure to enable a pool that has the RHEL-OSP Installer entitlement.
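      For example, with a hypothetical pool ID:
      subscription-manager attach --pool 8a85f9833e1404a9013e3cddf95a0599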

    Step 7   Enable repositories in the selected pools.
    subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms
    subscription-manager repos --enable=rhel-7-server-openstack-6.0-installer-rpms
    subscription-manager repos --enable=rhel-7-server-rpms
    subscription-manager repos --enable=rhel-server-rhscl-7-rpms
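
    To confirm that the repositories are enabled, you can list them:
    yum repolist enabled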
    
    Step 8   Install rhel-osp-installer.
    yum install -y rhel-osp-installer
    Step 9   Verify that a valid fully qualified domain name (FQDN) hostname has been configured in the /etc/hosts file by entering the following command:
    hostname -f

    If the command fails to return a hostname, you must configure one in the /etc/hosts file, as in the following example. The leftmost column is the IP address to be resolved, the next column is the hostname, and the last column is an optional alias.

    Note    You must assign an FQDN to the machine on which you intend to install the RHEL-OSP Installer. The FQDN identifies the domain that the RHEL-OSP Installer uses as its provisioning network. The FQDN must not conflict with any existing domain name, to prevent resource conflicts.

    IP Address    Hostname          Alias
    203.0.10.3    web.openna.com    web
    
    Step 10   Install the RHEL-OSP Installer by running rhel-osp-installer on the command line of the RHEL-OSP Installer server. For the detailed procedure, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Installer_and_Foreman_Guide/chap-Installing_the_RHEL_OpenStack_Platform_Installer.html.
    Step 11   Access the OpenStack Platform Installer's web user interface through its public IP address. If the server has only a private IP address, add an iptables rule so that you can access the web user interface.
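    A minimal sketch, assuming the web user interface is served over HTTPS (TCP port 443):
    iptables -I INPUT -p tcp --dport 443 -j ACCEPT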
    Step 12   Log in to the OpenStack Platform Installer's web user interface using the username and password displayed at the end of the RHEL-OSP Installer execution.
    Step 13   Enable IP forwarding by entering the following command at the command line prompt:
    sysctl -w net.ipv4.ip_forward=1
    Step 14   Add net.ipv4.ip_forward=1 to the /etc/sysctl.conf file.
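    One way to append the setting and verify it:
    echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
    sysctl -p
    sysctl net.ipv4.ip_forward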
    Step 15   Execute the following iptables rules at the command line prompt:
    iptables -I FORWARD -i <provisioning interface> -j ACCEPT
    iptables -I FORWARD -o <provisioning interface> -j ACCEPT
    iptables -t nat -A POSTROUTING -o <management interface> -j MASQUERADE
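
    For example, with hypothetical interface names (eth1 as the provisioning interface, eth0 as the management interface):
    iptables -I FORWARD -i eth1 -j ACCEPT
    iptables -I FORWARD -o eth1 -j ACCEPT
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE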
    

    Configuring the Provisioning Hosts

    To deploy Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP), you must add hosts to the RHEL-OSP Installer to use for provisioning. For instructions on how to add these hosts, see Chapter 5, "Deployment Scenario 1: Basic Environment," in https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Installer_and_Foreman_Guide/index.html.

    Installing OpenStack with Cisco Nexus 1000V

    Creating a New OpenStack Deployment

    Procedure
      Step 1   Launch the RHEL-OSP Installer graphical user interface (GUI).
      Step 2   From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose OpenStack Installer > Deployments.
      Step 3   From the OpenStack Deployments window, click New Deployment.
      Step 4   In the Deployment Settings pane, do the following:
      1. In the Name field, enter a name for the deployment.
      2. Use the default settings for the Messaging Provider and Platform fields.
        The default values are:
        • Messaging Provider—RabbitMQ

        • Platform—Red Hat Enterprise Linux OpenStack Platform 6 on RHEL 7

      3. (Optional) Provide a Service Password.
      4. Add a Cisco repository in the Custom Repos pane. For information on which repository to use, see https://cnsg-yum-server.cisco.com/yumrepo/.
        Note    The ML2 plugin package is in the Cisco repository. If you do not add the appropriate Cisco repository while creating a new OpenStack deployment, the deployment fails because the ML2 plugin cannot be installed.

      5. For the Neutron Networking option, click Networking.
      6. Click Next.
      Step 5   In the Network Configuration pane, create two subnets: one for external traffic and another for tenant traffic.
      1. Click New Subnet.
      2. In the New Subnet dialog box, complete the fields according to your network topology and click Create Subnet.
      3. Repeat steps 1 and 2 to create the second subnet.
      4. Drag and drop the External box from the Available Network Traffic Types area to the subnet for external traffic that you just created.
      5. Drag and drop the Tenant box from the Available Network Traffic Types area to the subnet for tenant traffic that you just created.
      6. Click Next.
      Step 6   In the Services Overview pane, click Next.
      Step 7   In the Services Configuration pane, do the following:
      1. For the Core PlugIn Type, select ML2 Core Plugin and then select N1KV ML2 Plugin and enter the VSM IP address and password.
      2. In the Services area, click Glance and choose the appropriate back-end driver.
        • For an OpenStack setup in standalone mode, choose Local File as the back-end driver.

        • For an OpenStack setup in HA mode, choose NFS as the back-end driver.

      3. In the Services area, click Cinder and choose the appropriate back-end driver.
        • For an OpenStack setup in standalone mode, choose LVM as the back-end driver.

        • For an OpenStack setup in HA mode, choose NFS as the back-end driver.

      4. Click Submit.

        You are returned to the Red Hat Enterprise Linux OpenStack Platform Installer window, and the deployment that you just created is displayed.


      Adding the n1kv_vem Class to a Host Group

      You must add the VEM puppet class (neutron::agents::n1kv_vem) to both compute and controller host groups.

      Procedure
        Step 1   From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose Configure > Puppet Classes.
        Step 2   In the Search field, enter n1kv_vem.
        Step 3   Click the neutron::agents::n1kv_vem class name. The Edit Puppet Class pane opens.
        Step 4   In the Puppet Class tab, in the Host Group field, choose the controller and compute host groups to which you want to add the VEM puppet class.
        Step 5   Click the Smart Class Parameter tab.
        Step 6   In the Smart Class Parameter pane, click the required parameters (see Table 1 below).
        Note    Configure optional parameters as required by your network topology.
        Step 7   For each parameter, check the Override checkbox and configure the appropriate value.
        Step 8   Click Submit.
        Table 1   n1kv_vem Parameters

        Required Parameters

        n1kv_vsm_ip
          Description: IP address of the Virtual Supervisor Module (VSM). The default is 127.0.0.1.
          Example: 127.0.0.1

        n1kv_vsm_domain_id
          Description: Domain ID of the VSM. The default is 1000; the value must be between 1 and 1023.
          Example: 1000

        host_mgmt_intf
          Description: Management interface of the node where the VEM is installed. The default is eth0.
          Example: eth0

        node_type
          Description: Type of node, either compute or network. The default is compute. For controller/network nodes, the node type is 'network'.
          Example: 'compute' or 'network'

        Optional Parameters (1)

        uplink_profile
          Description: Uplink interfaces that are managed by the VEM. You must also specify the uplink port profile that configures these interfaces. The default is undefined (empty).
          Note: You cannot configure the management interface as an uplink interface on the VEM.
          Example:
            eth1: port-profile1
            eth2: port-profile2

        fastpath_flood
          Description: Handles broadcast or unknown unicast packets in the fast path (KLM). The default is enable.
          Example: enable

        VXLAN Gateway Parameters (2)

        vtep_config
          Description: Virtual tunnel interface configuration for the VXLAN tunnel endpoints. The default is undefined (empty). To remove one of the VTEPs (leaving at least one remaining), delete the VTEP from the vtep_config value, save it, and trigger Puppet on the compute/network nodes. To delete all of the VTEPs, remove the variable by clicking X in front of the variable in the Compute Host Group Parameters pane.
          Example:
            vtep1:
              profile: virtprof1
              ipmode: dhcp
            vtep2:
              profile: virtprof2
              ipmode: static
              ipaddress: 192.0.2.1
              netmask: 255.255.255.0

        vteps_in_the_same_subnet
          Description: Indicates whether the VXLAN tunnel interfaces (VTEPs) belong to the same IP subnet. If they belong to the same subnet, set this parameter to true; if they belong to different subnets, set it to false. The default is false.
          If this parameter is set to true, you must modify the following sysctl ipv4 values (see the sketch after this table):
          • rp_filter (reverse path filtering)—Set this value to 2 (Loose). The default is 1 (Strict).
          • arp_ignore (ARP reply mode)—Set this value to 1 (reply only if the target IP matches that of the incoming interface). The default is 0.
          • arp_announce (ARP announce mode)—Set this value to 1. The default is 0.
          Note: Setting this parameter to false causes no change in the sysctl settings and does not revert changes made if it was originally set to true.
          For detailed descriptions of these values, see the Linux documentation at http://lxr.free-electrons.com/source/Documentation/networking/ip-sysctl.txt
          Example: true (if the VTEP interfaces are in the same subnet)

        1 Define these optional parameters based on the needs of your network configuration.
        2 These parameters are required if you are implementing the VXLAN Gateway.
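
        The sysctl changes for vteps_in_the_same_subnet=true amount to the following, shown here as a sketch using the global "all" keys (whether the Puppet module applies them globally or per interface is an assumption, not confirmed here):

          sysctl -w net.ipv4.conf.all.rp_filter=2
          sysctl -w net.ipv4.conf.all.arp_ignore=1
          sysctl -w net.ipv4.conf.all.arp_announce=1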

        Configuring the Controller Parameters

        You must configure the controller parameters. Ensure that you have added the VEM puppet class to the controller host group for both HA and non-HA deployments. For information, see Adding the n1kv_vem Class to a Host Group.

        Procedure
          Step 1   From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose OpenStack Installer > Deployments.
          Step 2   Click the Advanced Configuration tab.
          Step 3   Click Edit to navigate to Neutron in the Services list.
          Step 4   Change the following parameters to the appropriate values (see Table 2 below):
          Table 2   Neutron Controller Parameters

          Generic Set of Variables

          n1kv_vsm_ip
            Description: Comma-separated VSM IP addresses for a Multi-VSM configuration.
            Value: 10.197.129.100,10.197.129.101

          n1kv_vsm_password
            Description: Password for the VSM.

          n1kv_vsm_username
            Description: Username for the VSM.
            Value: admin

          Cisco nexus plugin
            Description: The Cisco Nexus 1000V plugin does not require this to be defined.
            Value: Remove the value in this field.

          Cisco vswitch plugin
            Description: Name of the Cisco Nexus Neutron plugin.
            Value: Remove the value in this field.

          Core plugin
            Description: Name of the Neutron core plugin used.
            Value: neutron.plugins.ml2.plugin.Ml2Plugin

          ml2_mechanism_driver
            Description: Name of the Neutron ML2 mechanism driver used.
            Value: ["cisco_n1kv"]

          ml2_network_vlan_ranges
            Description: Name of the VLAN network and the corresponding VLAN range.
            Value: ["physnet1:1000:2999"], where physnet1 is the name of the VLAN network and 1000:2999 is an example VLAN range.

          ml2_tenant_network_types
            Value: ["vlan"], ["vxlan"], or ["vlan","vxlan"]

          ml2_type_drivers
            Value: ["vlan"], ["vxlan"], or ["vlan","vxlan"]

          ml2_vni_ranges
            Description: The VXLAN range.
            Value: 5000:99999

          ml2_vxlan_group
            Description: Multicast group for the VXLAN interface. When configured, all broadcast traffic is sent to this multicast group; when left unconfigured, multicast VXLAN mode is disabled.
            Value: 239.1.1.1

          n1kv_ml2_plugin_additional_params

          default_policy_profile
            Description: Name of the default policy profile published from the VSM.
            Value: default-pp

          default_vlan_network_profile
            Description: Default logical network for VLAN networks.
            Value: default-vlan-np

          default_vxlan_network_profile
            Description: Default logical network for VXLAN networks.
            Value: default-vxlan-np

          poll_duration
            Description: The duration (in seconds) for which the OpenStack Neutron server polls to pull information published from the Cisco Nexus 1000V VSM. The minimum recommended value is 10 seconds.
            Value: 60

          http_pool_size
            Description: The number of parallel HTTP connections between the OpenStack Neutron server and the Cisco Nexus 1000V VSM that can be active at any given time.
            Value: 4

          http_timeout
            Description: The duration (in seconds) that the OpenStack Neutron server waits for a REST API call to the Cisco Nexus 1000V VSM to finish. The minimum recommended value is 10 seconds.
            Value: 15

          restrict_policy_profiles
            Description: Set this parameter to true if the visibility of policy profiles for each tenant must be controlled. The default is false.
            Value: False

          sync_interval
            Description: The interval (in seconds) after which the OpenStack Neutron server synchronizes with the Cisco Nexus 1000V VSM to keep the configuration updated and in sync.
            Note: This parameter is available only when you override the N1kv ml2 plugin additional params parameter.
            Value: 300

          max_vsm_retries
            Description: The maximum number of retry attempts for VSM REST API calls.
            Note: This parameter is available only when you override the N1kv ml2 plugin additional params parameter.
            Value: 2

          Step 5   Click Submit to save the changes.

          Configuring the Neutron Compute Parameters

          You must configure the n1kv_vem class parameters for the Neutron Compute host group.

          Before You Begin

          Ensure that you have added the VEM puppet class to the Neutron Compute and Controller host groups. For information, see Adding the n1kv_vem Class to a Host Group.

          You must bring up the Neutron Compute host group with the VEM installed in both standalone and HA deployments.

          Procedure
            Step 1   From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose OpenStack Installer > Deployments.
            Step 2   Click the Advanced Configuration tab.
            Step 3   Click Edit to navigate to neutron-compute in the Services list.
            Step 4   Change the following parameters to the appropriate values.
            Table 3   Compute (Neutron) Parameters

            enable_tunneling
              Example: false

            security_group_api
              Example: neutron

            Step 5   Click Apply.

            Configuring Additional Parameters in the n1kv.conf File

            You can configure additional parameters in the n1kv.conf file. This is an optional process.

            Procedure
              Step 1   Log in to the RHEL-OSP Installer server.
              Step 2   Open the /usr/share/openstack-puppet/modules/neutron/templates/n1kv.conf.erb file.
              Step 3   Add the variable that you need with its corresponding value.
              Step 4   Log in to the corresponding node where the VEM is installed and execute puppet agent -tv to trigger Puppet.

              Setting Up the Cisco Yum Repository

              You can set up the Cisco Yum repository.

              Before You Begin

              Make sure that the Cisco Yum repository is reachable at the following URL: https://cnsg-yum-server.cisco.com/yumrepo.

              Procedure
                Step 1   Edit the /etc/yum.repos.d/cisco_os.repo file.
                Step 2   Add the following configuration:
                [cisco-os]
                name=External repo for Cisco nexus 1000v served over HTTPS
                baseurl=https://cnsg-yum-server.cisco.com/yumrepo
                enabled=1
                gpgcheck=1
                gpgkey=https://cnsg-yum-server.cisco.com/yumrepo/RPM-GPG-KEY
                sslverify=1
                
                Step 3   Save and close the file.
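
                To confirm that the repository is visible, you can list the enabled repositories:
                yum repolist enabled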

                Deploying the Controller and Compute Hosts

                You need to deploy the controller and compute hosts.

                Procedure
                  Step 1   Assign controller and compute roles to hosts.
                  1. From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose OpenStack Installer > Deployments.
                  2. In the Search field, enter the name of the deployment and click Search.
                  3. Click the deployment name.
                  4. From the list of Deployment Roles, click + next to the controller role.
                  5. From the Free Hosts pane, check the check box next to the host that you want to deploy the role on.
                  6. Repeat steps 4 and 5 for the compute role.
                  Step 2   Click Deploy.

                  Installing Additional cisco_n1kv_plugin Patches

                  There are mandatory and optional patches for the cisco_n1kv_plugin in the Cisco Yum repository. These patches are named using the RPM naming convention, as follows:
                  cisco-n1kv-python-nova-<mandatory/optional>-patch-<python-nova version>.<patch version>.noarch.rpm
                  cisco-n1kv-openstack-dashboard-<mandatory/optional>-patch-<openstack-dashboard-version>.<patch version>.noarch.rpm
                  

                  Note    OpenStack dashboard mandatory patches must be installed on controller hosts in both OpenStack standalone and HA deployments. Python-nova mandatory patches must be installed on the compute and controller hosts in both OpenStack standalone and HA deployments.


                  Before You Begin

                  Make sure that the Cisco Yum repository has been configured. See Setting Up the Cisco Yum Repository.

                  Make sure that the OpenStack controller node is up and running.

                  Optionally, you can also download the patch by using the wget cisco_repository_path/patch_name.rpm command.

                  Patches that have a mandatory tag must be installed. Patches that have an optional tag can be installed based on your preference. Use the rpm -qpil file.rpm or yum info file.rpm command to determine which bug fixes are included in the patches. The patches are located in the Cisco Yum repository.
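
                  For example, to download and inspect a patch before installing it (the file name and version below are hypothetical, constructed from the naming convention above):

                  wget https://cnsg-yum-server.cisco.com/yumrepo/cisco-n1kv-python-nova-mandatory-patch-2014.2.2.1.noarch.rpm
                  rpm -qpil cisco-n1kv-python-nova-mandatory-patch-2014.2.2.1.noarch.rpm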

                  Procedure
                    Step 1   Install the patches from the Cisco-os repository.
                    yum install -y patch_name
                    Step 2   For the python-nova patch, restart the OpenStack Nova API services.
                    service openstack-nova-api restart      # on the controller
                    service openstack-nova-compute restart  # on the compute hosts
                    Step 3   For the OpenStack dashboard patch, restart the httpd service.
                    service httpd restart
                    

                    Configuring FQDN Parameters on a Specific Host

                    If you need to configure the fully qualified domain name (FQDN) parameter differently on one host than the other hosts in a group, you can use this procedure.

                    Procedure
                      Step 1   From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose Configure > Puppet Classes.
                      Step 2   Enter n1kv_vem in the Search field and click Search.
                      Step 3   Choose the neutron::agents::n1kv_vem class name and click the Smart Class Parameter tab.
                      Step 4   In the Smart Class Parameter pane, click each parameter that you want to change and check the Override checkbox in the corresponding pane to the right.
                      Step 5   Scroll down to the Override Value For Specific Hosts area and provide the host specific configuration based on the FQDN of the host.
                      Step 6   Click Submit.