The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
In VXLAN multicast mode, the VTEP ports on the compute and network hosts need to respond to incoming IGMP query traffic for the multicast group to which they belong. However, the default firewall rules (iptables) drop the incoming IGMP query traffic from reaching the VTEP interfaces. In order to allow this traffic, a firewall rule needs to be configured on the respective compute and network hosts, as follows:
# iptables -I INPUT 1 -p igmp -j ACCEPT
Note: In OpenStack HA deployments, you need to set this firewall rule on the controller host as well, because the controller host also functions as a network host.
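On RHEL 6, a rule inserted with iptables -I applies only to the running rule set and is lost on reboot. A hedged sketch of applying the rule and persisting it, assuming the host uses the standard RHEL 6 iptables service:

```shell
# Insert the accept rule for IGMP queries at the top of the INPUT chain
iptables -I INPUT 1 -p igmp -j ACCEPT

# Persist the running rule set to /etc/sysconfig/iptables
# (RHEL 6 iptables init script; not applicable if you manage rules another way)
service iptables save
```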
If no interfaces other than the management interface come up after you reboot, you can either bring up the interfaces manually by entering ifconfig interface_name up, or change the ONBOOT parameter to yes in the /etc/sysconfig/network file and in the /etc/sysconfig/network-scripts/ifcfg-interface_name file for each interface.
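As a sketch of the two options, assuming a non-management interface named eth1 (substitute your own interface name):

```shell
# Option 1: bring the interface up manually (does not persist across reboots)
ifconfig eth1 up

# Option 2: make the interface come up at boot by setting ONBOOT=yes
# in its ifcfg file (file name shown for the assumed eth1 interface)
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth1
```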
Step 1: Bring up a VM or bare metal server with a RHEL 6.6 base installation.
Step 2: Configure the name servers and the management IP address (eth0).
Step 3: If you are deploying the Cisco Nexus 1000V for KVM behind a firewall, configure a proxy host.
Step 4: Assign an IP address to the provisioning interface. The RHEL-OSP Installer provides DHCP for its clients from the subnet that you enter.
Step 5: Register the RHEL server:
subscription-manager register
Step 6: Attach the RHEL server to a repository pool.
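The attach step can be sketched as follows; the pool ID is a placeholder that you read from the output of the list command:

```shell
# List the subscription pools available to this system and note the Pool ID
subscription-manager list --available

# Attach the server to the chosen pool (replace <pool_id> with the real ID)
subscription-manager attach --pool=<pool_id>
```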
Step 7: Enable repositories in the selected pools:
subscription-manager repos --enable=rhel-6-server-openstack-5.0-rpms
subscription-manager repos --enable=rhel-6-server-openstack-foreman-rpms
subscription-manager repos --enable=rhel-6-server-rpms
subscription-manager repos --enable=rhel-6-server-rhscl-6-rpms
Step 8: Install the rhel-osp-installer package:
yum install -y rhel-osp-installer
Step 9: Verify that a valid fully qualified domain name (FQDN) hostname has been configured in the /etc/hosts file by entering the following command:
hostname -f
If the command fails to return a hostname, you must configure one in the /etc/hosts file. The leftmost column is the IP address to be resolved, the next column is the hostname, and the last column is an optional alias. For example:
IPAddress    Hostname          Alias
203.0.10.3   web.openna.com    web
Step 10: Install the RHEL-OSP Installer by running rhel-osp-installer on the Foreman server's command line. For the detailed procedure, see Section 2.4.5, "Installing the Red Hat Enterprise Linux OpenStack Platform Installer," in the Red Hat Enterprise Linux OpenStack Platform 5 Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) guide.
Step 11: Access the OpenStack Platform Installer's web user interface through its public IP address. If it has only a private IP address, add an iptables rule so that you can access the web user interface.
Step 12: Log in to the OpenStack Platform Installer's web user interface using the username and password that are displayed at the end of the RHEL-OSP Installer execution.
Step 13: Enable IP forwarding by entering the following command at the command-line prompt:
sysctl -w net.ipv4.ip_forward=1
Step 14: Add the same setting (net.ipv4.ip_forward = 1) to the /etc/sysctl.conf file so that it persists across reboots.
Step 15: Enter the following iptables rules at the command-line prompt:
iptables -I FORWARD -i eth1 -j ACCEPT
iptables -I FORWARD -o eth1 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Step 16: If you are running RHEL-OSP Installer version rhel-osp-installer-0.4.7-1.el6ost.noarch or lower, replace the /usr/share/openstack-foreman-installer/puppet/modules/quickstack/templates/cisco_plugins.ini.erb file with the one available in the Cisco yum repository. For information, see Configuring Additional Parameters in the cisco_plugin.ini File.
To deploy Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP), you must add hosts to the RHEL-OSP Installer to use for provisioning. For the instructions on how to add these hosts, see Chapter 4, "Configuring Hosts" in the Red Hat Enterprise Linux OpenStack Platform 5 Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) guide.
Installing OpenStack with Cisco Nexus 1000V VEM
You need to create a new OpenStack deployment.
You must add the VEM puppet class (neutron::agents::n1kv_vem) to different host groups depending on whether you are deploying OpenStack in standalone mode or High-Availability (HA) mode. When OpenStack is deployed in HA mode, the HA Controller provides the Neutron Networker services. So, in an OpenStack HA deployment, you add VEM to the HA Controller host group.
Step 1: From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose OpenStack Installer > Deployments.
Step 2: From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose Configure > Puppet Classes.
Step 3: In the Search field, enter n1kv_vem.
Step 4: Click the neutron::agents::n1kv_vem class name. The Edit Puppet Class pane opens.
Step 5: In the Puppet Class tab, in the Host Group field, choose the name of the host group to which you want to add the VEM puppet class.
For an OpenStack standalone deployment, add the puppet class to the Compute and Neutron Networker host groups.
For an OpenStack HA deployment, add the puppet class to the Compute and HA Controller host groups.
Step 6: Click the Smart Class Parameter tab.
Step 7: In the Smart Class Parameter pane, click the required parameters. See the table below.
Step 8: For each parameter, check the Override checkbox and configure the appropriate default value.
Step 9: Click Submit.
You must configure the controller parameters.
For deployments with OpenStack in HA mode, ensure that you have added the VEM puppet class to the HA controller host group. For deployments with OpenStack in standalone mode, you do not add the VEM puppet class to the controller (Neutron) host group. For information, see Adding the n1kv_vem Class to a Host Group.
Step 1: From the Configure window, choose Configure > Host Groups.
Notice that compute, controller, and Neutron Networker host groups have been formed under the deployment that you created.
Step 2: Do one of the following:
Step 3: Click the Parameters tab.
Step 4: Click Override for all of the parameters that you want to define. See the table below.
Step 5: Scroll down to the bottom of the window.
Step 6: Change the following parameters to the appropriate values.
Step 7: Click Submit.
You must configure the n1kv_vem class parameters for the Neutron Compute host group.
Ensure that you have added the VEM puppet class to the Neutron Compute host group. For information, see Adding the n1kv_vem Class to a Host Group.
You must bring up the Neutron Compute host group with the VEM installed in both standalone and HA deployments.
You need to configure the Neutron Networker parameters.
Ensure that you have added the VEM puppet class to the Neutron Networker host group. For information, see Adding the n1kv_vem Class to a Host Group.
You must bring up a Neutron Networker host group with the VEM installed.
Step 1: From the Configure window, choose Configure > Host Groups.
Notice that compute, controller, and Neutron Networker host groups have been formed under the deployment that you created.
Step 2: For OpenStack in standalone mode, choose the base_RedHat_7/Deployment_Name/Neutron Networker host group.
Step 3: Click the Parameters tab.
Step 4: Click Override for all of the parameters that you want to define. See the table below.
Step 5: Scroll down to the bottom of the window.
Step 6: Change the following parameters to the appropriate values.
Step 7: Click Submit.
If you need to configure the fully qualified domain name (FQDN) parameter differently on one host than on the other hosts in a group, use this procedure.
Step 1: From the Red Hat Enterprise Linux OpenStack Platform Installer window, choose Configure > Puppet Classes.
Step 2: Enter n1kv_vem in the Search field and click Search.
Step 3: Choose the neutron::agents::n1kv_vem class name and click the Smart Class Parameter tab.
Step 4: In the Smart Class Parameter pane, click each parameter that you want to change and check the Override checkbox in the corresponding pane to the right.
Step 5: Scroll down to the Override Value For Specific Hosts area and provide the host-specific configuration based on the FQDN of the host.
Step 6: Click Submit.
If you need to, you can add additional parameters to the cisco_plugin.ini file.
You need to configure additional parameters in the n1kv.conf file.
You can set up the Cisco Yum repository.
Make sure that the Cisco Yum repository is reachable at the following URL: https://cnsg-yum-server.cisco.com/yumrepo.
Step 1: Edit the /etc/yum.repos.d/cisco_os.repo file.
Step 2: Add the following configuration:
[cisco-os]
name=External repo for Cisco nexus 1000v served over HTTPS
baseurl=https://cnsg-yum-server.cisco.com/yumrepo
enabled=1
gpgcheck=1
gpgkey=https://cnsg-yum-server.cisco.com/yumrepo/RPM-GPG-KEY
sslverify=1
Step 3: Save and close the file.
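To confirm that the new repository definition is picked up, you can refresh the yum metadata and check that the cisco-os repository appears in the enabled list:

```shell
# Drop any cached metadata so the new .repo file is read
yum clean all

# The cisco-os repository should appear in the enabled repository list
yum repolist enabled
```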
cisco-n1kv-openstack-neutron-<mandatory/optional>-patch-<openstack-neutron-version>.noarch.rpm
cisco-n1kv-openstack-dashboard-<mandatory/optional>-patch-<openstack-dashboard-version>.noarch.rpm
Note: Mandatory patches must be installed on controller hosts in both OpenStack standalone and HA deployments.
Make sure that the Cisco Yum repository has been configured. See Setting Up the Cisco Yum Repository.
Download the patch. You can use the wget cisco_repository_path/patch_name.rpm command.
Patches that have a mandatory tag must be installed. Patches that have an optional tag can be installed based on your preference. Use the rpm -qpil file.rpm or yum info file.rpm command to determine which bug fixes are included in the patches. The patches are located in the Cisco Yum repository.
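Putting the download, inspection, and install steps together, a hedged sketch (the RPM file name below is a placeholder following the naming pattern above, not a real package version):

```shell
# Download a patch from the Cisco yum repository (placeholder file name)
wget https://cnsg-yum-server.cisco.com/yumrepo/cisco-n1kv-openstack-neutron-mandatory-patch-1.0.noarch.rpm

# Inspect the package description and file list to see which fixes it contains
rpm -qpil cisco-n1kv-openstack-neutron-mandatory-patch-1.0.noarch.rpm

# Install the patch, resolving any dependencies from the enabled repositories
yum localinstall -y cisco-n1kv-openstack-neutron-mandatory-patch-1.0.noarch.rpm
```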