Symptom: Overcloud fails to deploy due to a puppet failure.
Possible cause: The VSM management interface (N1000vVSMHostMgmtIntf) is set to the same interface as the provisioning interface.
Solution: Reconfigure the VSM to use a different management interface, or refer to the Red Hat documentation and configure the provisioning interface on a bridge. Use the bridge for N1000vVSMHostMgmtIntf and set the N1000vExistingBridge parameter to true. These parameters are defined in the cisco-n1kv-config.yaml configuration file available at /usr/share/openstack-tripleo-heat-templates/environments, as sketched below.
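A minimal sketch of the corresponding parameter_defaults in cisco-n1kv-config.yaml, assuming the provisioning bridge is named br-ex (the bridge name is illustrative; use the bridge you created per the Red Hat documentation):
parameter_defaults:
  # Illustrative: point the VSM management interface at the existing
  # provisioning bridge rather than the raw provisioning NIC.
  N1000vVSMHostMgmtIntf: br-ex
  N1000vExistingBridge: true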
Symptom: Some VEMs cannot communicate with the VSM.
Possible causes:
- Different vendor NICs are attached to your physical server, and the management interface might be mapped to an Ethernet interface other than eth0.
- The parameters N1000vVSMHostMgmtIntf, N1000vVEMHostMgmtIntf, and N1000vExistingBridge are configured incorrectly in the configuration file /usr/share/openstack-tripleo-heat-templates/environments/cisco-n1kv-config.yaml.
- The N1000vVEMHostMgmtIntf parameter is configured incorrectly in the environment file.
Solution: Edit the parameter values to match the values in the configuration file (YAML file) or the environment file. If your compute and controller nodes have heterogeneous NIC ordering, you can use the custom per-node configuration provided by the NodeDataLookup parameter (a sketch follows). We recommend that you use NodeDataLookup for the controller nodes and the N1000vVEMHostMgmtIntf parameter for the compute nodes: the number of controller nodes ranges from 1 to 3, whereas the number of compute nodes can grow over the life of your deployment. For details about how to use NodeDataLookup, see the Cisco Nexus 1000V for KVM Installation Guide for Red Hat Enterprise Linux OpenStack Platform 7.
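A minimal sketch of a NodeDataLookup override for one controller node, assuming a hypothetical system UUID and hieradata key (the exact key your VEM puppet module expects is documented in the installation guide):
parameter_defaults:
  # The UUID comes from 'dmidecode -s system-uuid' on the target node;
  # both the UUID and the host_mgmt_intf key are illustrative.
  NodeDataLookup: |
    {"A8C85861-1B16-4803-8689-AFC62805F2EC": {"n1kv::host_mgmt_intf": "eth2"}}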
Symptom: VSM bringup fails if the management (provisioning) interface of the controller node is not eth0.
Possible causes:
- Different vendor NICs are attached to your physical server, and the management interface might be mapped to an Ethernet interface other than eth0.
- The parameters N1000vVSMHostMgmtIntf and N1000vExistingBridge are configured incorrectly in the configuration file /usr/share/openstack-tripleo-heat-templates/environments/cisco-n1kv-config.yaml.
Solution:
1. Configure the parameters in the configuration file /usr/share/openstack-tripleo-heat-templates/environments/cisco-n1kv-config.yaml.
2. If the management interface is not eth0, configure the management interface of the VSM on a separate uplink interface: set the N1000vVSMHostMgmtIntf parameter to an uplink interface name other than the name used for the controller node management interface, and set the N1000vExistingBridge parameter to false, as sketched below.
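A minimal sketch of the two settings in cisco-n1kv-config.yaml, assuming eth1 is a free uplink (the interface name is illustrative):
parameter_defaults:
  # eth1 must differ from the controller node management interface.
  N1000vVSMHostMgmtIntf: eth1
  N1000vExistingBridge: false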
Symptom: VSM boots to the loader prompt after multiple controller nodes reboot ungracefully.
Possible cause: Multiple controller nodes fail simultaneously.
Solution: Ensure that you have a backup of the latest VSM configuration at a remote location.
1. Disable the pacemaker resources for the primary VSM (vsm-p) and the secondary VSM (vsm-s).
2. Log in to the nodes with active VSMs and shut down the VSM VMs.
3. Log in to all three controllers and format the primary_disk and secondary_disk by using the qemu-img create disk-name 4G command at /var/spool/cisco/vsm/.
4. Enable both the primary and secondary VSMs in pacemaker:
#pcs resource enable resource_id
5. Recover the missing VSM configuration from the backup.
Note: The VSM configuration might be lost during the recovery.
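A sketch of steps 1 through 4 as root shell commands; the resource names vsm-p and vsm-s match the pcs output shown later in this section, and the disk paths follow the procedure above:
pcs resource disable vsm-p
pcs resource disable vsm-s
# After shutting down the VSM VMs, on each of the three controllers:
qemu-img create /var/spool/cisco/vsm/primary_disk 4G
qemu-img create /var/spool/cisco/vsm/secondary_disk 4G
pcs resource enable vsm-p
pcs resource enable vsm-s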
Symptom: VSMs go into a split-brain condition where the primary and secondary VSMs are in the active-active state.
Possible causes:
- Layer 2 connectivity between the primary and secondary VSMs is lost.
- Multiple controller nodes fail simultaneously.
Solution: Ensure that you have a backup of the latest VSM configuration at a remote location.
1. Identify the primary and secondary VSM controller hosts by using the pcs status command. For example:
[root@overcloud-controller-2 heat-admin]# pcs status | grep vsm
vsm-p (ocf::heartbeat:VirtualDomain): Started overcloud-controller-1
vsm-s (ocf::heartbeat:VirtualDomain): Started overcloud-controller-2
2. Disable the pacemaker resources for the primary VSM (vsm-p) and the secondary VSM (vsm-s). For example:
#pcs resource disable resource_id
3. Log in to the nodes with active VSMs and shut down the VSM VMs (a sketch follows this procedure).
4. Log in to all three controllers and format the primary_disk and secondary_disk by using the qemu-img create disk-name 4G command at /var/spool/cisco/vsm/.
5. Enable both the primary and secondary VSMs in pacemaker:
#pcs resource enable resource_id
6. Recover the missing VSM configuration from the backup.
Note: If you cannot log in to one of the VSMs through the virsh console command, run the peer mac-addresses clear command on the active VSM that is accessible through virsh console.
Note: The VSM configuration might be lost during the VSM split-brain recovery.
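For step 3, a sketch using virsh, assuming the VSM domain names match the pacemaker resource names (confirm with virsh list first):
virsh list --all
virsh shutdown vsm-p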
Symptom: VSM standby is not running.
Possible cause: Pacemaker cannot reinstantiate a primary standby or secondary standby node.
Solution:
1. Check whether the primary or secondary node is in standby mode: log in to the active VSM and run the show redundancy status command.
2. Run the pcs resource cleanup [vsm-p|vsm-s] command from one of the controller nodes for the standby VSM (an example follows the status output below).
3. Check the pacemaker status:
[root@overcloud-controller-2 heat-admin]# pcs status | grep vsm
vsm-p (ocf::heartbeat:VirtualDomain): Started overcloud-controller-1
vsm-s (ocf::heartbeat:VirtualDomain): Started overcloud-controller-2
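For example, to clean up the secondary VSM resource shown in the status output above:
pcs resource cleanup vsm-s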
Symptom: VEM configuration is missing.
Possible causes:
- The per-node configuration specified in NodeDataLookup is not correctly applied to the node.
- The puppet apply command that applies the configuration failed to run.
Solution:
1. Log in to the node with the VEM configuration problem.
2. Run the dmidecode -s system-uuid command to retrieve the system UUID (a sketch follows this procedure).
3. Open the <System-UUID>.json file and confirm that all configuration parameters expected for the configuration file, n1kv.conf, are present.
4. If the parameter values are incorrect, go to the Undercloud and recheck the VEM override parameter NodeDataLookup in the configuration file /usr/share/openstack-tripleo-heat-templates/environments/cisco-n1kv-config.yaml.
5. Perform a heat stack update by redeploying the Overcloud so that the respective nodes receive the latest configuration.
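A sketch of steps 2 and 3; the location of the per-node JSON file is deployment-specific, so the find command below is only one way to locate it:
UUID=$(dmidecode -s system-uuid)
# Locate and inspect the per-node JSON named after the system UUID:
find /etc /var -name "${UUID}.json" 2>/dev/null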
Symptom: Virtual Ethernet interfaces on the VSM corresponding to the router ports on the VEM flap continuously.
Possible cause: The Neutron l3_ha parameter is set to True and one or more OpenStack controller nodes (in HA mode) are down or in an inconsistent state.
Solution:
1. Edit the configuration file /etc/neutron/neutron.conf on the OpenStack controller node: set the l3_ha parameter to false and the allow_automatic_l3agent_failover parameter to true (see the sketch after this procedure).
2. Restart the neutron-server service and the neutron-l3-agent service.
3. Repeat Step 1 and Step 2 on all OpenStack controller nodes in HA.
4. Clean up the router ports by manually deleting and recreating them.
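A sketch of the edit and restart, assuming the neutron-server and neutron-l3-agent systemd unit names used on RHEL OSP 7 hosts; the same pattern, with only allow_automatic_l3agent_failover, applies to the inter-VLAN traffic problem that follows. In /etc/neutron/neutron.conf:
[DEFAULT]
l3_ha = False
allow_automatic_l3agent_failover = True
Then restart the services:
systemctl restart neutron-server
systemctl restart neutron-l3-agent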
Symptom: Inter-VLAN traffic has stopped for VMs.
Possible cause: The Neutron allow_automatic_l3agent_failover parameter is set to False and one or more OpenStack controller nodes (in HA mode) are down or in an inconsistent state, causing the automatic migration of l3_agent ports to fail.
Solution:
1. Edit the configuration file /etc/neutron/neutron.conf on the OpenStack controller node and set the allow_automatic_l3agent_failover parameter to true.
2. Restart the neutron-server service and the neutron-l3-agent service.
3. Repeat Step 1 and Step 2 on all OpenStack controller nodes in HA.
Symptom: Uplink Ethernet and virtual Ethernet (VTEP) ports show the NoPortProfile state on the VSM after deploying using OSP7.
Possible causes:
- Port profiles required to bring up the ports are not defined on the VSM.
- The port profile names defined in the heat template (/usr/share/openstack-tripleo-heat-templates/environments/cisco-n1kv-config.yaml) do not match the port profile names defined on the VSM.
Solution: On the VSM, create Ethernet and VTEP virtual Ethernet port profiles with the same names as defined in the heat template (cisco-n1kv-config.yaml) on the Undercloud node; a sketch follows.
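A minimal sketch of the VSM-side configuration, assuming hypothetical profile names sys-uplink and vtep-profile and VLAN 100; the names must match those in cisco-n1kv-config.yaml, the VLANs will differ in your deployment, and you should verify the exact commands for your release:
port-profile type ethernet sys-uplink
  switchport mode trunk
  no shutdown
  state enabled
port-profile type vethernet vtep-profile
  capability vxlan
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled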
Symptom: The VSM shows a VEM module as offline, or a VEM module is missing from the VSM show module command output after deployment through OSP7.
Possible cause: The Cisco Nexus 1000V VEM service is not running on the respective VEM module.
Solution: Verify the state of the Cisco Nexus 1000V VEM service on the VEM host by using the service nexus1000v status command. If the service is not running, restart it by using the service nexus1000v start command.
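For example, on the VEM host:
service nexus1000v status
service nexus1000v start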
Possible cause: The Cisco Nexus 1000V VEM cannot communicate with the VSM because of an incorrect networking configuration.
Solution: Revisit the network planning. You can use the NodeDataLookup parameter to provide node-specific configuration for a single node or a class of nodes to enable heterogeneous deployment.
Symptom: An error is observed on the OpenStack controller node when you try to access the policy profiles pushed by the Cisco Nexus 1000V VSM using the neutron cisco-policy-profile-list command.
Possible cause: The configuration file /etc/neutron/neutron.conf is missing a value for the service_plugins parameter.
Solution: Update the neutron.conf configuration file with the service_plugins parameter value:
service_plugins = router,cisco_n1kv_profile
After updating the parameter value on all controllers, restart the neutron-server service on all controllers in HA mode.
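A sketch of the restart on each controller, assuming the neutron-server systemd unit name used on RHEL OSP 7:
systemctl restart neutron-server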