If you are deploying a system on heterogeneous hardware, you can specify parameters using node-specific configuration.

A node is deployed in the Overcloud environment with a predefined configuration. To override the existing configuration, you can define a host-specific configuration for the node using its NodeDataLookup parameter. A specific node is identified by the unique system UUID attached to it. On a deployed node, you can extract the system UUID using the dmidecode command. After you extract the UUID, you can edit the NodeDataLookup parameter to override the node configuration.
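For example, on a node that is already deployed, the system UUID can be read with dmidecode (a sketch; the command must be run as root, and the `-s`/`--string` option is part of standard dmidecode):

```shell
# Print only the system UUID of the local host (run as root)
dmidecode -s system-uuid
```

The value printed here is the key to use inside NodeDataLookup for this node.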
To override a node's configuration before it is deployed to the Overcloud, you must extract the system UUID of the node using a different method. Complete the following steps to extract the UUID for a node before it is deployed to the Overcloud:
Step 1: Generate a file that contains node-specific information (introspection data). By default, the file is named extra_hardware-[ironic-id] and is created in the directory where the commands are run. For detailed information, see Accessing additional introspection data.

Step 2: Extract the node's system UUID from that file using the cat command. For example:

# cat extra_hardware-8f50de05-c57c-425f-8071-dc2b61a02ebc | jq -r 'map(select(.[0]=="system" and .[2]=="uuid"))'
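The same extraction can be sketched in Python. The entry layout ([category, item, key, value]) and the sample values below are assumptions inferred from the jq filter above, not taken from the product documentation:

```python
def find_system_uuid(entries):
    """Return the value of the first ["system", <item>, "uuid", <value>] entry, if any."""
    for entry in entries:
        if entry[0] == "system" and entry[2] == "uuid":
            return entry[3]
    return None

# Hypothetical sample mirroring the extra_hardware entry format
sample = [
    ["system", "product", "name", "example-server"],
    ["system", "product", "uuid", "41447D6B-157D-1043-84C5-147EF47117C0"],
]
print(find_system_uuid(sample))  # 41447D6B-157D-1043-84C5-147EF47117C0
```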
Example:

NodeDataLookup:
  {"41447D6B-157D-1043-84C5-147EF47117C0":
     {"neutron::agents::n1kv_vem::uplink_profile": {"eth1": "system-uplink"},
      "neutron::agents::n1kv_vem::vtep_config":
        {"vtep1": {"profile": "virtprof", "ipmode": "static",
                   "ipaddress": "172.16.0.31", "netmask": "255.255.255.0"}},
      "neutron::agents::n1kv_vem::host_mgmt_intf": "br-ex"},
   "9F9CD7D7-6486-BF48-9A20-2E10EBD42A19":
     {"neutron::agents::n1kv_vem::uplink_profile": {"eth1": "system-uplink"},
      "neutron::agents::n1kv_vem::vtep_config":
        {"vtep1": {"profile": "virtprof", "ipmode": "static",
                   "ipaddress": "172.16.0.56", "netmask": "255.255.255.0"}},
      "neutron::agents::n1kv_vem::host_mgmt_intf": "eth2"}}
The following table lists the mapping between HEAT template parameters and node-specific configuration parameters:
| HEAT Template Parameter | Node Specific Configuration Parameter |
|---|---|
| N1000vVEMHostMgmtIntf | neutron::agents::n1kv_vem::host_mgmt_intf |
| N1000vUplinkProfile | neutron::agents::n1kv_vem::uplink_profile |
| N1000vVtepConfig | neutron::agents::n1kv_vem::vtep_config |
| N1000vPortDB | neutron::agents::n1kv_vem::portdb |
| N1000vVtepsInSameSub | neutron::agents::n1kv_vem::vteps_in_same_subnet |
| N1000vVEMFastpathFlood | neutron::agents::n1kv_vem::fastpath_flood |
| N1000vVSMHostMgmtIntf | n1k_vsm::phy_if_bridge |
| N1000vVSMRole | n1k_vsm::vsm_role |
| N1000vExistingBridge | n1k_vsm::existing_bridge |
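To illustrate how the two parameter namespaces relate, a heat environment file might set a cloud-wide default through the HEAT parameter while NodeDataLookup overrides the corresponding node-specific (Puppet-side) parameter for a single node, keyed by its system UUID. This is a hypothetical sketch assuming a `parameter_defaults` section; the exact environment-file layout may differ in your deployment:

```yaml
parameter_defaults:
  # Cloud-wide default applied to every VEM node (HEAT parameter name)
  N1000vVEMHostMgmtIntf: br-ex
  # Per-node override, keyed by system UUID, using the
  # node-specific configuration parameter name from the table above
  NodeDataLookup: |
    {"9F9CD7D7-6486-BF48-9A20-2E10EBD42A19":
       {"neutron::agents::n1kv_vem::host_mgmt_intf": "eth2"}}
```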