Creating a Boot Parameters File
The boot parameters file provides a means to pass configuration items to StarOS before it boots. The parameters are typically necessary to successfully load StarOS and specify items such as virtual slot number, VM type, NIC assignment and network bonding configuration.
By default, VPC-DI assigns the vNIC interfaces in the order offered by the hypervisor. To configure your vNICs manually according to a specific order, you need to create a boot parameters file. You also must create a boot parameters file if you want to enable a VNFM interface.
The boot parameters are sourced in multiple ways, with all methods using the same parameter names and usage. The first location for the boot parameters file is on the first partition of the first VM drive, for example, /boot1/param.cfg. The second location searched is on the configuration drive, which is a virtual CD-ROM drive. If you are using OpenStack, specify the target boot parameters file name as staros_param.cfg. If you are not using OpenStack, create an ISO image with staros_param.cfg in the root directory and attach this ISO to the first virtual CD-ROM drive of the VM.
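For example, on a Linux host you might build such an ISO with genisoimage (mkisofs accepts the same options); the ISO file name shown here is arbitrary:
# genisoimage -o staros_param.iso -r staros_param.cfg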
As the VM boots, the param.cfg file is parsed first by the preboot environment known as CFE. Once the VM starts Linux, the virtual CD-ROM drive is accessed to parse the staros_param.cfg file. If there are any conflicts with values stored in the /boot1/param.cfg file, parameters in staros_param.cfg take precedence.
If you do not create a boot parameters file, the default file is used. If you create a boot parameters file, all parameters described in Configuring Boot Parameters must be defined.
Format of the Boot Parameters File
The structure of the boot parameters file is:
VARIABLE_NAME = VALUE
Specify one variable per line, with a newline as the line terminator (UNIX text file format). Variable names and values are case insensitive. Invalid values are ignored, and an error indication is displayed on the VM console. If there are duplicate values for a variable (two different values specified for the same variable name), the last value defined is used.
Numeric values do not need to be zero padded. For example, a PCI_ID of 0:1:1.0 is treated the same as 0000:01:01.0.
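For example, assuming the PCI_ID identifier syntax described under Network Interface Identification, the following two lines are treated identically:
DI_INTERFACE=PCI_ID:0:1:1.0
DI_INTERFACE=PCI_ID:0000:01:01.0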
Network Interface Roles
Network interfaces serve specific roles depending on whether the VM is used for a CF or SF.
All system VMs have a network interface connection to the DI internal network. This network links all the VMs in a VPC-DI instance together. This network must be private to a VPC-DI instance and is configured by the system software.
All VMs have the option of configuring a network interface that is connected to the virtual network function (VNF) manager (VNFM) if it exists. This interface can be configured via DHCP or static IP assignment and is used to talk to a VNFM or higher level orchestrator. This interface is enabled before the main application starts.
On CFs, one additional interface connects to the management network interface. This interface is typically configured in StarOS and should be part of the Day 0 configuration. The management interface supports static address assignment through the main StarOS configuration file.
On SFs, an additional 0 to 12 network interfaces serve as service ports. These interfaces are configured by StarOS. Typically these ports are configured as trunk ports in the VNF infrastructure (VNFI).
| Interface Role | Description |
|---|---|
| DI_INTERFACE | Interface to the DI internal network; required for all VM types. |
| MGMT_INTERFACE | Interface to the management port on the CF VM. |
| SERVICE#_INTERFACE | Service port number # on the SF VM, where # can be from 1 to 12. |
| VNFM_INTERFACE | Optional network interface to the VNFM or orchestrator; valid for all VM types. |

Note: Although VIRTIO interfaces can be used for the DI_INTERFACE role and the SERVICE#_INTERFACE roles, they are not recommended.
Network Interface Identification
By default the first NIC found by a VPC-DI VM is assigned the DI internal network role. Additional ports serve as either the management interface on the CF or service ports on the SF. No interface is used as the VNFM interface by default.
VPC-DI assigns the vNIC interfaces in the order offered by the hypervisor. There is no guarantee that the order of the vNICs as listed in the hypervisor CLI/GUI is the same as the order in which the hypervisor offers them to the VM.
The order in which VPC-DI finds the vNICs is subject to the PCI bus enumeration order, and even paravirtual devices are represented on the PCI bus. The PCI bus is enumerated in a depth-first manner, where bridges are explored before additional devices at the same level. If all the network interfaces are of the same type, knowing the PCI topology is sufficient to get the vNIC order correct. If the network interfaces are of different types, the order depends on the PCI topology plus the device driver load order inside the VM. The device driver load order is not guaranteed to be the same from software release to release, but in general paravirtual device drivers load before pass-through device drivers.
There are several methods available to identify NICs.
- MAC address: MAC address of the interface
- Virtual PCI ID
- Bonded interfaces: When using network device bonding, network interfaces are identified to serve in the slave interface role. The slave interfaces in the bond are identified using MAC address, PCI ID, or interface type.
- Interface type and instance number
Virtual PCI ID
Devices on a PCI bus are identified by a unique tuple consisting of the domain, bus, device, and function numbers. This tuple can be determined in several ways.
Inside the guest, the lspci utility shows the bus configuration:
# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer
00:04.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
00:06.0 Ethernet controller: Red Hat, Inc Virtio network device
The domain, bus, device, and function numbers for this virtual bus are shown here:
| Line | Domain | Bus | Device | Function |
|---|---|---|---|---|
| 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) | 0 | 0 | 0 | 0 |
| 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] | 0 | 0 | 1 | 0 |
| 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] | 0 | 0 | 1 | 1 |
| 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) | 0 | 0 | 1 | 2 |
| 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) | 0 | 0 | 1 | 3 |
| 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 | 0 | 0 | 2 | 0 |
| 00:03.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer | 0 | 0 | 3 | 0 |
| 00:04.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon | 0 | 0 | 4 | 0 |
| 00:05.0 Ethernet controller: Red Hat, Inc Virtio network device | 0 | 0 | 5 | 0 |
| 00:06.0 Ethernet controller: Red Hat, Inc Virtio network device | 0 | 0 | 6 | 0 |
For libvirt-based virtual machines, you can get the virtual PCI bus topology from the virsh dumpxml command. Note that the libvirt schema uses the term slot for the device number. This is a snippet of the XML description of the virtual machine used in the previous example:
<interface type='bridge'>
<mac address='52:54:00:c2:d0:5f'/>
<source bridge='br3043'/>
<target dev='vnet0'/>
<model type='virtio'/>
<driver name='vhost' queues='8'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:c3:60:eb'/>
<source bridge='br0'/>
<target dev='vnet1'/>
<model type='virtio'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
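For example, assuming a libvirt domain named vpc-cf1 (a hypothetical name), the full XML description can be dumped on the host with:
# virsh dumpxml vpc-cf1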
Interface Type and Instance Number
Here the NIC is identified by its type, using its Linux device driver name (virtio_net, vmxnet3, ixgbe, i40e, etc.) and its instance number. The instance number is based on the PCI enumeration order for that type of interface, starting at instance number 1. The interface type can identify both paravirtual types as well as pass-through interfaces and SR-IOV virtual functions. The PCI enumeration order of devices on the PCI bus can be seen from the lspci utility, which is on the host OS.
For example, a CF with the following guest PCI topology indicates that virtio_net interface number 1 is the Ethernet controller at 00:05.0 and virtio_net interface number 2 is the Ethernet controller at 00:06.0. The output is from the lspci command executed in the guest:
# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer
00:04.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
00:06.0 Ethernet controller: Red Hat, Inc Virtio network device
Here is the complete list of the supported Linux drivers:
| Type | PCI Vendor / Device ID | Driver Name |
|---|---|---|
| VIRTIO (paravirtual NIC for KVM) | 0x1af4 / 0x1000 | virtio_net |
| VMXNET3 (paravirtual NIC for VMware) | 0x15ad / 0x07b0 | vmxnet3 |
| Intel 10 Gigabit Ethernet | 0x8086 / 0x10b6, 0x10c6, 0x10c7, 0x10c8, 0x150b, 0x10dd, 0x10ec, 0x10f1, 0x10e1, 0x10db, 0x1508, 0x10f7, 0x10fc, 0x1517, 0x10fb, 0x1507, 0x1514, 0x10f9, 0x152a, 0x1529, 0x151c, 0x10f8, 0x1528, 0x154d, 0x154f, 0x1557 | ixgbe |
| Intel 10 Gigabit NIC virtual function | 0x8086 / 0x10ed, 0x1515 | ixgbevf |
| Cisco UCS NIC | 0x1137 / 0x0043, 0x0044, 0x0071 | enic |
| Mellanox ConnectX-5 | 0x15b3 / 0x1017, 0x1018 | mlx5_core |
| Intel XL710 family NIC (PF) | 0x8086 / 0x1572, 0x1574, 0x1580, 0x1581, 0x1583, 0x1584, 0x1585 (40 Gbps); 0x8086 / 0x158a, 0x158b (25 Gbps) | i40e** |
| Intel XL710 family NIC virtual function | 0x8086 / 0x154c | i40evf |
** Note: A known issue exists where MAC address assignment does not occur dynamically for SR-IOV VFs created on the host when using the i40e driver. MAC address assignment is necessary to boot the StarOS VM. As a workaround, MAC address assignment must be configured from the host. Refer to the following link for more information: https://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/xl710-sr-iov-config-guide-gbe-linux-brief.pdf
Configuring Boot Parameters
If you do not create a boot parameters file, the default file is used. If you create a boot parameters file, all parameters described in this task must be defined.
Before you begin
Refer to Network Interface Roles and Network Interface Identification for more information on determining the interface identifiers for your VM interfaces.
Procedure
Step 1: CARDSLOT=slot-number
slot-number is an integer between 1 and 32 that indicates the slot number of the VM. CF slots can be 1 or 2. SF slots can range from 3 to 48.
Step 2: CARDTYPE=card-type
card-type identifies whether the VM is a CF or SF. Use 0x40010100 for a CF or 0x42020100 for an SF (see Boot Parameters File Examples).
Step 3: interface-role_INTERFACE=interface-id
Valid values for interface-role are DI, MGMT, SERVICE# (where # is the service port number), and VNFM. For example, DI_INTERFACE=interface-id. Refer to Network Interface Roles for more information on interface roles.
Valid values for interface-id identify the interface by MAC address, virtual PCI ID, interface type and instance number, or a bonded-interface specification. Refer to Network Interface Identification for information on determining the interface identifier.
This example identifies the interface by its MAC address:
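A representative entry, assuming the MAC: identifier prefix (the address value is illustrative):
DI_INTERFACE=MAC:00:01:02:03:04:05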
This example identifies the interface by its guest PCI address:
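A representative entry, assuming the PCI_ID: identifier prefix (the address value is illustrative):
DI_INTERFACE=PCI_ID:0000:00:05.0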
This example identifies the interface by its interface type (1st virtio interface):
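A representative entry (the instance number is illustrative):
DI_INTERFACE=TYPE:virtio_net-1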
This example identifies the interface as a network bond. The example illustrates identifying the interface using MAC address, PCI identifier, and interface type:
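Representative entries, one per identification method; the values are illustrative, and the MAC: and PCI_ID: prefixes are assumed to follow the same pattern as the TYPE: form shown in Boot Parameters File Examples:
DI_INTERFACE=BOND:MAC:00:01:02:03:04:05,MAC:00:01:02:03:04:06
DI_INTERFACE=BOND:PCI_ID:0000:00:05.0,PCI_ID:0000:00:06.0
DI_INTERFACE=BOND:TYPE:virtio_net-1,TYPE:virtio_net-2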
Configuring Network Interface Bonding
The system supports configuring pairs of network interfaces into an active/standby bonded interface. Only one interface is active at a time and failure detection is limited to the loss of the physical link. Use this task to configure bonded interfaces.
All bonding variable names use the format interface-role_BOND. Refer to Network Interface Roles for information on interface roles.
Before you begin
All boot parameters described in this task are optional. If these parameters are required, add them to the boot parameters file together with the required parameters described in Configuring Boot Parameters.
Procedure
Step 1: interface-role_BOND_PRIMARY=interface-id
Configures the primary slave interface if you prefer a particular interface to be active the majority of the time. The default bond configuration does not select a primary slave. Refer to Network Interface Roles for information on interface roles; refer to Network Interface Identification for information regarding interface identifiers.
This example specifies the primary interface using a MAC address:
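A representative entry, assuming the MAC: identifier prefix (the address value is illustrative):
DI_INTERFACE_BOND_PRIMARY=MAC:00:01:02:03:04:05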
This example specifies the primary interface using a PCI identifier:
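A representative entry, assuming the PCI_ID: identifier prefix (the address value is illustrative):
DI_INTERFACE_BOND_PRIMARY=PCI_ID:0000:00:05.0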
This example specifies the primary interface using an interface type identifier:
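A representative entry (the instance number is illustrative):
DI_INTERFACE_BOND_PRIMARY=TYPE:virtio_net-1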
Step 2: interface-role_BOND_MII_POLL=poll-interval
Specifies the poll interval, in milliseconds, to use when MII is used for link detection. The poll interval can range from 0 to 1000. The default is 100.
Step 3: interface-role_BOND_MII_UPDELAY=slave-enable-delay
Specifies how long to wait for the link to settle before enabling a slave interface after a link failure, when MII is used for link detection. The link state can bounce when it is first detected. This delay allows the link to settle before trying to use the interface and thereby avoids excessive flips in the active slave for the bond interface. The slave enable delay must be a multiple of the MII poll interval. Values are in milliseconds and the default is 0.
Step 4: interface-role_BOND_MII_DOWNDELAY=slave-disable-delay
Optional. When used, it allows the bond to wait before declaring that the slave interface is down, when MII is used for link detection. The slave disable delay must be a multiple of the MII poll interval. Values are in milliseconds and the default is 0.
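A minimal sketch of a bonded DI interface with MII link monitoring, assuming two enic pass-through interfaces; all values are illustrative:
DI_INTERFACE=BOND:TYPE:enic-1,TYPE:enic-2
DI_INTERFACE_BOND_PRIMARY=TYPE:enic-1
DI_INTERFACE_BOND_MII_POLL=100
DI_INTERFACE_BOND_MII_UPDELAY=300
DI_INTERFACE_BOND_MII_DOWNDELAY=100
Note that both delay values are multiples of the 100 ms poll interval, as required.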
Configuring a VNFM Interface
A virtual network function management (VNFM) interface is designed to communicate between each VM and a VNFM. This interface is brought up before the main application and can be configured only using the boot parameters. The VNFM interface is disabled by default.
Use this task to configure a VNFM interface:
Before you begin
All boot parameters described in this task are optional. If these parameters are required, add them to the boot parameters file together with the required parameters described in Configuring Boot Parameters.
Procedure
Step 1: VNFM_IPV4_ENABLE={true | false}
Enables the VNFM interface.
Step 2: VNFM_CARTRIDGE_AGENT={true | false}
Enables the cartridge agent. This must be enabled if the VNFM is using the cartridge agent.
Step 3: VNFM_IPV4_DHCP_ENABLE={true | false}
Enables DHCP on the VNFM.
Step 4: VNFM_IPV4_ADDRESS=x.x.x.x
Specifies the IP address for the VNFM where DHCP is not used.
Step 5: VNFM_IPV4_NETMASK=x.x.x.x
Specifies the netmask for the IP address of the VNFM where DHCP is not used.
Step 6: VNFM_IPV4_GATEWAY=x.x.x.x
Specifies the gateway for the IP address of the VNFM where DHCP is not used.
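A minimal sketch of a statically addressed VNFM interface; the addresses are illustrative:
VNFM_IPV4_ENABLE=true
VNFM_IPV4_DHCP_ENABLE=false
VNFM_IPV4_ADDRESS=192.0.2.10
VNFM_IPV4_NETMASK=255.255.255.0
VNFM_IPV4_GATEWAY=192.0.2.1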
VNFM Interface Options
Note: These configuration options are optional.
The virtual network functions manager (VNFM) interface is designed to communicate between each VM and a VNFM. The VNFM interface initializes before the main application and only boot parameters can configure the interface.
The VNFM interface is disabled by default.
Enable VNFM IPv4 Interface
The default value is False (disabled).
| Variable | Valid Values |
|---|---|
| VNFM_IPV4_ENABLE | True or False |
Configure IPv4 DHCP Client
| Variable | Valid Values |
|---|---|
| VNFM_IPV4_DHCP_ENABLE | True or False |
Configure IPv4 Static IP
Note: If the IPv4 DHCP client is enabled, static configuration parameters are ignored.
| Variable | Valid Values |
|---|---|
| VNFM_IPV4_ADDRESS | x.x.x.x |
| VNFM_IPV4_NETMASK | x.x.x.x |
| VNFM_IPV4_GATEWAY | x.x.x.x |
Enable VNFM IPv6 Interface
| Variable | Valid Values |
|---|---|
| VNFM_IPV6_ENABLE | True or False |
Enable IPv6 Static IP Configuration
| Variable | Valid Values |
|---|---|
| VNFM_IPV6_STATIC_ENABLE | True or False |
If set to true, the static IP parameters shown in the following section are applied to the interface. If set to false, the interface attempts to use both stateless autoconfiguration (RFC 4862) and DHCPv6 to configure its address.
Configure IPv6 Static IP
Note: If the VNFM_IPV6_ENABLE parameter is set to false, the static configuration parameters are ignored. The IPv6 address field should conform to RFC 5952. The prefix is fixed at /64.
| Variable | Valid Values |
|---|---|
| VNFM_IPV6_ADDRESS | x:x:x:x:x:x:x:x |
| VNFM_IPV6_GATEWAY | x:x:x:x:x:x:x:x |
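A minimal sketch of a static IPv6 VNFM configuration; the addresses are illustrative and the prefix is fixed at /64:
VNFM_IPV6_ENABLE=true
VNFM_IPV6_STATIC_ENABLE=true
VNFM_IPV6_ADDRESS=2001:db8::10
VNFM_IPV6_GATEWAY=2001:db8::1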
Configuring the DI Network VLAN
The DI network requires a unique and isolated network available for its use. When using pass-through interfaces, a VLAN ID can be configured to allow for easier separation of the VPC-DI instances in the customer network. Optionally, the DI Network VLAN can also be tagged on the host or even the L2 switch, if there are dedicated ports on the host.
Use this task to configure the VLAN.
Before you begin
All boot parameters described in this task are optional. If these parameters are required, add them to the boot parameters file together with the required parameters described in Configuring Boot Parameters.
Procedure
DI_INTERNAL_VLANID=vlan-id
Specifies a VLAN ID for the internal DI network. Values can range from 1 to 4094.
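Example (the VLAN ID is illustrative and matches the one used in Boot Parameters File Examples):
DI_INTERNAL_VLANID=10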
Configuring IFTASK Tunable Parameters
By default, DPDK allocates 30% of the CPU cores to the Internal Forwarder Task (IFtask) process. You can configure the resources allocated to IFTASK using these boot parameters. Use the show cpu info and show cpu verbose commands to display information regarding the CPU core allocation for IFTASK.
Note: These are optional parameters that should be set with extreme care.
Procedure
Step 1: (Optional) IFTASK_CORES=percentage-of-cores
Specify the percentage of CPU cores to allocate to IFTASK. Values can range from 0 to 100 percent. The default is 30.
Step 2: (Optional) MCDMA_THREAD_DISABLE=value
Set MCDMA_THREAD_DISABLE to 1 to run PMDs on all cores, rather than using an MCDMA/VNPU split.
Step 3: (Optional) IFTASK_SERVICE_TYPE=value
Specifies the service type being deployed, which is used to calculate the service memory and enable service-specific features. For example, service type 2 corresponds to ePDG (see Step 4). The default is 0.
Step 4: (Optional) IFTASK_CRYPTO_CORES=value
When IFTASK_SERVICE_TYPE is configured to 2 (ePDG), this parameter specifies the percentage of IFTASK cores to allocate to crypto processing. Values can range from 0 to 50 percent, although the number of cores dedicated to crypto processing is capped at 4. The default is 0.
Step 5: (Optional) IFTASK_DISABLE_NUMA_OPT=value
Use this setting to disable NUMA optimizations, even though more than one NUMA node is presented to the VM by the host. This option can be set when NUMA optimizations are not desirable for whatever reason. NUMA optimization is otherwise enabled by default, with some exceptions.
Step 6: (Optional) IFTASK_VNPU_TX_MODE=value
The compute nodes in an Ultra M deployment have 28 cores, two of which are reserved for use by the host. When 26 cores are utilized, the MCDMA channels are distributed unequally across the cores that perform MCDMA work. When this setting is enabled, the MCDMA function cores in IFTASK are split equally between MCDMA cores and VNPU TX lookup cores.
Step 7: (Optional) MULTI_SEG_MBUF_ENABLE=value
By default in release 21.6 and higher, the system enables the use of multi-segmented transmission/reception with smaller-size buffers in all memory pools for ixgbe PF/VF drivers. This feature reduces the overall memory size of IFTASK and makes it more suitable for small deployments.
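A hedged sketch of how these tunables might appear together in param.cfg for an ePDG deployment; all values are illustrative:
IFTASK_CORES=40
IFTASK_SERVICE_TYPE=2
IFTASK_CRYPTO_CORES=20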
Example
[local]mySystem# show cloud hardware iftask 4
Card 4:
Total number of cores on VM: 24
Number of cores for PMD only: 0
Number of cores for VNPU only: 0
Number of cores for PMD and VNPU: 3
Number of cores for MCDMA: 4
Number of cores for Crypto 0
Hugepage size: 2048 kB
Total hugepages: 3670016 kB
NPUSHM hugepages: 0 kB
CPU flags: avx sse sse2 ssse3 sse4_1 sse4_2
Poll CPU's: 1 2 3 4 5 6 7
KNI reschedule interval: 5 us
Increased Maximum IFtask Thread Support
Feature Summary and Revision History
Summary Data
Applicable Product(s) or Functional Area: All
Applicable Platform(s): VPC-DI
Feature Default: Enabled - Always-on
Related Changes in This Release: Not applicable
Related Documentation: VPC-DI System Administration Guide
Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.
| Revision Details | Release |
|---|---|
| From this release, the maximum number of supported IFtask threads is increased to 22 cores. | 21.8 |
| First introduced. | Pre 21.2 |
Feature Changes
When the number of DPDK Internal Forwarder (IFTask) threads configured (in /tmp/iftask.cfg) is greater than 14 cores, the IFTask drops packets or displays an error.
Previous Behavior: The maximum number of IFtask threads was limited to 14 cores.
New Behavior: From Release 21.8, the maximum number of supported IFtask threads is increased to 22 cores.
Configure MTU Size
By default, the IFTASK process sets the maximum interface MTU as follows:
- Service interfaces: 2100 bytes
- DI network interface: 7100 bytes
These defaults can be modified by setting the following parameters in the param.cfg file:
| Parameter Name | Range | Default Value |
|---|---|---|
| DI_INTERFACE_MTU= | 576-9100 | 7100 |
| SERVICE_INTERFACE_MTU= | 576-9100 | 2100 |
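For example, to lower both limits below their defaults (values illustrative and within the 576-9100 range):
DI_INTERFACE_MTU=4000
SERVICE_INTERFACE_MTU=1500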
Refer to Configure Support for Traffic Above Supported MTU for configuring the MTU size for a system which does not support jumbo frames.
Configure Support for Traffic Above Supported MTU
By default, jumbo frame support is required for the system to operate. If your infrastructure does not support jumbo frames, you can still run the system; however, you must specify an MTU of 1500 for the DI internal network in the boot parameters file. This allows IFTASK to process DI network traffic that is above the supported MTU.
Before you begin
All boot parameters described in this task are optional. If these parameters are required, add them to the boot parameters file together with the required parameters described in Configuring Boot Parameters.
Procedure
DI_INTERFACE_MTU=1500
Specifies that the DI internal network does not support jumbo frames so that the software handles jumbo frames appropriately.
Boot Parameters File Examples
This example shows a boot parameters file for a CF in slot 1 with two network interfaces:
CARDSLOT=1
CARDTYPE=0x40010100
DI_INTERFACE=TYPE:enic-1
MGMT_INTERFACE=TYPE:virtio_net-2
This example shows a boot parameters file for an SF in slot 3 with three network interfaces:
CARDSLOT=3
CARDTYPE=0x42020100
DI_INTERFACE=TYPE:enic-1
SERVICE1_INTERFACE=TYPE:enic-3
SERVICE2_INTERFACE=TYPE:enic-4
This example shows a boot parameters file for a CF with pass-through NICs, bonding configured and a DI internal network on a VLAN:
CARDSLOT=1
CARDTYPE=0x40010100
DI_INTERFACE=BOND:TYPE:enic-1,TYPE:enic-2
MGMT_INTERFACE=BOND:TYPE:ixgbe-3,TYPE:ixgbe-4
DI_INTERNAL_VLANID=10