You can increase the performance for a Cisco CSR 1000v running in a KVM environment by changing some settings on the KVM host.
These settings are independent of the IOS XE configuration settings on the CSR 1000v instance.
 Note |
In Cisco IOS XE Release 3.13S and earlier, the CSR 1000v instance does not support jumbo packets larger than 1518 bytes for
KVM on a Virtio interface. Packets larger than 1518 bytes are dropped.
|
To improve the KVM configuration performance, Cisco recommends that you:
- Enable vCPU pinning
- Enable emulator pinning
- Enable NUMA tuning. Ensure that all the vCPUs are pinned to physical cores on the same socket.
- Set hugepage memory backing
- Use virtio instead of IDE
- Use VNC graphics instead of SPICE
- Remove unused devices such as USB and tablet
- Disable memballoon
 Note |
These settings might impact the number of VMs that you can instantiate on a server.
These tuning steps have the most impact when you instantiate a small number of VMs on a host.
|
In addition to these recommendations, do the following:
Enable CPU Pinning
Increase the performance of the KVM environment by using the KVM CPU affinity option to assign a virtual machine to a specific
processor. To use this option, configure CPU pinning on the KVM host.
In the KVM host environment, use the following commands:
- virsh nodeinfo: Verify the host topology to find out how many vCPUs are available for pinning.
- virsh capabilities: Verify the available vCPU numbers.
- virsh vcpupin <vmname> <vcpu#> <host core#>: Pin the virtual CPUs to sets of processor cores.
This KVM command must be executed for each vCPU on your Cisco CSR 1000v. The following example pins virtual CPU 1 to host
core 3:
virsh vcpupin csr1000v 1 3
The following example shows the KVM commands needed if you have a Cisco CSR 1000v configuration with four vCPUs and the host
has eight cores:
virsh vcpupin csr1000v 0 2
virsh vcpupin csr1000v 1 3
virsh vcpupin csr1000v 2 4
virsh vcpupin csr1000v 3 5
The host core number can be any number from 0 to 7. For more information, see the KVM documentation.
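Before choosing core numbers, you can read the host topology from the virsh nodeinfo output. The following output is purely illustrative (an assumed single-socket host with eight cores); the values on your server will differ:
virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2600 MHz
CPU socket(s):       1
Core(s) per socket:  8
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         16331552 KiB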
 Note |
When you configure CPU pinning, consider the CPU topology of the host server. If you are using a CSR 1000v instance with multiple
cores, do not configure CPU pinning across multiple sockets.
|
BIOS Settings
Optimize the performance of the KVM configuration by applying the recommended BIOS settings as mentioned in the following
table:
Configuration | Recommended Setting
Intel Hyper-Threading Technology | Disabled
Number of Enabled Cores | ALL
Execute Disable | Enabled
Intel VT | Enabled
Intel VT-D | Enabled
Intel VT-D coherency support | Enabled
Intel VT-D ATS support | Enabled
CPU Performance | High throughput
Hardware Prefetcher | Disabled
Adjacent Cache Line Prefetcher | Disabled
DCU Streamer Prefetch | Disabled
Power Technology | Custom
Enhanced Intel SpeedStep Technology | Disabled
Intel Turbo Boost Technology | Enabled
Processor Power State C6 | Disabled
Processor Power State C1 Enhanced | Disabled
Frequency Floor Override | Enabled
P-State Coordination | HW_ALL
Energy Performance | Performance
For information about Red Hat Enterprise Linux requirements, see Bootstrap Properties and the subsequent sections.
Host OS Settings
On the host side, Cisco recommends that you use hugepages and enable emulator pinning. In addition, the following kernel settings
are recommended:
nmi_watchdog=0 elevator=cfq transparent_hugepage=never
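As a sketch only (the file location and the regeneration command vary by distribution), these settings, together with 1 GB hugepages, can be applied through the kernel command line in /etc/default/grub; the hugepages=8 count is an assumption for illustration, and "..." stands for your existing options:
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 nmi_watchdog=0 elevator=cfq transparent_hugepage=never"
After you regenerate the GRUB configuration and reboot, verify the allocation with grep Huge /proc/meminfo.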
 Note |
If you use Virtio VHOST USER with VPP or OVS-DPDK, you can increase the buffer size to 1024 (rx_queue_size='1024'), provided
your QEMU version supports it.
|
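The following is a minimal sketch of a vhost-user interface definition with the larger queue size; the socket path is an illustrative assumption:
<interface type='vhostuser'>
<source type='unix' path='/var/run/vhostuser/vhu0' mode='server'/>
<model type='virtio'/>
<driver rx_queue_size='1024'/>
</interface>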
IO Settings
You can use SR-IOV for better performance. However, note that SR-IOV brings some limitations, such as the number of available
virtual functions (VFs) and OpenStack restrictions on SR-IOV ports, for example, limited QoS, live migration, and security group support.
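The following is a minimal sketch of attaching an SR-IOV VF to the VM through the libvirt domain XML; the PCI address of the VF is an illustrative assumption (find the addresses on your host with lspci):
<interface type='hostdev' managed='yes'>
<source>
<address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
</source>
</interface>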
If you use a modern vSwitch such as fd.io VPP or OVS-DPDK, reserve at least two cores for the VPP worker threads or the OVS-DPDK
PMD threads.
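For example, with OVS-DPDK the PMD cores are selected through a CPU mask; the mask value 0x6 (cores 1 and 2) below is an assumption for illustration:
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6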
When you run the VM with VPP, configure the following QEMU command-line parameters:
- -cpu host: This parameter causes the VM to inherit the host OS flags. You require libvirt 0.9.11 or later to include it in the XML configuration.
- -m 8192: You require 8 GB RAM for optimal zero packet drop rates.
- rombar=0: To disable PXE boot delays, set rombar=0 at the end of each device option list, or add "<rom bar='off'/>" to the device XML configuration.
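A condensed sketch of how these options combine on a QEMU command line; the disk path, MAC address, and -smp value are illustrative assumptions, and other required options are omitted:
qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -m 8192 \
-drive file=/var/lib/libvirt/images/csr1000v.qcow2,if=virtio \
-netdev tap,id=net0 \
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56,rombar=0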
Sample XMLs for KVM Performance Improvement
Sample XML for numa tuning
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
Sample XML for vCPU and emulator pinning
<cputune>
<vcpupin vcpu='0' cpuset='3'/>
<emulatorpin cpuset='3'/>
</cputune>
Sample XML for hugepages
<currentMemory unit='KiB'>4194304</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='0'/>
</hugepages>
<nosharepages/>
</memoryBacking>
Sample XML for virtio instead of IDE
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/rhel7.0.qcow2'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
</devices>
Sample XML for VNC graphics
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1' keymap='en-us'>
<listen type='address' address='127.0.0.1'/>
</graphics>
Sample XML for disabling memballoon
<memballoon model='none'/>