Cisco Catalyst 8000V Edge Software Installation And Configuration Guide


Install Cisco Catalyst 8000V in VMware ESXi Environment


Overview

This chapter provides information about how to deploy Cisco Catalyst 8000V in ESXi environments, the requirements for a successful deployment, and the supported deployment methods.

VMware ESXi is a hypervisor that allows the basic creation and management of virtual machines, and is one of the hypervisors supported by Cisco Catalyst 8000V. This hypervisor runs on x86 hardware with virtualization extensions, and you can use it to run several VMs simultaneously.


Before you proceed, familiarize yourself with VMware vSphere by referring to the official VMware product documentation.

Note

Oversubscription of host resources can reduce performance, and your instance could become unstable. We recommend following the guidelines and best practices for your host hypervisor.


Supported features and operations

VMware supports various features and operations that allow you to manage your virtual applications and perform operations such as cloning, migration, shutdown, and resume.

Some of these operations save the current runtime state of the VM and restore that state when the VM restarts. If the runtime state includes traffic-related state, resuming or replaying that state can display additional errors, statistics, or messages on the user console. If the saved state is configuration-driven only, you can use these features and operations without any issues.

See the tables to view all the supported features and operations for Cisco Catalyst 8000V instances deployed in ESXi environments.

Table 1. Supported VMware features for vCenter Server

Supported entities

Description

Cloning

Enables cloning a virtual machine or template, or cloning a virtual machine to a template.

Migration

Moves the entire state of the virtual machine, and its configuration file if necessary, to the new host, while the data storage remains in the same location on shared storage.

vMotion

Enables moving the VM from one physical server to another while the VM remains active.

Template

Uses templates to create new virtual machines by cloning the template as a virtual machine.

Table 2. Supported VMware operations (for vCenter server and vSphere client)

Supported entities

Description

Power on

Powers on the virtual machine and boots the guest operating system if the guest operating system is installed.

Power off

Stops the virtual machine until it is powered back on. The power off option performs a “hard” power off, which is analogous to pulling the power cable on a physical machine, and always works.

Shutdown

Shut down, or soft power off, uses VMware Tools to perform a graceful shutdown of the guest operating system. In certain situations, such as when VMware Tools is not installed or the guest operating system is unresponsive, the shutdown might not complete, and you must use the power off option instead.

Suspend

Suspends the virtual machine.

Reset or restart

Stops the virtual machine and restarts it.

OVF creation

An OVF package consisting of several files in a directory captures the state of a virtual machine including disk files that are stored in a compressed format. You can export an OVF package to your local computer.

OVA creation

You can create a single OVA package file from the OVF package or template. The OVA file can then be distributed more easily; for example, it can be downloaded from a website or moved via a USB key.
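As an illustrative sketch, assuming the VMware OVF Tool (ovftool) is installed and that the vCenter hostname, credentials, datacenter, and VM name below are placeholders for your environment, a powered-off VM can be exported as a single OVA file like this:

```shell
# Export a powered-off VM from vCenter as a single OVA file.
# The vCenter host, user, datacenter, and VM names are placeholders.
ovftool \
  "vi://administrator@vsphere.local@vcenter.example.com/DC1/vm/c8000v-vm" \
  ./c8000v-export.ova
```

The `vi://` source locator addresses the VM through the vCenter inventory path; exporting directly to a `.ova` target produces the single-file package described above.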

Table 3. Supported networking features

Supported entities

Description

Custom MAC address

You can set up the MAC address manually for a virtual network adapter from both vCenter Server and vSphere Client.
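A manually assigned MAC address can also be set directly in the VM's .vmx configuration file. A minimal sketch (the adapter index and address are illustrative; statically assigned VMware MAC addresses must fall within the 00:50:56:00:00:00 to 00:50:56:3F:FF:FF range):

```ini
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:01:23:45"
```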

Distributed vSwitch

A vSphere distributed switch on a vCenter Server data center can manage networking traffic for all associated hosts on the data center. This feature is available only from vCenter Server.

Distributed resources scheduler

Provides automatic load balancing across hosts.

NIC load balancing

Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to reroute traffic if an adapter fails. This feature is available in both vCenter Server and vSphere Client.

NIC teaming

This feature is available in both vCenter Server and vSphere Client and allows you to set up an environment where each virtual switch connects to two uplink adapters that form a NIC team. The NIC teams can then either share the load of traffic between physical and virtual networks among some or all of its members, or provide passive failover in the event of a hardware failure or a network outage.

Note
NIC Teaming can cause a large number of ARP packets to flood the Cisco Catalyst 8000V and overload the CPU. To avoid this situation, reduce the number of ARP packets and implement NIC Teaming as Active-Standby rather than Active-Active.

vSwitch

A vSwitch is a virtualized version of a Layer 2 physical switch. A vSwitch can route traffic internally between virtual machines and link to external networks. You can use vSwitches to combine the bandwidth of multiple network adapters and balance communications traffic among them. You can also configure a vSwitch to handle a physical NIC fail-over. This feature is available in both vCenter Server and vSphere Client.
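As a hedged sketch of host-side configuration (the vSwitch, uplink, and port group names below are placeholders), a standard vSwitch with a physical uplink and a port group for the Cisco Catalyst 8000V interfaces can be created from the ESXi host shell with esxcli:

```shell
# Create a standard vSwitch, attach a physical uplink, and add a port group.
# vSwitch1, vmnic1, and C8KV-Data are placeholder names.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=C8KV-Data
```

The VM's virtual network adapters are then attached to the port group when you configure the instance.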

Table 4. High availability features

Supported entities

Description

VM-level high availability

To monitor operating system failures, VM-Level High Availability monitors heartbeat information in the VMware High Availability cluster. Failures are detected when no heartbeat is received from a given virtual machine within a user-specified time interval. VM-Level High Availability is enabled by creating a resource pool of VMs using VMware vCenter Server.

Host-level high availability

To monitor physical servers, an agent on each server maintains a heartbeat with the other servers in the resource pool such that a loss of heartbeat automatically initiates the restart of all affected virtual machines on other servers in the resource pool. Host-Level High Availability is enabled by creating a resource pool of servers or hosts, and enabling high availability in vSphere.

Fault tolerance

Fault tolerance is enabled on the ESXi host through high availability. When you enable fault tolerance on the VM running the Cisco Catalyst 8000V instance, a secondary VM is created on another host in the cluster. If the primary host goes down, the VM on the secondary host takes over as the primary VM for the Cisco Catalyst 8000V.

Note
Cisco IOS-based High Availability is not supported by the Cisco Catalyst 8000V instance. High Availability is supported on the VM host only.

Table 5. Storage options (for vCenter Server and vSphere Web Client)

Supported entities

Description

Local storage

Local storage is in the internal hard disks located inside your ESXi host. Local storage devices do not support sharing across multiple hosts. A datastore on a local storage device can be accessed by only one host.

External storage target

You can deploy the Cisco Catalyst 8000V instance on external storage, such as a Storage Area Network (SAN).

Mount or pass through USB storage

You can connect USB sticks to the Cisco Catalyst 8000V instance and use them as storage devices. In ESXi, you need to add a USB controller and then assign the disk devices to the Cisco Catalyst 8000V instance.

Cisco Catalyst 8000V supports USB disk hot-plug. However, you can use only two USB disk hot-plug devices at a time.

USB hub is not supported.
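Once a USB disk is passed through to the instance, it appears as a file system in Cisco IOS XE. A hedged example of verifying and using it from the device console (the usb0: device name is an assumption; the actual name depends on the slot):

```
Router# show file systems
Router# dir usb0:
Router# copy running-config usb0:backup-config
```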


Installation requirements

This section specifies the requirements for installing Cisco Catalyst 8000V in an ESXi environment. These requirements have been fully tested and meet the performance benchmarks.

ESXi hypervisor requirements

These are the VMware vSphere Web Client and VMware ESXi versions supported for each Cisco IOS XE release:

  • Cisco IOS XE 17.18.x, 17.16.x, 17.15.x, 17.14.x, 17.13.x, and 17.12.x releases: VMware vSphere Web Client versions 8.0 and 7.0; VMware ESXi 8.0 and ESXi 7.0

  • Cisco IOS XE 17.11.x, 17.10.x, 17.9.x, 17.8.x, 17.7.x, and 17.6.x releases: VMware vSphere Web Client versions 7.0 and 6.7; VMware ESXi 7.0 and ESXi 6.7

  • Cisco IOS XE 17.5.x releases: VMware vSphere Web Client versions 6.7 and 6.5; VMware ESXi 6.7 and ESXi 6.5

  • Cisco IOS XE 17.4.x releases: VMware vSphere Web Client versions 6.7 and 6.5; VMware ESXi 6.7 and ESXi 6.5

Note

Do not use a standalone vSphere client to manage the ESXi server. Starting with ESXi 6.0, you can no longer deploy Cisco Catalyst 8000V directly on ESXi from a .ova file. You must have a VMware vCenter server and a vSphere client to deploy a .ova file.
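As a sketch of a .ova deployment through vCenter using the VMware OVF Tool (the vCenter host, credentials, inventory path, datastore, VM name, and network mapping below are all placeholders):

```shell
# Deploy a Cisco Catalyst 8000V .ova through vCenter with ovftool.
# All names (vCenter host, datacenter, cluster, datastore, networks) are placeholders.
ovftool --acceptAllEulas --name=c8000v-01 \
  --datastore=datastore1 \
  --net:"GigabitEthernet1"="VM Network" \
  ./c8000v-universalk9.ova \
  "vi://administrator@vsphere.local@vcenter.example.com/DC1/host/Cluster1"
```

The `--net:` option maps each network defined in the OVF descriptor to a port group in the vCenter inventory.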

vCPU requirements

These are the supported vCPU configurations for the installation.

  • 1 vCPU: requires a minimum of 4 GB RAM allocation

  • 2 vCPUs: requires a minimum of 4 GB RAM allocation

  • 4 vCPUs: requires a minimum of 4 GB RAM allocation

  • 8 vCPUs: requires a minimum of 4 GB RAM allocation

  • 16 vCPUs: requires a minimum of 8 GB RAM allocation (supported from Cisco IOS XE 17.11.1a)

Note
The required vCPU configuration depends on the throughput license and technology package installed. For more information, see the data sheet for your release.
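These allocations correspond to the VM's .vmx settings. A minimal sketch for a 4 vCPU, 4 GB configuration (values are illustrative; memsize is specified in MB):

```ini
numvcpus = "4"
memsize = "4096"
```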

vNIC requirements

The virtual network interface cards (vNICs) listed in this section are supported for the ESXi installation. A maximum of 8 vNICs is supported.

  • ConnectX-6 - Supported from Cisco IOS XE 17.18.1a

  • ixgbe - Supported from Cisco IOS XE 17.10.1

  • ConnectX-5VF - Supported from Cisco IOS XE 17.9.1

  • iavf - Supported from Cisco IOS XE 17.9.1

  • i40eVF - Supported from Cisco IOS XE 17.4.1 to Cisco IOS XE 17.8.x

  • VMXNET3 - Supported from Cisco IOS XE 17.4.1

  • ixgbevf - Supported from Cisco IOS XE 17.4.1

Note

The supported version of the NIC driver and the firmware version are the default versions that are included with the hypervisor package.
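The adapter type is selected per interface in the VM configuration. For example, in the .vmx file (the adapter index is illustrative):

```ini
ethernet0.virtualDev = "vmxnet3"
```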

Other requirements

  • VMware vCenter - installation tool

  • VMware vSwitch - standard or distributed vSwitches are supported

  • Hard Drive - only a single hard disk drive is supported. Multiple hard disk drives on a VM are not supported

  • Virtual Disk - both 16 GB and 8 GB virtual disks are supported

  • Virtual CPU core - one virtual CPU core is required. This needs a 64-bit processor with Virtualization Technology (VT) enabled in the BIOS setup of the host machine.

  • Virtual hard disk space - a minimum size of 8 GB is required

  • A default video controller, an SCSI controller set, and an installed virtual CD/DVD drive are also required for this installation.

What to do next

Familiarize yourself with the secure boot configuration before proceeding with the installation. For more information, see Enabling VNF Secure Boot.


Restrictions for deploying in ESXi environment

The VMware features and operations listed here are not supported in Cisco Catalyst 8000V. Using these unsupported features and operations might result in dropped packets, dropped connections, and other error statistics.

  • Distributed Resource Scheduling (DRS)

  • Fault tolerance

  • Resume

  • Snapshot

  • Suspend