Contents
- Requirements for Intercloud Fabric Deployment
- Requirements for Installing Intercloud Fabric
- Prerequisites
- Guidelines and Limitations
- System Requirements
- Requirements for Upgrading Intercloud Fabric
- Prerequisites
- Requirements for Intercloud Fabric Cloud
- Prerequisites
- Guidelines and Limitations
- Requirements for Deploying Virtual Machines
- Prerequisites
- Guidelines and Limitations
- Requirements for Onboarding Cloud Virtual Machines
- Prerequisites
- Guidelines and Limitations
- Requirements for Installing Intercloud Fabric Firewall
- Prerequisites
- Guidelines and Limitations
- Requirements for Configuring Intercloud Fabric Router (Integrated)
- Prerequisite
- Guidelines and Limitations
- Requirements for Configuring Intercloud Fabric Router (CSR)
- Prerequisites
- Guidelines and Limitations
- Requirements for Configuring Intercloud Fabric Load Balancing
- Prerequisites
- Guidelines and Limitations
Requirements for Intercloud Fabric Deployment
This document contains the prerequisites and guidelines for Intercloud Fabric Release 2.3.1 deployment.
Requirements for Installing Intercloud Fabric
Prerequisites
Cloud Provider Prerequisites
Create a provider account in the cloud provider.
Find the public IP address range(s) that the cloud provider assigns for virtual machines created in the provider cloud.
Note
The cloud provider IP address range in the desired region must be open (not blocked by the enterprise firewall).
Certain ports must be open in the firewall to allow the Intercloud Fabric Extender to communicate with the Intercloud Fabric Switch. Port 443 must always be open. For a UDP tunnel, port 6644 must also be open. For a TCP tunnel, either port 6646 or port 443 can be used. Specify the choice of tunnel protocol and port when configuring the tunnel profile.
TCP ports 22 and 443 must be open in the firewall that is outbound from the Cisco Prime Network Services Controller IP address to the cloud provider.
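For illustration only, a minimal sketch of equivalent rules on a Linux (iptables) perimeter firewall; your firewall platform, rule placement, and addresses will differ, and 192.0.2.10 is only a placeholder for the PNSC address:
iptables -A FORWARD -p tcp --dport 443 -j ACCEPT                  # always required for the Extender-to-Switch tunnel
iptables -A FORWARD -p udp --dport 6644 -j ACCEPT                 # UDP tunnel
iptables -A FORWARD -p tcp --dport 6646 -j ACCEPT                 # TCP tunnel, if port 443 is not used
iptables -A FORWARD -s 192.0.2.10 -p tcp --dport 22 -j ACCEPT     # outbound SSH from PNSC to the cloud provider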
Virtual Machine Manager Prerequisites
For VMware environments:
Install and prepare the vCenter Server for host management using the instructions from VMware.
Install the VMware vSphere Client.
Verify that all Intercloud Fabric cloud hosts are running a supported version of ESX or ESXi: 5.1, 5.5, or 6.0.
Have admin access to VMware vCenter.
Have two physical network interface cards (NICs) on each host for redundancy. Deployment is also possible with one physical NIC.
Cisco Intercloud Fabric Director and Cisco Prime Network Services Controller must have IP connectivity on port 443 to all ESXi hosts. Cisco Prime Network Services Controller uses this path to upload the Intercloud Fabric Extender image to the host.
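As a quick pre-deployment check (a sketch only; esxi-host-01.example.com is a placeholder), you can confirm HTTPS reachability to each host from a machine on the same management network as Intercloud Fabric Director and Prime Network Services Controller:
curl -k -s -o /dev/null -w "%{http_code}\n" https://esxi-host-01.example.com/   # any HTTP status code indicates that port 443 is reachable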
Cisco Intercloud Fabric Prerequisites
Know the IP, subnet mask, and gateway information for ICF, ICFD, and PNSC.
Know the DNS server and domain name information.
Make sure the correct NTP server is configured during Intercloud Fabric OVA deployment. We recommend that you check the NTP settings on the ESX/ESXi host to avoid any conflicts with ESX host and Intercloud Fabric virtual machine time synchronization.
Verify that the date and time are set accurately to connect to the cloud provider.
Know the management port profile or management network name for the virtual machine (management).
Note
The management port profile can be the same port profile that is used for the Cisco Nexus 1000V VSM. The port profile is configured in the VSM and is used for the Cisco Prime Network Services Controller management interface. This requirement applies only if you are using a Cisco Nexus 1000V switch; it does not apply if you are using a VMware virtual switch.
For OpenStack environments, a Cisco Nexus 1000V switch is required.
If you do not configure NAT and PAT policies correctly for cloud providers, incoming traffic cannot reach the provider.
Make sure that any IPS or IDS (such as Cisco Sourcefire) and any SSL inspection (such as Cisco IronPort) allow connections from the ICFD IP addresses.
Virtual Switch Prerequisites
VMware
For a security policy for the trunk port group on the VMware virtual switch, set the Promiscuous Mode, MAC Address Changes, and Forged Transmits to Accept in the VMware vSphere GUI. This requirement applies only if you are using a VMware virtual switch and distributed switch; it does not apply if you are using a Cisco Nexus 1000V switch.
If Intercloud Fabric Extender is hosted on a VMware vSwitch or distributed switch (VDS) and if the vSwitch or distributed switch is connected to multiple physical NICs, you must enable the setting Net.ReversePathFwdCheckPromisc=1 in the ESX host where the Intercloud Fabric Extender is hosted. This setting is configured in the VMware vSphere GUI. If this setting is not enabled, you might experience traffic loss or duplicate packets between enterprise and cloud VM traffic, or Intercloud Fabric Switch module flaps at the Intercloud Fabric VSM. This requirement applies only if you are using a VMware virtual switch or distributed switch to host the Intercloud Fabric Extender; it does not apply if you are using a Cisco Nexus 1000V switch.
Note
If the value of the Net.ReversePathFwdCheckPromisc configuration option is changed while the ESXi host is running, you must toggle (disable then re-enable) the Promiscuous Mode check box in the Intercloud Fabric Extender trunk port group security settings for the change to take effect.
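If you have shell access to the ESXi host, the same advanced option can typically be set and verified from the command line instead of the vSphere GUI (a sketch only; adjust for your environment):
esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1
esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc   # confirm that Int Value shows 1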
For the VMware virtual switch, you must set the trunk port group to allow All VLAN IDs in the VMware vSphere GUI.
Cisco Nexus 1000V switch
You must disable Unknown-Unicast-Flooding-Block (UUFB) if you are using a Cisco Nexus 1000V switch in the private cloud. Enter the command no uufb enable to disable UUFB. Enter the command show run | include uufb to verify that you disabled UUFB.
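For example, the commands can be entered from the Cisco Nexus 1000V VSM command line as follows:
switch# configure terminal
switch(config)# no uufb enable
switch(config)# end
switch# show run | include uufb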
Guidelines and Limitations
Cisco Intercloud Fabric supports only the English version of vCenter.
For VMware environments, the Cisco Nexus 1000V for VMware vSphere, VMware vSwitch, or VDS is already installed in the private cloud. See Cisco Nexus 1000V for VMware for more information.
An Intercloud Fabric cloud can support a maximum of 100 VMs.
System Requirements
The following tables identify the system requirements for installing Cisco Intercloud Fabric.
Table 1 System Requirements

Requirement                          Description

Intercloud Fabric
  CPUs                               8 vCPU (64-bit x86 CPU [VT-capable])
  Network interface cards (vNICs)    1
  RAM                                20 GB
  Disk                               350 GB

Intercloud Fabric Extender
  Memory                             2 GB
  CPU                                2 vCPU
  Disk                               3 GB

Intercloud Fabric VSM
  Memory                             2 GB
  CPU                                1 vCPU
  Disk                               3 GB
Note
The virtual disk must be capable of at least 40 MB/s bandwidth.
Table 3 Client Browser Requirements

Requirement    Description
Browser        Google Chrome 32.0 or later

Note
We recommend that you use Google Chrome for Intercloud Fabric.
Table 4 System Requirements for Provider Clouds

Provider/Model           Device                              vCPU    Memory (GB)    Disk (GB)

AWS
  c3.2xlarge             Intercloud Fabric Switch            8       15             20
  c3.xlarge              Intercloud Fabric Router            4       7.5            8
  m3.medium              Intercloud Fabric Firewall (VSG)    1       3.75           2

Azure
  A3                     Intercloud Fabric Switch            4       7              20
  A3                     Intercloud Fabric Firewall (VSG)    2       3.5            2

All Other Providers
                         Intercloud Fabric Switch            4       4              20
                         Intercloud Fabric Firewall (VSG)    1       3              3
                         Intercloud Fabric Router (CSR)      4       4              8
Note
For optimal performance, we recommend reserving extra system resources for Intercloud Fabric Director above the minimum system requirements listed in the preceding table. For more information, see "Reserving System Resources" in the Intercloud Fabric Getting Started Guide.
Cisco Intercloud Fabric for Business Disk Throughput Requirements
Intercloud Fabric for Business requires a minimum disk throughput of 40 MB/s in the data store or disks where it is installed. Lower disk throughput affects solution installation and virtual machine migration to or from the cloud.
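One informal way to spot-check sequential write throughput from a Linux virtual machine on the same data store (a sketch only; the test file path is arbitrary and the file should be deleted afterward):
dd if=/dev/zero of=/var/tmp/icf_disk_test bs=1M count=1024 oflag=direct   # dd reports the achieved rate; it should be well above 40 MB/s
rm /var/tmp/icf_disk_test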
Requirements for Upgrading Intercloud Fabric
Prerequisites
Ensure that no service requests are running during the upgrade process. You must complete all service requests before starting the upgrade.
For Intercloud Fabric clouds that are deployed in HA mode, the Intercloud Fabric Switch and Intercloud Fabric Extender must be in either Active or Standby state. Ensure that they are reachable.
For Intercloud Fabric VSMs that are deployed in HA mode, confirm that the HA pair is healthy, that the VSMs are in either Active or Standby state, and that both are online.
Change the admin password of Intercloud Fabric Director to be the same as the admin password of Prime Network Services Controller. To change the password, log in to the Intercloud Fabric cloud GUI and choose .
Requirements for Intercloud Fabric Cloud
Prerequisites
You have created an account in the provider cloud.
You have the credentials for the cloud provider, such as an access key and access ID for Amazon AWS, and a username, password, and URI for other supported providers.
In the Amazon Web Services GUI, you can access the security credentials for Intercloud Fabric by navigating to . Click the Access Keys (Access Key ID and Secret Access Key) (+) icon to obtain the AWS access key ID. To create a new access key, click Create New Access Key. Download the key file to get the access key ID and secret access key. Optionally, click the Show Access Key link to see the access key ID and the secret access key.
For Microsoft Azure, see Accessing Security Credentials for Intercloud Fabric in Microsoft Azure.
For Cisco ICFPP-based providers, you will receive a welcome email from Cisco with information about the security credentials.
You have installed the Intercloud Fabric infrastructure components.
If you are using Cisco Nexus 1000V in the private cloud, you have added the Cisco Nexus 1000V switch to Intercloud Fabric.
For Cisco ICFPP-based providers, you have configured the cloud instance and tenant in Cisco ICFPP for use with Intercloud Fabric. See the Cisco Intercloud Fabric Provider Platform Installation Guide.
To integrate VCD with Cisco ICFPP, see the Configuring VMware vCloud Director for Cisco ICFPP chapter in the Cisco Intercloud Fabric Provider Platform Installation Guide.
Using a proxy in the private cloud is not supported when Intercloud Fabric is used to connect to the public cloud.
Guidelines and Limitations
For the cloud provider Microsoft Azure:
You must register the certificate with the Microsoft Azure portal.
Intercloud Fabric will create a storage account with a name starting with pnsc and a container below that storage account with the name pnsc-sc in the region where the Intercloud Fabric cloud is created. This storage account will be created during the first upload call and will be used for all further uploads to the Microsoft Azure cloud.
All port profiles required for deploying a virtual machine must be created using Intercloud Fabric.
All port profiles required for creating an Intercloud Fabric cloud must be created using Intercloud Fabric.
All port profiles required for creating Intercloud network policies must be created using Intercloud Fabric.
All port profiles required for the Intercloud Fabric Firewall data interface and management interface must be created using Intercloud Fabric.
All port profiles required for virtual machines that need firewall protection in the Intercloud Fabric must be updated using PNSC to add the vPath configuration.
While cloning an Intercloud Fabric cloud, you must not migrate the source virtual machine or the destination virtual machine. Doing so will impact the cloning operation and any operations carried out on the destination virtual machine after migration.
Requirements for Deploying Virtual Machines
Prerequisites
You have created an account in the provider cloud.
You have installed the Intercloud Fabric infrastructure components.
You have created an Intercloud Fabric cloud.
You have configured the required policies.
The guest operating system firewall has been configured to allow traffic on the following ports:
For the Cisco Intercloud Services – V and CloudStack, vmware-tools has been installed on the guest VM.
You have installed the SSH, SCP, sudo, and mkinitrd binaries on Linux VMs.
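A quick way to confirm that these binaries are present on a Linux VM (a sketch; package names vary by distribution):
command -v ssh scp sudo mkinitrd   # each name should print a path; install any that are missing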
Guidelines and Limitations
A Windows VM image that has been sysprepped for certain cloud providers (such as Azure) cannot be migrated.
Out-of-band operations are not supported in Intercloud Fabric. If you terminate a virtual machine from the cloud provider portal, the status is not reflected in the Intercloud Fabric GUI.
All port profiles required for deploying a virtual machine must be created using Intercloud Fabric.
All port profiles required for creating network policies must be created using Intercloud Fabric.
Trunk ports are not supported in the cloud virtual machines.
Trunk ports are not supported in virtual machines that will be migrated to the cloud.
In Microsoft Azure, when you terminate a virtual machine in the cloud, the virtual machine is terminated; however, the storage is not deleted from the image and the provider will still charge you for the virtual machine. To delete the storage and the image, use the Intercloud Fabric GUI to delete the template that was used to create the virtual machine.
If network connectivity between Intercloud Fabric and the cloud provider is slow, image upload operations, such as those that occur when migrating a virtual machine, might fail. If the image is not uploaded within 12 hours, the operation fails and Intercloud Fabric tries to upload the image again.
Certain Cisco cloud service providers require execution of sysprep on the virtual machine image after migrating a virtual machine. Execution of sysprep leads to certain configuration changes within your virtual machine, including resetting the Windows Administrator password and removing the virtual machine from the domain to which it belonged. To address these effects of sysprep, be aware of the following after migrating your virtual machine to a cloud provider:
The Windows password is reset to the name of the virtual machine that you entered in the VM Name field in the Assign VM dialog box. If the virtual machine name contains fewer than ten characters, the password is reset to the name of the virtual machine appended with the required number of 3s to reach the ten-character limit.
If the virtual machine was part of a domain, you must manually rejoin the virtual machine to the domain after the migration is complete and connectivity to the private cloud network is restored.
Before you migrate a virtual machine from Intercloud Fabric to the private cloud:
Make sure that the private cloud has sufficient storage capacity for the virtual machine.
You must add the resource pool to the default computing policy. You can then select the resource pool you added in the Migrate VM Back on Premise window during migration.
Make sure that the port profile in the virtual machine that is being migrated exactly matches the port profile in the network policy of the destination private VDC.
Before you migrate a virtual machine from the private cloud to the Intercloud Fabric cloud, make sure that the port profile in the virtual machine that is being migrated exactly matches the port profile in the network policy of the destination Intercloud Fabric cloud VDC.
Amazon Web Services (AWS) supports enhanced networking for certain EC2 instance types. To move a Windows virtual machine to AWS, you must load the Intel driver in the Windows virtual machine.
See Enhanced Networking for information on enhanced networking on AWS.
See Enabling Enhanced Networking on Windows for information on how to enable enhanced networking on Windows.
Intercloud Fabric supports migrations of virtual machines and virtual machine templates with a maximum of 8 disks.
In a Linux-based virtual machine, disks should be referred to by using persistent names in fstab (/etc/fstab) and the grub configuration file /boot/grub/menu.lst. UUID or labels can be used for the persistent naming of disks.
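To find the UUID or label to use as the persistent name (a sketch; /dev/xvda1 is a placeholder device):
blkid /dev/xvda1   # prints the UUID and, if set, the LABEL of the partition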
Only the default DVD kernel for a given version is supported for Linux-based virtual machines.
About temporary virtual machines:
Intercloud Fabric creates a temporary virtual machine when you create a Linux virtual machine on AWS to host the uploaded image.
Intercloud Fabric names a temporary virtual machine using the PNSC IP address, the string tmp, and a UUID.
Intercloud Fabric creates a temporary virtual machine with the instance type c3.large and 300 GB disk space.
Intercloud Fabric monitors the disk space utilization on the temporary virtual machine and creates a new temporary virtual machine if required.
The temporary virtual machine created is specific to a region, and Intercloud Fabric reuses the temporary virtual machine within the same region.
Intercloud Fabric terminates the temporary virtual machine after 12 hours of inactivity.
Intercloud Fabric chooses the VM instance on the provider cloud that most closely matches the VM requirements.
Requirements for Onboarding Cloud Virtual Machines
Prerequisites
Ensure that you have the credentials for the virtual machines that you will onboard to Intercloud Fabric.
Ensure that you have updated the following for the onboarded virtual machine from the provider console so that Intercloud Fabric can communicate with the virtual machine:
Change the security group of the virtual machines in Amazon EC2 VPC to the security group of the virtual machine created by Intercloud Fabric.
Add inbound rules for UDP and TCP ports 6644 and 6646, and TCP port 22 to the security groups of the virtual machines in Amazon EC2 Classic. The security groups should match the security group of the virtual machine created by Intercloud Fabric (see the sketch after this list).
If a Windows VM firewall or IP tables are configured in the provider cloud, ensure that the required ports are open.
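For illustration only, the EC2 Classic inbound rules can also be added with the AWS CLI; in this sketch the security group name icf-vm-sg and the source CIDR 203.0.113.0/24 are placeholders:
aws ec2 authorize-security-group-ingress --group-name icf-vm-sg --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-name icf-vm-sg --protocol tcp --port 6644 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-name icf-vm-sg --protocol tcp --port 6646 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-name icf-vm-sg --protocol udp --port 6644 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-name icf-vm-sg --protocol udp --port 6646 --cidr 203.0.113.0/24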
Ensure that you have the required utilities, such as WinSCP, to copy the onboard package to the onboarding Windows virtual machine in the provider cloud.
To prepare for onboarding a Windows VM, complete the following steps:
Ensure that you have already installed SSH on Linux virtual machines.
Collect the following information for the virtual machine you are onboarding:
Guidelines and Limitations
Onboarding provider cloud VMs to Intercloud Fabric is supported only on Amazon Web Services EC2 Classic and Amazon Web Services EC2 VPC.
The onboarding process applies within a region; the virtual machine and destination VDC are in the same region.
After the virtual machine is onboarded to Intercloud Fabric, the virtual machine is assigned a private IP address. You must manually update any applications that explicitly used the provider’s IP address before onboarding if the applications are expected to communicate with the newly assigned private IP address.
Windows virtual machines can be onboarded but cannot be moved to the private cloud.
Linux virtual machines with nonpartitioned disks can be onboarded, but they cannot be moved to the private cloud.
Linux virtual machines that are expected to move back to the private cloud must be referred to by using persistent names (that is, by UUID or label) in /etc/fstab instead of nonpersistent names, such as /dev/xvda.
In the grub configuration file, the kernel parameter root must refer to the root partition using a label or ID. Example of a label/ID in grub.conf:
kernel /boot/vmlinuz-2.6.32-431.29.2.el6.x86_64 console=ttyS0 ro root=UUID=9996863e-b964-47d3-a33b-3920974fdbd9 rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 xen_blkfront.sda_is_xvda=1 console=ttyS0,115200n8 console=tty0 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM
In the file system configuration file /etc/fstab, the first column should have a label or ID instead of a partition name. Example of a label/ID in fstab:
UUID=9996863e-b964-47d3-a33b-3920974fdbd9 / ext4 defaults 1 1
For Amazon VPCs, onboarding is supported only if the onboarded VM is on the same VPC as an Intercloud Fabric cloud. Also, onboarding is not supported for an AWS instance that is linked to the VPC using ClassicLink.
AWS instances on nondefault VPCs cannot be onboarded to AWS EC2 Classic-based Intercloud Fabric clouds.
Requirements for Installing Intercloud Fabric Firewall
Prerequisites
Intercloud Fabric Director is installed.
Infrastructure setup and Intercloud Fabric cloud setup are complete.
Promiscuous mode is enabled on the Intercloud Fabric Extender trunk port if a port group is used for the Intercloud Fabric Extender trunk interface.
The complete VLAN range is enabled in the port group that is bound to the trunk interface in the Intercloud Fabric Extender.
Guidelines and Limitations
You can also add the Intercloud Fabric Firewall service after you create the Intercloud Fabric cloud instance.
Requirements for Configuring Intercloud Fabric Router (Integrated)
Prerequisite
Because each Intercloud Fabric cloud requires an IP address for Intercloud Fabric Router (Integrated), ensure that the management subnetwork has a sufficient number of IP addresses.
Guidelines and Limitations
The following limitations apply to Intercloud Fabric Router (Integrated):
Intercloud Fabric Router (Integrated) is supported only on Microsoft Azure.
If an Intercloud Fabric cloud instance is in high availability (HA) mode, you cannot create an Intercloud Fabric Router (Integrated).
If an Intercloud Fabric cloud instance is created with Intercloud Fabric Router (Integrated), the HA option is disabled.
Because routing is implicitly available for the management VLAN subnet, do not configure interfaces for the management VLAN when you configure interfaces in the Intercloud Fabric Router (Integrated).
The following guidelines apply to Intercloud Fabric Router (Integrated):
An Intercloud Fabric Router (Integrated) is always created under the tenant organization named icfCloud in PNSC.
The name of an Intercloud Fabric Router (Integrated) is automatically selected and is the same as the name of the associated Intercloud Fabric cloud. For example, if an Intercloud Fabric cloud instance is named Icf-Azure-Link1, the corresponding Intercloud Fabric Router (Integrated) has the same name with the suffix -iclink, such as Icf-Azure-Link1-iclink.
Requirements for Configuring Intercloud Fabric Router (CSR)
Prerequisites
You have an account in the provider cloud.
When you provision the virtual data centers in the Intercloud Fabric Router (CSR), ensure that the network policies associated with the virtual data centers have the VLANs that are required for creating the data interfaces for the Intercloud Fabric Router (CSR).
Guidelines and Limitations
The Intercloud Fabric Router (CSR) is not supported on Microsoft Azure.
The Intercloud Fabric Router (CSR) version 3.16.1 is required for Intercloud Fabric with VCD.
The Intercloud Fabric Router (CSR) version 3.14.01 is required for Intercloud Fabric with AWS.
The Intercloud Fabric Router (CSR) version 3.14.1.S is required for Intercloud Fabric with Cisco Intercloud Services – V.
Network Address Translation (NAT) functionality for the Intercloud Fabric Router (CSR) is available only if there is a default VPC in the Amazon Web Services (AWS) account.
If you configure dynamic NAT when working with Cisco Intercloud Services – V, return network traffic does not reach the Intercloud Fabric Router (CSR) cloud VM.
During deployment of the Intercloud Fabric Router (CSR) in the provider cloud, inter-VLAN traffic might stop working between the private cloud and provider cloud virtual machines for VLANs that are not extended to the provider cloud. You must add routing for private cloud VLANs that are not extended on a data interface that is configured as the default gateway. If a data interface is not configured as a default gateway, add one with one of the private cloud VLANs that is not extended. Then, add routing for the remaining VLANs under that interface.
If you delete an Intercloud Fabric Router (CSR) instance and immediately try to recreate either the same instance or another instance of the Intercloud Fabric Router (CSR) in the same Intercloud Fabric cloud, you might receive the error can’t create; object already exists. We recommend that you wait for 10 minutes before you create a new instance of the Intercloud Fabric Router (CSR) in the Intercloud Fabric cloud.
After you perform any lifecycle operations using the PNSC GUI, you must refresh the GUI to view the status of the operation.
Requirements for Configuring Intercloud Fabric Load Balancing
Prerequisites
Ensure that you have installed and configured the Intercloud Fabric Router (Integrated), either through the Intercloud Fabric Cloud Setup wizard or after the Intercloud Fabric cloud is operational.
Before adding any services to the cloud, make sure that the Azure cloud link is up and the status is operational.
Copyright © 2016, Cisco Systems, Inc. All rights reserved.