Cisco APIC Getting Started Guide, Release 6.2(x)

APIC migration types

Overview

This topic explains the process of migrating APIC clusters, including transitions between physical and virtual clusters, support for mixed cluster types, and the migration of standby nodes, with details on supported scenarios and platform specifics.

APIC migration is a controller transition process with these capabilities:

  • Enables movement between physical and virtual APIC clusters

  • Supports migration among clusters of different APIC types: physical, virtual, or mixed

  • Facilitates the migration of one or more standby nodes from a source cluster to a destination cluster

Supported APIC migration scenarios and platform details

Beginning with Cisco APIC release 6.1(1), APIC migrations support transitioning from a physical APIC cluster to a virtual APIC cluster deployed on ESXi hosts (using VMware vCenter) and from a virtual APIC cluster (on ESXi hosts) to a physical APIC cluster.

Starting with Cisco APIC release 6.2(1), APIC migrations support transitions between clusters composed of any mix of physical and virtual controllers hosted on VMware ESXi or Nutanix AHV platforms. This release also introduces support for migrating one or more standby nodes from the source cluster to the destination cluster.


Guidelines and limitations for migrating physical APICs to virtual APICs

The guidelines and limitations for migrating physical APICs to virtual APICs define the supported migration paths, required preconditions, and unsupported scenarios.

  • Migration is supported only between physical and virtual APICs within the same network layer (Layer 2 to Layer 2, Layer 3 to Layer 3).

  • Migration between Layer 2 APICs and Layer 3 APICs is not supported.

  • Migration of standby nodes and mini ACI fabric is not supported.

Guidelines and limitations details

The following guidelines and limitations apply when migrating physical APICs to virtual APICs and vice versa:

Guidelines for migration:

  • Physical APICs in Layer 2 (directly attached to the fabric) can be migrated to Layer 2 virtual APICs, and Layer 2 virtual APICs can be migrated to Layer 2 physical APICs.

  • Physical APICs in Layer 3 (remotely attached to the fabric) can be migrated to Layer 3 virtual APICs, and Layer 3 virtual APICs can be migrated to Layer 3 physical APICs.

  • Migration between Layer 2 APICs and Layer 3 APICs is not supported.

  • Do not initiate the migration process if an upgrade is in progress (a read-only API check for this is sketched after these lists).

  • Do not initiate an upgrade if migration is in progress.

  • Update any configuration that uses APIC out-of-band (OOB) management after migration is completed.

  • If Cisco Nexus Dashboard Orchestrator (NDO) is configured, update the connection details in NDO after migration, because the migration changes the OOB IP and subnet addresses.

  • If an SMU (Software Maintenance Update) is installed on the physical APIC, migration from physical to virtual APIC is not recommended for Cisco APIC release 6.1(1). Upgrade the cluster to an image that contains the SMU fix before migrating.

  • For app-infra, stop any running ELAM/FTRIAGE jobs prior to migration and restart them after migration is complete.

Limitations:

  • Migration of standby nodes is not supported. Remove all standby nodes from the cluster before migration.

  • Migration is not supported for mini ACI fabric.
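
You can spot-check the upgrade-related guideline above from a workstation before starting a migration. The following is a minimal, read-only sketch against the APIC REST API; the OOB address and admin credentials are placeholders, and the maintUpgJob query (upgrade job status objects) is used only as an example, since the exact status strings can vary by release.

  # Minimal sketch: confirm that no upgrade job is still in progress before migrating.
  # The APIC OOB address and credentials below are placeholders for this example.
  import requests

  APIC = "https://172.16.1.1"
  LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}}

  session = requests.Session()
  session.verify = False  # lab-style check; validate certificates in production
  session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN).raise_for_status()

  # maintUpgJob objects track controller and switch upgrade jobs.
  jobs = session.get(f"{APIC}/api/node/class/maintUpgJob.json").json()["imdata"]
  running = [j["maintUpgJob"]["attributes"] for j in jobs
             if j["maintUpgJob"]["attributes"].get("upgradeStatus") == "inprogress"]
  if running:
      print("Upgrade still in progress; do not start the migration:", running)
  else:
      print("No upgrade job in progress.")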


Migrating APIC clusters

The APIC cluster migration process is a workflow that transitions a three-node Cisco APIC cluster from physical (source) nodes to virtual (target) nodes, ensuring continuous network operations.

  • Source APIC nodes: The physical nodes currently running in the cluster (APIC 1, APIC 2, APIC 3).

  • Target APIC nodes: The virtual nodes that will form the new cluster after migration.

  • Administrator: The user who initiates and manages the migration workflow.

Key components and migration workflow

The migration process involves source APIC nodes, target APIC nodes, and an administrator who manages the workflow. The process is performed in a specific sequence to ensure cluster continuity.

  • Source APIC nodes: APIC 1, APIC 2, APIC 3 (physical nodes)

  • Target APIC nodes: Virtual nodes that replace the physical nodes after migration

  • Administrator: Initiates and manages the migration process

  1. Log in to source APIC 1 (172.16.1.1), and initiate the migration process.

  2. Migration of source node APIC 3 (172.16.1.3) is initiated.

  3. Migration of APIC 3 is completed (to target node 172.16.1.13).

  4. Migration of source node APIC 2 (172.16.1.2) is initiated.

  5. Migration of APIC 2 is completed (to target node 172.16.1.12).

  6. Target APIC 2 takes control to enable the migration of APIC 1. This is called the handover process: control passes from source APIC 1 (172.16.1.1) to target APIC 2 (172.16.1.12). At this stage, a new window is displayed (the URL is redirected to target APIC 2). After successful migration, source APIC 1 is no longer part of the cluster, which now consists of the migrated target APICs.

The migration proceeds in reverse order of controller ID: APIC N (APIC 3 in the example) is migrated first, followed by APIC N-1 (APIC 2), and finally APIC 1.

Table 1. Sample APIC nodes

APIC      Source Node    Target Node
APIC 1    172.16.1.1     172.16.1.11
APIC 2    172.16.1.2     172.16.1.12
APIC 3    172.16.1.3     172.16.1.13


Migrate an APIC cluster between physical and virtual deployments

Before you begin

The following prerequisites must be met before you start the migration process:

  • Cluster health: Confirm that the current APIC cluster is Fully fit. A read-only API sketch for spot-checking the cluster health and controller versions appears after this list.

  • Generic:

    • Ensure that the source and destination APICs’ date and time are synchronized.

    • Ensure that all the controllers are on Cisco APIC release 6.1(1), and all the switches are running the same version as the controller.

  • Source and target nodes:

    • For directly connected APIC migration, ensure both source and target nodes are on the same Layer 2 network.

    • For remotely connected APIC migration, ensure both source and target nodes have infra network connectivity between them. This means the new target APIC should have the correct IPN configuration such that it can interact with the infra network of the fabric.

    • Ensure that the target nodes have the same admin password as the source cluster.

    • The target nodes' OOB IP addresses must be different from the source nodes'; all other fields can be the same as or different from the source node. The infra addresses remain the same for Layer 2 (directly attached) clusters; for Layer 3 (remotely attached) clusters, they can be the same or different, depending on the deployment.

    • The source and target cluster OOB networking stacks must match. For example, if the source cluster uses dual-stack (IPv4 and IPv6) OOB addressing, provide dual-stack (IPv4 and IPv6) address details for the target nodes as well.

    • Ensure OOB connectivity between the source and destination APICs.

    • Ensure the OOB contracts and reachability for the new APIC are configured correctly; the migration process uses the OOB IP address to communicate between the APICs.

  • For virtual APIC to physical APIC migration:

    • Ensure the physical APIC nodes are factory reset; use the acidiag touch setup and acidiag reboot commands.

    • For migration with or without CIMC (applicable to physical APICs):

      • If you are using CIMC, ensure that the physical APIC CIMC addresses are reachable from the OOB network of the virtual APIC.

      • If you are not using CIMC, ensure that the OOB IP address is configured manually on the physical APIC after the factory reset, and use the OOB option for connectivity.

  • For physical APIC to virtual APIC migration:

    • Ensure that you have deployed the virtual APIC nodes as per the procedure in the Deploying Cisco Virtual APIC Using VMware vCenter guide.

    • If virtual APICs are deployed on a vCenter that is part of a VMM domain, ensure that Infrastructure VLAN is enabled on the AEP configured on the interfaces connected to the ESXi host(s) where the virtual APIC is deployed.
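
The cluster health and controller version prerequisites above can be spot-checked over the APIC REST API before you open the Migrate screen. The following minimal, read-only sketch assumes a reachable OOB address and admin credentials (both placeholders); the infraWiNode and firmwareCtrlrRunning classes are queried only as a sanity check, not as part of the migration workflow itself.

  # Minimal sketch: verify cluster health and controller versions over the REST API.
  # The OOB address and credentials below are placeholders for this example.
  import requests

  APIC = "https://172.16.1.1"
  LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}}

  session = requests.Session()
  session.verify = False  # lab-style check; validate certificates in production
  session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN).raise_for_status()

  # Cluster health: each infraWiNode entry is a controller as seen by this node;
  # every entry should report "fully-fit" before the migration is started.
  for obj in session.get(f"{APIC}/api/node/class/infraWiNode.json").json()["imdata"]:
      attrs = obj["infraWiNode"]["attributes"]
      print("controller", attrs["id"], attrs["addr"], attrs["health"])

  # Running controller firmware: all controllers should be on the same image.
  for obj in session.get(f"{APIC}/api/node/class/firmwareCtrlrRunning.json").json()["imdata"]:
      print("version", obj["firmwareCtrlrRunning"]["attributes"]["version"])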

Use this procedure to migrate the nodes of a physical APIC cluster to a virtual APIC cluster (or vice-versa).

Procedure

1.

On the Cluster as Seen by Node screen, click Migrate (displayed in the Cluster Overview area).

All the available controllers in the cluster are displayed.

Note

The Migrate button is displayed only on APIC 1 (of the cluster).

2.

Click the pencil icon next to the Validate column to start the migration process for the selected controller.

The Migrate Node screen is displayed.

3.

Enter the following details in the Migrate Node screen:

  1. For the Controller Type, select Virtual or Physical, as applicable (migration from physical APIC to virtual APIC and vice versa is supported).

  2. For the Connectivity Type, select OOB if you are migrating a physical APIC to a virtual APIC. If you are migrating a virtual APIC to a physical APIC, you can select either the OOB option or the CIMC option.

    We recommend the CIMC option for virtual to physical migration. To use the OOB option instead, connect to the CIMC address of the physical APICs and configure the OOB IP addresses manually before starting the migration process.

    The Controller Type and Connectivity Type are auto-selected based on the source controller type. If required, you can modify them.

  3. In the Management IP pane, enter the following details of the target APIC: Management IP address, Username, and Password.

    or

    (Applicable only for virtual to physical APIC migration) In the CIMC Details pane, enter the following details of the physical APIC: CIMC IP address, username, and password of the node.

  4. Click Validate.

    After you click Validate, the details displayed in the General and Out of Band management panes change to match the details of the controller. The only editable fields are the Name and Pod ID (applicable only for Layer 2); the other fields cannot be modified. For virtual to physical APIC migration, confirm the Admin Password too.

    Note

    If dual stack is supported, fill in the IPv4 and IPv6 addresses.

  5. In the Infra Network pane (applicable only for Layer 3, where the APIC is remotely attached to the fabric), enter the following:

    • IPv4 Address: the infra network address.

    • IPv4 Gateway: the IP address of the gateway.

    • VLAN: the interface VLAN ID to be used.

The OOB gateway and IP addresses are auto-populated in the table (based on the validation); click Apply. The validation status is displayed as Complete on the Migrate Nodes screen.

Repeat the same process for the other APICs in the cluster by clicking the pencil icon (next to the Validation column). After providing all the controller details, click the Migrate button at the bottom of the Migrate screen.


Migration status

The migration process involves a series of activities that are displayed in stages. Each stage is indicated with a color-coded bar.

Figure 1. Migration Status

The Migrate Cluster Status screen displays the overall migration status, followed by the status of the apiserver. The apiserver is the process that orchestrates the whole migration. Below the apiserver status, the controller-wise migration status is displayed. The source IP address and the target IP address of the nodes are also indicated.

The apiserver status is indicated as 100% done (green bar) after the handover to APIC 2 is completed. At this stage, a new window is displayed (the URL is redirected to target APIC 2). Log in to target APIC 2. A banner indicating that migration is in progress is displayed at the top of the GUI until the migration is complete. After the handover process, the banner that was displayed on source APIC 1 is displayed on target APIC 2. Click the View Status link on the banner to check the migration status.

You can also abort the migration process from source APIC 1 by clicking the Abort button on the Migrate Cluster Status screen. The Abort button is displayed only after a certain amount of time has passed since the migration was initiated.

After successful migration:

  • the migration status is no longer displayed. If the migration has failed, then a failure message is explicitly displayed.

  • to confirm that the target cluster is healthy and Fully fit, navigate to System > Controllers, expand Controllers > Controller 1, and open the Cluster as seen by Node page.

  • verify that all the fabric nodes are in the active state; navigate to Fabric > Fabric Membership. A scripted check is sketched after this list.

  • if the Pod ID of the target APIC has changed, the in-band address for the node must be reconfigured on the Tenant Management screen; navigate to the Tenants > Mgmt > Node Management Addresses page.
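
If you prefer a scripted version of the fabric membership check, the following read-only sketch (with a placeholder target OOB address and credentials) reads the fabricNode class and flags any leaf or spine that is not in the active state.

  # Minimal sketch: confirm that all fabric switches are active after migration.
  # The target OOB address and credentials below are placeholders for this example.
  import requests

  APIC = "https://172.16.1.11"
  LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}}

  session = requests.Session()
  session.verify = False  # lab-style check; validate certificates in production
  session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN).raise_for_status()

  # fabricNode lists every registered node; leaves and spines should be "active".
  for obj in session.get(f"{APIC}/api/node/class/fabricNode.json").json()["imdata"]:
      attrs = obj["fabricNode"]["attributes"]
      if attrs["role"] in ("leaf", "spine") and attrs["fabricSt"] != "active":
          print("not active:", attrs["name"], attrs["fabricSt"])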


Operations in case of a migration failure

Operations in case of a migration failure refer to the recommended actions and procedures to follow when the migration process is interrupted, aborted, or fails to complete successfully.

  • If migration is not successful, it is recommended that you either revert to the source controller type or resume the migration to the target controller type.

  • Do not leave the APIC cluster in a migration failed state with a mix of physical and virtual controllers.

  • Before attempting a revert or resume, ensure that the cluster is in a healthy state by following the basic troubleshooting steps.

Procedures for resuming or reverting migration after failure

After a migration failure, you can either resume or revert the migration process, depending on your requirements and the state of the controllers.

  • To resume migration:

    1. On the Migrate Node screen (source APIC 1), enter the details of all the target nodes based on the controller type you want to migrate to.

    2. Click Migrate.

  • To revert migration:

    1. Factory reset each of the source APIC nodes that are being migrated by using the acidiag touch setup and acidiag reboot commands.

    2. On the Migrate Node screen, enter the source APIC details for all the nodes, as the migration process reverts the previously migrated APICs to the source controller type.

    3. Click Migrate.

To collect logs for tech support, navigate to Admin > Import/Export > Export Policies > On-demand Tech Support > migration_techsupport.

Note

If the migration process fails after the handover process (control is passed on to target APIC 2 from source APIC 1), the migration cannot be resumed or reverted.

Example: Migration failure and recovery

For example, if the migration process is interrupted and the cluster is left in a failed state, you must first follow the troubleshooting steps to restore cluster health. Then, you can choose either to resume the migration by providing the target node details and clicking Migrate, or to revert by factory resetting the source APIC nodes and re-entering their details to return to the original controller type.


Basic troubleshooting

Consider a three-node cluster in which two nodes have migrated successfully and a failure is detected during the migration of the third node. Check the status of the failed node. If the controller is not in the Fully fit state, the migration could fail.

Use this procedure to get the cluster to a healthy state:

Procedure

1.

(For migration failures with APIC 1) Check the cluster health from target APIC 2 by navigating to System > Controllers. Select Controller 2 > Cluster as seen by Node screen.

or

(For migration failures with APIC 2 to N) Check the cluster health from source APIC 1 by navigating to System > Controllers. Select Controller 1 > Cluster as seen by Node screen.

2.

If APIC 1 (or any other node of the cluster) is not Fully fit, click the three dots adjacent to the serial number of the controller. Select Maintenance > Decommission. Click Force Decommission because the node is not in a Fully fit state. Connect to the source APIC node N using SSH and factory reset the node using the acidiag touch setup and acidiag reboot commands.

3.

From source APIC 1, navigate to System > Controllers. Click Controllers > Controller 1 > Cluster as seen by Node screen.

or

From target APIC 2, navigate to System > Controllers. Click Controllers > Controller 2 > Cluster as seen by Node screen.

4.

To commission a controller, click the three dots adjacent to the serial number of the controller. Select Maintenance > Commission. Enter the details as required. Refer to the Commissioning a Node procedure (described earlier in this chapter). The only difference here is that the controller ID is pre-populated with the number corresponding to the ID of the controller in the cluster.

After the controller is commissioned, the cluster is indicated as Fully fit .

5.

Check the status of the cluster after commissioning the failed node; a small polling sketch that waits for the Fully fit state follows this procedure. If the cluster is in a healthy state, resume the migration by clicking Migrate on the Cluster As Seen By Node screen. If the migration fails again, contact Cisco TAC for further assistance.
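
Before clicking Migrate again, you can wait for the cluster to settle with a small polling loop like the hedged sketch below. The address, credentials, and 30-second interval are placeholders; the loop simply re-reads the infraWiNode class until every controller reports fully-fit.

  # Minimal sketch: poll the cluster state until every controller is fully fit,
  # then resume the migration from the GUI. Address and credentials are placeholders.
  import time
  import requests

  APIC = "https://172.16.1.1"
  LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "<password>"}}}

  session = requests.Session()
  session.verify = False  # lab-style check; validate certificates in production

  while True:
      # Re-login on each pass so the session token does not expire during a long wait.
      session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN).raise_for_status()
      nodes = session.get(f"{APIC}/api/node/class/infraWiNode.json").json()["imdata"]
      states = {n["infraWiNode"]["attributes"]["health"] for n in nodes}
      if states == {"fully-fit"}:
          print("All controllers report fully-fit; resume the migration from the GUI.")
          break
      print("Waiting; current controller states:", states)
      time.sleep(30)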