Introduction

    This document describes how to recover a virtual machine (VM) in the Cisco Ultra Services Platform (UltraM) after the VM has been unreachable for some time and the Cisco Elastic Services Controller (ESC) has already tried, and failed, to recover the Virtual Packet Core (VPC) VM.

    Problem

    In an UltraM setup, a compute node is removed (or becomes unreachable). ESC attempts to recover the node, but the recovery fails because the node cannot be reached. One way to simulate this scenario is to remove the power cables from the Unified Computing System (UCS) blade. After ESC fails to recover the VM, the instance sits in the ERROR state in OpenStack while the corresponding card stays in the Booting state on the VPC.

    In this example, SF card 5 maps to instance vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c:

    [local]rcdn-ulram-lab# show card table
    Slot         Card Type                               Oper State     SPOF  Attach
    -----------  --------------------------------------  -------------  ----  ------
     1: CFC      Control Function Virtual Card           Standby        -
     2: CFC      Control Function Virtual Card           Active         No
     3: FC       1-Port Service Function Virtual Card    Active         No
     4: FC       1-Port Service Function Virtual Card    Active         No
     5: FC       1-Port Service Function Virtual Card    Booting        -
     6: FC       1-Port Service Function Virtual Card    Active         No
     7: FC       1-Port Service Function Virtual Card    Active         No

    [stack@ultram-ospd ~]$ nova list
    +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+
    | ID                                   | Name                                                          | Status | Task State | Power State | Networks                                                                                                   |
    +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+
    | beab0296-8cfa-4b63-8a05-a800637199f5 | Testcompanion                                                 | ACTIVE | -          | Running     | testcomp-gn=10.10.11.8; mgmt=172.16.181.18, 10.201.206.46; testcomp-sig=10.10.13.5; testcomp-gi=10.10.12.7 |
    | 235f5591-9502-4ba3-a003-b254494d258b | auto-deploy-ISO-590-uas-0                                     | ACTIVE | -          | Running     | mgmt=172.16.181.11, 10.201.206.44                                                                          |
    | 9450cb19-f073-476b-a750-9336b26e3c6a | auto-it-vnf-ISO-590-uas-0                                     | ACTIVE | -          | Running     | mgmt=172.16.181.8, 10.201.206.43                                                                           |
    | d0d91636-951d-49db-a92b-b2a639f5db9d | autovnf1-uas-0                                                | ACTIVE | -          | Running     | orchestr=172.16.180.14; mgmt=172.16.181.13                                                                 |
    | 901f30e2-e96e-4658-9e1e-39a45b5859c7 | autovnf1-uas-1                                                | ACTIVE | -          | Running     | orchestr=172.16.180.5; mgmt=172.16.181.12                                                                  |
    | 9edb3a8d-a69b-4912-86f6-9d0b05d6210d | autovnf1-uas-2                                                | ACTIVE | -          | Running     | orchestr=172.16.180.16; mgmt=172.16.181.5                                                                  |
    | 56ce362c-3494-4106-98e3-ba06e56ee4ed | ultram-vnfm1-ESC-0                                            | ACTIVE | -          | Running     | orchestr=172.16.180.9; mgmt=172.16.181.6, 10.201.206.55                                                    |
    | bb687399-e1f9-44b2-a258-cfa29dcf178e | ultram-vnfm1-ESC-1                                            | ACTIVE | -          | Running     | orchestr=172.16.180.15; mgmt=172.16.181.7                                                                  |
    | bfc4096c-4ff7-4b30-af3f-5bc3810b30e3 | ultram-vnfm1-em_ultram_0_9b5ccf05-c340-44da-9bca-f5af4689ea42 | ACTIVE | -          | Running     | orchestr=172.16.180.7; mgmt=172.16.181.14                                                                  |
    | cf7ddc9e-5e6d-4e38-a606-9dc9d31c559d | ultram-vnfm1-em_ultram_0_c2533edd-8756-44fb-a8bf-98b9c10bfacd | ACTIVE | -          | Running     | orchestr=172.16.180.8; mgmt=172.16.181.15                                                                  |
    | 592b5b3f-0b0b-4bc6-81e7-a8cc9a609594 | ultram-vnfm1-em_ultram_0_ce0c37a0-509e-45d1-9d00-464988e02730 | ACTIVE | -          | Running     | orchestr=172.16.180.6; mgmt=172.16.181.10                                                                  |
    | 143baf4f-024a-47f1-969a-d4d79d89be14 | vnfd1-deployment_c1_0_84c5bc9e-9d80-4628-b88a-f8a0011b5d4b    | ACTIVE | -          | Running     | orchestr=172.16.180.26; ultram-vnfm1-di-internal1=192.168.1.13; mgmt=172.16.181.25                         |
    | b74a0365-3be1-4bee-b1cc-e454d5b0cd11 | vnfd1-deployment_c2_0_66bac767-39fe-4972-b877-7826468a762e    | ACTIVE | -          | Running     | orchestr=172.16.180.10; ultram-vnfm1-di-internal1=192.168.1.5; mgmt=172.16.181.20, 10.201.206.45           |
    | 59a02ec2-bed6-4ad8-81ff-e8a922742f7b | vnfd1-deployment_s3_0_f9f6b7a6-1458-4b22-b40f-33f8af3500b8    | ACTIVE | -          | Running     | ultram-vnfm1-service-network1=10.10.10.4; orchestr=172.16.180.17; ultram-vnfm1-di-internal1=192.168.1.6    |
    | 52e9a2b0-cf2c-478d-baea-f4a5f3b7f327 | vnfd1-deployment_s4_0_8c78cfd9-57c5-4394-992a-c86393187dd0    | ACTIVE | -          | Running     | ultram-vnfm1-service-network1=10.10.10.11; orchestr=172.16.180.20; ultram-vnfm1-di-internal1=192.168.1.3   |
    | bd7c6600-3e8f-4c09-a35c-89921bbf1b35 | vnfd1-deployment_s5_0_f1c48ea1-4a91-4098-86f6-48e172e23c83    | ACTIVE | -          | Running     | ultram-vnfm1-service-network1=10.10.10.12; orchestr=172.16.180.13; ultram-vnfm1-di-internal1=192.168.1.2   |
    | 085baf6a-02bf-4190-ac38-bbb33350b941 | vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c    | ERROR  | -          | NOSTATE     |                                                                                                            |
    | ea03767f-5dd9-43ed-8e9d-603590da2580 | vnfd1-deployment_s7_0_e887d8b1-7c98-4f60-b343-b0be7b387b32    | ACTIVE | -          | Running     | ultram-vnfm1-service-network1=10.10.10.10; orchestr=172.16.180.18; ultram-vnfm1-di-internal1=192.168.1.9   |
    +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+

    After the recovery attempt fails, ESC marks the VM as a failed instance in OpenStack and does not retry the recovery on its own from that point on.
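
    Before any manual action, ESC's own view of the instance can be confirmed from the ESC VM. This is a minimal sketch that assumes the standard esc-cli install path and that your ESC release supports the opdata query; the grep window is illustrative:

    [admin@ultram-vnfm1-esc-0 ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
    # Pull the ESC operational data and filter on the affected VM group;
    # a VM stuck after a failed recovery is expected to show VM_ERROR_STATE
    [admin@ultram-vnfm1-esc-0 esc-cli]$ ./esc_nc_cli get esc_datamodel/opdata | grep -B2 -A2 s6_0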

    These are the ESC logs for the failed VM recovery:

    15:11:04,617 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    15:11:04,617 11-Aug-2017 WARN  Type: VM_RECOVERY_INIT
    15:11:04,617 11-Aug-2017 WARN  Status: SUCCESS
    15:11:04,617 11-Aug-2017 WARN  Status Code: 200
    15:11:04,617 11-Aug-2017 WARN  Status Msg: Recovery event for VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] triggered.
    15:11:04,617 11-Aug-2017 WARN  Tenant: core
    15:11:04,617 11-Aug-2017 WARN  Service ID: NULL
    15:11:04,617 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    15:11:04,617 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    15:11:04,617 11-Aug-2017 WARN  VM group name: s6
    15:11:04,618 11-Aug-2017 WARN  VM Source:
    15:11:04,618 11-Aug-2017 WARN      VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
    15:11:04,618 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    15:11:04,618 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    15:11:04,618 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    15:11:04,618 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    15:16:38,019 11-Aug-2017 WARN 
    15:16:38,020 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    15:16:38,020 11-Aug-2017 WARN  Type: VM_RECOVERY_REBOOT
    15:16:38,020 11-Aug-2017 WARN  Status: FAILURE
    15:16:38,020 11-Aug-2017 WARN  Status Code: 500
    15:16:38,020 11-Aug-2017 WARN  Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] failed to be rebooted.
    15:16:38,020 11-Aug-2017 WARN  Tenant: core
    15:16:38,020 11-Aug-2017 WARN  Service ID: NULL
    15:16:38,020 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    15:16:38,020 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    15:16:38,020 11-Aug-2017 WARN  VM group name: s6
    15:16:38,021 11-Aug-2017 WARN  VM Source:
    15:16:38,021 11-Aug-2017 WARN      VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
    15:16:38,021 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    15:16:38,021 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    15:16:38,021 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    15:16:38,021 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    15:16:48,286 11-Aug-2017 WARN 
    15:16:48,286 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    15:16:48,286 11-Aug-2017 WARN  Type: VM_RECOVERY_UNDEPLOYED
    15:16:48,286 11-Aug-2017 WARN  Status: SUCCESS
    15:16:48,286 11-Aug-2017 WARN  Status Code: 204
    15:16:48,286 11-Aug-2017 WARN  Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been undeployed.
    15:16:48,286 11-Aug-2017 WARN  Tenant: core
    15:16:48,286 11-Aug-2017 WARN  Service ID: NULL
    15:16:48,286 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    15:16:48,286 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    15:16:48,286 11-Aug-2017 WARN  VM group name: s6
    15:16:48,286 11-Aug-2017 WARN  VM Source:
    15:16:48,286 11-Aug-2017 WARN      VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
    15:16:48,286 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    15:16:48,286 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    15:16:48,287 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    15:16:48,287 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    15:18:04,418 11-Aug-2017 WARN 
    15:18:04,418 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    15:18:04,418 11-Aug-2017 WARN  Type: VM_RECOVERY_COMPLETE
    15:18:04,418 11-Aug-2017 WARN  Status: FAILURE
    15:18:04,418 11-Aug-2017 WARN  Status Code: 500
    15:18:04,418 11-Aug-2017 WARN  Status Msg: Error deploying VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] as part of recovery workflow. VIM Driver: VM booted in ERROR state in Openstack: No valid host was found. There are not enough hosts available.
    15:18:04,418 11-Aug-2017 WARN  Tenant: core
    15:18:04,418 11-Aug-2017 WARN  Service ID: NULL
    15:18:04,418 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    15:18:04,418 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    15:18:04,418 11-Aug-2017 WARN  VM group name: s6
    15:18:04,418 11-Aug-2017 WARN  VM Source:
    15:18:04,418 11-Aug-2017 WARN      VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
    15:18:04,418 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    15:18:04,418 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    15:18:04,418 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    15:18:04,418 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====

    Solution

    1. Power the compute node back on and wait for the hypervisor to come up; the first output below shows compute-4 still down, the second shows it back up (if there is no physical access to the power button, see the out-of-band sketch after the output):

    [root@ultram-ospd ~]# su - stack
    [stack@ultram-ospd ~]$ source stackrc
    [stack@ultram-ospd ~]$ nova hypervisor-list
    +----+---------------------------------------+-------+---------+
    | ID | Hypervisor hostname                   | State | Status  |
    +----+---------------------------------------+-------+---------+
    | 3  | ultram-rcdnlab-compute-10.localdomain | up    | enabled |
    | 6  | ultram-rcdnlab-compute-5.localdomain  | up    | enabled |
    | 9  | ultram-rcdnlab-compute-6.localdomain  | up    | enabled |
    | 12 | ultram-rcdnlab-compute-3.localdomain  | up    | enabled |
    | 15 | ultram-rcdnlab-compute-9.localdomain  | up    | enabled |
    | 18 | ultram-rcdnlab-compute-1.localdomain  | up    | enabled |
    | 21 | ultram-rcdnlab-compute-8.localdomain  | up    | enabled |
    | 24 | ultram-rcdnlab-compute-4.localdomain  | down  | enabled |
    | 27 | ultram-rcdnlab-compute-7.localdomain  | up    | enabled |
    | 30 | ultram-rcdnlab-compute-2.localdomain  | up    | enabled |
    | 33 | ultram-rcdnlab-compute-0.localdomain  | up    | enabled |
    +----+---------------------------------------+-------+---------+

    [stack@ultram-ospd ~]$ nova hypervisor-list
    +----+---------------------------------------+-------+---------+
    | ID | Hypervisor hostname                   | State | Status  |
    +----+---------------------------------------+-------+---------+
    | 3  | ultram-rcdnlab-compute-10.localdomain | up    | enabled |
    | 6  | ultram-rcdnlab-compute-5.localdomain  | up    | enabled |
    | 9  | ultram-rcdnlab-compute-6.localdomain  | up    | enabled |
    | 12 | ultram-rcdnlab-compute-3.localdomain  | up    | enabled |
    | 15 | ultram-rcdnlab-compute-9.localdomain  | up    | enabled |
    | 18 | ultram-rcdnlab-compute-1.localdomain  | up    | enabled |
    | 21 | ultram-rcdnlab-compute-8.localdomain  | up    | enabled |
    | 24 | ultram-rcdnlab-compute-4.localdomain  | up    | enabled |
    | 27 | ultram-rcdnlab-compute-7.localdomain  | up    | enabled |
    | 30 | ultram-rcdnlab-compute-2.localdomain  | up    | enabled |
    | 33 | ultram-rcdnlab-compute-0.localdomain  | up    | enabled |
    +----+---------------------------------------+-------+---------+
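
    If the blade cannot be reached physically, it can usually be powered on out-of-band from the OSP-D instead. This is a minimal sketch under the assumption that the undercloud (Ironic) manages the blade and that the baremetal node name identifies compute-4; the <node-uuid> placeholder is illustrative:

    [stack@ultram-ospd ~]$ source stackrc
    # Find the Ironic node that backs the affected hypervisor
    [stack@ultram-ospd ~]$ openstack baremetal node list | grep -i compute-4
    # Power the blade on through IPMI (newer releases)
    [stack@ultram-ospd ~]$ openstack baremetal node power on <node-uuid>
    # Equivalent on older releases:
    [stack@ultram-ospd ~]$ ironic node-set-power-state <node-uuid> on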

    2. Identify the failed instance in the nova list output:

    [root@ultram-ospd ~]# su - stack
    [stack@ultram-ospd ~]$ source corerc

    [stack@ultram-ospd ~]$ nova list | grep ERROR
    | 085baf6a-02bf-4190-ac38-bbb33350b941 | vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c    | ERROR  | -          | NOSTATE     |
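
    Optionally, before you trigger the recovery, inspect the fault that Nova recorded for the instance; it should match the "No valid host was found" error seen in yangesc.log. A sketch (the grep window is illustrative):

    # Show the fault details Nova recorded for the failed instance
    [stack@ultram-ospd ~]$ nova show 085baf6a-02bf-4190-ac38-bbb33350b941 | grep -A2 fault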

    3. Initiate a manual recovery on ESC through the esc_nc_cli, with the VM name from the previous step:

    [admin@ultram-vnfm1-esc-0 ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
    [admin@ultram-vnfm1-esc-0 esc-cli]$ ./esc_nc_cli recovery-vm-action DO vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c
    Recovery VM Action
    /opt/cisco/esc/confd/bin/netconf-console --port=830 --host=127.0.0.1 --user=admin --privKeyFile=/home/admin/.ssh/confd_id_dsa --privKeyType=dsa --rpc=/tmp/esc_nc_cli.hZsdLQ2Mle
    <?xml version="1.0" encoding="UTF-8"?>
    <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
      <ok/>
    </rpc-reply>
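
    The <ok/> reply only confirms that the request was accepted; the recovery itself runs asynchronously. To follow just the recovery milestones rather than the full log shown in the next step, a filtered tail works (the grep pattern is illustrative):

    # Follow only the recovery milestones in yangesc.log
    [admin@ultram-vnfm1-esc-0 esc-cli]$ tail -f /var/log/esc/yangesc.log | grep -E 'RECOVERY VM ACTION|VM_RECOVERY'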

    4. Check yangesc.log and OpenStack Horizon to confirm that the instance is recovered:

    [admin@ultram-vnfm1-esc-0 ~]$ tail -f /var/log/esc/yangesc.log
    16:41:54,445 11-Aug-2017 INFO  ===== RECOVERY VM ACTION REQUEST RECEIVED =====
    16:41:54,445 11-Aug-2017 INFO  Type: DO
    16:41:54,445 11-Aug-2017 INFO  Recovery VM name: vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c
    16:41:58,092 11-Aug-2017 INFO  =====  RECOVERY VM ACTION REQUEST ACCEPTED  =====
    16:41:58,673 11-Aug-2017 WARN  
    16:41:58,673 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    16:41:58,674 11-Aug-2017 WARN  Type: VM_RECOVERY_INIT
    16:41:58,674 11-Aug-2017 WARN  Status: SUCCESS
    16:41:58,674 11-Aug-2017 WARN  Status Code: 200
    16:41:58,674 11-Aug-2017 WARN  Status Msg: Recovery event for VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] triggered.
    16:41:58,674 11-Aug-2017 WARN  Tenant: core
    16:41:58,674 11-Aug-2017 WARN  Service ID: NULL
    16:41:58,674 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    16:41:58,674 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    16:41:58,674 11-Aug-2017 WARN  VM group name: s6
    16:41:58,674 11-Aug-2017 WARN  VM Source:
    16:41:58,674 11-Aug-2017 WARN      VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
    16:41:58,674 11-Aug-2017 WARN      Host ID:
    16:41:58,674 11-Aug-2017 WARN      Host Name:
    16:41:58,674 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:41:58,674 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    16:42:19,794 11-Aug-2017 WARN  
    16:42:19,794 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    16:42:19,794 11-Aug-2017 WARN  Type: VM_RECOVERY_REBOOT
    16:42:19,794 11-Aug-2017 WARN  Status: FAILURE
    16:42:19,794 11-Aug-2017 WARN  Status Code: 500
    16:42:19,794 11-Aug-2017 WARN  Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] failed to be rebooted.
    16:42:19,794 11-Aug-2017 WARN  Tenant: core
    16:42:19,795 11-Aug-2017 WARN  Service ID: NULL
    16:42:19,795 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    16:42:19,795 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    16:42:19,795 11-Aug-2017 WARN  VM group name: s6
    16:42:19,795 11-Aug-2017 WARN  VM Source:
    16:42:19,795 11-Aug-2017 WARN      VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
    16:42:19,795 11-Aug-2017 WARN      Host ID:
    16:42:19,795 11-Aug-2017 WARN      Host Name:
    16:42:19,795 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:42:19,795 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    16:42:32,013 11-Aug-2017 WARN  
    16:42:32,013 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    16:42:32,013 11-Aug-2017 WARN  Type: VM_RECOVERY_UNDEPLOYED
    16:42:32,013 11-Aug-2017 WARN  Status: SUCCESS
    16:42:32,013 11-Aug-2017 WARN  Status Code: 204
    16:42:32,013 11-Aug-2017 WARN  Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been undeployed.
    16:42:32,013 11-Aug-2017 WARN  Tenant: core
    16:42:32,014 11-Aug-2017 WARN  Service ID: NULL
    16:42:32,014 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    16:42:32,014 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    16:42:32,014 11-Aug-2017 WARN  VM group name: s6
    16:42:32,014 11-Aug-2017 WARN  VM Source:
    16:42:32,014 11-Aug-2017 WARN      VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
    16:42:32,014 11-Aug-2017 WARN      Host ID:
    16:42:32,014 11-Aug-2017 WARN      Host Name:
    16:42:32,014 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:42:32,014 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    16:43:13,643 11-Aug-2017 WARN  
    16:43:13,643 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    16:43:13,643 11-Aug-2017 WARN  Type: VM_RECOVERY_DEPLOYED
    16:43:13,643 11-Aug-2017 WARN  Status: SUCCESS
    16:43:13,643 11-Aug-2017 WARN  Status Code: 200
    16:43:13,643 11-Aug-2017 WARN  Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been deployed as part of recovery.
    16:43:13,643 11-Aug-2017 WARN  Tenant: core
    16:43:13,643 11-Aug-2017 WARN  Service ID: NULL
    16:43:13,643 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    16:43:13,643 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    16:43:13,643 11-Aug-2017 WARN  VM group name: s6
    16:43:13,643 11-Aug-2017 WARN  VM Source:
    16:43:13,643 11-Aug-2017 WARN      VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
    16:43:13,643 11-Aug-2017 WARN      Host ID:
    16:43:13,643 11-Aug-2017 WARN      Host Name:
    16:43:13,643 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:43:13,643 11-Aug-2017 WARN  VM Target:
    16:43:13,644 11-Aug-2017 WARN      VM ID: a313e8dc-3b0f-4b41-8648-f9b9419bc826
    16:43:13,644 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    16:43:13,644 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    16:43:13,644 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:43:13,644 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====
    16:43:33,827 11-Aug-2017 WARN  
    16:43:33,827 11-Aug-2017 WARN  ===== SEND NOTIFICATION STARTS =====
    16:43:33,827 11-Aug-2017 WARN  Type: VM_RECOVERY_COMPLETE
    16:43:33,827 11-Aug-2017 WARN  Status: SUCCESS
    16:43:33,827 11-Aug-2017 WARN  Status Code: 200
    16:43:33,827 11-Aug-2017 WARN  Status Msg: Recovery: Successfully recovered VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c].
    16:43:33,827 11-Aug-2017 WARN  Tenant: core
    16:43:33,827 11-Aug-2017 WARN  Service ID: NULL
    16:43:33,828 11-Aug-2017 WARN  Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
    16:43:33,828 11-Aug-2017 WARN  Deployment name: vnfd1-deployment-1.0.0-1
    16:43:33,828 11-Aug-2017 WARN  VM group name: s6
    16:43:33,828 11-Aug-2017 WARN  VM Source:
    16:43:33,828 11-Aug-2017 WARN      VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
    16:43:33,828 11-Aug-2017 WARN      Host ID:
    16:43:33,828 11-Aug-2017 WARN      Host Name:
    16:43:33,828 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:43:33,828 11-Aug-2017 WARN  VM Target:
    16:43:33,828 11-Aug-2017 WARN      VM ID: a313e8dc-3b0f-4b41-8648-f9b9419bc826
    16:43:33,828 11-Aug-2017 WARN      Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
    16:43:33,828 11-Aug-2017 WARN      Host Name: ultram-rcdnlab-compute-4.localdomain
    16:43:33,828 11-Aug-2017 WARN      [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
    16:43:33,828 11-Aug-2017 WARN  =====  SEND NOTIFICATION ENDS  =====



    [local]rcdn-ulram-lab# show card table
    Slot         Card Type                               Oper State     SPOF  Attach
    -----------  --------------------------------------  -------------  ----  ------
     1: CFC      Control Function Virtual Card           Standby        -           
     2: CFC      Control Function Virtual Card           Active         No          
     3: FC       1-Port Service Function Virtual Card    Active         No          
     4: FC       1-Port Service Function Virtual Card    Active         No          
     5: FC       1-Port Service Function Virtual Card    Standby        -           
     6: FC       1-Port Service Function Virtual Card    Active         No          
     7: FC       1-Port Service Function Virtual Card    Active         No          
    [local]rcdn-ulram-lab#
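
    Card 5 is back online, here in the Standby state. As a final cross-check from the OSP-D, confirm in OpenStack that the recovered instance is ACTIVE; it keeps its name but has a new instance ID (a313e8dc-..., shown as VM Target in the log above). A sketch with the same core tenant credentials used earlier:

    [stack@ultram-ospd ~]$ source corerc
    # The recovered VM keeps its name; expect Status ACTIVE and a new ID
    [stack@ultram-ospd ~]$ nova list | grep vnfd1-deployment_s6_0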