Cisco UCS B-Series Blade Server Software

Cisco usNIC Deployment Guide for Cisco UCS B-Series Blade Servers

Overview of Cisco usNIC

The Cisco user-space NIC (Cisco usNIC) feature improves the performance of software applications that run on the Cisco UCS servers in your data center by bypassing the kernel when sending and receiving networking packets. The applications interact directly with a Cisco UCS VIC second generation or later adapter, which improves the networking performance of your high-performance computing cluster. To benefit from Cisco usNIC, your applications must use the Message Passing Interface (MPI) instead of sockets or other communication APIs.

Cisco usNIC offers the following benefits for your MPI applications:

  • Provides a low-latency and high-throughput communication transport.

  • Employs the standard and application-independent Ethernet protocol.

  • Takes advantage of low-latency forwarding, Unified Fabric, and integrated management support in the following Cisco data center platforms:
    • Cisco UCS server

    • Cisco UCS VIC second-generation or later adapter

    • 10-Gbps or 40-Gbps Ethernet networks

Standard Ethernet applications use user-space socket libraries, which invoke the networking stack in the Linux kernel. The networking stack then uses the Cisco eNIC driver to communicate with the Cisco VIC hardware. The following figure shows the contrast between a regular software application and an MPI application that uses usNIC.

Figure 1. Kernel-Based Network Communication versus Cisco usNIC-Based Communication

Cisco usNIC Prerequisites

To benefit from Cisco usNIC, your configuration must meet the following prerequisites:

  • The Cisco UCS Drivers ISO. For more information, see Downloading Cisco UCS VIC Drivers.

  • A supported Linux operating system distribution release. For more information, see the appropriate Hardware and Software Interoperability guide.

  • A supported MPI implementation, such as the Cisco Open MPI distribution (included on the Cisco UCS Drivers ISO) or version 4 or 5 of the Intel® MPI Library. If the Intel® MPI Library is used, the network must be configured with flow control enabled.

Configuring Cisco usNIC


Note


The Cisco usNIC packages do not support upgrading or downgrading an operating system in place. To update the operating system, first uninstall the usNIC packages, update the operating system, and then reinstall the usNIC drivers.

Alternatively, you can update the operating system first, and then uninstall and reinstall the usNIC drivers.


Before You Begin
Make sure that the following software and hardware components are installed on the Cisco UCS server:
  • A supported Linux operating system distribution release. For more information, see the appropriate Hardware and Software Interoperability guide.

  • GCC, G++, and Gfortran

  • DAT user library (if using the Intel® MPI Library)

  • libnl user library (either version 1 or version 3)

  • Cisco UCS VIC second generation or later adapter

Important:

For information on supported Linux operating system distributions, see the content of the usNIC folder that is included in the UCS Drivers ISO bundle. See Cisco UCS Virtual Interface Card Drivers for Linux Installation Guide.

Procedure
    Step 1   Configure the Cisco usNIC properties and BIOS settings using Cisco UCS Manager GUI or Cisco UCS Manager CLI.
    Step 2   Ensure that the kernel option CONFIG_INTEL_IOMMU is selected in the kernel. Enable the Intel IOMMU driver in the Linux kernel by manually adding intel_iommu=on to the kernel boot line in the grub.conf file (/boot/grub/grub.conf):
     KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet intel_iommu=on
    Step 3   Reboot your Cisco UCS server.

    You must reboot your server for the changes to take effect after you configure Cisco usNIC.

    Step 4   Verify that the running kernel has booted with the intel_iommu=on option.
    $ cat /proc/cmdline | grep iommu
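    A quick pass/fail variant of the same check can be scripted (a sketch; assumes a Linux host where /proc/cmdline is readable):

```shell
# Print a clear pass/fail instead of eyeballing the grep output.
if grep -qw 'intel_iommu=on' /proc/cmdline; then
    echo "intel_iommu: enabled"
else
    echo "intel_iommu: NOT enabled - check your grub configuration"
fi
```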
    Step 5   Install the Cisco usNIC Linux drivers.

    For more information about installing the drivers, see Installing Linux Software Packages for Cisco usNIC.


    What to Do Next

    After you complete configuring Cisco usNIC and installing the Linux drivers, verify that Cisco usNIC is functioning properly. For more information about how to verify the installation, see Verifying the Cisco usNIC Installation.

    Creating a Cisco usNIC Connection Policy using the Cisco UCS Manager GUI


    Procedure
      Step 1   In the Navigation pane, click the LAN tab.
      Step 2   On the LAN tab, expand LAN > Policies.
      Step 3   Expand the root node.
      Step 4   Right-click usNIC Connection Policies and choose Create usNIC Connection Policy.
      Step 5   In the Create usNIC Connection Policy dialog box, complete the following fields:

      Name field

      The name of the policy.

      This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special characters other than - (hyphen), _ (underscore), : (colon), and . (period), and you cannot change this name after the object has been saved.

      Description field

      A description of the policy. We recommend that you include information about where and when the policy should be used.

      Number of usNICs field

      The number of usNICs that you want to create.

      Each MPI process running on the server requires a dedicated usNIC. You can create up to 116 usNICs on one adapter to sustain 116 MPI processes running simultaneously. We recommend that you create at least as many usNICs, per usNIC-enabled vNIC, as there are physical cores on your server. For example, if you have 8 physical cores on your server, create 8 usNICs.

      Adapter Policy drop-down list

      The adapter policy that you want to specify for the usNIC. Cisco recommends that you choose the usNIC adapter policy, which is created by default.
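      The sizing guideline above (at least one usNIC per physical core) can be checked directly on the host; a minimal sketch, assuming the lscpu utility from util-linux is available:

```shell
# Count physical cores (unique Core,Socket pairs; hyperthread siblings collapse to one).
physical_cores=$(lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l)
echo "suggested usNIC count per usNIC-enabled vNIC: ${physical_cores}"
```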

      Configuring a usNIC Ethernet Adapter Policy

      Procedure
        Step 1   In the Navigation pane, click the Servers tab.
        Step 2   On the Servers tab, expand Servers > Policies > root > Adapter Policies.
        Step 3   Click Eth Adapter Policy usNIC.
        Step 4   In the Work pane, click the General tab.
        Step 5   (Optional) Modify the details in the Resources and Options sections as needed. For more information about configuring Ethernet adapter policies, see the Cisco UCS Manager Configuration Guide.

        Modifying a usNIC using the Cisco UCS Manager GUI

        Procedure
          Step 1   In the Navigation pane, click the Servers tab.
          Step 2   On the Servers tab, expand Servers > Service Profiles > root.
          Step 3   Expand the service profile node where you want to configure the usNIC and click vNICs.
          Step 4   In the Work pane, click the Network tab.
          Step 5   In the vNICs area, choose a vNIC and click Modify.
          Step 6   In the Adapter Performance Profile area of the Modify vNIC dialog box, choose Linux from the Adapter Policy drop-down list.
          Step 7   In the Connection Policies area, click the usNIC radio button.
          Step 8   Choose the usNIC connection policy that you created from the usNIC Connection Policy drop-down list.
          Step 9   Click OK.
          Step 10   Click Save Changes.
          Step 11   In the Navigation pane, click the service profile that you just modified.
          Step 12   In the Work pane, click the Policies tab.
          Step 13   Expand the BIOS Policy bar and choose usNIC in the BIOS Policy drop-down list.
          Step 14   Click Save Changes.

          Creating a usNIC using the Cisco UCS Manager CLI

          Before You Begin

          You must log in with admin privileges to perform this task.

          Procedure
            Step 1   UCS-A# scope service-profile server chassis-id/blade-id or rack-server-id

            Enters the service profile for the specified chassis/blade server or UCS managed rack server ID.

            Step 2   UCS-A /org/service-profile # show vnic

            Displays the vNICs that are available on the server. A usNIC vNIC is available by default when you upgrade to Cisco UCS Manager, Release 2.2.

            Step 3   UCS-A /org/service-profile # scope vnic vnic-name

            Enters the vNIC mode for the specified vNIC.

            Step 4   UCS-A /org/service-profile/vnic # set adapter-policy Linux

            Specifies Linux as the adapter policy for the usNIC.

            Step 5   UCS-A /org/service-profile/vnic # enter usnic-conn-policy-ref usnic-connection-policy-name

            Creates the usNIC connection policy reference for the vNIC with the specified name. The connection policy name can be a maximum of 16 characters.

            Step 6   UCS-A /org/service-profile/vnic/usnic-conn-policy-ref* # commit-buffer

            Commits the transaction to the system configuration.

            Step 7   UCS-A /org/service-profile/vnic/usnic-conn-policy-ref # top

            Returns to the top-level mode.

            Step 8   UCS-A # scope org

            Enters the root organization mode.

            Step 9   UCS-A /org # create usnic-conn-policy usnic-connection-policy-name

            Creates a usNIC connection policy with the specified name.

            Step 10  UCS-A /org/usnic-conn-policy* # set usnic-count number-of-usnics

            Specifies the number of Cisco usNICs to create. It is recommended that you enter 58 for this value.

            Step 11  UCS-A /org/usnic-conn-policy* # set adaptor-profile usNIC

            Specifies the usNIC Ethernet adaptor profile for the usNIC connection policy. This adaptor profile is created by default when you upgrade from a previous version of Cisco UCS Manager to Release 2.2.

            Step 12  UCS-A /org/usnic-conn-policy* # commit-buffer

            Commits the transaction to the system configuration.

            This example shows how to create a Cisco usNIC and specify its properties:

            Server # scope org
            Server /org # create usnic-conn-policy usnic1
            Server /org/usnic-conn-policy* # set usnic-count 58
            Server /org/usnic-conn-policy* # set adaptor-profile usNIC
            Server /org/usnic-conn-policy* # commit-buffer
            Server /org/usnic-conn-policy # top
            
            
            Server # scope service-profile server 1/1
            Server /org/service-profile # show vnic
            
            vNIC:
            Name Fabric ID Dynamic MAC Addr Virtualization Preference
            ------------------ --------- ------------------ -------------------------
            eth0 A 00:25:B5:00:00:A1 NONE
            eth1 B 00:25:B5:00:00:A2 NONE
            eth2 A 00:25:B5:00:00:A3 NONE
            Server /org/service-profile # scope vnic eth0
            Server /org/service-profile/vnic # set adapter-policy Linux
            Server /org/service-profile/vnic # enter usnic-conn-policy-ref usnic1
            Server /org/service-profile/vnic/usnic-conn-policy-ref* # commit-buffer
            Server /org/service-profile/vnic/usnic-conn-policy-ref # exit 
            
            

            Modifying a usNIC using the Cisco UCS Manager CLI

            Before You Begin

            You must log in with admin privileges to perform this task.

            Procedure
              Step 1   UCS-A# scope service-profile server chassis-id/blade-id or rack-server-id

              Enters the service profile for the specified chassis/blade server or UCS managed rack server ID.

              Step 2   UCS-A /org/service-profile # show vnic

              Displays the vNICs that are available on the server. A usNIC vNIC is available by default when you upgrade to Cisco UCS Manager, Release 2.2.

              Step 3   UCS-A /org/service-profile # scope vnic vnic-name

              Enters the vNIC mode for the specified vNIC.

              Step 4   UCS-A /org/service-profile/vnic # enter usnic-conn-policy-ref usnic-connection-policy-name

              Specifies the usNIC connection policy reference for the vNIC that you want to use.

              Step 5   UCS-A /org/service-profile/vnic/usnic-conn-policy-ref* # commit-buffer

              Commits the transaction to the system configuration.

              This example shows how to modify Cisco usNIC properties:

              Server # scope service-profile server 1/1
              Server /org/service-profile # show vnic
              
              vNIC:
              Name Fabric ID Dynamic MAC Addr Virtualization Preference
              ------------------ --------- ------------------ -------------------------
              eth0 A 00:25:B5:00:00:A1 SRIOV USNIC
              eth1 B 00:25:B5:00:00:A2 NONE
              eth2 A 00:25:B5:00:00:A3 NONE
              Server /org/service-profile # scope vnic eth0
              Server /org/service-profile/vnic # enter usnic-conn-policy-ref usnic2
              Server /org/service-profile/vnic/usnic-conn-policy-ref* # commit-buffer
              Server /org/service-profile/vnic/usnic-conn-policy-ref # exit
              
              

              Deleting a usNIC using the Cisco UCS Manager CLI

              Before You Begin

              You must log in with admin privileges to perform this task.

              Procedure
                Step 1   UCS-A# scope service-profile server chassis-id/blade-id or rack-server-id

                Enters the service profile for the specified chassis/blade server or UCS managed rack server ID.

                Step 2   UCS-A /org/service-profile # show vnic

                Displays the vNICs that are available on the server. A usNIC vNIC is available by default when you upgrade to Cisco UCS Manager, Release 2.2.

                Step 3   UCS-A /org/service-profile # scope vnic vnic-name

                Enters the vNIC mode for the specified vNIC.

                Step 4   UCS-A /org/service-profile/vnic # show usnic-conn-policy-ref

                Displays the usNIC connection policy references configured on the vNIC.

                Step 5   UCS-A /org/service-profile/vnic # delete usnic-conn-policy-ref usnic-connection-policy-name

                Deletes the specified usNIC connection policy reference.

                Step 6   UCS-A /org/service-profile/vnic* # commit-buffer

                Commits the transaction to the system configuration.

                This example shows how to delete a Cisco usNIC connection policy reference:

                Server # scope service-profile server 1/1
                Server /org/service-profile # show vnic
                
                vNIC:
                Name Fabric ID Dynamic MAC Addr Virtualization Preference
                ------------------ --------- ------------------ -------------------------
                eth0 A 00:25:B5:00:00:A1 SRIOV USNIC
                eth1 B 00:25:B5:00:00:A2 NONE
                eth2 A 00:25:B5:00:00:A3 NONE
                Server /org/service-profile # scope vnic eth0
                Server /org/service-profile/vnic # show usnic-conn-policy-ref
                
                usNIC Connection Policy Reference:
                usNIC Connection Policy Name
                ----------------------------
                usnic2
                Server /org/service-profile/vnic # delete usnic-conn-policy-ref usnic2
                Server /org/service-profile/vnic* # commit-buffer
                Server /org/service-profile/vnic # exit 
                
                

                Installing Linux Software Packages for Cisco usNIC

                The following list describes the contents of the usNIC folder, which is provided for each supported Linux operating system distribution in the UCS Drivers ISO bundle. Documentation of known issues and installation instructions are also included in the README file in the usNIC folder.

                • kmod-usnic_verbs-{version}.x86_64.rpm — Linux kernel verbs driver for the usNIC feature of the Cisco VIC SR-IOV Ethernet NIC.

                • libdaplusnic-{version}.x86_64.rpm — User-space DAPL plugin library for usNIC.

                • openmpi-cisco-{version}.x86_64.rpm — Cisco usNIC Open MPI: Open MPI with the Cisco usNIC BTL MPI transport.

                • usnic_tools-{version}.x86_64.rpm — Utility programs for usNIC.

                Before You Begin

                Make sure that you have configured the Cisco usNIC properties in Cisco UCS Manager. For more information about how to configure the properties, see Configuring Cisco usNIC.

                You must also make sure that the host OS distribution on which you want to install Cisco usNIC has a supported version of the Cisco enic driver installed. The Cisco enic driver is the Linux kernel networking driver for the Cisco VIC SR-IOV Ethernet NIC.
                Procedure
                  Step 1   Upgrade to the latest version of the enic driver included in the Cisco UCS Drivers ISO for your Linux distribution.
                  Step 2   Install the Cisco usNIC software packages from the Cisco UCS Drivers ISO for your Linux distribution.
                  Step 3   # chkconfig rdma on

                  Enables Linux RDMA services. Once enabled, RDMA services will be started automatically after a system reboot.

                  Note   

                  You may need to perform this step on some Linux operating system distributions, such as RHEL 6.4.

                  Step 4   Reboot your server for the installation changes to take effect automatically.
                  Important:

                  If you do not want to reboot your server, you can manually load the kernel modules to ensure the system loads the correct version of the driver and enforces the new memory lock configurations. For more information about how to load the modules, see Manually Loading the Kernel Modules for Cisco usNIC.


                  Source code for Linux Cisco usNIC software packages

                  The source code for the Cisco usNIC software packages is provided on the Cisco UCS Drivers ISO. We recommend that you do not mix source-code and binary-package installations.

                  Manually Loading the Kernel Modules for Cisco usNIC

                  If you do not want to reboot your server, you can manually load the Cisco usNIC kernel modules by using the following steps.

                  Before You Begin

                  Make sure that you remove all existing versions of the driver before you load the latest version. Doing so ensures that the system loads the correct driver version.

                  Procedure
                    Step 1   # rmmod enic

                    Unloads the existing enic driver module.

                    Note: Make sure that you are not logged in to the OS over the network (for example, via SSH); otherwise, your network connection might be permanently disconnected. Alternatively, you can log in to the server through the Cisco UCS Manager KVM to perform this step.

                    Step 2   # modprobe enic

                    Loads the enic driver module.

                    Step 3   # modprobe usnic_verbs

                    Loads the usnic_verbs driver module.

                    Upgrading the Linux Software Packages for Cisco usNIC

                    Procedure
                      Step 1   Uninstall the following usNIC software packages:
                      • usnic_tools

                      • openmpi-cisco

                      • libdaplusnic

                      • kmod-usnic_verbs

                      Step 2   Upgrade to the latest version of the enic driver included in the Cisco UCS Drivers ISO for your Linux distribution.
                      Step 3   Install the usNIC software packages from the Cisco UCS Drivers ISO for your Linux distribution.
                      Step 4   # chkconfig rdma on

                      Enables Linux RDMA services. Once enabled, the services start automatically after a system reboot.

                      Step 5   Reboot your server for the installation changes to take effect automatically.
                      Important:

                      If you do not want to reboot your server, you can manually load the kernel modules to ensure the system loads the correct version of the driver and enforce the new memory lock configurations. For more information about how to load the modules, see Manually Loading the Kernel Modules for Cisco usNIC.


                      Uninstalling Linux Software Packages for Cisco usNIC

                      Procedure
                        Step 1   Uninstall the following usNIC software packages:
                        • usnic_tools

                        • openmpi-cisco

                        • libdaplusnic

                        • kmod-usnic_verbs

                        Step 2   Reboot your Cisco UCS server.

                        Adding MPI to User Environments

                        Before MPI applications can be compiled and launched, an MPI implementation must be added to each user's environment. We recommend that you add only one MPI implementation to a user's environment at a time.

                        Environment for the Cisco Open MPI

                        For Cisco Open MPI, the openmpi-cisco software package installs two scripts that help set the required environment variables. One script is for Bourne-shell users; the other is for C-shell users:

                        • /opt/cisco/openmpi-vars.sh

                        • /opt/cisco/openmpi-vars.csh

                        The appropriate script should be sourced as part of each user's shell startup/login sequence (even for non-interactive shells).
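                        For example, a Bourne-shell user could add the following to a startup file such as ~/.bashrc (a sketch; the path assumes the default installation location):

```shell
# Source the Cisco Open MPI environment, if the package is installed.
if [ -f /opt/cisco/openmpi-vars.sh ]; then
    . /opt/cisco/openmpi-vars.sh
fi
```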

                        Environment for the Intel® MPI Library

                        In addition to following the instructions in the Intel® MPI Library documentation, additional environment variables must be set in each user's environment to enable Cisco usNIC functionality. The libdaplusnic software package installs two scripts that help set the required environment variables. One script is for Bourne-shell users; the other is for C-shell users:

                        • /opt/cisco/intelmpi-usnic-vars.sh

                        • /opt/cisco/intelmpi-usnic-vars.csh

                        The appropriate script should be sourced as part of each user's shell startup/login sequence.

                        Using the Intel® MPI Library with usNIC requires the network to be configured with flow control enabled. This can be either IEEE 802.3x link-level flow control or IEEE 802.1Qbb Priority-based Flow Control (PFC); this feature is sometimes also called "no drop." Refer to the configuration guide for the switches in your network for information about enabling flow control. If flow control is not enabled in the network, applications using the Intel® MPI Library may still work correctly, but possibly with severely degraded performance.

                        In deployments of the Intel® MPI Library, flow control must be enabled on all ports carrying MPI traffic, and no-drop should be configured for the CoS value used by Cisco usNIC traffic (CoS 0 by default). Refer to the "Configuring Flow Control" and "Configuring QoS" sections of your switch configuration guide.

                        Verifying the Cisco usNIC Installation

                        After you install the required Linux drivers for Cisco usNIC, perform the following procedure at the Linux prompt to make sure that the installation completed successfully.


                        Note


                        The examples shown below were verified on the RHEL 6.5 Linux operating system distribution.


                        Procedure
                          Step 1   Verify that the usnic_verbs kernel module was loaded during the OS driver installation.
                          $ lsmod | grep usnic_verbs

                          The following details are displayed when you enter the lsmod | grep usnic_verbs command. The kernel modules listed on your console may differ based on the modules that you have currently loaded in your OS.

                          usnic_verbs            73762  2 
                          ib_core                74355  11 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad,usnic_verbs
                          enic                   73723  1 usnic_verbs
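                          If you script this check, an exact match on the module-name field is more robust than a substring grep; the sketch below runs such a check against a captured sample of the output above (replace the sample with live lsmod output on a real host):

```shell
# Check a captured `lsmod` sample for usnic_verbs (exact first-field match).
lsmod_sample='usnic_verbs            73762  2
ib_core                74355  11 ib_ipoib,rdma_ucm,usnic_verbs
enic                   73723  1 usnic_verbs'

printf '%s\n' "$lsmod_sample" \
  | awk '$1 == "usnic_verbs" { found = 1 } END { exit !found }' \
  && echo "usnic_verbs loaded"
```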
                          
                          
                          Step 2   View the configuration of Cisco usNIC-enabled NICs.
                          $ /opt/cisco/usnic/bin/usd_devinfo

                          The following is a brief example of the results that are displayed when you run the usd_devinfo command. The results may differ based on your installation. When the results are displayed, make sure that the link state of each listed port is active (shown as Link State: UP in this example).

                          The following example shows two ports (usnic_0 and usnic_1) that are configured on a Cisco UCS VIC adapter. If you configured only one Cisco usNIC-enabled vNIC, you will see a listing for usnic_0 only.
                          usnic_0:
                                  Interface:               eth1
                                  MAC Address:             00:25:b5:c4:b1:10
                                  IP Address:              50.42.110.11
                                  Netmask:                 255.255.255.0
                                  Prefix len:              24
                                  MTU:                     9000
                                  Link State:              UP
                                  Bandwidth:               10 Gb/s
                                  Device ID:               UCSB-MLOM-40G-03 [VIC 1340] [0x012c]
                                  Firmware:                4.0(4a)
                                  VFs:                     58
                                  CQ per VF:               6
                                  QP per VF:               6
                                  Max CQ:                  348
                                  Max CQ Entries:          65535
                                  Max QP:                  348
                                  Max Send Credits:        4095
                                  Max Recv Credits:        4095
                                  Capabilities:
                                    CQ sharing: yes
                                    PIO Sends:  yes
                          
                          usnic_1:
                                  Interface:               eth2
                                  MAC Address:             00:25:b5:c4:b1:20
                                  IP Address:              50.42.120.11
                                  Netmask:                 255.255.255.0
                                  Prefix len:              24
                                  MTU:                     9000
                                  Link State:              UP
                                  Bandwidth:               10 Gb/s
                                  Device ID:               UCSB-MLOM-40G-03 [VIC 1340] [0x012c]
                                  Firmware:                4.0(4a)
                                  VFs:                     58
                                  CQ per VF:               6
                                  QP per VF:               6
                                  Max CQ:                  348
                                  Max CQ Entries:          65535
                                  Max QP:                  348
                                  Max Send Credits:        4095
                                  Max Recv Credits:        4095
                                  Capabilities:
                                    CQ sharing: yes
                                    PIO Sends:  yes
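                          When this verification is scripted, the per-port link state can be counted instead of read by eye; a sketch against a trimmed sample of the usd_devinfo output (field names as shown above):

```shell
# Count ports reporting Link State UP in a captured `usd_devinfo` sample.
devinfo_sample='usnic_0:
        Interface:               eth1
        Link State:              UP
usnic_1:
        Interface:               eth2
        Link State:              UP'

up_ports=$(printf '%s\n' "$devinfo_sample" | grep -c 'Link State:[[:space:]]*UP')
echo "ports with link up: ${up_ports}"
```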
                          
                          
                          Step 3   Run the usnic_check script to view the installed RPMs and their versions.
                          $ /opt/cisco/usnic/bin/usnic_check
                          

                          If any errors occurred during the OS driver installation, warnings are generated.

                          The following brief example shows the warnings that are generated when the usnic_verbs module fails to load:

                          $ /opt/cisco/usnic/bin/usnic_check 
                          enic RPM version 2.1.1.93-rhel6u5.el6 installed
                          usnic_verbs RPM version 1.0.4.318.rhel6u5-1 installed
                          WARNING: usnic_verbs module not loaded
                          libdaplusnic RPM version 2.0.39cisco1.0.0.317-1.el6 installed
                          Using /opt/cisco/openmpi/bin/ompi_info to check Open MPI info...
                          Open MPI version 1.8.4cisco1.0.0.320.rhel6u5 installed
                          WARNING: No usnic devices found
                          WARNING: No usnic devices found
                          3 warnings
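                          The final line of the usnic_check output summarizes the warning count, which can be extracted when scripting the verification; a sketch against a captured sample like the one above:

```shell
# Pull the warning count from the last line of a captured `usnic_check` sample.
check_sample='enic RPM version 2.1.1.93-rhel6u5.el6 installed
WARNING: usnic_verbs module not loaded
WARNING: No usnic devices found
3 warnings'

warnings=$(printf '%s\n' "$check_sample" | awk '/warnings$/ { print $1 }')
echo "warning count: ${warnings}"
```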
                          
                          Step 4   Verify that the Cisco usNIC network packets are being transmitted correctly between the client and server hosts.
                          1. Determine the name of the Ethernet interface associated with the Cisco usNIC on the server host.
                            [server]$ /opt/cisco/usnic/bin/usnic_status 
                            usnic_0: 0000:07:0.0, eth1, 00:25:b5:c4:b1:10, 58 VFs
                             Per VF: 6 WQ, 6 RQ, 6 CQ, 6 INT
                            
                            In use:
                            0 VFs, 0 QPs, 0 CQs
                            
                            
                            usnic_1: 0000:0c:0.0, eth2, 00:25:b5:c4:b1:20, 58 VFs
                             Per VF: 6 WQ, 6 RQ, 6 CQ, 6 INT
                            
                            In use:
                            0 VFs, 0 QPs, 0 CQs
                          2. Determine the IP address for the Ethernet interface.
                            [server]$ ip addr show dev eth1 | grep "inet[^6]"
                                inet 50.42.110.11/24 brd 50.42.110.255 scope global eth1
                          3. Run the usd_pingpong program on the server host.
                            [server]$ /opt/cisco/usnic/bin/usd_pingpong

                            For more information about the command line options used with the usd_pingpong program, see the ibv_ud_pingpong(1) man page.

                          4. Execute the usd_pingpong program on the client host by using the IP address that corresponds to the Cisco usNIC on the server host.
                            [client]# /opt/cisco/usnic/bin/usd_pingpong
                          The following example shows the results that are displayed when you run the usd_pingpong program.
                          Server-side:
                          [server]$ /opt/cisco/usnic/bin/usd_pingpong -d usnic_0
                          open usnic_0 OK, IP=50.43.10.1
                          QP create OK, addr -h 50.43.10.1 -p 3333
                          Waiting for setup...
                          
                          Client-side:
                          [client]# /opt/cisco/usnic/bin/usd_pingpong -h 50.43.10.1 -d usnic_0
                          open usnic_0 OK, IP=50.43.10.2
                          QP create OK, addr -h 50.43.10.2 -p 3333
                          sending params...
                          payload_size=4, pkt_size=46
                          posted 63 RX buffers, size=64 (4)
                          100000 pkts, 4.953 us / HRTT
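                          The interface lookup in steps 1 and 2 can be scripted. The sketch below is not part of the Cisco tools; it parses usnic_status output for the Ethernet interface names, using a heredoc that reuses the sample output from step 1 so the parsing logic is visible.

```shell
# Sample usnic_status output (from step 1) standing in for the live tool.
usnic_out=$(cat <<'EOF'
usnic_0: 0000:07:0.0, eth1, 00:25:b5:c4:b1:10, 58 VFs
usnic_1: 0000:0c:0.0, eth2, 00:25:b5:c4:b1:20, 58 VFs
EOF
)

# The interface name is the second comma-separated field on each
# "usnic_N:" line.
echo "$usnic_out" | awk -F', ' '/^usnic_/ {print $2}'

# On a live host, pipe from the tool itself instead:
#   /opt/cisco/usnic/bin/usnic_status | awk -F', ' '/^usnic_/ {print $2}'
```

Each printed name (eth1, eth2) can then be fed to "ip addr show dev" as in step 2.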
                          
                          Step 5   Download, compile, and execute the ring_c test program to validate that the MPI traffic is correctly transmitted between the client and server hosts.

                          You can obtain the ring_c test program from this link: https://raw.githubusercontent.com/open-mpi/ompi-release/v1.8/examples/ring_c.c .

                          The following example shows how to use the wget utility to obtain, compile, and execute the ring_c test program. Alternatively, you can use other methods of obtaining and running the test program.
                          Note   

                          Run the following commands with a single MPI implementation set up in your environment.

                          $ wget --no-check-certificate https://raw.githubusercontent.com/open-mpi/ompi-release/v1.8/examples/ring_c.c
                          --2015-04-23 10:11:42--  https://raw.githubusercontent.com/open-mpi/ompi-release/v1.8/examples/ring_c.c
                          Resolving raw.githubusercontent.com... 199.27.74.133
                          Connecting to raw.githubusercontent.com|199.27.74.133|:443... connected.
                          WARNING: certificate common name “www.github.com” doesn’t match requested host name “raw.githubusercontent.com”.
                          HTTP request sent, awaiting response... 200 OK
                          Length: 2418 (2.4K) [text/plain]
                          Saving to: “ring_c.c”
                          
                          100%[====================================================================>] 2,418       --.-K/s   in 0s      
                          
                          2015-04-23 10:11:42 (129 MB/s) - “ring_c.c” saved [2418/2418]
                          
                          $ mpicc ring_c.c -o ring_c
                          [no output]
                          
                          $ mpiexec --host host1,host2 -n 4 ./ring_c 
                          Process 0 sending 10 to 1, tag 201 (4 processes in ring) 
                          Process 0 sent to 1 
                          Process 0 decremented value: 9 
                          Process 0 decremented value: 8 
                          Process 0 decremented value: 7 
                          Process 0 decremented value: 6 
                          Process 0 decremented value: 5 
                          Process 0 decremented value: 4 
                          Process 0 decremented value: 3 
                          Process 0 decremented value: 2 
                          Process 0 decremented value: 1 
                          Process 0 decremented value: 0 
                          Process 0 exiting 
                          Process 2 exiting 
                          Process 1 exiting 
                          Process 3 exiting ... 
                          
                          Note   

                          If desired, set up a different MPI implementation in your environment and re-run the mpicc and mpiexec commands to verify that MPI implementation with Cisco usNIC functionality.


                          If the usd_pingpong program and the ring_c program executed successfully, you should now be able to run MPI applications over Cisco usNIC.

                          Troubleshooting Information

                          Use the troubleshooting information below to help fix Cisco usNIC installation issues. Each entry describes a scenario, the errors you might see, their possible causes, and the solution.

                          Scenario: Verifying the OS driver installation using usnic_verbs.

                          Errors:

                          1. The command does not display any output.

                          2. The command output does not list the usnic_verbs module among the kernel modules currently loaded in the OS. Correctly loaded modules look like the example below:

                            [server]$ lsmod|grep usnic_verbs
                            usnic_verbs            75510  0 
                            ib_core                73747  13 usnic_verbs
                            enic                   78638  1 usnic_verbs
                            
                          Possible causes:

                          1. The kernel module (kmod-usnic_verbs.rpm) has not been installed.

                          2. The usnic_verbs.ko driver has not been loaded.

                          Solution: Install or reinstall the kernel module or the usnic_verbs.ko driver.

                          Scenario: Viewing the list of installed RPMs using usnic_check.

                          Errors:

                          1. Warnings such as "No usnic devices found."

                          2. Version mismatch errors such as "usnic_verbs_xxxx does not match installed version."

                          Possible cause: A previously installed version can cause these errors.

                          Solution:

                          1. List all the installed versions using the following command: rpm -qa | grep usnic_verbs

                          2. Uninstall all versions using the following command: rpm -e <package-name>

                          3. Make sure that the module has been removed.

                          4. Reinstall all the RPMs.
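                          The uninstall loop in steps 1 and 2 can be sketched as follows. The package name below is a sample standing in for the live output of rpm -qa, and the actual rpm -e call is shown as a comment so the sketch is safe to run anywhere.

```shell
# Sample standing in for: pkgs=$(rpm -qa | grep usnic_verbs)
pkgs="kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64"

# Remove each listed package (rpm -e takes the installed package name).
for pkg in $pkgs; do
    echo "removing: $pkg"
    # On a live host: rpm -e "$pkg"
done
```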

                          Scenario: Verifying that the Cisco usNIC packets are being transmitted correctly between the client and server using usd_pingpong.

                          Errors:

                          1. "No such device or address" error. See the example below:

                            [server]# /opt/cisco/usnic/bin/usd_pingpong 
                            usd_open: No such device or address
                            
                          2. The program hangs at "Waiting for setup...". See the example below:
                            [server]$ /opt/cisco/usnic/bin/usd_pingpong -d usnic_0
                            open usnic_0 OK, IP=50.43.10.1
                            QP create OK, addr -h 50.43.10.1 -p 3333
                            Waiting for setup...
                          Possible causes:

                          1. The Cisco usNIC connection policy is not assigned to the vNIC interface.

                          2. The server side does not receive packets from the client side.

                          Solution:

                          1. Make sure that a valid Cisco usNIC connection policy is configured in usNIC Connection Policies and assigned to the vNICs in the service profile.

                          2. Make sure that the IP addresses of the Cisco usNIC devices on both the server and the client are configured correctly.

                          3. Make sure that the client pingpong is sending packets to the correct IP address of the server-side Cisco usNIC device.
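                          A quick way to catch the misconfiguration in solution step 2 is to check that the two usNIC addresses share a subnet before running usd_pingpong. This sketch uses the example /24 addresses from this guide; substitute your own.

```shell
server_ip=50.43.10.1   # server-side usNIC address (example from this guide)
client_ip=50.43.10.2   # client-side usNIC address (example from this guide)

# For /24 addresses, the first three octets must match for the hosts
# to be on the same subnet.
if [ "${server_ip%.*}" = "${client_ip%.*}" ]; then
    echo "same /24 subnet"
else
    echo "different subnets - check the usNIC IP configuration"
fi
```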

                          Scenario: Running Cisco usNIC traffic using mpirun.

                          Error: MTU size mismatch. See the example below:
                          ]$ mpirun --host node05,node06 -np 12 --mca btl usnic,sm,self --mca btl_usnic_if_include usnic_1 IMB-MPI1 Sendrecv
                          Password: 
                          --------------------------------------------------------------------------
                          The MTU does not match on local and remote hosts.  All interfaces on
                          all hosts participating in an MPI job must be configured with the same
                          MTU.  The usNIC interface listed below will not be used to communicate
                          with this remote host.
                          
                            Local host:      node05
                            usNIC interface: usnic_1
                            Local MTU:       8958
                            Remote host:     node06
                            Remote MTU:      1458
                          
                          Possible causes:

                          1. The MTU size is incorrectly set on the appropriate VLANs.

                          2. The MTU size is incorrectly set in the QoS system classes.

                          Solution: Make sure that the MTU size has been set correctly on the VLANs and in the QoS system classes.

                          See: Configuring QoS System Classes with the LAN Uplinks Manager.
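                          A pre-flight MTU comparison can also catch this before the MPI job starts. The sketch below reuses the sample values from the error message above; on a live cluster you would read /sys/class/net/<iface>/mtu on each host (locally with cat, remotely over ssh) instead.

```shell
local_mtu=8958    # e.g. $(cat /sys/class/net/eth1/mtu) on node05 (sample value)
remote_mtu=1458   # e.g. $(ssh node06 cat /sys/class/net/eth1/mtu) (sample value)

# All interfaces participating in the MPI job must agree on the MTU.
if [ "$local_mtu" -ne "$remote_mtu" ]; then
    echo "MTU mismatch: local=$local_mtu remote=$remote_mtu"
else
    echo "MTU match: $local_mtu"
fi
```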

                          Scenario: Installing the Cisco enic driver.

                          Error: Failed dependency errors for enic symbols. See the example below:
                          [root@localhost usNIC]# rpm -ivh kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64.rpm 
                          error: Failed dependencies:
                                          ksym(enic_api_devcmd_proxy_by_index) = 0x107cb661 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_alloc_discover) = 0xfb7e4707 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_pdev) = 0xae6ae5c9 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_res) = 0xd910c86b is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_res_bar) = 0x31710a7e is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_res_bus_addr) = 0x7be7a062 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_res_count) = 0x759e4b07 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_get_res_type_len) = 0xd122f0a1 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                                          ksym(vnic_dev_unregister) = 0xd99602a1 is needed by kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64
                          [root@localhost usNIC]# 
                          
                          Possible causes:

                          1. The enic driver is incorrectly installed.

                          2. The enic driver is not installed.

                          Solution: Make sure that the enic driver is installed before the kmod-usnic_verbs module.

                          Scenario: Enabling the Intel IOMMU.

                          Error: Intel IOMMU warnings during driver installation. See the example below:
                          [root@localhost usNIC]# rpm -ivh kmod-usnic_verbs-1.0.4.318.rhel6u5-1.x86_64.rpm 
                          Preparing...                ########################################### [100%]
                             1:kmod-usnic_verbs       ########################################### [100%]
                          WARNING -
                          Intel IOMMU does not appear to be enabled - please add kernel parameter
                          intel_iommu=on to your boot configuration for USNIC driver to function.
                          [root@localhost usNIC]# 
                          

                          Possible cause: Intel IOMMU is not enabled in the kernel boot parameters.

                          Solution: Enable the Intel IOMMU by adding the intel_iommu=on kernel parameter in the Linux GRUB configuration file, and reboot the server.
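                          Whether the running kernel was booted with the parameter can be checked from the kernel command line. In this sketch a sample string stands in for /proc/cmdline so the check is visible; on a live host read the file directly.

```shell
# Sample standing in for: cmdline=$(cat /proc/cmdline)
cmdline="ro root=/dev/mapper/vg-root intel_iommu=on"

# The usNIC driver warning above is raised when intel_iommu=on is absent.
case "$cmdline" in
    *intel_iommu=on*) echo "IOMMU enabled" ;;
    *) echo "IOMMU NOT enabled - add intel_iommu=on in GRUB and reboot" ;;
esac
```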

                          Scenario: Installing the DAT user library.

                          Error: Failed dependency errors for libdaplusnic. See the example below:
                          [root@localhost usNIC]# rpm -ivh libdaplusnic-2.0.39cisco1.0.0.317-1.el6.x86_64.rpm 
                          error: Failed dependencies:
                                          dapl is needed by libdaplusnic-2.0.39cisco1.0.0.317-1.el6.x86_64
                          [root@localhost usNIC]# 
                          

                          Possible cause: The libdaplusnic library was installed without first installing the DAT library.

                          Solution: Install the DAT library before installing libdaplusnic.

                          Scenario: Viewing the configuration of Cisco usNIC-enabled NICs using usd_devinfo.

                          Error: The command output does not list all of the Cisco usNIC-enabled devices.

                          Possible cause: The RDMA service is not enabled.

                          Solution: Enable the RDMA service using the following commands:
                          # service rdma start
                          Or, to enable it at boot:
                          # chkconfig rdma on