Managing CPS vDRA Cluster

Accessing CPS vDRA Management CLI

There are two options for accessing the CPS vDRA Management CLI.

Access Via Web Browser

Perform the following steps to access the CPS vDRA Management CLI:


    Step 1   Enter the following URL in Firefox or Chrome:

    https://<masterip>/

    Step 2   Log in to the application using your user ID and password.
    Step 3   Click the Installation Management hyperlink on the following screen:
    Figure 1. CPS DRA Login



    Step 4   In the Management screen, click the Login link to display the in-browser terminal window.
    Figure 2. Installation Management



    Step 5   Log in with a valid username and password.
    Figure 3. Management Terminal Link




    Access Via SSH

    The CPS vDRA Management CLI is also accessible via SSH, which listens on port 2024 of the master virtual machine. This port must be open in the OpenStack security rules to allow access to the Management CLI via SSH.
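
    For example, from a host that can reach the master virtual machine (the admin user name here is illustrative):

    ssh -p 2024 admin@<master-ip>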

    Starting CPS vDRA Cluster

    A CPS vDRA cluster is a self-organizing cluster that does not require operator actions to configure the system, provided you follow the instructions in the installation guide. The system self-organizes using the following algorithm:
    1. The cluster master node is started and bootstraps the Docker engine, an embedded Docker registry, the Weave overlay network, and the CPS vDRA scheduling application.

    2. The worker nodes are started either after the master node or in parallel with it. The bootstrapping of the Docker engine and the Weave overlay network points back to the master node.

    3. On startup, the scheduling function on the master node begins auto-discovery of the Docker engines that have joined the Weave overlay network (engine registration can be verified with the example after this list).

    4. For each engine discovered, the system queries the Docker engine configuration to determine the node identifier and the role that the engine performs within the cluster. The roles are used by the scheduling function to map application services to the appropriate virtual machines.
      1. The CPS vDRA application (for both Policy DRA and IMS DRA solutions) supports the following roles:
        1. master – This is always the master scheduling node.

        2. control-a[b] – This is a control node that works in concert with the other control node and the master node to provide OAM support for the application.

        3. diameter-endpoint – This is the node where all diameter traffic terminates.

        4. binding-worker – This is the node where binding/slf queries are executed.

      2. The vDRA Binding and SLF application supports the following roles:

        1. master – This is always the master scheduling node.

        2. control-a[b] – control node that works in concert with the other control nodes and the master node to provide OAM support for the application.

        3. persistence-router – node where binding/slf queries are routed.

        4. persistence-db – nodes where the binding database replica sets are located.

    5. As the Docker engines are registered, the scheduling application begins executing a controlled startup by starting modules as the underlying engines become available.
      1. A module is a set of interrelated services that are started, stopped, and scaled as a set of related processes. These processes are either collocated on the same virtual machine or spread across multiple virtual machines. There are three types of modules:
        1. infrastructure – These are core modules that are not shut down when the application shuts down.

        2. application – These are modules that are removed when the application is shut down.

        3. external – These are external services that are installed on the system and whose images are built and loaded outside of the system. See the scheduling external-service command for more information on configuring external services.
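
    After startup, you can confirm that the Docker engines have registered with the master's scheduling function. A minimal check, assuming the show docker engine command is available in your release (see the CLI command reference for the exact output format):

    scheduler# show docker engine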

    Stopping Application Services In CPS vDRA Cluster

    The modules of type “application” can be shut down in a controlled manner by running the system stop command. This command will unload all modules in reverse run-level order and stop the associated running Docker services.
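
    For example, from the Management CLI:

    scheduler# system stop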

    Starting Services In CPS vDRA Cluster

    The modules of type “application” can be started in a controlled manner by running the system start command. This command will start all modules in run-level order and schedule the underlying Docker services on the registered Docker engines.
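
    For example, from the Management CLI:

    scheduler# system start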

    Stopping External Services In CPS vDRA Cluster

    The modules of type “external” can be shut down in a controlled manner by running the system disable-external-services command. This command will unload all modules in reverse run-level order and stop the associated running Docker services.
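
    For example:

    scheduler# system disable-external-services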

    Starting External Services In CPS vDRA Cluster

    The modules of type “external” can be started in a controlled manner by running the system enable-external-services command. This command will start all modules in run-level order and schedule the underlying Docker services on the registered Docker engines.
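
    For example:

    scheduler# system enable-external-services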

    Restarting An Individual Docker Service

    Perform the following steps to restart an individual Docker service:


      Step 1   Run the show docker service command to locate the container ID of the service to restart.
      scheduler# show docker service 
                                                                                                         PENALTY           
      MODULE           INSTANCE  NAME               VERSION          ENGINE  CONTAINER ID       STATE    BOX      MESSAGE  
      ---------------------------------------------------------------------------------------------------------------------
      admin-db         1         mongo-admin-a      3.4.0.0          aio     mongo-admin-a      HEALTHY  false    -        
      admin-db         1         mongo-admin-arb    3.4.0.0          aio     mongo-admin-arb    HEALTHY  false    -        
      admin-db         1         mongo-admin-b      3.4.0.0          aio     mongo-admin-b      HEALTHY  false    -        
      admin-db         1         mongo-admin-setup  12.9.9-SNAPSHOT  aio     mongo-admin-setup  HEALTHY  false    -        
      consul           1         consul-1           12.9.9-SNAPSHOT  aio     consul-1           HEALTHY  false    -        
      consul           1         consul-2           12.9.9-SNAPSHOT  aio     consul-2           HEALTHY  false    -        
      consul           1         consul-3           12.9.9-SNAPSHOT  aio     consul-3           HEALTHY  false    -        
      foobar           1         foobar             3.2.6.0          aio     foobar             HEALTHY  false    -        
      grafana          1         grafana            12.9.9-SNAPSHOT  aio     grafana            HEALTHY  false    -        
      haproxy-common   1         haproxy-common     12.9.9-SNAPSHOT  aio     haproxy-common-s1  HEALTHY  false    -        
      orchestrator-ui  1         orchestrator-ui    12.9.9-SNAPSHOT  aio     orchestrator-ui    HEALTHY  false    -        
      subversion       1         svn                12.9.9-SNAPSHOT  aio     svn                HEALTHY  false    -        
      
      Step 2   Using the container ID from Step 1, run the docker restart container-id <container-id> command. This command issues a non-graceful stop on the Docker container and moves it to the ABORTED state. The container stays in this state for 10 seconds before restarting.
      Step 3   Verify the health of the restarted Docker service by running the show docker service command again and waiting for the service to progress to the HEALTHY state. Optionally, you can follow the log of the individual container by running monitor log container <container-id> with the same container ID from Step 2.
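
      For example, to restart the haproxy-common container from the sample output above and then follow its log:

      scheduler# docker restart container-id haproxy-common-s1
      scheduler# monitor log container haproxy-common-s1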

      CPS External Authentication and Authorization

      The CPS system supports LDAP external authentication and authorization.

      Based on Conf-D group configurations, CPS roles are assigned to the applications running on the CPS cluster.

      The following command configures the GID mapping for various roles:

      admin@orchestrator(config)# external-aaa pam gid-mapping 1000 policy-admin
      admin@orchestrator(config-gid-mapping-1000/policy-admin)# commit
      Commit complete

      You can also view the status of the configuration with the following command:

      admin@orchestrator# show running-config external-aaa | tab

      Sample Output:

      admin@orchestrator# show running-config external-aaa | tab
      GID   GROUP
      -------------------
      1000  policy-admin

      Conf-D Group to CPS Roles Description

      The following table describes the CPS roles and Conf-D groups of applications/services:

      Table 1. Conf-D Group to CPS Roles Description

      Application/Service   CPS Role     Conf-D Groups
      ------------------------------------------------
      Control Center        SUMADMIN     crd-read-write
      Control Center        READONLY     crd-read-only
      Policy Builder        READ&WRITE   policy-admin
      Policy Builder        READ         *
      SVN                   READ&WRITE   policy-admin
      SVN                   READ         *
      Grafana               Admin        grafana-admin
      Grafana               Editor       grafana-editor
      Grafana               Viewer       *

      * Indicates all authenticated users

      Bulkstats Conf-D group: an SFTP daemon running on port 2026 serves all statistics within the /var/broadhop/stats directory. Users associated with the “bulkstats” or “admin” group can retrieve statistics.
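
      For example, from a host that can reach the master virtual machine (the user name and file name below are placeholders):

      sftp -P 2026 <bulkstats-user>@<master-ip>
      sftp> get /var/broadhop/stats/<filename>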

      The oper Conf-D group is not used.

      Installing New Software Images

      When a new software ISO is provided, perform the following steps to upgrade the current system software:

      Upload the application ISO into the onboard Docker registry using the following command:
      system software iso stage pull <image path>

      For example:

      system software iso stage pull http://171.70.34.121/microservices/latest/cisco-policy-dra.iso

      Upgrading To A New Software Version

      Perform the following steps to upgrade to a new software version:


        Step 1   Run the following command:
        system software iso load category product file cisco-policy-dra.iso activate true
        Step 2   In the Management CLI, run show system software available-versions to determine whether the correct version has been uploaded:
        scheduler# show system software available-versions 
        VERSION                            
        -----------------------------------
        12.9.9-dra.2017-03-08.122.9e49af2 
        
        Step 3   In the Management CLI, run the system upgrade version command to upgrade to the version found in Step 2:
        scheduler# system upgrade version 12.9.9-dra.2017-03-08.122.9e49af2 

        At this point the application will begin downloading the new scheduling and application images from the on-board Docker registry. The download will take several seconds, and the scheduler application will disconnect and restart. You must log in again after the disconnect occurs.

        Step 4   In the Management CLI, run the show scheduling status command to validate the progress of the upgrade.

        Aborting An Upgrade

        If an in-progress upgrade needs to be aborted, run the system abort-upgrade command. This command immediately stops all scheduling activities. To revert to the previous version, follow the Downgrading To Previous Software Version procedure.

        Downgrading To Previous Software Version

          Perform the following steps to downgrade to a previous software version:


          Step 1   In the Management CLI, run the show system software available-versions command to determine whether the correct version has been uploaded:
          scheduler# show system software available-versions 
          VERSION                            
          -----------------------------------
          12.9.9-dra.2017-03-08.122.9e49af2 
          
          Step 2   In the Management CLI, run the system downgrade version command to downgrade to the version found in Step 1:
          scheduler# system downgrade version 12.9.9-dra.2017-03-08.122.9e49af2 

          At this point the application will begin downloading the new scheduling and application images from the on-board Docker registry. The download will take several seconds, and the scheduler application will disconnect and restart. You must log in again after the disconnect occurs.

          Step 3   In the Management CLI, run the show scheduling status command to validate the progress of the downgrade.

          Aborting A Downgrade

          If an in-progress downgrade needs to be aborted, run the system abort-downgrade command. This command immediately stops all scheduling activities. To revert, follow the Upgrading To A New Software Version procedure.