Software Upgrade using Site Isolation Procedure

Feature Summary and Revision History

Summary Data

Table 1. Summary Data

Applicable Products or Functional Area    PCF
Applicable Platform(s)                    SMI
Feature Default Setting                   Enabled - Always-on
Related Documentation                     Not Applicable

Revision History

Table 2. Revision History

Revision Details     Release
First introduced.    2023.02.0

Feature Description

The PCF supports upgraded base images for all containers: the Ubuntu base image is updated from version 18.04 to 20.04, and the Mongo containers are updated from version 4.0 to 4.4. Because there is no in-service upgrade path from Mongo 4.0 to 4.4, in-service updates are not supported. The Software Upgrade using Site Isolation Procedure therefore requires site isolation and a method of procedures (MOP) that is executed during a maintenance window, taking the upgrade path into account.

Prerequisites

Ensure that the PCF system is running a pre-April 2023 PCF release version.

Pre-upgrade Backup Steps

Procedure


Step 1

To start the upgrade, log in to the SMI Cluster Manager node as an Ubuntu user and verify that all the pods and nodes are operational.

# SSH to the master node. If not all the pods and nodes are Running, do not proceed.
 
cloud-user@pcf-cm-node-master-1:~$ kubectl get nodes -A
NAME                   STATUS   ROLES           AGE     VERSION
pcf-cm-node-master-1   Ready    control-plane   6d15h   v1.24.6
pcf-cm-node-master-2   Ready    control-plane   6d14h   v1.24.6
pcf-cm-node-master-3   Ready    control-plane   6d14h   v1.24.6
pcf-cm-node-worker-1   Ready    <none>          6d14h   v1.24.6
 
cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -A
NAMESPACE         NAME                                                              READY   STATUS    RESTARTS        AGE
cee-cee-pcf       alert-logger-6bc6fd558d-mw6ch                                     1/1     Running   0               5d16h
cee-cee-pcf       alert-router-7c5c6576b8-jvc6h                                     1/1     Running   0               5d16h
cee-cee-pcf       alertmanager-0                                                    2/2     Running   0               5d16h
cee-cee-pcf       alertmanager-1                                                    2/2     Running   0               5d16h
cee-cee-pcf       alertmanager-2                                                    2/2     Running   0               5d16h
cee-cee-pcf       alertmanager-config-sync-c9fcf48bd-r44bv                          1/1     Running   0               5d16h
cee-cee-pcf       blackbox-exporter-blq6p                                           1/1     Running   0               5d16h
cee-cee-pcf       blackbox-exporter-dh76h                                           1/1     Running   0               5d16h
cee-cee-pcf       blackbox-exporter-l9xhw                                           1/1     Running   0               5d16h
cee-cee-pcf       bulk-stats-0                                                      3/3     Running   0               5d16h
cee-cee-pcf       bulk-stats-1                                                      3/3     Running   0               5d16h
cee-cee-pcf       cee-cee-pcf-product-documentation-547fd88785-zxd7h                2/2     Running   0               5d16h
cee-cee-pcf       core-retriever-d2znn                                              2/2     Running   0               5d16h
cee-cee-pcf       core-retriever-gm9dl                                              2/2     Running   0               5d16h
cee-cee-pcf       core-retriever-hn65w                                              2/2     Running   0               5d16h
pcf-ims           db-balance1-1                                                     1/1     Running   0               14h
pcf-ims           db-balance1-2                                                     1/1     Running   0               14h
pcf-ims           db-spr-config-0                                                   1/1     Running   0               14h
pcf-ims           db-spr-config-1                                                   1/1     Running   0               14h
pcf-ims           db-spr-config-2                                                   1/1     Running   0               14h
pcf-ims           redis-keystore-0                                                  2/2     Running   0               14h
pcf-ims           redis-keystore-1                                                  2/2     Running   0               14h
pcf-ims           redis-queue-0                                                     2/2     Running   0               14h
pcf-ims           zookeeper-1                                                       1/1     Running   0               14h
pcf-ims           zookeeper-2                                                       1/1     Running   0               14h
registry          charts-cee-2023-01-1-i20-0                                        1/1     Running   0               6d
registry          charts-cee-2023-01-1-i20-1                                        1/1     Running   0               6d
registry          charts-cee-2023-01-1-i20-2                                        1/1     Running   0               6d
registry          software-unpacker-2                                               1/1     Running   0               6d15h
smi-certs         ss-cert-provisioner-6cb559cf57-9rzzk                              1/1     Running   0               6d15h
smi-ops-control   opscenter-controller-647df69568-np6ql                             1/1     Running   0               6d15h
smi-vips          keepalived-l57sc                                                  3/3     Running   0               6d14h
smi-vips          keepalived-ls7mr                                                  3/3     Running   11              36d
smi-vips          keepalived-qssvm                                                  3/3     Running   18              36d
smi-vips          keepalived-v9fbl                                                  3/3     Running   8               36d 

# Should be no output from the command below:
cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -A | grep 0/
 
# Only the header line should be returned by the command below:
cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -A | grep -v Running
NAMESPACE         NAME                                                              READY   STATUS    RESTARTS        AGE
 

# Verify the current versions of CEE and PCF and ensure the software is a pre-April 2023 release:
 
cloud-user@pcf-cm-node-master-1:~$ helm ls -n pcf-ims
NAME                                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                                            APP VERSION
pcf-ims-cnat-cps-infrastructure         pcf-ims         1               2023-02-22 17:58:35.144604765 +0000 UTC deployed        cnat-cps-infrastructure-0.6.10-main-0045-230214110634-13d42ee    BUILD_2023.02.m0.i18
pcf-ims-cps-diameter-ep-rx-protocol-1   pcf-ims         1               2023-02-22 17:58:35.145251077 +0000 UTC deployed        cps-diameter-ep-0.6.43-main-0399-230207041116-a31a488            BUILD_2023.02.m0.i18
pcf-ims-cps-ldap-ep                     pcf-ims         1               2023-02-22 17:58:35.034167458 +0000 UTC deployed        cps-ldap-ep-0.8.13-main-0612-230208043335-ad5f65d                BUILD_2023.02.m0.i18
pcf-ims-etcd-cluster                    pcf-ims         1               2023-02-22 17:58:35.139498443 +0000 UTC deployed        etcd-cluster-1.4.0-1-4-0130-221017070357-25906ad                 BUILD_2023.02.m0.i18
pcf-ims-network-query                   pcf-ims         1               2023-02-22 17:58:35.121107291 +0000 UTC deployed        network-query-0.5.4-main-0057-230206125913-ed3642a               BUILD_2023.02.m0.i18
pcf-ims-ngn-datastore                   pcf-ims         1               2023-02-22 17:58:35.139994348 +0000 UTC deployed        ngn-datastore-1.10.0-1-10-0997-230210092614-c6b6164              BUILD_2023.02.m0.i18
pcf-ims-ops-center                      pcf-ims         15              2023-02-22 10:55:58.982801266 +0000 UTC deployed        pcf-ops-center-0.6.32-main-0445-230221061642-374d10a             BUILD_2023.02.m0.i18
pcf-ims-pcf-config                      pcf-ims         1               2023-02-22 17:58:35.151228581 +0000 UTC deployed        pcf-config-0.6.3-main-0021-221221114706-77d0a10                  BUILD_2023.02.m0.i18
pcf-ims-pcf-dashboard                   pcf-ims         1               2023-02-22 17:58:35.152400298 +0000 UTC deployed        pcf-dashboard-0.2.17-main-0136-221005221847-13bfa13              BUILD_2023.02.m0.i18
pcf-ims-pcf-engine-app-production       pcf-ims         1               2023-02-22 17:58:35.125468923 +0000 UTC deployed        pcf-engine-app-0.8.16-main-0424-230208043521-b26d906             BUILD_2023.02.m0.i18
pcf-ims-pcf-ldapserver-ep               pcf-ims         1               2023-02-22 17:58:35.152091423 +0000 UTC deployed        pcf-ldapserver-ep-0.1.8-main-0080-221220155902-e80a62f           BUILD_2023.02.m0.i18
pcf-ims-pcf-oam-app                     pcf-ims         1               2023-02-22 17:58:35.154061042 +0000 UTC deployed        pcf-oam-app-0.6.2-main-0015-230206125249-2118fad                 BUILD_2023.02.m0.i18
pcf-ims-pcf-rest-ep                     pcf-ims         1               2023-02-22 17:58:35.136755614 +0000 UTC deployed        pcf-rest-ep-0.7.46-main-0960-230118121105-2fd07f9                BUILD_2023.02.m0.i18
pcf-ims-pcf-services                    pcf-ims         1               2023-02-22 17:58:35.146493569 +0000 UTC deployed        pcf-services-0.6.17-main-0074-221221114612-90ebedc               BUILD_2023.02.m0.i18
 

Step 2

Collect and back up the Mongo data from the primary members of the db-admin pods.

  1. Collect the names of the Mongo admin pods.

    cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -n pcf-ims | grep db-admin
    db-admin-0                                                        1/1     Running   0             13h
    db-admin-1                                                        1/1     Running   0             13h
    db-admin-2                                                        1/1     Running   0             13h
    db-admin-config-0                                                 1/1     Running   0             13h
    db-admin-config-1                                                 1/1     Running   0             13h
    db-admin-config-2                                                 1/1     Running   0             13h
    
  2. Log in to a db-admin pod to identify the primary pod member.

    cloud-user@pcf-cm-node-master-1:~$ kubectl exec -it db-admin-0 -n pcf-ims bash
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    Defaulted container "mongo" out of: mongo, cleanup (init)
    groups: cannot find name for group ID 303
    
    # Log in to the mongo prompt
    
    I have no name!@db-admin-0:/$ mongo
    MongoDB shell version v4.0.2
    connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
    Implicit session: session { "id" : UUID("fa2ee0ae-fcc3-45f4-80f4-f1658dd3297c") }
    MongoDB server version: 4.0.2
    Welcome to the MongoDB shell.
    
    # Get the primary pod member using rs.status() command
    admin:SECONDARY> rs.status()
    {
            "set" : "admin",
            "date" : ISODate("2023-02-23T08:52:22.268Z"),
            "myState" : 2,
            "term" : NumberLong(3),
            "syncSourceHost" : "mongo-admin-2:27017",
            "syncSourceId" : 3,
            "heartbeatIntervalMillis" : NumberLong(300),
            "majorityVoteCount" : 2,
            "writeMajorityCount" : 2,
            "votingMembersCount" : 3,
            "writableVotingMembersCount" : 3,
            "optimes" : {
                    "lastCommittedOpTime" : {
                            "ts" : Timestamp(1677142340, 1),
                            "t" : NumberLong(3)
                    },
                    "lastCommittedWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                    "readConcernMajorityOpTime" : {
                            "ts" : Timestamp(1677142340, 1),
                            "t" : NumberLong(3)
                    },
                    "readConcernMajorityWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                    "appliedOpTime" : {
                            "ts" : Timestamp(1677142340, 1),
                            "t" : NumberLong(3)
                    },
                    "durableOpTime" : {
                            "ts" : Timestamp(1677142340, 1),
                            "t" : NumberLong(3)
                    },
                    "lastAppliedWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                    "lastDurableWallTime" : ISODate("2023-02-23T08:52:20.219Z")
            },
            "lastStableRecoveryTimestamp" : Timestamp(1677142310, 1),
            "electionParticipantMetrics" : {
                    "votedForCandidate" : true,
                    "electionTerm" : NumberLong(3),
                    "lastVoteDate" : ISODate("2023-02-22T17:59:58.482Z"),
                    "electionCandidateMemberId" : 3,
                    "voteReason" : "",
                    "lastAppliedOpTimeAtElection" : {
                            "ts" : Timestamp(1677088640, 1),
                            "t" : NumberLong(2)
                    },
                    "maxAppliedOpTimeInSet" : {
                            "ts" : Timestamp(1677088640, 1),
                            "t" : NumberLong(2)
                    },
                    "priorityAtElection" : 1,
                    "newTermStartDate" : ISODate("2023-02-22T17:59:58.492Z"),
                    "newTermAppliedDate" : ISODate("2023-02-22T17:59:59.463Z")
            },
            "members" : [
                    {
                            "_id" : 1,
                            "name" : "mongo-admin-0:27017",
                            "health" : 1,
                            "state" : 2,
                            "stateStr" : "SECONDARY",
                            "uptime" : 53558,
                            "optime" : {
                                    "ts" : Timestamp(1677142340, 1),
                                    "t" : NumberLong(3)
                            },
                            "optimeDate" : ISODate("2023-02-23T08:52:20Z"),
                            "lastAppliedWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "lastDurableWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "syncSourceHost" : "mongo-admin-2:27017",
                            "syncSourceId" : 3,
                            "infoMessage" : "",
                            "configVersion" : 3,
                            "configTerm" : 3,
                            "self" : true,
                            "lastHeartbeatMessage" : ""
                    },
                    {
                            "_id" : 2,
                            "name" : "mongo-admin-1:27017",
                            "health" : 1,
                            "state" : 2,
                            "stateStr" : "SECONDARY",
                            "uptime" : 53543,
                            "optime" : {
                                    "ts" : Timestamp(1677142340, 1),
                                    "t" : NumberLong(3)
                            },
                            "optimeDurable" : {
                                    "ts" : Timestamp(1677142340, 1),
                                    "t" : NumberLong(3)
                            },
                            "optimeDate" : ISODate("2023-02-23T08:52:20Z"),
                            "optimeDurableDate" : ISODate("2023-02-23T08:52:20Z"),
                            "lastAppliedWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "lastDurableWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "lastHeartbeat" : ISODate("2023-02-23T08:52:22.266Z"),
                            "lastHeartbeatRecv" : ISODate("2023-02-23T08:52:22.265Z"),
                            "pingMs" : NumberLong(0),
                            "lastHeartbeatMessage" : "",
                            "syncSourceHost" : "mongo-admin-2:27017",
                            "syncSourceId" : 3,
                            "infoMessage" : "",
                            "configVersion" : 3,
                            "configTerm" : 3
                    },
                    {
                            "_id" : 3,
                            "name" : "mongo-admin-2:27017",
                            "health" : 1,
                            "state" : 1,
                            "stateStr" : "PRIMARY",
                            "uptime" : 53543,
                            "optime" : {
                                    "ts" : Timestamp(1677142340, 1),
                                    "t" : NumberLong(3)
                            },
                            "optimeDurable" : {
                                    "ts" : Timestamp(1677142340, 1),
                                    "t" : NumberLong(3)
                            },
                            "optimeDate" : ISODate("2023-02-23T08:52:20Z"),
                            "optimeDurableDate" : ISODate("2023-02-23T08:52:20Z"),
                            "lastAppliedWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "lastDurableWallTime" : ISODate("2023-02-23T08:52:20.219Z"),
                            "lastHeartbeat" : ISODate("2023-02-23T08:52:22.266Z"),
                            "lastHeartbeatRecv" : ISODate("2023-02-23T08:52:22.148Z"),
                            "pingMs" : NumberLong(0),
                            "lastHeartbeatMessage" : "",
                            "syncSourceHost" : "",
                            "syncSourceId" : -1,
                            "infoMessage" : "",
                            "electionTime" : Timestamp(1677088798, 1),
                            "electionDate" : ISODate("2023-02-22T17:59:58Z"),
                            "configVersion" : 3,
                            "configTerm" : 3
                    }
            ],
            "ok" : 1,
            "$gleStats" : {
                    "lastOpTime" : Timestamp(0, 0),
                    "electionId" : ObjectId("000000000000000000000000")
            },
            "lastCommittedOpTime" : Timestamp(1677142340, 1),
            "$configServerState" : {
                    "opTime" : {
                            "ts" : Timestamp(1677142326, 3),
                            "t" : NumberLong(5)
                    }
            },
            "$clusterTime" : {
                    "clusterTime" : Timestamp(1677142340, 1),
                    "signature" : {
                            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                            "keyId" : NumberLong(0)
                    }
            },
            "operationTime" : Timestamp(1677142340, 1)
    }
    admin:SECONDARY>
    
    Note: In the preceding output, the primary member is mongo-admin-2, which maps to the db-admin-2 pod. (A quicker way to identify the primary is sketched at the end of this step.)
    
  3. Log in to the primary db-admin pod, take a dump of the data, and create a tar file from the dump.

    cloud-user@pcf-cm-node-master-1:~$ kubectl exec -it db-admin-2 -n pcf-ims bash
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    Defaulted container "mongo" out of: mongo, cleanup (init)
    groups: cannot find name for group ID 303
    I have no name!@db-admin-2:/$ cd /tmp
    I have no name!@db-admin-2:/tmp$ ls
    mongodb-27017.sock
    
    # Get the data dump using mongodump command
    
    I have no name!@db-admin-2:/tmp$ mongodump --port 27017
    2023-02-23T06:58:28.624+0000    writing admin.system.version to dump/admin/system.version.bson
    2023-02-23T06:58:28.625+0000    done dumping admin.system.version (2 documents)
    2023-02-23T06:58:28.626+0000    writing cust_ref_data.OCS_TABLE to dump/cust_ref_data/OCS_TABLE.bson
    2023-02-23T06:58:28.626+0000    writing cust_ref_data.TAC_TABLE_N7 to dump/cust_ref_data/TAC_TABLE_N7.bson
    2023-02-23T06:58:28.626+0000    writing cust_ref_data.DUS_TABLE to dump/cust_ref_data/DUS_TABLE.bson
    2023-02-23T06:58:28.627+0000    writing cust_ref_data.TAC_TABLE_N15 to dump/cust_ref_data/TAC_TABLE_N15.bson
    2023-02-23T06:58:28.655+0000    done dumping cust_ref_data.TAC_TABLE_N15 (7152 documents)
    2023-02-23T06:58:28.656+0000    writing cust_ref_data.TAC_TABLE to dump/cust_ref_data/TAC_TABLE.bson
    2023-02-23T06:58:28.656+0000    done dumping cust_ref_data.TAC_TABLE_N7 (7152 documents)
    2023-02-23T06:58:28.657+0000    writing cust_ref_data.USD_TABLE to dump/cust_ref_data/USD_TABLE.bson
    2023-02-23T06:58:28.666+0000    done dumping cust_ref_data.OCS_TABLE (7569 documents)
    2023-02-23T06:58:28.667+0000    writing cust_ref_data.SGSN_IP_TABLE_2 to dump/cust_ref_data/SGSN_IP_TABLE_2.bson
    2023-02-23T06:58:28.684+0000    done dumping cust_ref_data.TAC_TABLE (7128 documents)
    2023-02-23T06:58:28.684+0000    writing cust_ref_data.PLMN_ID_TABLE_N7 to dump/cust_ref_data/PLMN_ID_TABLE_N7.bson
    2023-02-23T06:58:28.687+0000    done dumping cust_ref_data.USD_TABLE (5579 documents)
    dump/cust_ref_data/FEATURE_COUNTER_MAPPING.bson
    2023-02-23T06:58:28.705+0000    done dumping cust_ref_data.PCC_RULE_TABLE_N7 (747 documents)
    2023-02-23T06:58:28.706+0000    writing cust_ref_data.DNN_TABLE to dump/cust_ref_data/DNN_TABLE.bson
    2023-02-23T06:58:28.708+0000    done dumping cust_ref_data.DNN_TABLE (194 documents)
    2023-02-23T06:58:28.709+0000    writing cust_ref_data.APN_TABLE to dump/cust_ref_data/APN_TABLE.bson
    2023-02-23T06:58:28.709+0000    done dumping cust_ref_data.CRN_TABLE (733 documents)
    2023-02-23T06:58:28.747+0000    done dumping spr.subscriber_ssid (0 documents)
    2023-02-23T06:58:28.747+0000    done dumping spr.subscriber (0 documents)
    2023-02-23T06:58:28.747+0000    writing spr.auth_failures to dump/spr/auth_failures.bson
    2023-02-23T06:58:28.747+0000    writing spr.location_history to dump/spr/location_history.bson
    2023-02-23T06:58:28.749+0000    done dumping scheduler.tasks (0 documents)
    2023-02-23T06:58:28.751+0000    done dumping patches.files.chunks (0 documents)
    2023-02-23T06:58:28.753+0000    done dumping spr.location_history (0 documents)
    2023-02-23T06:58:28.754+0000    done dumping spr.auth_failures (0 documents)
    I have no name!@db-admin-2:/tmp$ ls
    dump  mongodb-27017.sock
    
    # Create tar file out of dump
    
    I have no name!@db-admin-2:/tmp$ tar cvf db-admin-dump.tar dump
    dump/
    dump/cust_ref_data/
    dump/cust_ref_data/USD_TABLE_N7.metadata.json
    dump/cust_ref_data/CRBN_TABLE.metadata.json
    dump/cust_ref_data/crdVersionInstance.bson
    dump/cust_ref_data/SERVICE_AREA_RESTRICTION_N15.bson
    dump/cust_ref_data/N7_CHG_REF_DATA_TABLE.metadata.json
    dump/cust_ref_data/TEARDOWN_TABLE_N7.metadata.json
    dump/cust_ref_data/QOS_OVERRIDE_TABLE.bson
    dump/cust_ref_data/E_PASS_TABLE_IMS.metadata.json
    dump/cust_ref_data/CRBN_TABLE_N7.bson
    dump/cust_ref_data/TAC_TABLE.bson
    dump/cust_ref_data/OCS_TABLE.bson
    dump/cust_ref_data/POLICY_CONTROL_REQUEST_TRIGGER_TABLE_N15.metadata.json
    dump/cust_ref_data/SL_TABLE.metadata.json
    dump/cust_ref_data/N5_psi_mapping_table.metadata.json
    dump/cust_ref_data/TRIGGER_TABLE.metadata.json
    dump/cust_ref_data/USD_TABLE.bson
    dump/cust_ref_data/TEARDOWN_TABLE.metadata.json
    dump/cust_ref_data/CRBN_TABLE.bson
    dump/cust_ref_data/PLMN_ID_TABLE_N15.bson
    dump/cust_ref_data/N5_AUTH_TABLE_N7.bson
    dump/cust_ref_data/QOS_OVERRIDE_TABLE_N7.bson
    dump/cust_ref_data/RX_AUTH_TABLE_N7.metadata.json
    dump/cust_ref_data/IMSI_TABLE.bson
    dump/cust_ref_data/N28_ACTION.metadata.json
    dump/cust_ref_data/PLMN_ID_TABLE_N7.metadata.json
    dump/cust_ref_data/FEATURE_COUNTER_MAPPING.metadata.json
    dump/cust_ref_data/SL_TABLE.bson
    dump/cust_ref_data/SUPI_TABLE_N7.bson
    dump/cust_ref_data/SGSN_IP_TABLE_2.bson
    dump/cust_ref_data/USD_TABLE.metadata.json
    dump/cust_ref_data/PLMN_ID_TABLE.bson
    dump/cust_ref_data/DUMMY_RAR_TABLE.bson
    dump/cust_ref_data/QOS_STATUS_TABLE.metadata.json
    dump/policy_trace/trace_id_version.metadata.json
    I have no name!@db-admin-2:/tmp$ ls
    db-admin-dump.tar  dump  mongodb-27017.sock
    
    Note: db-admin-dump.tar is the tar file that was created.
    
  4. Transfer the dump tar file from the primary db-admin pod to the host. (Optional integrity checks are sketched after this list.)

    cloud-user@pcf-cm-node-master-1:~$ kubectl cp db-admin-2:/tmp/db-admin-dump.tar db-admin-dump.tar -n pcf-ims
    Defaulted container "mongo" out of: mongo, cleanup (init)
    tar: Removing leading `/' from member names
    
    cloud-user@pcf-cm-node-master-1:~$ ls
    about.sh                                                          cpu_Load_Check.sh             ml_clusterHardwareInfo.csv
    Automated_System_Info_site1_03_FunctionalPreTest_BVLongevity.txt  db-admin-config-2-dump.tar    nohup.out
    Automation_Scripts_repo                                           db-admin-dump.tar             Noisy_Scenario
    checkDiskSpace.sh                                                 get_deploy_status.sh          PCF_compare_alert_config_with_log.sh
    checkMinionCPUAverage.sh                                          GetPCFInstalledBuild.sh       smi_dep_id_rsa
    check_mongo_pod_primary.sh                                        GetSystemDeploymentStatus.sh  validateK8sMinionCPUMemory.sh
    ConsolidateLogsSummary.py                                         log_start_time.txt
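
The following optional checks are a minimal sketch, not part of the original procedure: a one-liner that prints only the primary member's name (instead of reading the full rs.status() output), and a checksum comparison to confirm the tar file transferred intact (assumes sha256sum is present in the mongo container).

# Optional: print only the primary member's name (illustrative)
cloud-user@pcf-cm-node-master-1:~$ kubectl exec db-admin-0 -n pcf-ims -- \
    mongo --quiet --eval 'rs.status().members.filter(function(m){return m.stateStr=="PRIMARY"})[0].name'
mongo-admin-2:27017

# Optional: compare checksums of the tar file inside the pod and on the host
cloud-user@pcf-cm-node-master-1:~$ kubectl exec db-admin-2 -n pcf-ims -- sha256sum /tmp/db-admin-dump.tar
cloud-user@pcf-cm-node-master-1:~$ sha256sum db-admin-dump.tar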
    
    

Step 3

Collect and back up the Mongo data from the primary members of the db-admin-config pods.

Note: Refer to Step 2 for the detailed commands for the following steps.

  1. Collect the names of the Mongo admin pods.

    cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -n pcf-ims | grep db-admin-config
    db-admin-config-0                                                 1/1     Running   0             13h
    db-admin-config-1                                                 1/1     Running   0             13h
    db-admin-config-2                                                 1/1     Running   0             13h
    
  2. Log in to a db-admin-config pod to identify the primary pod member.

  3. Log in to the primary db-admin-config pod, take a dump of the data, and create a tar file from the dump.

  4. Transfer the dump tar file from the primary db-admin-config pod to the host. (A condensed command sequence is sketched below.)
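
A condensed sketch of the backup sequence, mirroring the Step 2 commands. Here db-admin-config-2 is used as an example primary (confirm the actual primary with rs.status() first), and the tar file name matches the db-admin-config-2-dump.tar file visible in the Step 2 host listing.

# On the primary db-admin-config pod (example: db-admin-config-2)
cloud-user@pcf-cm-node-master-1:~$ kubectl exec -it db-admin-config-2 -n pcf-ims -- bash
I have no name!@db-admin-config-2:/$ cd /tmp
I have no name!@db-admin-config-2:/tmp$ mongodump --port 27017
I have no name!@db-admin-config-2:/tmp$ tar cvf db-admin-config-2-dump.tar dump
I have no name!@db-admin-config-2:/tmp$ exit

# Transfer the tar file to the host
cloud-user@pcf-cm-node-master-1:~$ kubectl cp db-admin-config-2:/tmp/db-admin-config-2-dump.tar db-admin-config-2-dump.tar -n pcf-ims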

Step 4

SSH to the ops-center, enter "system mode shutdown" at the config prompt, and then commit, as illustrated below.
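
A minimal session sketch, assuming the standard ops-center config prompts (the host address and prompt strings are illustrative):

cloud-user@pcf-cm-node-master-1:~$ ssh admin@<pcf-ops-center-ip>
[pcf-ims/ops-center] pcf# config
Entering configuration mode terminal
[pcf-ims/ops-center] pcf(config)# system mode shutdown
[pcf-ims/ops-center] pcf(config)# commit
Commit complete.
[pcf-ims/ops-center] pcf(config)# end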

Step 5

Delete the data files of the Mongo admin pods under the PCF namespace directory on all three master nodes.

Master-1
cloud-user@pcf-cm-node-master-1:~$ cd /data
cloud-user@pcf-cm-node-master-1:/data$ ls
cee-cee-pcf  etcd  k8s-offline  kubernetes  pcf-ims  software

# Go to namespace directory

cloud-user@pcf-cm-node-master-1:/data$ cd pcf-ims
cloud-user@pcf-cm-node-master-1:/data/pcf-ims$ ls
db-etcd-pcf-ims-etcd-cluster-0  db-local-data-db-admin-0  db-local-data-db-admin-config-0

# Delete all files under db-local-data-db-admin-0 and db-local-data-db-admin-config-0

cloud-user@pcf-cm-node-master-1:/data/pcf-ims/db-local-data-db-admin-0$ sudo rm -rf *
cloud-user@pcf-cm-node-master-1:/data/pcf-ims/db-local-data-db-admin-config-0$ sudo rm -rf *

Master-2
cloud-user@pcf-cm-node-master-2:~$ cd /data
cloud-user@pcf-cm-node-master-2:/data$ ls
cee-cee-pcf  etcd  k8s-offline  kubernetes  pcf-ims  software

# Go to namespace directory

cloud-user@pcf-cm-node-master-2:/data$ cd pcf-ims
cloud-user@pcf-cm-node-master-2:/data/pcf-ims$ ls
db-etcd-pcf-ims-etcd-cluster-0  db-local-data-db-admin-0  db-local-data-db-admin-config-0

# Delete all files under db-local-data-db-admin-0 and db-local-data-db-admin-config-0

cloud-user@pcf-cm-node-master-2:/data/pcf-ims/db-local-data-db-admin-0$ sudo rm -rf *
cloud-user@pcf-cm-node-master-2:/data/pcf-ims/db-local-data-db-admin-config-0$ sudo rm -rf *

Master-3
cloud-user@pcf-cm-node-master-3:~$ cd /data
cloud-user@pcf-cm-node-master-3:/data$ ls
cee-cee-pcf  etcd  k8s-offline  kubernetes  pcf-ims  software

# Go to namespace directory

cloud-user@pcf-cm-node-master-3:/data$ cd pcf-ims
cloud-user@pcf-cm-node-master-3:/data/pcf-ims$ ls
db-etcd-pcf-ims-etcd-cluster-0  db-local-data-db-admin-0  db-local-data-db-admin-config-0

# Delete all files under db-local-data-db-admin-0 and db-local-data-db-admin-config-0

cloud-user@pcf-cm-node-master-3:/data/pcf-ims/db-local-data-db-admin-0$ sudo rm -rf *
cloud-user@pcf-cm-node-master-3:/data/pcf-ims/db-local-data-db-admin-config-0$ sudo rm -rf *
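
Equivalently, the two deletions on each master node can be issued as a single command (a sketch; verify the paths before running a recursive delete):

# Run on each master node; removes the Mongo data files under the PCF namespace directory
cloud-user@pcf-cm-node-master-1:~$ sudo rm -rf /data/pcf-ims/db-local-data-db-admin-0/* /data/pcf-ims/db-local-data-db-admin-config-0/*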

Step 6

Run the April release upgrade (Ubuntu 20.04 and Mongo 4.4).


Post-Upgrade Verification Steps

Procedure


Step 1

Verify that the software is running with the April release after the upgrade.

cloud-user@pcf-cm-node-master-1:~$ helm ls -n pcf-ims
NAME                                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                                            APP VERSION
pcf-ims-cnat-cps-infrastructure         pcf-ims         1               2023-02-22 17:58:35.144604765 +0000 UTC deployed        cnat-cps-infrastructure-0.6.10-main-0045-230214110634-13d42ee    BUILD_2023.02.m0.i18
pcf-ims-cps-diameter-ep-rx-protocol-1   pcf-ims         1               2023-02-22 17:58:35.145251077 +0000 UTC deployed        cps-diameter-ep-0.6.43-main-0399-230207041116-a31a488            BUILD_2023.02.m0.i18
pcf-ims-cps-ldap-ep                     pcf-ims         1               2023-02-22 17:58:35.034167458 +0000 UTC deployed        cps-ldap-ep-0.8.13-main-0612-230208043335-ad5f65d                BUILD_2023.02.m0.i18
pcf-ims-etcd-cluster                    pcf-ims         1               2023-02-22 17:58:35.139498443 +0000 UTC deployed        etcd-cluster-1.4.0-1-4-0130-221017070357-25906ad                 BUILD_2023.02.m0.i18
pcf-ims-network-query                   pcf-ims         1               2023-02-22 17:58:35.121107291 +0000 UTC deployed        network-query-0.5.4-main-0057-230206125913-ed3642a               BUILD_2023.02.m0.i18
pcf-ims-ngn-datastore                   pcf-ims         1               2023-02-22 17:58:35.139994348 +0000 UTC deployed        ngn-datastore-1.10.0-1-10-0997-230210092614-c6b6164              BUILD_2023.02.m0.i18
pcf-ims-ops-center                      pcf-ims         15              2023-02-22 10:55:58.982801266 +0000 UTC deployed        pcf-ops-center-0.6.32-main-0445-230221061642-374d10a             BUILD_2023.02.m0.i18
pcf-ims-pcf-config                      pcf-ims         1               2023-02-22 17:58:35.151228581 +0000 UTC deployed        pcf-config-0.6.3-main-0021-221221114706-77d0a10                  BUILD_2023.02.m0.i18
pcf-ims-pcf-dashboard                   pcf-ims         1               2023-02-22 17:58:35.152400298 +0000 UTC deployed        pcf-dashboard-0.2.17-main-0136-221005221847-13bfa13              BUILD_2023.02.m0.i18
pcf-ims-pcf-engine-app-production       pcf-ims         1               2023-02-22 17:58:35.125468923 +0000 UTC deployed        pcf-engine-app-0.8.16-main-0424-230208043521-b26d906             BUILD_2023.02.m0.i18
pcf-ims-pcf-ldapserver-ep               pcf-ims         1               2023-02-22 17:58:35.152091423 +0000 UTC deployed        pcf-ldapserver-ep-0.1.8-main-0080-221220155902-e80a62f           BUILD_2023.02.m0.i18
pcf-ims-pcf-oam-app                     pcf-ims         1               2023-02-22 17:58:35.154061042 +0000 UTC deployed        pcf-oam-app-0.6.2-main-0015-230206125249-2118fad                 BUILD_2023.02.m0.i18
pcf-ims-pcf-rest-ep                     pcf-ims         1               2023-02-22 17:58:35.136755614 +0000 UTC deployed        pcf-rest-ep-0.7.46-main-0960-230118121105-2fd07f9                BUILD_2023.02.m0.i18
pcf-ims-pcf-services                    pcf-ims         1               2023-02-22 17:58:35.146493569 +0000 UTC deployed        pcf-services-0.6.17-main-0074-221221114612-90ebedc               BUILD_2023.02.m0.i18

Step 2

SSH to the ops-center, enter "system mode running" at the config prompt, and then commit, as illustrated below.
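
The session mirrors the shutdown in Step 4 of the pre-upgrade procedure; the prompt strings below are illustrative:

[pcf-ims/ops-center] pcf# config
[pcf-ims/ops-center] pcf(config)# system mode running
[pcf-ims/ops-center] pcf(config)# commit
Commit complete.
[pcf-ims/ops-center] pcf(config)# end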

Step 3

Use the same commands as in Step 1, and verify that all the pods and nodes are operational.
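
For convenience, the Step 1 checks are repeated here in condensed form:

cloud-user@pcf-cm-node-master-1:~$ kubectl get nodes -A
cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -A | grep 0/          # should return no output
cloud-user@pcf-cm-node-master-1:~$ kubectl get pods -A | grep -v Running  # only the header line should appear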

Step 4

Restore the Mongo dump to the primary member of the db-admin pods.

# Copy the dump tar file to the primary member of db-admin

cloud-user@pcf-cm-node-master-1:~$ kubectl cp db-admin-dump.tar db-admin-2:/tmp -n pcf-ims
Defaulted container "mongo" out of: mongo, cleanup (init)

# Log in to the primary member of db-admin, go to the path of the dump tar, and restore the dump using "mongorestore --port=27017 <dump directory name>"

cloud-user@pcf-cm-node-master-1:~$ kubectl exec -it db-admin-2 -n pcf-ims bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "mongo" out of: mongo, cleanup (init)
groups: cannot find name for group ID 303
I have no name!@db-admin-2:/$ cd /tmp
I have no name!@db-admin-2:/tmp$ ls
db-admin-dump.tar  dump  mongodb-27017.sock

# Untar the dump tar file

I have no name!@db-admin-2:/tmp$ tar xvf db-admin-dump.tar
dump/
dump/cust_ref_data/
dump/cust_ref_data/USD_TABLE_N7.metadata.json
dump/cust_ref_data/CRBN_TABLE.metadata.json
dump/cust_ref_data/crdVersionInstance.bson
dump/cust_ref_data/SERVICE_AREA_RESTRICTION_N15.bson
dump/cust_ref_data/N7_CHG_REF_DATA_TABLE.metadata.json
dump/spr/subscriber_ssid.bson
dump/spr/subscriber.bson
dump/spr/subscriber.metadata.json
dump/admin/
dump/admin/system.version.bson
dump/admin/system.version.metadata.json
dump/scheduler/
dump/scheduler/tasks.bson
dump/scheduler/tasks.metadata.json
dump/policy_trace/
dump/policy_trace/traces.bson
dump/policy_trace/traces.metadata.json
dump/policy_trace/trace_id_version.bson
dump/policy_trace/trace_id_version.metadata.json

# Run restore command to restore data
I have no name!@db-admin-2:/tmp$ mongorestore --port=27017 dump
2023-02-23T10:19:28.068+0000    preparing collections to restore from
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.n7-pcc-rule from dump/cust_ref_data/n7-pcc-rule.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.n7-policy-trigger from dump/cust_ref_data/n7-policy-trigger.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.volte from dump/cust_ref_data/volte.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for keystore.keystore from dump/keystore/keystore.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.Called_station_id from dump/cust_ref_data/Called_station_id.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.N7_QoS_Mapping_Ldap from dump/cust_ref_data/N7_QoS_Mapping_Ldap.metadata.json
2023-02-23T10:19:28.070+0000    reading metadata for cust_ref_data.PSI_Mapping from 
2023-02-23T10:19:28.071+0000    reading metadata for cust_ref_data.n5-charging-rules from dump/cust_ref_data/n5-charging-rules.metadata.json
2023-02-23T10:19:28.071+0000    reading metadata for keystore.changes from dump/keystore/changes.metadata.json
2023-02-23T10:19:28.071+0000    reading metadata for config.cache.collections from dump/config/cache.collections.metadata.json
2023-02-23T10:19:28.071+0000    reading metadata for cust_ref_data.QosDesc from dump/cust_ref_data/QosDesc.metadata.json

2023-02-23T10:19:34.742+0000    index: &idx.IndexDocument{Options:primitive.M{"name":"state_1", "ns":"scheduler.tasks", "v":2}, Key:primitive.D{primitive.E{Key:"state", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-02-23T10:19:34.742+0000    index: &idx.IndexDocument{Options:primitive.M{"name":"runningOn_1", "ns":"scheduler.tasks", "v":2}, Key:primitive.D{primitive.E{Key:"runningOn", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-02-23T10:19:34.742+0000    index: &idx.IndexDocument{Options:primitive.M{"name":"type_1", "ns":"scheduler.tasks", "v":2}, Key:primitive.D{primitive.E{Key:"type", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-02-23T10:19:34.742+0000    index: &idx.IndexDocument{Options:primitive.M{"name":"scheduleTime_1", "ns":"scheduler.tasks", "v":2}, Key:primitive.D{primitive.E{Key:"scheduleTime", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-02-23T10:19:34.743+0000    62 document(s) restored successfully. 15 document(s) failed to restore.

Note: Some duplicate key errors, such as the following, are expected and can be ignored.
2023-02-21T09:51:55.708+0000 continuing through error: E11000 duplicate key error collection: config.mongos index: _id_ dup key: { _id: "admin-db-0:27017" }


Step 5

Using the same commands as in Step 4, restore the Mongo dump to the primary member of the db-admin-config pods. (A condensed sequence is sketched below.)
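
A condensed sketch, with db-admin-config-2 as the example primary; the tar file name follows the backup taken in Step 3 of the pre-upgrade procedure.

cloud-user@pcf-cm-node-master-1:~$ kubectl cp db-admin-config-2-dump.tar db-admin-config-2:/tmp -n pcf-ims
cloud-user@pcf-cm-node-master-1:~$ kubectl exec -it db-admin-config-2 -n pcf-ims -- bash
I have no name!@db-admin-config-2:/$ cd /tmp
I have no name!@db-admin-config-2:/tmp$ tar xvf db-admin-config-2-dump.tar
I have no name!@db-admin-config-2:/tmp$ mongorestore --port=27017 dump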

Step 6

Verify that the Policy Builder (PB) and CRD data are loaded.
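
One illustrative spot check from the primary db-admin pod: the cust_ref_data database holds the CRD tables seen in the dump output, so a non-zero collection count suggests the CRD data was restored. This is an optional sketch, not a mandated verification.

# Count the CRD collections restored into cust_ref_data (illustrative)
cloud-user@pcf-cm-node-master-1:~$ kubectl exec db-admin-2 -n pcf-ims -- \
    mongo --quiet --eval 'db.getSiblingDB("cust_ref_data").getCollectionNames().length'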

Step 7

Use the same commands as in Step 1, and verify that all the pods and nodes are operational.