This section explains how to restore the data backups.
Restoring Elasticsearch Backups
The following procedure is a general guide to one approach for restoring Elasticsearch data. For more details on performing
snapshot and restore operations, see the official documentation at https://www.elastic.co/guide/en/elasticsearch/reference/6.8/modules-snapshots.html.
-
List all the existing repositories (full backups) to find the backup to restore. If you want to restore data from 2020-12-08,
you may need to restore the 2020-12-09 repository, depending on the time the backup jobs were configured to run. The cron
backup jobs run in UTC time. Use the following command to list the repositories:
curl 'http://es-logs:9200/_cat/repositories?v'
# curl http://es-logs:9200/_cat/repositories?v
id type
es-backup-3.9.0-20201201 s3
es-backup-3.9.0-20201202 s3
es-backup-3.9.0-20201203 s3
es-backup-3.9.0-20201204 s3
es-backup-3.9.0-20201205 s3
es-backup-3.9.0-20201206 s3
es-backup-3.9.0-20201207 s3
es-backup-3.9.0-20201208 s3
es-backup-3.9.0-20201209 s3
es-backup-3.9.0-20201210 s3
es-backup-3.9.0-20201211 s3
es-backup-3.9.0-20201212 s3
es-backup-3.9.0-20201213 s3
es-backup-3.9.0-20201214 s3
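Because the cron jobs run in UTC, the repository covering a given calendar date is often stamped with the following day's date. The date arithmetic can be sketched as follows, assuming GNU date and the naming pattern shown above (the TARGET value is just an illustration):

```shell
# Pick the repository likely to contain data for a target restore date.
# Backups run on a UTC cron schedule, so a given day's data typically lands
# in the repository stamped with the next UTC date.
TARGET=2020-12-08
REPO="es-backup-3.9.0-$(date -u -d "$TARGET + 1 day" +%Y%m%d)"   # assumes GNU date
echo "$REPO"
```

Verify the computed name against the repository listing before restoring; if the backup window ran before midnight UTC, the same-day repository may be the right one instead.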
-
List the snapshots (incremental backups) using the following curl command:
curl 'http://es-logs:9200/_cat/snapshots/<repository-name>?v&s=id'
# curl 'http://es-logs:9200/_cat/snapshots/es-backup-3.9.0-20201209?v&s=id'
id status start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards
snap-20201209001503 SUCCESS 1607472908 00:15:08 1607473311 00:21:51 6.7m 9 41 0 41
-
Restore the specific indices using the following curl command:
curl -X POST "es-logs:9200/_snapshot/<repository-name>/<snapshot-id>/_restore?pretty" -H 'Content-Type: application/json' -d'{ "indices": "<index-name>,<index-name>" }'
# curl -X POST "es-logs:9200/_snapshot/es-backup-3.9.0-20201209/snap-20201209001503/_restore?pretty" -H 'Content-Type: application/json' -d'{ "indices": "logstash-2020.12.07,logstash-2020.12.08" }'
{
"accepted" : true
}
-
Monitor the restore progress. You can check the progress of the restore operation by listing the indices and viewing certain
details. The restore operation is complete once the index health state becomes green and the docs.count and pri.store.size
columns are populated. Use the following curl command to view the indices and related details.
# curl 'http://es-logs:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open logstash-2020.11.20 2w5g8jzcRgCG_ofqFYTCFQ 5 1 0 0 2.5kb 1.2kb
green open logstash-2020.11.21 mvxBmpOpRnCZaDE_pbNWnw 5 1 0 0 2.5kb 1.2kb
green open logstash-2020.11.19 n20hdhl3TCSDQvuh2Q1tEQ 5 1 0 0 2.5kb 1.2kb
green open logstash-2020.11.22 p-4wwbEFRIyloNDRszAtLA 5 1 0 0 2.5kb 1.2kb
green open logstash-2020.12.13 FRmI-i5yQMeBmr-OAILLiw 5 1 6603982 0 10.4gb 5.2gb
green open logstash-2020.12.12 YO89Yzb4TUqr8pr-9QVOUQ 5 1 6601900 0 10.4gb 5.2gb
green open logstash-2020.11.18 0bp8BoFJTSaYQVJBuRo8xw 5 1 1564 0 1.6mb 835.9kb
green open logstash-2020.11.23 W0Uf_Gf0T6aSipsG9wvxvw 5 1 0 0 2.5kb 1.2kb
yellow open logstash-2020.12.07 pwRCyQJDTE-XqPCEDskM7A 5 1
green open logstash-2020.12.14 qF5mVL93SymbMJGOcy4mJw 5 1 3547310 0 7.4gb 3.4gb
green open logstash-2020.11.25 Sq9fBztvS9u6qaK16Ryg5w 5 1 0 0 2.5kb 1.2kb
green open logstash-2020.11.24 tZGLt6UEQneFdq4GosLgpQ 5 1 6752 0 5mb 2.5mb
green open .kibana V4nSOx93Qf642s8BiYqoyg 1 1 4 0 51.8kb 25.9kb
yellow open logstash-2020.12.08 VxQb2DVjQIaX6VV9PlQ3XA 5 1
If necessary, you can obtain more detailed restore progress information by running the following curl command:
curl -s 'http://es-logs:9200/_cat/recovery' | grep <index-name>
# curl -s 'http://es-logs:9200/_cat/recovery' | grep 2020.12.07
logstash-2020.12.07 0 3m snapshot done n/a n/a 10.201.161.193 vms-logs-data-es-log-data-2 es-backup-3.9.0-20201209 snap-20201209001503 124 124 100.0% 124 1114905317 1114905317 100.0% 1114905317 0 0 100.0%
logstash-2020.12.07 0 41.3s peer index 10.201.161.193 vms-logs-data-es-log-data-2 10.201.76.193 vms-logs-data-es-log-data-1 n/a n/a 124 114 91.9% 124 1114905316 513079220 46.0% 1114905316 0 0 100.0%
logstash-2020.12.07 1 2.9m snapshot done n/a n/a 10.201.76.193 vms-logs-data-es-log-data-1 es-backup-3.9.0-20201209 snap-20201209001503 74 74 100.0% 74 1111465590 1111465590 100.0% 1111465590 0 0 100.0%
logstash-2020.12.07 1 52.9s peer index 10.201.76.193 vms-logs-data-es-log-data-1 10.201.70.65 vms-logs-data-es-log-data-0 n/a n/a 74 70 94.6% 74 1111465589 541903133 48.8% 1111465589 0 0 100.0%
logstash-2020.12.07 2 51.6s peer index 10.201.70.65 vms-logs-data-es-log-data-0 10.201.161.193 vms-logs-data-es-log-data-2 n/a n/a 133 126 94.7% 133 1115393078 660291756 59.2% 1115393078 0 0 100.0%
logstash-2020.12.07 2 3.1m snapshot done n/a n/a 10.201.70.65 vms-logs-data-es-log-data-0 es-backup-3.9.0-20201209 snap-20201209001503 133 133 100.0% 133 1115393079 1115393079 100.0% 1115393079 0 0 100.0%
logstash-2020.12.07 3 3m snapshot done n/a n/a 10.201.161.193 vms-logs-data-es-log-data-2 es-backup-3.9.0-20201209 snap-20201209001503 111 111 100.0% 111 1112960462 1112960462 100.0% 1112960462 0 0 100.0%
logstash-2020.12.07 3 2.1m peer done 10.201.161.193 vms-logs-data-es-log-data-2 10.201.76.193 vms-logs-data-es-log-data-1 n/a n/a 111 111 100.0% 111 1112960461 1112960461 100.0% 1112960461 0 0 100.0%
logstash-2020.12.07 4 3m snapshot done n/a n/a 10.201.76.193 vms-logs-data-es-log-data-1 es-backup-3.9.0-20201209 snap-20201209001503 144 144 100.0% 144 1112799924 1112799924 100.0% 1112799924 0 0 100.0%
logstash-2020.12.07 4 42.8s peer index 10.201.76.193 vms-logs-data-es-log-data-1 10.201.70.65 vms-logs-data-es-log-data-0 n/a n/a 144 131 91.0% 144 1112799923 408418291 36.7% 1112799923 0 0 100.0%
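The completion check described in the monitoring step (green health with a populated docs.count column) can be scripted against the _cat/indices output. A minimal sketch, shown here with canned sample lines standing in for the live curl output:

```shell
# Report whether a restored index is done: health (field 1) must be green and
# docs.count (field 7) must be populated. The here-doc stands in for
#   curl -s 'http://es-logs:9200/_cat/indices'
STATUS=$(awk '$3 == "logstash-2020.12.07" {
    if ($1 == "green" && $7 != "") print "done"; else print "in progress"
}' <<'EOF'
green  open logstash-2020.12.13 FRmI-i5yQMeBmr-OAILLiw 5 1 6603982 0 10.4gb 5.2gb
yellow open logstash-2020.12.07 pwRCyQJDTE-XqPCEDskM7A 5 1
EOF
)
echo "logstash-2020.12.07: $STATUS"
```

In the sample above the index is still yellow with no docs.count yet, so the check reports it as in progress.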
Restoring Cassandra Backups
The following sections are provided as a general guide to one approach for restoring Cassandra data. For more details on performing
the restore operation, see the official documentation at https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/operations/opsBackupSnapshotRestore.html.
Preparing the Backup to Restore From
Use the following procedure to prepare the Cassandra backup that you will be restoring from:
-
Download the backup tgz file from the backup location.
-
Copy the backup tgz file to the deployer container.
-
Copy the backup tgz file to a master node.
-
Copy the backup to the cassandra-0 node.
kubectl cp cassandra-<version-date>.tgz cassandra-0:/tmp/
-
Log in to cassandra-0.
kubectl exec -it cassandra-0 -c cassandra -- /bin/bash
-
Change to the /tmp directory (or to whichever cassandra-0 directory you copied the backup file to).
-
Extract the node backups from the combined tgz archive.
tar zxvf cassandra-<version-date>.tgz
-
Change directory to the created backup directory.
cd /tmp/cassandra-<version-date>
-
Make directories for the various nodes.
mkdir cassandra-0
mkdir cassandra-1
mkdir cassandra-2
-
Extract the tgz files to the directories that were created.
tar zxvf cassandra-0--<version-date>.tgz -C cassandra-0
tar zxvf cassandra-1--<version-date>.tgz -C cassandra-1
tar zxvf cassandra-2--<version-date>.tgz -C cassandra-2
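The per-node mkdir and extraction steps above can be collapsed into one loop. The following sandbox sketch is self-contained: it fabricates dummy per-node archives in a temp directory (the version-date tag and file contents are made up) and then applies the same extract-per-node pattern:

```shell
set -e
cd "$(mktemp -d)"
VD=3.9.0-2020-12-12   # stand-in for the real <version-date> tag
# fabricate dummy per-node archives so the sketch runs anywhere
for n in 0 1 2; do
  echo "node $n data" > "node$n.dat"
  tar zcf "cassandra-$n--$VD.tgz" "node$n.dat"
  rm "node$n.dat"
done
# the actual pattern: one directory per node, each archive extracted into it
for n in 0 1 2; do
  mkdir -p "cassandra-$n"
  tar zxf "cassandra-$n--$VD.tgz" -C "cassandra-$n"
done
ls cassandra-0 cassandra-1 cassandra-2
```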
Restoring a Table from the Backups
You will likely need to truncate the table you plan to restore. For an explanation of the reasons, see the official documentation
at https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/operations/opsBackupSnapshotRestore.html.
Use the following command to truncate a table:
cqlsh -u vmsdba -p <cassandra_pass> -e 'CONSISTENCY LOCAL_QUORUM; TRUNCATE <keyspace>.<table>;'
# cqlsh -u vmsdba -p somepassword -e 'CONSISTENCY LOCAL_QUORUM; TRUNCATE skyfall_idm.customeruser;'
Use the following command to load the backup data from all three Cassandra nodes:
sstableloader -u vmsdba -pw <cassandra_pass> -d cassandra-0 cassandra-<node-number>/skyfall_idm/customeruser
The following is an example of loading the backup data from all three Cassandra nodes:
# sstableloader -u vmsdba -pw somepassword -d cassandra-0 cassandra-0/skyfall_idm/customeruser
WARN 19:14:20,981 Small cdc volume detected at /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 1243. You can override this in cassandra.yaml
WARN 19:14:21,119 Only 9.413GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /tmp/cassandra-3.9.0-2020-12-12/cassandra-0/skyfall_idm/customeruser/mc-67-big-Data.db to [/10.201.222.136, /10.201.143.85, cassandra-0/10.201.60.134]
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:0/1 0 % total: 66% 2.256KiB/s (avg: 2.256KiB/s)
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:0/1 0 % total: 66% 0.000KiB/s (avg: 2.254KiB/s)
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:1/1 100% total: 100% 3.349MiB/s (avg: 3.381KiB/s)
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:1/1 100% total: 100% 0.000KiB/s (avg: 3.269KiB/s)
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:1/1 100% total: 100% 0.000KiB/s (avg: 3.233KiB/s)
progress: [/10.201.222.136]0:1/1 100% [/10.201.143.85]0:1/1 100% [cassandra-0/10.201.60.134]0:1/1 100% total: 100% 0.000KiB/s (avg: 3.204KiB/s)
Summary statistics:
Connections per host : 1
Total files transferred : 3
Total bytes transferred : 19.702KiB
Total duration : 6151 ms
Average transfer rate : 3.202KiB/s
Peak transfer rate : 3.381KiB/s
# sstableloader -u vmsdba -pw somepassword -d cassandra-0 cassandra-1/skyfall_idm/customeruser
WARN 19:14:37,936 Small cdc volume detected at /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 1243. You can override this in cassandra.yaml
WARN 19:14:38,085 Only 9.412GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /tmp/cassandra-3.9.0-2020-12-12/cassandra-1/skyfall_idm/customeruser/mc-61-big-Data.db /tmp/cassandra-3.9.0-2020-12-12/cassandra-1/skyfall_idm/customeruser/mc-62-big-Data.db to [/10.201.222.136, /10.201.143.85, cassandra-0/10.201.60.134]
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 85% 4.780KiB/s (avg: 4.780KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 85% 0.000KiB/s (avg: 4.777KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 85% 0.000KiB/s (avg: 4.776KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:1/2 85 % total: 90% 76.632KiB/s (avg: 5.027KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 95% 826.219KiB/s (avg: 5.294KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 175.651KiB/s (avg: 5.553KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.233KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.229KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.225KiB/s)
Summary statistics:
Connections per host : 1
Total files transferred : 6
Total bytes transferred : 23.024KiB
Total duration : 4408 ms
Average transfer rate : 5.223KiB/s
Peak transfer rate : 5.553KiB/s
# sstableloader -u vmsdba -pw somepassword -d cassandra-0 cassandra-2/skyfall_idm/customeruser
WARN 19:14:51,994 Small cdc volume detected at /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 1243. You can override this in cassandra.yaml
WARN 19:14:52,127 Only 9.412GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /tmp/cassandra-3.9.0-2020-12-12/cassandra-2/skyfall_idm/customeruser/mc-61-big-Data.db /tmp/cassandra-3.9.0-2020-12-12/cassandra-2/skyfall_idm/customeruser/mc-62-big-Data.db to [/10.201.222.136, /10.201.143.85, cassandra-0/10.201.60.134]
progress: [/10.201.222.136]0:0/2 0 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 57% 3.159KiB/s (avg: 3.159KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 85% 2.456MiB/s (avg: 4.736KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:1/2 85 % total: 85% 0.000KiB/s (avg: 4.731KiB/s)
progress: [/10.201.222.136]0:1/2 85 % [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:2/2 100% total: 90% 406.330KiB/s (avg: 4.995KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:1/2 85 % [cassandra-0/10.201.60.134]0:2/2 100% total: 95% 1.476MiB/s (avg: 5.260KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 628.697KiB/s (avg: 5.523KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.129KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.062KiB/s)
progress: [/10.201.222.136]0:2/2 100% [/10.201.143.85]0:2/2 100% [cassandra-0/10.201.60.134]0:2/2 100% total: 100% 0.000KiB/s (avg: 5.048KiB/s)
Summary statistics:
Connections per host : 1
Total files transferred : 6
Total bytes transferred : 23.024KiB
Total duration : 4562 ms
Average transfer rate : 5.046KiB/s
Peak transfer rate : 5.523KiB/s
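The three loads above differ only in the node number, so they can be generated by a loop. This dry-run sketch prints each sstableloader invocation instead of executing it (remove the echo, and substitute the real password, to run against a live cluster):

```shell
# Dry run: print the sstableloader command for each node backup directory.
# '<cassandra_pass>' is the placeholder from the procedure above.
for n in 0 1 2; do
  echo sstableloader -u vmsdba -pw '<cassandra_pass>' -d cassandra-0 "cassandra-$n/skyfall_idm/customeruser"
done
```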
Rebuilding Cassandra Table Indices
As a final step, rebuild the table indices by first retrieving the index names of the table and then using that information
to rebuild the indices.
-
Get the index names of the table.
cqlsh -u vmsdba -p <cassandra_pass> -e 'DESCRIBE schema;' | grep "CREATE INDEX" | grep -i <keyspace>.<table> | awk '{print $3};'
# cqlsh -u vmsdba -p somepassword -e 'DESCRIBE schema;' | grep "CREATE INDEX" | grep -i skyfall_idm.customeruser | awk '{print $3};'
customeruser_roles
customeruser_clientid_idx
customeruser_deleted_idx
customeruser_tenantidset_idx
customeruser_pwdpolicyname
-
Rebuild the indices.
nodetool rebuild_index -- <keyspace> <table> <index>
# nodetool rebuild_index -- skyfall_idm customeruser customeruser_roles
# nodetool rebuild_index -- skyfall_idm customeruser customeruser_clientid_idx
# nodetool rebuild_index -- skyfall_idm customeruser customeruser_deleted_idx
# nodetool rebuild_index -- skyfall_idm customeruser customeruser_tenantidset_idx
# nodetool rebuild_index -- skyfall_idm customeruser customeruser_pwdpolicyname
Restoring Consul Data
The following sections are provided as a general guide to one approach for restoring Consul data. For more details on performing
the restore operation, see the official documentation at https://www.consul.io/commands/kv/import.
Preparing a Consul Backup for Restoring
Use the following procedure to prepare the Consul data backup that you will be restoring from:
-
Download the backup gz file from the backup location.
-
Transfer the backup gz file to the deployer container.
-
Transfer the backup gz file to a master node.
-
Copy the Consul tar to the Consul container on the kube-master.
docker cp <consul-backup-tag>.json.gz consul:/tmp/
-
Log in to the Consul container on the kube-master.
docker exec -it consul bash
-
Unzip the Consul backup tar.
gunzip -f /tmp/<consul-backup-tag>.json.gz
-
Set the required Consul environment variables.
export CONSUL_HTTP_TOKEN=<consul_master_token>
export CONSUL_HTTP_SSL=true
export CONSUL_HTTP_SSL_VERIFY=false
Restoring Consul Data
Use the following command to restore the Consul backup that you have prepared:
consul kv import @/tmp/<consul-backup-tag>.json
The following is an example of restoring a Consul backup:
#Download the backup gz file from the backup location to installer container
aws s3 cp s3://saitestcsr-msx-bucket.platform.ciscovms.com/backup-rclone/2021-01-07T22:50:06Z-consul-backup-3.10.0-1.8.2-159.json.gz /tmp/
#Transfer the backup gz file to a master node
scp -i keys/id_rsa /tmp/2021-01-07T22:50:06Z-consul-backup-3.10.0-1.8.2-159.json.gz centos@10.20.0.9:/tmp/
#Copy consul tar to consul container
docker cp /tmp/2021-01-07T22:50:06Z-consul-backup-3.10.0-1.8.2-159.json.gz consul:/tmp/
#login to consul container
docker exec -it consul bash
#from inside consul container
gunzip -f /tmp/2021-01-07T22\:50\:06Z-consul-backup-3.10.0-1.8.2-159.json.gz
export CONSUL_HTTP_TOKEN=308f8ce8-c3f8-5719-8772-f16425635f76
export CONSUL_HTTP_SSL=true
export CONSUL_HTTP_SSL_VERIFY=false
consul kv import @/tmp/2021-01-07T22:50:06Z-consul-backup-3.10.0-1.8.2-159.json
Restoring ArangoDB
The following sections are provided as a general guide to one approach for restoring ArangoDB data. For more details on performing
the restore operation, see the official documentation at https://www.arangodb.com/docs/stable/programs-arangorestore.html.
Preparing the ArangoDB Backup for Restoring
Use the following procedure to prepare the ArangoDB backup that you will be restoring from:
-
Download the backup gz file from the backup location.
-
Transfer the backup gz file to the deployer container.
-
Transfer the backup gz file to a master node.
-
Copy the ArangoDB backup tar to the ArangoDB pod.
kubectl -n vms cp <arango backup tar>.gz <pers-arangodb-sngl-pod>:/tmp/<arango backup tar>.gz
-
Log in to the ArangoDB pod.
kubectl -n vms exec -it <pers-arangodb-sngl-pod> /bin/sh
-
Extract the ArangoDB backup tar.
/bin/tar zxf /tmp/<arangodb backup tar>.gz -C /tmp/
Restoring ArangoDB Data
Use the following command to restore the ArangoDB backup that you have prepared:
for TENANT in $(ls /tmp/ | grep ^tenant_); do /usr/bin/arangorestore --server.database $TENANT --server.endpoint http+ssl://127.0.0.1:8529 --server.username root --create-database true --server.password <arangodb_password> --input-directory /tmp/$TENANT; done
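The restore loop above keys off the tenant_ prefix, so unrelated entries in /tmp are ignored. A self-contained sandbox sketch of that selection (the tenant directory names are made up, and echo stands in for the arangorestore call):

```shell
# Only entries matching tenant_* are selected; everything else is skipped.
cd "$(mktemp -d)"
mkdir tenant_acme tenant_beta unrelated-dir
for TENANT in $(ls | grep ^tenant_); do
  echo "would restore database $TENANT from ./$TENANT"
done
```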
Post-Restore Cleanup
After you have performed your restore operation, clean up the following files:
rm -f /tmp/<arangodb backup tar>.gz
rm -rf /tmp/tenant*
The following is an example of the ArangoDB restore and cleanup operations:
#Download the backup gz file from the backup location to the installer container
aws s3 cp s3://saitestcsr-msx-bucket.platform.ciscovms.com/backup-rclone/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz /tmp/
#Transfer the backup gz file to a master node
scp -i keys/id_rsa /tmp/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz centos@10.20.0.9:/tmp/
#Copy the arangodb backup tar to the arangodb pod
kubectl -n vms cp /tmp/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz pers-arangodb-sngl-51xsckxs-bebf75:/tmp/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz
#Login into arangodb pod
kubectl -n vms exec -it pers-arangodb-sngl-51xsckxs-bebf75 /bin/sh
#Extract the arangodb tar
/bin/tar zxf /tmp/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz -C /tmp
#Restore arangodb data
for TENANT in $(ls /tmp/ | grep ^tenant_); do /usr/bin/arangorestore --server.database $TENANT --server.endpoint http+ssl://127.0.0.1:8529 --server.username root --create-database true --server.password 7hfCWS91cYxvOAr6ipsi --input-directory /tmp/$TENANT; done
#Post-restore cleanup
rm -f /tmp/20210108-000111-arangodb-backup-infra-ei-isolated-3.10.0-3.6.1-hardened-alpine3.11.6.json.gz
rm -rf /tmp/tenant*
Restoring Postgres Data
The following sections are provided as a general guide to one approach for restoring Postgres data. For more details on performing
the restore operation, see the official documentation at https://www.postgresql.org/docs/9.1/backup-dump.html#BACKUP-DUMP-RESTORE.
Preparing the Postgres Backup for Restoring
Use the following procedure to prepare the Postgres backup that you will be restoring from:
-
Download the backup gz file from the backup location.
-
Transfer the backup gz file to the deployer container.
-
Transfer the backup gz file to a master node.
-
Copy the postgres backup tar to suite-postgresql-0.
kubectl -n vms cp <postgres backup tar>.gz suite-postgresql-0:/tmp/<postgres backup tar>.gz
-
Log in to suite-postgresql-0.
kubectl -n vms exec -it suite-postgresql-0 bash
-
Extract the Postgres backup tar.
/bin/tar zxf /tmp/<postgres backup tar>.gz -C /tmp/
Restoring Postgres Data
Use the following command to restore the Postgres backup that you have prepared:
cat /tmp/suite-cryptoservice | psql
Post-Restore Cleanup
After you have performed your restore operation, clean up the following files:
rm -f /tmp/<postgres backup tar>.gz /tmp/suite-cryptoservice
The following is an example of the Postgres restore and cleanup operations:
#Download the backup gz file from the backup location to installer container
aws s3 cp s3://saitestcsr-msx-bucket.platform.ciscovms.com/backup-rclone/20210108-010117-pgsqldb-backup-9.6.tar.gz /tmp/
#Transfer the backup gz file to a master node
scp -i keys/id_rsa /tmp/20210108-010117-pgsqldb-backup-9.6.tar.gz centos@10.20.0.9:/tmp/
#Copy the postgres backup tar to the suite-postgresql-0 pod
kubectl -n vms cp /tmp/20210108-010117-pgsqldb-backup-9.6.tar.gz suite-postgresql-0:/tmp/20210108-010117-pgsqldb-backup-9.6.tar.gz
#login into suite-postgresql-0 pod
kubectl -n vms exec -it suite-postgresql-0 bash
#extract postgres tar
/bin/tar zxf /tmp/20210108-010117-pgsqldb-backup-9.6.tar.gz -C /tmp
#restore postgres data
cat /tmp/suite-cryptoservice | psql
#post restore cleanup
rm -rf /tmp/20210108-010117-pgsqldb-backup-9.6.tar.gz /tmp/suite-cryptoservice
Restoring CockroachDB Data
The following sections are provided as a general guide to one approach for restoring CockroachDB data. For more details on
performing the restore operation, see the official documentation at https://www.cockroachlabs.com/docs/v20.2/cockroach-dump#restore-a-table-from-a-backup-file.
Preparing the CockroachDB Backup for Restoring
Drop the existing database that you want to restore:
ansible -m shell -a 'kubectl -n vms exec cockroachdb-0 -c cockroachdb -- /cockroach/cockroach sql --certs-dir=/cockroach/cockroach-certs/ --execute "drop database <database_name> cascade;"' kube-master[0]
Create a new database, user, and password:
ansible-playbook cockroachdb-add-database.yml --extra-vars '{"newDatabase":{ "name": "<db_name>", "user":"<db_username>", "service":"<db_service>" }}'
Prepare the Backup to Restore from
Use the following procedure to prepare the CockroachDB backup that you will be restoring from:
-
Download the backup gz file from the backup location.
-
Transfer the backup gz file to the deployer container.
-
Transfer the backup gz file to a master node.
-
Copy CockroachDB backup tar to cockroachdb-0.
kubectl -n vms cp <cockroachdb backup tar>.gz cockroachdb-0:/tmp/<cockroachdb backup tar>.gz -c cockroachdb
-
Log in to CockroachDB.
kubectl -n vms exec -it cockroachdb-0 -c cockroachdb bash
-
Create a dump directory that will be used to extract the cockroachdb tar.
mkdir /tmp/dump
-
Extract cockroachdb backup tar.
tar xvf /tmp/<cockroachdb backup tar>.gz -C /tmp/dump
-
Unzip the database to restore.
/bin/gzip -d -f /tmp/dump/<database_tag>.sql.gz
Restoring the Cockroach Database
Use the following command to restore the CockroachDB backup that you have prepared:
/cockroach/cockroach sql --host cockroachdb-public --certs-dir=/cockroach/cockroach-certs --database=<db_name> < /tmp/dump/<database_tag>.sql
Post-Restore Cleanup
After you have performed your restore operation, clean up the following files:
rm -f /tmp/<cockroachdb backup tar>.gz
rm -rf /tmp/dump
The following is an example of the CockroachDB restore and cleanup operations:
# Drop existing database
ansible -m shell -a 'kubectl -n vms exec cockroachdb-0 -c cockroachdb -- /cockroach/cockroach sql --certs-dir=/cockroach/cockroach-certs/ --execute "drop database serviceconfigmanager cascade;"' kube-master[0]
# Create database
ansible-playbook cockroachdb-add-database.yml --extra-vars '{"newDatabase":{ "name": "serviceconfigmanager", "user":"serviceconfigmanager", "service":"serviceconfigmanager" }}'
#Download the backup gz file from the backup location to installer container
aws s3 cp s3://saitestcsr-msx-bucket.platform.ciscovms.com/backup-rclone/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz /tmp/
#Transfer the backup gz file to a master node
scp -i keys/id_rsa /tmp/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz centos@10.20.0.9:/tmp/
#Copy cockroachdb backup tar to cockroachdb pod
kubectl -n vms cp /tmp/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz cockroachdb-0:/tmp/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz -c cockroachdb
#Login to cockroachdb pod
kubectl -n vms exec -it cockroachdb-0 -c cockroachdb bash
#Create dump dir to extract cockroachdb tar
mkdir /tmp/dump
#Extract cockroachdb tar
tar xvf /tmp/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz -C /tmp/dump
#Unzip database needed to restore
/bin/gzip -d -f /tmp/dump/2021-01-08T01:30:43Z-serviceconfigmanager.sql.gz
#Restoring database
/cockroach/cockroach sql --host cockroachdb-public --certs-dir=/cockroach/cockroach-certs --database=serviceconfigmanager < /tmp/dump/2021-01-08T01:30:43Z-serviceconfigmanager.sql
#Post restore cleanup
rm -f /tmp/20210108-010100-cockroachdb-backup-3.10.0-20.2.0-159.tar.gz
rm -rf /tmp/dump
Restoring NSO
When restoring NSO, keep in mind the following:
-
The backup tarball contains the NSO cdb and streams files that will be restored.
-
The restore playbook assumes that the target NSO pod(s) do not exist.
-
After the cdb and streams are restored, the playbook does not start the target pods. Therefore, no target NSO pods exist
either before or after the restore operation completes.
-
The NSO restore playbook lets you specify which shard to restore.
Use the following procedure to restore NSO:
-
Choose a backup file from the S3 drive in the format of nso-TAG-3.x.y-yyyy-mm-dd.hh-mm-ss.tgz. For example, nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06.tgz.
-
Go to the installer node and place the file under /{msx-version}/ansible/vms-backup. For example: /msx-3.10.0/ansible/vms-backup
-
Untar the tarball using tar xzvf {nso-TAG-vms-version}-yyyy-mm-dd.hh-mm-ss.tgz. For example:
#cd /msx-3.10.0/ansible/vms-backup
#tar xzvf nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06.tgz
nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06/
nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06/ncs-data-vol-nso-manageddevice-shard0-0.tgz
nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06/ncs-streams-nso-manageddevice-shard0-0.tgz
nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06/ncs-data-vol-nso-manageddevice-shard1-0.tgz
nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06/ncs-streams-nso-manageddevice-shard1-0.tgz
To restore all the shards, run the following command:
library/ansible-playrole backup-restore/nso-restore kube-master[0] "servicepack_name: <SP_name>, backup_tag: nso-TAG-3.x.y-yyyy-mm-dd.hh-mm-ss, BR_mode: restore"
Where 'nso-TAG-3.x.y-yyyy-mm-dd.hh-mm-ss' is the backup file name without .tgz.
The following is an example of restoring all shards:
#cd /msx-3.10.0/ansible
#ls -l vms-backup
drwxr-xr-x 2 root root 4096 Dec 18 19:49 infra
drwxr-xr-x 2 root root 4096 Jan 9 01:17 nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06
#library/ansible-playrole backup-restore/nso-restore kube-master[0] "servicepack_name: manageddevice, backup_tag: nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06, BR_mode: restore"
To restore a specific shard, run the following command:
library/ansible-playrole backup-restore/nso-restore kube-master[0] "servicepack_name: <SP_name>, backup_tag: nso-TAG-3.x.y-yyyy-mm-dd.hh-mm-ss, BR_mode: restore, shardNumber: <num>"
where <num> is shard0, shard1, shard2, etc.
The following is an example of restoring shard1:
#cd /msx-3.10.0/ansible
#ls -l vms-backup
drwxr-xr-x 2 root root 4096 Dec 18 19:49 infra
drwxr-xr-x 2 root root 4096 Jan 9 01:17 nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06
#library/ansible-playrole backup-restore/nso-restore kube-master[0] "servicepack_name: manageddevice, backup_tag: nso-infra-ei-isolated-3.10.0-2021-01-09.01-17-06, BR_mode: restore, shardNumber: shard1"