Backup and Restore

About backup and restore

The backup and restore functions enable you to create backup files and, if your network configuration requires it, restore them to a different appliance.

Backup

  • You can back up automation data only or both automation and Assurance data.


    Important


    NetFlow data is not backed up when you back up Catalyst Center's automation and Assurance data.


  • Automation data consists of Catalyst Center databases, credentials, file systems, and files. The automation backup is a backup of all data.

  • The Assurance data consists of network assurance and analytics data. The first backup of Assurance data is a backup of all data. After that, backups are incremental.


    Important


    • Do not modify or delete the backup files. If you do, you might not be able to restore the backup files to Catalyst Center.

    • A backup can be restored only on a Catalyst Center cluster that has the same FIPS mode setting as the source cluster. Backup and restore operations between clusters with different FIPS mode settings fail because Catalyst Center labels the backups as incompatible.


  • Catalyst Center creates the backup files and posts them to a remote server. Each backup is stored in its own directory, named with the backup's UUID. For information about the remote server requirements, see Backup server requirements.

  • Only a single backup can be performed at a time. Performing multiple backups at once is not supported.

  • When a backup is being performed, you cannot delete the files that have been uploaded to the file service, and changes that you make to these files might not be captured by the backup process.

  • We recommend:

    • Perform a daily backup to maintain a current version of your database and files.

    • Perform a backup after making changes to your configuration. For example, when changing or creating a new policy on a device.

    • Perform a backup only during a low-impact or maintenance period.

  • You can schedule weekly backups on a specific day of the week and time.

Restore

  • You can restore the backup files from the remote server using Catalyst Center.

  • When you restore the backup files, Catalyst Center removes and replaces the existing database and files with the backup database and files. While a restore is being performed, Catalyst Center is unavailable.

  • You cannot do a backup from one version of Catalyst Center and restore it to another version of Catalyst Center. You can only restore a backup to an appliance that is running the same Catalyst Center software release with the same first four digits and the same application versions as the appliance from which the backup was taken. To view the current applications and versions, choose System > Software Management and click Currently Installed Applications.

  • Refer to these topics in the Cisco Catalyst Center Data Migration article:

    • The procedures for migration scenarios 2, 3, 5, and 8 describe how to restore a backup to a Catalyst Center appliance that has a different IP address. This situation can occur if the IP address of Catalyst Center has changed and you need to restore from an older system.


      Important


      After a backup and restore of Catalyst Center:

      • You must access the Integration Settings window and update (if necessary) the Callback URL Host Name or IP Address. For more information, see Configure integration settings.


    • The "Conversion and appliance upgrade considerations" topic describes additional points to keep in mind when restoring a backup file and upgrading to a higher-end Catalyst Center appliance.

Backup and restore event notifications

You can receive a notification whenever a backup or restore event takes place. To configure and subscribe to these notifications, complete the steps described in the "Work with event notifications" topic of the Cisco Catalyst Center Platform User Guide. When completing this procedure, ensure that you select and subscribe to the SYSTEM-BACKUP-v2 and SYSTEM-RESTORE-v2 events.

A notification is generated and sent whenever an event listed in this table occurs:

Operation: Backup

  • The process to create a backup file for your system has started.

  • A backup file could not be created for your system. This event typically happens because:

    • The necessary disk space is not available on remote storage.

    • The status of your system's server could not be retrieved, which is a precheck for the backup operation.

    • Connectivity issues or latency occurred while creating a backup file on your system.

Operation: Restore

  • The process to restore a backup file has started.

  • The restoration of a backup file failed. This event typically happens because:

    • The backup file is corrupted.

    • Connectivity issues or latency occurred while restoring the backup file to your system.

Backup server requirements

The backup server must run one of the supported operating systems:

  • Red Hat Enterprise Linux 8 or later

  • Ubuntu 16.04 or later (or a derivative such as Linux Mint)

Server requirements for automation data backup

To support automation data backups, the server must meet these requirements:

  • Must use SSH (port 22)/remote sync (rsync). Catalyst Center does not support using FTP (port 21) when performing a backup.

  • The Linux rsync utility must be installed.

  • The C.UTF-8 locale must be installed. To confirm whether C.UTF-8 is installed, enter:

    # localectl  list-locales | grep -i c.utf
    C.utf8
    en_SC.utf8
    
  • The backup user must own the destination folder for the backup or have read-write permissions for the user's group. For example, assuming the backup user is backup and the user's group is staff, these sample outputs show the required permissions for the backup directory:

    • Example 1: Backup directory is owned by backup user:

      $ ls -l  /srv/ 
      drwxr-xr-x  4 backup     root  4096 Apr 10 15:57 acme
      
    • Example 2: backup user's group has required permissions:

      $ ls -l  /srv/ 
      drwxrwxr-x. 7 root   staff  4096 Jul 24  2017 acme
      
  • The SFTP subsystem must be enabled. The SFTP subsystem path depends on which Ubuntu or Red Hat release is installed. For the latest releases, the following line must be present and uncommented in the sshd configuration:

    • Ubuntu-based Linux: Subsystem sftp /usr/lib/openssh/sftp-server

    • Red Hat-based Linux: Subsystem sftp /usr/libexec/openssh/sftp-server

    The file where you need to uncomment the preceding line is usually located in /etc/ssh/sshd_config.
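You can confirm that the subsystem line is active with a quick grep. This sketch checks a sample line written to a temporary file so it runs anywhere; on the real backup server, point sshd_config at /etc/ssh/sshd_config instead:

```shell
# Verify that an sshd configuration has the sftp subsystem uncommented.
# A temporary sample file stands in for /etc/ssh/sshd_config here.
sshd_config=$(mktemp)
echo 'Subsystem sftp /usr/libexec/openssh/sftp-server' > "$sshd_config"
if grep -Eq '^[[:space:]]*Subsystem[[:space:]]+sftp' "$sshd_config"; then
  echo "sftp subsystem enabled"
else
  echo "sftp subsystem missing or commented out"
fi
rm -f "$sshd_config"
```

If you uncomment the line, restart the SSH daemon afterward (for example, sudo systemctl restart sshd on Red Hat-based systems).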


Note


You cannot use an NFS-mounted directory as the Catalyst Center backup server directory. A cascaded NFS mount adds a layer of latency and is therefore not supported.


Server requirements for Assurance backup

To support Assurance data backups, the server can be a Linux- or Windows-based NFS server, or an NFS provided by a storage vendor (such as NetApp, Isilon, and so on) that meets these requirements:

  • Support NFS v4 and NFS v3. (To verify this support, from the server, enter nfsstat -s.)

  • Have read and write permissions on the NFS export directory.

  • Have a stable network connection between Catalyst Center and the NFS server.

  • Have sufficient network speed between Catalyst Center and the NFS server.

  • Have the C.UTF-8 locale installed. To confirm whether C.UTF-8 is installed, enter:

    # localectl  list-locales | grep -i c.utf
    C.utf8
    en_SC.utf8
    

Note


You cannot use an NFS-mounted directory as the Catalyst Center backup server directory. A cascaded NFS mount adds a layer of latency and is therefore not supported.


Requirements for multiple Catalyst Center deployments

If your network includes multiple Catalyst Center clusters, you cannot use the same backup location for each cluster's automation and Assurance backups. For multiple Catalyst Center deployments, the best practice is to separate the backup directory structure for each Catalyst Center cluster. This example configuration shows how to separate your backup directory structure.

Resource: Catalyst Center clusters

  Example configuration: cluster1 and cluster2.

Resource: Backup server hosting automation and Assurance backups

  Example configuration: the directory is /data/, which has ample space to host both types of backups.

Resource: Directory ownership and permissions (automation)

  Example configuration: see "Server requirements for automation data backup," earlier in this section.

Resource: Directory ownership and permissions (Assurance)

  Example configuration: see "Server requirements for Assurance backup," earlier in this section.

Resource: NFS export configuration

  Example configuration: the content of the /etc/exports file:

/data/assurance/cluster1 *(rw,sync,no_subtree_check,all_squash)
/data/assurance/cluster2 *(rw,sync,no_subtree_check,all_squash)
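If you script the backup server setup, you can generate the per-cluster export lines instead of typing them; a minimal sketch (directory names match the example above; append the output to /etc/exports on the backup server and apply it with sudo exportfs -rv):

```shell
# Print one NFS export line per cluster, matching the example
# /etc/exports entries above.
for cluster in cluster1 cluster2; do
  echo "/data/assurance/$cluster *(rw,sync,no_subtree_check,all_squash)"
done
```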

Backup server directory layout

To simplify backups, we recommend that you use this directory layout for your backup server:

Single Catalyst Center cluster deployment

  • Full backup (Automation and Assurance):

    • cluster1: /data/automation/cluster1

    • cluster1: /data/assurance/cluster1

  • Automation-only backup:

    cluster1: /data/automation/cluster1

Multiple Catalyst Center cluster deployment

  • Full backup (Automation and Assurance):

    • cluster1: /data/automation/cluster1

    • cluster1: /data/assurance/cluster1

    • cluster2: /data/automation/cluster2

    • cluster2: /data/assurance/cluster2

  • Automation-only backup:

    • cluster1: /data/automation/cluster1

    • cluster2: /data/automation/cluster2
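The recommended layout can be created in one pass; a sketch that uses /tmp/backup-demo as a stand-in for the real /data directory on the backup server:

```shell
# Create the per-cluster automation and Assurance directories.
# /tmp/backup-demo stands in for /data on an actual backup server.
base=/tmp/backup-demo
for kind in automation assurance; do
  for cluster in cluster1 cluster2; do
    mkdir -p "$base/$kind/$cluster"
  done
done
find "$base" -mindepth 2 -type d | sort
```

Remember to set the ownership and permissions described earlier in this section on the directories you create.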

Backup storage requirements

Catalyst Center stores backup copies of Assurance data on an external NFS location and automation data on an external rsync location. You must allocate enough external storage for your backups to cover the required retention. We recommend this storage:

Machine profile: medium (alias: medium)

  • Cisco part numbers:

    • Second-generation appliance: DN2-HW-APL, DN2-HW-APL-U (promotional)

    • Third-generation appliance: DN3-HW-APL

  • NFS storage (14 days incremental): 1.7 TB

  • Automation storage on rsync server (daily full): 50 GB

Machine profile: t2_large (alias: large)

  • Cisco part numbers:

    • Second-generation appliance: DN2-HW-APL-L, DN2-HW-APL-L-U (promotional)

    • Third-generation appliance: DN3-HW-APL-L

  • NFS storage (14 days incremental): 3 TB

  • Automation storage on rsync server (daily full): 100 GB

Machine profile: t2_2xlarge (alias: extra large)

  • Cisco part numbers:

    • Second-generation appliance: DN2-HW-APL-XL, DN2-HW-APL-XL-U (promotional)

    • Third-generation appliance: DN3-HW-APL-XL

  • NFS storage (14 days incremental): 8.4 TB

  • Automation storage on rsync server (daily full): 300 GB

Bandwidth and latency requirements

Catalyst Center requires specific bandwidth and latency when backing up to a remote server. Use these tables to estimate the time needed to transfer a backup for each appliance profile, excluding the time taken to generate the backup. The estimates are for an initial backup of all data; subsequent incremental backups take less time.

To check the NFS mount write speed, enter magctl disk check -d /data/nfs -c 4096 in the Catalyst Center shell. If you have a three-node HA cluster, enter the command on each node after you configure NFS in Catalyst Center.
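If you want a comparable number from a host that does not have the magctl utility (for example, another Linux client that mounts the same export), a dd-based write test is a rough substitute; a sketch, with /tmp standing in for the NFS mount point:

```shell
# Rough sequential write-speed check. Point "target" at the mounted NFS
# directory in practice; /tmp is used here so the example runs anywhere.
target=/tmp
dd if=/dev/zero of="$target/nfs-speed-test" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$target/nfs-speed-test"
```

The final dd line reports the effective write speed; conv=fdatasync forces the data to disk so the figure reflects more than page-cache speed.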

Automation-only backup

Appliance profile   Rsync database size   Bandwidth   Transfer time
Medium              50 GB                 1 Gbps      7 minutes
Medium              50 GB                 10 Gbps     41 seconds
Large               100 GB                1 Gbps      14 minutes
Large               100 GB                10 Gbps     2 minutes
Extra large         300 GB                1 Gbps      41 minutes
Extra large         300 GB                10 Gbps     4 minutes

All data backup

Appliance profile   All data backup size   NFS mount write speed   Bandwidth   Transfer time
Medium              1.75 TB                100–125 MB/s            1 Gbps      4 hours
Medium              1.75 TB                1000–1250 MB/s          10 Gbps     25 minutes
Large               3.1 TB                 100–125 MB/s            1 Gbps      7 hours
Large               3.1 TB                 1000–1250 MB/s          10 Gbps     43 minutes
Extra large         8.69 TB                100–125 MB/s            1 Gbps      20 hours
Extra large         8.69 TB                1000–1250 MB/s          10 Gbps     2 hours
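The transfer times in these tables follow directly from size divided by bandwidth. This sketch reproduces the estimates (sizes in GB, bandwidth in Gbps); real transfers add protocol overhead and are limited by disk write speed, so treat the result as a lower bound:

```shell
# Transfer time in seconds = (size in GB x 8 bits per byte) / bandwidth in Gbps.
estimate_seconds() {
  awk -v gb="$1" -v gbps="$2" 'BEGIN { printf "%d\n", (gb * 8) / gbps }'
}
estimate_seconds 50 1     # medium profile over 1 Gbps: 400 seconds, roughly 7 minutes
estimate_seconds 300 10   # extra large over 10 Gbps: 240 seconds, roughly 4 minutes
```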

Example of NFS server configuration—Ubuntu

The remote share for backing up an Assurance database (NDP) must be an NFS share. If you need to configure an NFS server, use this procedure (Ubuntu distribution) as an example.

Procedure


Step 1

Enter the sudo apt-get update command to update the Advanced Package Tool (APT) package index on the NFS server.

For example, enter a similar command:
$ sudo apt-get update 

Step 2

Enter the sudo apt-get install command to install the NFS server package.

For example, enter a similar command:
$ sudo apt-get install -y nfs-kernel-server 

Step 3

Enter the sudo systemctl enable --now nfs-server command to enable and start the NFS server.

Step 4

Enter the sudo mkdir -p command to create nested directories for the NFS server.

For example, enter a similar command:
$ sudo mkdir -p /var/nfsshare/ 

Step 5

Enter the sudo chown nobody:nogroup command to change the directory's owner and group to nobody and nogroup.

For example, enter a similar command:
$ sudo chown nobody:nogroup /var/nfsshare 

Step 6

Enter the sudo vi /etc/exports command to add the following line to the end of /etc/exports:

$ sudo vi /etc/exports 
/var/nfsshare *(rw,all_squash,sync,no_subtree_check) 

Step 7

Enter the sudo exportfs -rv command to export the file systems for the NFS server.

For example, enter a similar command:
$ sudo exportfs -rv 

Example of NFS server configuration—Red Hat

This procedure shows an example NFS server configuration for Red Hat.

Procedure


Step 1

Enter the sudo yum check-update command to refresh the Yellowdog Updater, Modified (YUM) package index on the NFS server.

For example, enter a similar command:
$ sudo yum check-update 

Step 2

Enter the sudo yum install command to install the NFS utilities.

For example, enter a similar command:
$ sudo yum install -y nfs-utils 

Step 3

Enter the sudo systemctl enable --now nfs-server command to enable and start the NFS server.

Step 4

Enter the sudo mkdir -p command to create nested directories for the NFS server.

For example, enter a similar command:
$ sudo mkdir -p <your_NFS_directory> 

Step 5

Enter the sudo chown nobody:nobody command to change the directory's owner and group to nobody. (Red Hat-based distributions use the nobody group rather than Debian's nogroup.)

For example, enter a similar command:
$ sudo chown nobody:nobody /var/nfsshare 

Step 6

Enter the sudo vi /etc/exports command to add this line to the end of /etc/exports:

$ sudo vi /etc/exports 
/var/nfsshare *(rw,all_squash,sync,no_subtree_check) 

Step 7

Enter the sudo exportfs -rv command to export the file systems for the NFS server.

For example, enter a similar command:
$ sudo exportfs -rv 

Configure firewall rules to allow NFS

By default, the firewall is disabled on Debian/Ubuntu distributions but enabled on Red Hat distributions. On Debian/Ubuntu distributions, check whether the firewall is enabled, and if it is, add firewall rules.

Configure firewall rules—Debian/Ubuntu

For Debian/Ubuntu, use this sample configuration as an example. Refer to your Linux vendor and version for details.

Procedure


Step 1

Enter this command to check whether the firewall is enabled or disabled:

$ sudo ufw status

If the firewall is disabled, the output shows:

Status: inactive

If the firewall is enabled, the output shows:

Status: active

Step 2

If the firewall is enabled, set a static port for the mountd process to simplify firewall rule creation. To set the static port, add --port 32767 to the RPCMOUNTDOPTS line in /etc/default/nfs-kernel-server:

RPCMOUNTDOPTS="--manage-gids  --port 32767"

Step 3

Enter these commands to add firewall rules to allow NFS:

sudo ufw allow portmapper
sudo ufw allow nfs
sudo ufw allow mountd

Configure firewall rules—Red Hat

For Red Hat, use this sample configuration as an example. Refer to your Linux vendor and version for details.

Procedure


Step 1

Add the mountd port to services and to nfs.conf.

Note

 

Red Hat-based distributions use a different port for mountd than Debian-based distributions. Red Hat distributions use port 20048 for mountd in the /etc/services file.

Add these lines to /etc/nfs.conf if they don't exist:

[mountd]
manage-gids = 1
port = 20048

Step 2

Enter this command to restart the NFS services:

sudo systemctl restart nfs-server rpcbind nfs-mountd

Step 3

Enter these commands to add firewall rules to allow NFS:

sudo firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
sudo firewall-cmd --reload

Configure backup servers

If you plan to back up automation data only, you need to configure the Catalyst Center automation backup server. If you plan to back up both automation and Assurance data, you need to configure the Catalyst Center automation backup server and the NFS backup server.

This procedure shows you how to set up both servers.

Before you begin

Make sure these requirements have been met:

  • Only a user with SUPER-ADMIN-ROLE permissions can perform this procedure. For more information, see About user roles.

  • The server that you plan to use for data backups must meet the requirements described in Backup server requirements.

Procedure


Step 1

From the main menu, choose System > Backup & Restore > Configure.

Step 2

To configure the automation backup server:

  1. Define these settings:

    • SSH IP Address: IP address of the remote server that you can SSH into.

    • SSH Port: Port on the remote server that you can SSH into.

    • Server Path: Path to the folder on the server where the backup files are saved.

    • Username: Username used to protect the encrypted backup.

    • Password: Password used to protect the encrypted backup.

    • Encryption Passphrase: Passphrase used to encrypt the security-sensitive components of the backup, such as certificates and credentials. This passphrase is required, and you are prompted for it when restoring the backup files. Without this passphrase, backup files are not restored.

  2. Click Apply.

Step 3

To configure the NFS backup server, click the NFS tab then:

  1. Define these settings:

    • Host: IP address or host name of the NFS server.

    • Server Path: Path to the folder on the NFS server where the backup files are saved.

  2. Click Apply.


Back up data now

You can choose to back up one of these data sets:

  • Automation data only

  • Both automation and Assurance data

When you perform a backup, Catalyst Center copies and exports the data to the location on the remote server that you configured.


Note


Data is backed up using SSH/rsync. Catalyst Center does not support using FTP (port 21) when performing a backup.


Before you begin

Make sure that these requirements are met:

Procedure


Step 1

From the main menu, choose System > Backup & Restore > Backups.

Note

 

If you have not yet configured a backup server, Catalyst Center requires that you configure one before continuing. Click Configure Settings and see Configure backup servers.

Step 2

Click Add.

The Create Backup pane opens.

Step 3

In the Backup Name field, enter a unique name for the backup.

Step 4

Click Create now to perform the backup immediately.

Step 5

Define the scope of the backup:

  • Click Cisco Catalyst Center (All data) to back up automation and Assurance data.
  • Click Cisco Catalyst Center (without Assurance data) to back up only automation data.

Step 6

Click Create.

Note

 

You can view the current backup status and the history of previous backups in the Activity tab.

You can create a new backup only when there is no backup job in progress.

You can view the successfully completed backup jobs in the Backup tab.

During the backup process, Catalyst Center creates the backup database and files. The backup files are saved to the specified location on the remote server. You are not limited to a single set of backup files, but can create multiple backup files that are identified with their unique names. You receive a Backup done! notification when the process is finished.

Note

 

If the backup process fails, there is no impact to the appliance or its database. Catalyst Center displays an error message stating the cause of the backup failure. The most common reason for a failed backup is insufficient disk space. If your backup process fails, make sure that there is sufficient disk space on the remote server and attempt another backup.


Schedule data backups

You can schedule recurring backups and define the day of the week and the time of day when they will occur.

Before you begin

Make sure these requirements have been met:

Procedure


Step 1

From the main menu, choose System > Backup & Restore > Schedule.

Step 2

Click Add.

Step 3

In the Backup Name field, enter a unique name for the backup.

Step 4

Click Schedule weekly.

Choose the days and time for scheduling the backup.

Step 5

Define the scope of the backup:

  • Click Cisco Catalyst Center (All data) to back up automation and Assurance data.
  • Click Cisco Catalyst Center (without Assurance data) to back up automation data only.

Step 6

Click Schedule.

Note

 

You can view the scheduled backup jobs in the Schedule tab. After the backup starts, you can view backup status in the Activity tab.

You can create a new backup only when there is no backup job in progress.

You can view the successfully completed backup jobs in the Backup tab.

During the backup process, Catalyst Center creates the backup database and files. The backup files are saved to the specified location on the remote server. You are not limited to a single set of backup files, but can create multiple backup files that are identified with their unique names. You receive a Backup done! notification when the process is finished.

Note

 

If the backup process fails, there is no impact to the appliance or its database. Catalyst Center displays an error message stating the cause of the backup failure. The most common reason for a failed backup is insufficient disk space. If your backup process fails, make sure that there is sufficient disk space on the remote server and attempt another backup.


Restore data from backups

When you restore data from a backup file, Catalyst Center removes and replaces the existing database and files with the backup database and files. The data that is restored depends on what is on the backup:

  • Automation data backup: Catalyst Center restores the full automation data.

  • Automation and Assurance data backup: Catalyst Center restores the full automation data and the Assurance data as far back as the date that you choose.


Caution


The Catalyst Center restore process only restores the database and files. The restore process does not restore your network state and any changes made since the last backup, including any new or updated network policies, passwords, certificates, or trusted certificates bundle.



Note


  • You cannot do a backup from one version of Catalyst Center and restore it to another version of Catalyst Center. You can only restore a backup to an appliance that is running the same Catalyst Center software release with the same first four digits and the same application versions as the appliance from which the backup was taken. To view the current applications and versions, choose System > Software Management and click Currently Installed Applications.

  • If multiple clusters share the same Cisco AI Network Analytics configuration and are active at the same time, restoring a backup that includes the AI Network Analytics configuration on a different Catalyst Center cluster might result in data inconsistency and service disruption.

    Therefore, the AI Network Analytics configuration must be active on a single cluster. To uninstall the AI Network Analytics package from any inactive cluster, choose System > Software Management > Currently Installed Applications > AI Network Analytics > Uninstall.


Before you begin

Make sure these requirements have been met:

  • Only a user with SUPER-ADMIN-ROLE permissions can perform this procedure. For more information, see About user roles.

  • You have backups from which to restore data.

When you restore data, Catalyst Center enters maintenance mode and is unavailable until the restore process is done. Make sure you restore data at a time when Catalyst Center can be unavailable.

If you restore from a backup (on either the Cisco ISE or Catalyst Center side), Group-Based Access Control policy data does not synchronize automatically. You must run the policy migration operation manually to ensure that Cisco ISE and Catalyst Center are synchronized.

Procedure


Step 1

From the main menu, choose System > Backup & Restore.

The Backup & Restore window displays the following tabs: Backups, Schedule, Activity, and Configure.

If you already successfully created a backup on a remote server, it appears in the Backups tab.

Step 2

In the Backup Name column, locate the backup that you want to restore.

Step 3

In the Actions column, choose Restore.

During the restore process, Catalyst Center goes into maintenance mode. Wait until Catalyst Center exits maintenance mode before proceeding.

Step 4

Click the Backups tab to view the results of a successful restore.


Set up a file share

A few use cases require a file share:

  • completing backup and restore operations

  • installing Catalyst Center remotely

  • creating a repository to store RCA log bundles

The topics in this section describe how to set up both Linux- and Windows-based NFS.

Configure a Linux-based NFS file share

This section describes how to configure an NFS file share in Ubuntu and Red Hat Linux distributions.


Note


If you are configuring a VMware VM, we recommend that you use the vmxnet driver, which provides 10G support and low overhead.


Configure a file share in an Ubuntu distribution

To configure a file share in an Ubuntu distribution, complete the steps that are detailed here.
Procedure

Step 1

Install the NFS package: apt-get install -y nfs-kernel-server

Step 2

Enable and start the NFS service:

systemctl enable nfs-server
systemctl start nfs-server

Step 3

Verify that the NFS service is enabled and has started (this should happen by default): systemctl status nfs-kernel-server

The resulting output should resemble this example:

nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2020-03-22 15:35:12 UTC; 18min ago
Main PID: 19253 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/nfs-server.service
Mar 22 15:35:12 k8smaster systemd[1]: Starting NFS server and services...
Mar 22 15:35:12 k8smaster systemd[1]: Started NFS server and services.

Step 4

Configure NFS export by completing these tasks.

  1. Set up the NFS file share directory:

    sudo mkdir -p /srv/nfs
    sudo chmod 755 -R /srv/nfs/
    sudo chown -R nobody:nogroup /srv/nfs/
  2. Add the NFS file share entry to the /etc/exports file.

    • Open the file: sudo vim /etc/exports

    • Add this line: /srv/nfs *(rw,sync,no_subtree_check,all_squash)

  3. Export the NFS file share: sudo exportfs -rv

    You should see this message: exporting *:/srv/nfs

  4. Verify the NFS export.

    To check the availability of NFS file share, enter the showmount -e NFS-server-IP-address command from a different Linux machine. The resulting output should resemble this example.

    Export list for NFS-server-IP-address:
    /srv/nfs *

Step 5

Configure the firewall rules to allow NFS.

  1. Confirm whether a firewall is enabled (its status is active): sudo ufw status

    By default, a firewall is disabled in Ubuntu/Debian distributions. If necessary, enable a firewall before completing this step.

  2. Set the static port for mountd:

    • Open the /etc/default/nfs-kernel-server file.

    • Find the RPCMOUNTDOPTS="--manage-gids line.

    • Add a space and --port 32767 to the end of this line.

  3. Add the mountd port by adding these lines to the /etc/services file:

    mountd 32767/tcp
    mountd 32767/udp
  4. Restart NFS services: sudo systemctl restart nfs-kernel-server nfs-mountd portmap

  5. Add the firewall rules to allow NFS:

    sudo ufw allow portmapper
    sudo ufw allow nfs
    sudo ufw allow mountd

Configure a file share in a Red Hat or CentOS distribution

To configure a file share in a Red Hat or CentOS distribution, complete the steps that are detailed here.
Procedure

Step 1

Install the NFS package: yum install -y nfs-utils

Step 2

Enable and start the NFS service:

systemctl enable nfs-server
systemctl start nfs-server

Step 3

Check the service's status: systemctl status nfs-server

The resulting output will resemble this example.

nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Sun 2020-03-22 12:14:30 EDT; 2s ago
Process: 10418 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 10404 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
Process: 10402 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 10418 (code=exited, status=0/SUCCESS)
Mar 22 12:14:30 cene8.ecrt.local systemd[1]: Starting NFS server and services...
Mar 22 12:14:30 cene8.ecrt.local systemd[1]: Started NFS server and services.

Step 4

Configure the NFS export by completing these tasks.

  1. Set up the NFS file share directory:

    sudo mkdir -p /srv/nfs
    sudo chmod 755 -R /srv/nfs/
    sudo chown -R nobody:nobody /srv/nfs/
  2. Add the NFS file share entry to the /etc/exports file.

    • Open the file: sudo vim /etc/exports

    • Add this line: /srv/nfs *(rw,sync,no_subtree_check,all_squash)

  3. Export the NFS file share: sudo exportfs -rv

    You should see this message: exporting *:/srv/nfs

  4. Verify the NFS export.

    To check the availability of NFS file share, enter the showmount -e NFS-server-IP-address command from a different Linux machine. The resulting output should resemble this example.

    Export list for NFS-server-IP-address:
    /srv/nfs *
  5. Check whether a mountd port has been configured in the nfs.conf file: grep -A2 mountd /etc/nfs.conf

    If the resulting output looks like this, it indicates that you need to configure the mountd port.

    #[mountd]
    # debug=0
    # manage-gids=n 

    If this is the case, add these lines to the nfs.conf file:

    [mountd]
    manage-gids = 1
    port = 20048
  6. Restart NFS services: sudo systemctl restart nfs-server rpcbind nfs-mountd

    By default, a firewall (managed by the firewalld service) is enabled in Red Hat/CentOS version 7 and later (earlier versions use iptables). While the firewall is blocking NFS, clients cannot access the file share. To confirm, enter the showmount -e NFS-server-IP-address command from an external client. Until you add the firewall rules, the output resembles this example.

    clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

Step 5

Add the firewall rules to allow NFS:

firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
firewall-cmd --reload

Step 6

Configure SELinux for NFS.

Note

 

SELinux is enabled by default in Red Hat and CentOS distributions.

  1. Enable NFS read/write boolean options.

    • To enable read-only NFS exports, enter the setsebool -P nfs_export_all_ro 1 command.

    • To enable read/write NFS exports, enter these commands:

      setsebool nfsd_anon_write on
      setsebool -P nfs_export_all_rw 0

  2. Verify that the booleans are set:

    getsebool -a | grep nfs_expo
    nfs_export_all_ro --> on
    nfs_export_all_rw --> off
  3. Set the SELinux context on the NFS directory:

    semanage fcontext -a -t public_content_rw_t "/srv/nfs(/.*)?"
    restorecon -Rv /srv/nfs/

Configure a Windows-based NFS file share

Complete these steps to configure a Windows-based NFS file share for your Catalyst Center deployment.

Procedure


Step 1

Start Server Manager.

Step 2

Install the NFS service:

  1. Choose Manage > Add Roles and Features to start the Add Roles and Features wizard.

  2. Click Next three times to skip the Before You Begin, Installation Type, and Server Selection wizard screens.

  3. In the Server Roles wizard screen, check the Server for NFS check box, and then click Next.

  4. In the Features wizard screen, check the Services for Network File System Management check box, and then click Next.

  5. In the Confirmation wizard screen, verify that the options you selected are listed.

  6. Click Install.

Step 3

Start the New Share wizard:

  1. In Server Manager's navigation pane, click File and Storage Services.

    The File and Storage Services page opens.

  2. In this page's navigation pane, click Shares.

  3. Click Tasks, then choose New Share.

    The New Share wizard opens.

Step 4

Complete the New Share wizard:

  1. In the Select the profile for this share wizard screen, click the NFS Share - Advanced profile, and then click Next.

  2. In the Select the server and path for this share wizard screen, specify where the file share will reside, and then click Next.

    • If you want the share to reside on a dedicated disk or partition within a folder in the Shares directory:

      1. Click the Select by volume radio button.

      2. Click the disk or partition you want to use.

    • If you want to navigate to the location where the share will reside:

      1. Click the Type a custom path radio button.

      2. Click the text field to open the Select Folder dialog box.

      3. Navigate to the folder where you want the share to reside.

      4. Click Select Folder.

  3. In the Specify share name wizard screen, enter the share's name and then click Next.

  4. In the Specify authentication methods wizard screen, choose these options and then click Next:

    • No server authentication (AUTH/SYS)

    • Enable unmapped user access

    • Allow unmapped user access by UID/GID

  5. In the Specify the share permissions wizard screen, click Add.

  6. Complete these tasks in the Add Permissions dialog box, then click Next:

    1. Click Add.

    2. Set the host, client group, or netgroup that can access this share.

    3. In the Share permissions drop-down list, choose Read/Write.

    4. Leave the Allow root access (not recommended) check box unchecked.

  7. Complete these tasks in the Permissions wizard screen, then click Next:

    1. Click Customize permissions to open the Advanced Security Settings for share-name dialog box.

    2. Click Add to open the Permission Entry for share-name dialog box.

    3. Click the Select a principal link to open the Select User, Computer, Service Account, or Group dialog box.

    4. In the last text box, enter Everyone and then click Check Names. When Everyone is displayed, click OK.

    5. In the Basic permissions area, select all available options (including Full control) and then click OK.

    6. Repeat the previous two steps for the Anonymous LOGON user.

  8. Click Next twice to skip the Management Properties and Quota wizard screens.

  9. In the Confirmation wizard screen, click Create.

Step 5

Secure the Windows NFS server:

  1. In Server Manager's main menu, choose Tools > Windows Firewall with Advanced Security.

  2. In the Inbound Rules table, there are two Server for NFS services (NFS-TCP-in and NFS-UDP-in). Complete these tasks for both services:

    1. Double-click a service to open its Properties dialog box.

    2. Click the Scope tab.

    3. In the Remote IP address section, restrict host access by entering the IP addresses configured for Catalyst Center (including the VIP for the NIC that interfaces with the Windows NFS server).

    4. Click OK.