The following topics provide procedures for setting up, installing, and maintaining the gateway geographical redundancy solution. Geographical redundancy is configured and monitored using Oracle Active Data Guard (ADG). This chapter also explains how to install Prime Network Operations Reports and the Prime Network Integration Layer (PN-IL) with gateway geographical redundancy.
Note Gateway high availability is supported only when the gateway software, Oracle database, and Infobright database (applicable for Operations Reports) are installed on the same server.
This chapter covers the following topics:
Before proceeding with this chapter, make sure you have read Geographical Redundancy Functional Overview.
Table 4-1 lists the steps you must follow to prepare for, perform, and verify an installation of the Prime Network gateway geographical redundancy solution. The standby node (P2) is relevant only if you are installing geographical + local redundancy. An x means you must perform the step on that server.
Note The steps in the following table are based on these assumptions:
Table 4-1 columns indicate whether each step applies to the primary node (P1), the standby node (P2), or both.

1. Collect the server details so that you have all information handy prior to installation.
2. Configure the server hardware. Note: If your setup contains a primary and a remote site, make sure the remote site is a replica of the primary site.
3. Install RHEL and all recommended patches on the servers.
4. Install the RPMs required on Red Hat for Prime Network. If you are installing Operations Reports, be sure to check this section.
5. Configure disk groups, volumes, and partitions. If you are installing Operations Reports, be sure to check the required volume sizes.
6. Mount the installation files (in the same directory on both nodes).
7. Verify that all nodes are ready for installation by checking disk access, Linux versions, and NTP synchronization.
8. Mount the external shared storage, Oracle, and Prime Network mount points on the relevant directories.
9. Back up the /etc/hosts file and the root cron jobs (the installation software will modify them).
10. (Local + geographical) For the cluster nodes, make sure the specified services are configured to start automatically each time the machine is rebooted.
11. Install the server and Oracle database using install_prime_HA.pl. See Installing the Prime Network Gateway Geographical Redundancy Software.
12. Configure the embedded database (using the add_emdb_storage.pl -ha script).
13. If desired, install any new device packages so that you have the latest device support.
14. Install and configure PN-IL. See Installing and Configuring PN-IL for Local + Geographical Redundancy.
15. (Optional) Set up the RHCS web GUI if it was not configured during installation.
16. (Local + geographical HA only; optional) Set up the RHCS web GUI if it was not configured during installation.
These topics list the prerequisites for installing gateway geographical redundancy:
Table 4-2 shows the core system requirements for geographical redundancy. All the hardware and software requirements are also applicable for virtual machines. Geographical redundancy requires a Prime Network embedded database and does not support IPv6 gateways or databases. If your high availability deployment differs from these requirements, please contact your Cisco account representative for assistance with the planning and installation of high availability.
Note Geographical redundancy for PN-IL is only supported if the local redundancy solution is also installed.
If you are installing both local and geographical redundancy, for the local redundancy site, refer to the requirements in Hardware and Software Requirements for Local Redundancy.
– Operating system: Red Hat 5.8 or Red Hat 6.5, 64-bit Server Edition (English language).
– Database: Oracle 12c, included in the Prime Network embedded database installation.
– Hardware: An RHEL 5.8 or RHEL 6.5 certified platform. For recommended hardware for small, medium, and large networks, see the Cisco Prime Network 4.3 Installation Guide.
– Gateway IP addresses: Note If you are using the network-conf script, when you are prompted for the IP address of units, use the floating IP address of the gateway. If for some reason the necessary IP addresses are not updated after a switchover or failover, you can set them manually (which includes setting the necessary LDAP parameters). See Changing the Gateway IP Address on a Gateway and All Units (changeSite.pl). For more information on using LDAP for user authentication, see Using an External LDAP Server for Password Authentication in the Cisco Prime Network 4.3 Administrator Guide.
– Storage: Based on requirements determined by the Cisco Prime Network Capacity Planning Guide. To obtain a copy of the Capacity Planning Guide, contact your Cisco representative. Geographically redundant storage should have the same capacity and mount points as the local site.
– The rsync utility must be installed on all servers that are part of the geographically redundant solution.
– The scp program must be installed on all servers that are part of the geographically redundant solution.

Virtual machine and bare-metal requirements for hard disk, memory, and processor are the same. Refer to the Cisco Prime Network 4.3 Installation Guide for memory and processor requirements.
In addition to the ports listed in the Cisco Prime Network 4.3 Installation Guide , the following ports must be free.
You can check the status of the listed ports by executing the following command:
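For example, a port can be probed with a small bash sketch using the shell's built-in /dev/tcp redirection. The port numbers below are placeholders, not the actual list for your release; substitute the ports named in this section.

```shell
# Probe whether a TCP port on this host already has a listener.
# Returns 0 (in use) if a connection succeeds, nonzero otherwise.
port_in_use() {
  (echo > "/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Placeholder ports; replace with the ports listed for your release.
for port in 1101 9490; do
  if port_in_use "$port"; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done
```

A port reported as "in use" must be released before the installation can proceed.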
There are a number of preinstallation steps you need to perform before you install the geographical redundancy solution. These steps are similar to those for local redundancy, except that you perform them on the primary server (P1) and the remote DR server (S1). These steps include the following:
Extra steps are included if you are using both geographical and local redundancy. The preparation procedures are in Table 4-1. Some procedures will refer you to the instructions for local redundancy; this is because the steps are identical but are performed on the primary node (P1) and the remote DR node (S1) instead of the primary and secondary cluster nodes (P1 and P2).
The geographical redundancy solution uses a remote site that contains a single server that provides failover in case of a failure at the primary site. It is installed using the install_prime_HA.pl script, which is available in the RH_ha.zip file on the installation DVD, as described in Installation DVDs.
You can use this procedure to:
You can run the installation in interactive or non-interactive mode. An interactive installation prompts you to enter the gateway HA data values one at a time; the Prime Network installer then records those values in the auto_install_RH.ini file template, which install_Prime_HA.pl can reuse for a non-interactive run.
Note It is recommended you run the installation in interactive mode first to populate the auto_install_RH.ini template with the user input. This gives you the ability to verify the input and run the installation again in non-interactive mode, if needed.
Alternatively, you can enter all the installation values in the auto_install_RH.ini template, located in the RH_ha directory, then run the installation in non-interactive mode. The installation mode is determined by the presence or absence of the -autoconf flag.
Note The geographic redundancy configuration takes time. Depending on the speed of the local and remote site connection and size of the database, the configuration can take several hours.
To set up and configure the geographical redundancy site:
Step 1 Change to the root user, then unzip the RH_ha.zip file located on the installation DVD into /tmp. The RH_ha files must be extracted into the /tmp/RH_ha directory.
Note If you are running the Korn shell (/bin/ksh) and the prompt is the hash tag (#), the installation will fail. Run the installation script using bash.
Step 2 From the /tmp/RH_ha directory, run the install_Prime_HA.pl in interactive or non-interactive mode.
Step 3 Depending on whether you want to configure geographical + local redundancy or geographical redundancy only, do one of the following for the prompts shown in Table 4-4 or Table 4-5 :
Step 4 Execute the install_Prime_HA.pl script in interactive or non-interactive mode.
For interactive installation, execute the following commands:
See Table 4-4 or Table 4-5 for descriptions of parameters you will be asked to enter at various stages of the interactive installation.
a. Edit the auto_install_RH.ini file template found under the RH_ha directory with all of the installation details.
Note To prevent a security violation, it is highly recommended that you remove the passwords from the auto_install_RH.ini file after a successful installation.
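The cleanup recommended in the note can be sketched as follows. The key name in the demo file is illustrative, not the actual template key; match the sed pattern to the password keys actually present in your auto_install_RH.ini.

```shell
# Blank out the values of password-like keys in an ini file.
scrub_passwords() {
  sed -i -E 's/^([^=#]*([Pp]assword|PASS)[^=]*)=.*/\1=/' "$1"
}

# Demo on a throwaway copy (key names are hypothetical).
cat > /tmp/auto_install_RH.demo.ini <<'EOF'
NODE_ONE_ROOT_PASSWORD=secret1
EMAIL_ADDRESS=ops@example.com
EOF
scrub_passwords /tmp/auto_install_RH.demo.ini
cat /tmp/auto_install_RH.demo.ini
```

Non-password keys are left untouched; only the values after the `=` of matching keys are removed.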
After the install_Prime_HA.pl script completes, the Prime Network gateway and embedded database are installed on the remote site.
The following tables describe the installation prompts, depending on your deployment:
– Enter no; this procedure is for geographical redundancy alone. To install geographical redundancy with local redundancy, see Table 4-5. To install local redundancy, see Installing and Maintaining Gateway Local Redundancy.
– yes or no, depending on whether NTP should be configured on the two gateways. If not configured, first configure NTP and then continue with the installation. For more details on procedures, see configuring NTP in the Cisco Prime Network 4.3 Installation Guide.
– Location of the mount point given for the oracle-home/oracle-user.
– yes or no, indicating whether you want to use the default Oracle mount point.
– Location of the database redo logs. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database data files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database backup files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database archive files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– User-defined Prime Network OS user (pnuser). The username must start with a letter and contain only the following characters: [A-Z a-z 0-9].
– Directory should be located under the Prime Network file system mount point but not the mount point itself.
– Mount point of the Prime Network installation. Should be the same for all relevant nodes. Example: for install.pl the path will be /dvd/Server.
– Directory containing the embedded Oracle zip files. Can be a temporary location to which the files were copied from the installation DVDs, or the location on the DVD itself.
– Root user password for the node running the installation. For local redundancy dual-node clusters, this node must be one of the cluster nodes.
– For geographic redundancy, the hostname of the remote site (the value returned by the hostname system call).
– For geographic redundancy, the root user password for the remote site.
– Password for the Prime Network root, bosenable, bosconfig, bosusermngr, and web monitoring users (users for various system components). Passwords must contain:
– E-mail address to which the embedded database will send error messages.
– Name of the network interface to which logical IPs will be added. Must be identical on all servers (for example: eth0, bge0).
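The pnuser naming rule above (starts with a letter, letters and digits only) can be checked with a quick sketch:

```shell
# Validate a candidate pnuser name: first character a letter,
# remaining characters letters or digits only.
valid_pnuser() {
  [[ "$1" =~ ^[A-Za-z][A-Za-z0-9]*$ ]]
}

valid_pnuser pn41   && echo "pn41: valid"
valid_pnuser 4prime || echo "4prime: invalid"
```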
Table 4-5 shows the installation prompts when setting up local and geographical redundancy.
– Enter yes; this procedure is for geographical redundancy + local redundancy. To install geographical redundancy only, see Table 4-4. To install local redundancy, see Installing and Maintaining Gateway Local Redundancy.
– yes or no, depending on whether NTP should be configured on the three gateways. If not configured, first configure NTP and then continue with the installation. For more details on procedures, see configuring NTP in the Cisco Prime Network 4.3 Installation Guide.
– Answer yes if the node is connected to the storage with more than one connection (recommended).
– Location of the mount point given for the oracle-home/oracle-user.
– yes or no, indicating whether you want to use the default Oracle mount point.
– Location of the database redo logs. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database data files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database backup files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– Location of the database archive files. Should be located under one of the Oracle mounts but not directly on the mount, and should be compliant with the storage requirements.
– User-defined Prime Network OS user (pnuser). The username must start with a letter and contain only the following characters: [A-Z a-z 0-9].
– Directory should be located under the Prime Network file system mount point but not the mount point itself.
– The mount point of the Prime Network installation. Should be the same for all relevant nodes. Example: for install.pl the path will be /dvd/Server.
– Directory containing the embedded Oracle zip files. Can be a temporary location to which the files were copied from the installation DVDs, or the location on the DVD itself.
– Root user password for the node running the installation. For local redundancy dual-node clusters, this node must be one of the cluster nodes.
– For geographic redundancy, the hostname of the remote site (the value returned by the hostname system call).
– For geographic redundancy, the root user password for the remote site.
– Password for the Prime Network root, bosenable, bosconfig, bosusermngr, and web monitoring users (users for various system components). Passwords must contain:
– E-mail address to which the embedded database will send error messages.
– An available multicast address accessible to and configured for both cluster nodes.
– User-defined cluster name. Cannot be more than 15 non-NUL (ASCII 0) characters. For local redundancy, the cluster name must be unique within the LAN.
– Type of fencing device configured for the node running the installation. (See Fencing Options.)
– Type of fencing device configured for the second cluster node. (See Fencing Options.)
– Port and password for the cluster web interface. LUCI_PORT must be available and must not be in the Prime Network debug range or in the Prime Network AVM port range.
– IP address of the node running the installation. For local redundancy dual-node clusters, this must be one of the cluster nodes.
– IP address of the DR node at the remote site (geographical redundancy).
– Name of the network interface to which logical IPs will be added. Must be identical on all servers (for example: eth0, bge0).
– Hostname of the fencing device configured for the node running the installation (for some fencing devices, this can be an IP address).
– Login name for the fencing device configured for the node running the installation.
– Password for the fencing device configured for the node running the installation.
– Hostname of the fencing device configured for the second cluster node (for some fencing devices, this can be an IP address).
– Login name for the fencing device configured for the second cluster node.
– Password for the fencing device configured for the second cluster node.
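The multicast-address requirement above can be sanity-checked with a short sketch. This tests only that the first octet falls in the IPv4 multicast range (224.0.0.0 to 239.255.255.255); actual reachability between the cluster nodes must be verified separately.

```shell
# Check that the first octet of an IPv4 address is in the
# multicast range 224-239.
is_multicast() {
  local first="${1%%.*}"
  [ "$first" -ge 224 ] 2>/dev/null && [ "$first" -le 239 ]
}

is_multicast 239.192.0.1 && echo "239.192.0.1: multicast"
is_multicast 192.0.2.22  || echo "192.0.2.22: not multicast"
```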
Step 5 Configure the embedded database by running the add_emdb_storage.pl utility; you must include the -ha flag when running this utility.
a. Log in as the Prime Network user (pnuser).
b. Change directories to NETWORKHOME /Main/scripts/embedded_db and enter the following command:
c. Enter the number corresponding to the estimated database profile that meets your requirement.
d. Enter the event and workflow archiving size in days.
Step 6 Configure the remote site using the setup_Prime_DR.pl command in interactive or non-interactive mode. For more information on setup_Prime_DR.pl script, see Installation DVDs.
Note The setup_Prime_DR.pl script must run on the node running the primary database.
For interactive mode, enter the following commands:
a. Edit the auto_install_RH.ini file template found under the RH_ha directory with all of the installation details.
Example: perl setup_Prime_DR.pl -autoconf /tmp/RH_dr/auto_install_RH.ini
Note If the setup_Prime_DR.pl script is executed from the same node as the install_Prime_HA.pl script, and if all the parameters are the same, you can use the same auto_install_RH.ini file. The prompts and outputs while executing this script are a subset of the install script prompts.
Step 7 Verify the setup as described in Verifying the Geographical Redundancy Setup.
Table 4-6 shows the geographical redundancy verification tests.
Note The geographical redundancy verification tests are for the embedded database and must be performed by Cisco personnel only.
These topics provide information pertaining to ongoing management of an ADG geographical redundancy configuration. The utilities used for these operations are stored in /var/adm/cisco/prime-network/scripts/ha/util.
Prime Network generates the following system events for geographical redundancy monitoring:
– A GWSync has not occurred in the last 10 minutes.
– The standby database is down.
– The standby database is up but has been out of sync for 60 minutes.
The log files for data replication are described in the following table. To troubleshoot problems with the replication process, see Verifying the Geographical Redundancy Setup .
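A simple staleness check against the replication log files can be sketched as follows. The path below is a throwaway demo file; point the check at the actual log files for your deployment.

```shell
# Return success (0) when the file has NOT been modified within
# the last N minutes (or does not exist), i.e. it looks stale.
stale() {
  [ -z "$(find "$1" -mmin -"$2" 2>/dev/null)" ]
}

touch /tmp/gwsync.demo.log
if stale /tmp/gwsync.demo.log 10; then
  echo "gwsync.demo.log: stale"
else
  echo "gwsync.demo.log: fresh"
fi
```

A cron job built around such a check could raise an alert mirroring the 10-minute GWSync event described above.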
The primeha command is a central utility for checking the status of the high availability nodes, performing switchovers and failovers, and stopping and resuming data replication.
Use the following command to view the status of the cluster:
The following output is an example for a network that has both local and geographical redundancy.
To uninstall geographical redundancy, use this procedure. If Operations Reports was also installed, this procedure will remove it.
If your deployment also has local redundancy, uninstall the software on the primary cluster server (P1) first using the procedure in Uninstalling Local Redundancy.
Step 1 If any RHCS services are running, log into the primary cluster server and freeze the relevant services (the services can be ana, oracle, and, if Operations Reports is installed, ifb).
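The freeze in Step 1 can be sketched with the RHCS clusvcadm utility (-Z freezes a service, -U unfreezes it). The sketch only prints the commands so they can be reviewed first; drop ifb if Operations Reports is not installed.

```shell
# Emit the freeze commands for the relevant RHCS services.
freeze_cmds() {
  local svc
  for svc in ana oracle ifb; do
    echo "clusvcadm -Z $svc"
  done
}

# Review the commands, then execute them with: freeze_cmds | sh
freeze_cmds
```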
Step 2 Log in as the root user and change to the following directory:
Step 3 Enter the following command:
This section explains how to install the Prime Network Integration Layer (PN-IL) 1.2 for a local + geographical redundancy deployment. It also explains how to integrate the deployment with Cisco Prime Central. For information on the Prime Central releases with which you can integrate PN-IL 1.2, see the Cisco Prime Network 4.3 Release Notes .
These topics provide the information you will need to install and configure PN-IL with geographical and local redundancy:
If you want to migrate an existing standalone installation of PN-IL (local + geographical) to suite mode, you can use the procedure in Configuring and Migrating PN-IL with Prime Central (Suite Mode with Local + Geographical Redundancy).
The PN-IL high availability files are provided on the Prime Network installation DVD named Disk 1: New Install DVD . Disk 2 contains the tar file sil-esb-1.2.0.tar.gz , which contains the PN-IL installation files and scripts, including:
Table 4-7 provides the basic steps you must follow to set up local + geographical redundancy for PN-IL. If you want to migrate an existing standalone installation of PN-IL (local + geographical) to suite mode, you can use the procedure in Configuring and Migrating PN-IL with Prime Central (Suite Mode with Local + Geographical Redundancy).
Note that you only have to install PN-IL on the primary cluster server (P1), not on the remote (DR) server (S1). However, you will have to do some configuration tasks on the remote server.
1. Collect server details, so that you have all information handy prior to installation.
2. Install PN-IL. See Installing PN-IL on a Prime Network Server (Local + Geographical Redundancy).
3. Configure PN-IL (in standalone or suite mode) on both nodes, and unfreeze RHCS. See Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
Use this procedure to install PN-IL with local + geographical redundancy on the primary cluster server (P1). The primary cluster node will copy the necessary files to the remote DR node (S1). For the remote DR node, you only have to perform some minor configurations.
Make sure Prime Network is installed and up and running on both the primary cluster node (P1) and the remote DR node (S1). In the following procedure, $ANAHOME is the pnuser environment variable for the Prime Network installation directory (/export/home/pnuser by default).
Step 1 On the primary cluster node (P1), log in as root and freeze the ana service.
Note The cluster server should be the active node where the ana service is running.
Step 2 On the remote DR node (S1), log in as root and save your rsync settings so they are not overwritten during the PN-IL installation process.
Step 3 On the primary cluster node (P1), log in as pnuser .
Step 4 On the primary cluster node, create an installation directory for PN-IL.
For example, if the Prime Network installation directory was /export/home/pn41, you would run this command to create an installation directory called pnil:
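For example, the directory creation can be sketched as follows. The paths are placeholders: the demo uses a throwaway location under /tmp instead of the real Prime Network home, so it can be run safely anywhere.

```shell
# Stand-in for the real Prime Network home (/export/home/pn41 in
# the example above); a throwaway path is used for the demo.
ANAHOME=/tmp/pn41.demo
mkdir -p "$ANAHOME/pnil"   # the PN-IL installation directory
ls -d "$ANAHOME/pnil"
```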
Step 5 On the primary cluster node (P1), copy the installation files from the installation DVD, extract them, and start the installation script. These examples use the PN-IL installation directory /pnil.
a. Copy the PN-IL installation tar file from Disk 2 to the directory you created in Step 4. In the following example, the installation directory is named pnil .
b. Change to the directory you created in Step 4 and extract the files from the PN-IL installation tar:
c. Change to the directory where the installation tar files were extracted and run the installation script:
Step 6 On the primary cluster node (P1), reload the user profile.
Step 7 Log into the remote DR server (S1) as root and move the original rsync exclude file (that you moved in Step 2) back to its proper place.
Step 8 Configure PN-IL as described in Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
Note Do not unfreeze the ana service until PN-IL has been configured.
Note You do not have to install the geographical redundancy files on the remote server (S1); the necessary files will be copied to the remote DR server by the primary cluster node.
Configuration tasks must be performed on both the primary cluster node (P1) and the remote DR node (S1).
In standalone mode, Prime Network is not integrated with Prime Central and can independently expose MTOSI and 3GPP web services to other OSS/applications. In the following procedure:
Step 1 From the primary cluster node (P1), log in as pnuser .
Step 2 On the primary cluster node (P1), configure PN-IL in standalone mode.
itgctl config 1 --anaPtpServer 192.0.2.22 --anaPtpUser root --anaPtpPw myrootpassword --authURL https://192.0.2.22:6081/ana/services/userman
Step 3 On the primary cluster node (P1), start PN-IL.
Step 4 Open a new session on the remote DR server (S1) and log in as pnuser .
Step 5 On the remote DR server (S1), configure PN-IL in standalone mode but use the remote DR server’s IP address ( --anaPtpServer remote-DR-ip ).
Step 6 On the primary cluster node (P1), start PN-IL.
Note To avoid the automatic start of PN-IL on the DR server, disable the PN-IL Health monitor, and stop the PN-IL service on that server, using the following command: $PRIMEHOME/local/scripts/il-watch-dog.sh disableandstop.
Step 7 On the primary cluster node, log in as the operating system root user and unfreeze the ana service.
Step 8 To enable NBI, contact your Cisco representative.
Next, perform the necessary configuration steps that are described in Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
When Prime Network and PN-IL are running in suite mode, they are integrated with Prime Central. This procedure explains how to integrate PN-IL with a deployment of Prime Central that uses geographical redundancy. You can use this procedure for:
Figure 4-1 illustrates the deployment of both local and geographical redundancy in Suite Mode.
Note PN-IL geographical redundancy is only supported when the deployment also has local redundancy. Therefore, Prime Central must also be using both local and geographical redundancy.
Figure 4-1 Local Redundancy with Geographical Redundancy Suite Mode
In the following procedure, $PRIMEHOME is the pnuser environment variable for the PN-IL installation directory you created in Installing PN-IL on a Prime Network Server (Local + Geographical Redundancy).
Before you begin, verify the following:
To integrate PN-IL with Prime Central:
Step 1 From the Prime Network primary cluster node (P1), log in as pnuser .
Step 2 On the Prime Network primary cluster node (P1), configure PN-IL in suite mode, edit the necessary integration files, and run the integration script:
a. Move to the PN-IL integration directory.
b. Edit the ILIntegrator.prop file and change the value of the ‘HOSTNAME’ property to ana-cluster-ana, which is the fixed name for the Prime Network cluster server.
c. Execute the following integration script to integrate PN-IL with Prime Central. Prime Central will assign an ID number to PN-IL. Note the ID number because you will need it later to integrate the remote DR server (S1) with Prime Central.
Note When you run DMIntegrator.sh, you must exactly follow the format below or the script will fail.
DMIntegrator uses these variables. You must enter them in this exact order.
– Specifies the IP address of the Prime Central database server.
– Specifies the name of the Prime Central database user (usually primedba).
– Specifies the port for the Prime Central database (usually 1521).
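The HOSTNAME edit from substep b can be sketched with sed. The sketch works on a throwaway copy of ILIntegrator.prop; run the same substitution against the real file in the PN-IL integration directory.

```shell
# Demo copy of ILIntegrator.prop (contents are illustrative).
cat > /tmp/ILIntegrator.demo.prop <<'EOF'
HOSTNAME=localhost
EOF

# Point HOSTNAME at the fixed Prime Network cluster name.
sed -i 's/^HOSTNAME=.*/HOSTNAME=ana-cluster-ana/' /tmp/ILIntegrator.demo.prop
grep '^HOSTNAME=' /tmp/ILIntegrator.demo.prop
```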
Step 3 On the Prime Network primary cluster node (P1), reload the user profile:
Step 4 On the Prime Network primary cluster node (P1), retrieve the ID that Prime Central assigned to Prime Network using itgctl list. You will need it in a future step.
Step 5 Open a new session to the Prime Network remote DR server (S1) as the root user and rename the file as shown below.
Step 6 On the Prime Network remote DR server (S1), configure PN-IL in suite mode as pnuser. Edit the necessary integration files, and run the integration script.
b. Move to the PN-IL integration directory.
c. Edit the ILIntegrator.prop file and change the value of the ‘HOSTNAME’ property to the Prime Network remote DR server (S1) hostname. For example:
d. Execute the following integration script to integrate PN-IL into the deployment:
DMIntegrator uses these variables. You must enter them in this exact order.
Step 7 On the remote DR node (S1), reload the user profile:
Step 8 Log out from the Prime Network application user and, as the root user, change the following file name:
Step 9 As the operating system root user, on the primary cluster node (P1), unfreeze the ana service.
Next, disable the PN-IL health monitor as described in Disabling the PN-IL Health Monitor.
This section explains how to install the Prime Network Integration Layer (PN-IL) 1.2 for a geographical redundancy only deployment. It also explains how to integrate the deployment with Cisco Prime Central. For information on the Prime Central releases with which you can integrate PN-IL 1.2, see the Cisco Prime Network 4.3 Release Notes .
Note A PN-IL geographical redundancy only deployment has a primary server (P1) at the local site and a remote server (S1) at a remote geographical site for full disaster recovery.
These topics provide the information you will need to install and configure PN-IL in geographical redundancy only deployments:
If you want to migrate an existing standalone installation of PN-IL (with geographical redundancy) to suite mode, you can use the procedure in Configuring and Migrating PN-IL with Prime Central (Suite Mode with Local + Geographical Redundancy).
Table 4-7 provides the basic steps you must follow to set up geographical redundancy only for PN-IL. If you want to migrate an existing standalone installation of PN-IL (with geographical redundancy only) to suite mode, you can use the procedure in Configuring and Migrating PN-IL with Prime Central (Suite Mode with Local + Geographical Redundancy).
Note that you only have to install PN-IL on the primary server (P1), not on the remote (DR) server (S1). However, you will have to do some configuration tasks on the remote server.
1. Collect server details, so that you have all information handy prior to installation.
2. Configure PN-IL (in standalone or suite mode) on both nodes. See Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
Use this procedure to install PN-IL with geographical redundancy on the primary server (P1). The primary node will copy the necessary files to the remote DR node (S1). For the remote DR node, you only have to perform some minor configurations.
Make sure Prime Network is installed and up and running on both the primary node (P1) and the remote DR node (S1). In the following procedure, $ANAHOME is the pnuser environment variable for the Prime Network installation directory (/export/home/pnuser by default).
Step 1 On the remote DR node (S1), log in as root and save your rsync settings so they are not overwritten during the PN-IL installation process.
Step 2 On the primary node (P1), log in as pnuser .
Step 3 On the primary node, create an installation directory for PN-IL.
For example, if the Prime Network installation directory was /export/home/pn41, you would run this command to create an installation directory called pnil:
Step 4 On the primary node (P1), copy the installation files from the installation DVD, extract them, and start the installation script. These examples use the PN-IL installation directory /pnil.
a. Copy the PN-IL installation tar file from Disk 2 to the directory you created in Step 3. In the following example, the installation directory is named pnil.
b. Change to the directory you created in Step 3 and extract the files from the PN-IL installation tar:
c. Change to the directory where the installation tar files were extracted and run the installation script:
Step 5 On the primary node (P1), reload the user profile.
Step 6 Log into the remote DR server (S1) as root and move the original rsync exclude file (that you moved in Step 1) back to its proper place.
Step 7 Configure PN-IL as described in Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
Note Do not unfreeze the ana service until PN-IL has been configured.
Note You do not have to install the geographical redundancy files on the remote server (S1); the necessary files will be copied to the remote DR server by the primary node.
Configuration tasks must be performed on both the primary node (P1) and the remote DR node (S1).
In standalone mode, Prime Network is not integrated with Prime Central and can independently expose MTOSI and 3GPP web services to other OSS/applications. In the following procedure:
Step 1 From the primary node (P1), log in as pnuser .
Step 2 On the primary node (P1), configure PN-IL in standalone mode.
itgctl config 1 --anaPtpServer 192.0.2.22 --anaPtpUser root --anaPtpPw myrootpassword --authURL https://192.0.2.22:6081/ana/services/userman
Step 3 On the primary node (P1), start PN-IL.
Step 4 Open a new session on the remote DR server (S1) and log in as pnuser .
Step 5 On the remote DR server (S1), configure PN-IL in standalone mode but use the remote DR server’s IP address ( --anaPtpServer remote-DR-ip ).
Step 6 On the primary node (P1), start PN-IL.
Next, perform the necessary configuration steps that are described in Configuring PN-IL on a Prime Network Gateway (Local + Geographical Redundancy).
When Prime Network and PN-IL are running in suite mode, they are integrated with Prime Central. This procedure explains how to integrate PN-IL with a deployment of Prime Central that uses geographical redundancy only. You can use this procedure for:
In the following procedure, $PRIMEHOME is the pnuser environment variable for the PN-IL installation directory you created in Installing PN-IL on a Prime Network Server (Local + Geographical Redundancy).
Before you begin, verify the following:
To integrate PN-IL with Prime Central:
Step 1 From the Prime Network primary node (P1), log in as pnuser and stop the Prime Network Integration Layer.
Step 2 On the Prime Network primary node (P1), configure PN-IL in suite mode, edit the necessary integration files, and run the integration script:
a. Move to the PN-IL integration directory.
b. Execute the following integration script to integrate PN-IL with Prime Central. Prime Central will assign an ID number to PN-IL. Note the ID number because you will need it later to integrate the remote DR server (S1) with Prime Central.
Note When you run DMIntegrator.sh, you must exactly follow the format below or the script will fail.
DMIntegrator uses these variables. You must enter them in this exact order.
– Specifies the IP address of the Prime Central database server.
– Specifies the name of the Prime Central database user (usually primedba).
– Specifies the port for the Prime Central database (usually 1521).
Step 3 On the Prime Network primary node (P1), reload the user profile:
Step 4 On the Prime Network primary node (P1), retrieve the ID that Prime Central assigned to Prime Network using itgctl list. You will need it in a future step.
Step 5 Open a new session to the Prime Network remote DR server (S1) as the root user and rename the file as shown below.
Step 6 On the Prime Network remote DR server (S1), configure PN-IL in suite mode as pnuser. Edit the necessary integration files, and run the integration script.
b. Move to the PN-IL integration directory.
c. Edit the ILIntegrator.prop file and change the value of the ‘HOSTNAME’ property to the Prime Network remote DR server (S1) hostname. For example:
d. Execute the following integration script to integrate PN-IL into the deployment:
DMIntegrator uses these variables. You must enter them in this exact order.
Step 7 On the remote DR node (S1), reload the user profile:
Step 8 Log out from the Prime Network application user and, as the root user, change the following file name:
Next, disable the PN-IL health monitor as described in Disabling the PN-IL Health Monitor.