Install and Setup Guide for Cisco Security MARS, Release 5.3.x
Administering the MARS Appliance

Table of Contents

Administering the MARS Appliance

Performing Command Line Administration Tasks

Log In to the Appliance via the Console

Reset the Appliance Administrator Password

Shut Down the Appliance via the Console

Log Off the Appliance via the Console

Reboot the Appliance via the Console

Determine the Status of Appliance Services via the Console

Stop Appliance Services via the Console

Start Appliance Services via the Console

View System Logs via the Console

Checklist for Upgrading the Appliance Software

Burn an Upgrade CD-ROM

Prepare the Internal Upgrade Server

Important Upgrade Notes

General Notes

Upgrade to 5.3.6

Upgrade to 5.3.5

Upgrade to 5.3.4

Upgrade to 5.3.3

Upgrade to 5.3.2

Upgrade to 5.3.1

Upgrade to 5.2.8

Upgrade to 5.2.7

Determine the Required Upgrade Path

Download the Upgrade Package from Cisco.com

Specify the Proxy Settings for the Global Controller or Local Controller

Upgrade Global Controller or Local Controller from its User Interface

Upgrade from the CLI

Upgrading a Local Controller from the Global Controller

Specify the Proxy Settings in the Global Controller

Upgrade Local Controller from the Global Controller User Interface

Configuring and Performing Appliance Data Backups

Typical Uses of the Archived Data

Format of the Archive Share Files

Archive Intervals By Data Type

Configure the NFS Server on Windows

Install Windows Services for UNIX 3.5

Configure a Share using Windows Services for UNIX 3.5

Enable Logging of NFS Events

Configure the NFS Server on Linux

Configure the NetApp NFS Server

Configure Lookup Information for the NFS Server

Configure the Data Archive Setting for the MARS Appliance

Access the Data Within an Archived File

Troubleshooting Data Archiving

Recovery Management

Recovering a Lost Administrative Password

Downloading and Burning a Recovery DVD

Recovering the MARS Operating System

Re-Imaging a Local Controller

Re-Imaging a Global Controller

Restoring Archived Data after Re-Imaging a MARS Appliance

Upsizing a MARS Appliance

Configuring a Standby or Secondary MARS Appliance

Guidelines for Restoring


Administering the MARS Appliance


Revised: November 7, 2008, OL-14672-01

This chapter describes a core set of maintenance tasks for Cisco Security Monitoring, Analysis, and Response System (MARS). Because these tasks affect the overall health and accuracy of the MARS system, you should develop an operational strategy and process for performing them. This chapter contains the following sections:

Performing Command Line Administration Tasks

Checklist for Upgrading the Appliance Software

Configuring and Performing Appliance Data Backups

Recovery Management

Upsizing a MARS Appliance

Configuring a Standby or Secondary MARS Appliance

Guidelines for Restoring

For all other MARS Appliance configuration and administration tasks, see either the User Guide for Cisco Security MARS Global Controller or the User Guide for Cisco Security MARS Local Controller, depending on which product you own.

Performing Command Line Administration Tasks

This section details basic administrative tasks that you perform using a console connection to the MARS Appliance. This section contains the following procedures:

Log In to the Appliance via the Console

Reset the Appliance Administrator Password

Shut Down the Appliance via the Console

Log Off the Appliance via the Console

Reboot the Appliance via the Console

Determine the Status of Appliance Services via the Console

Stop Appliance Services via the Console

Start Appliance Services via the Console

View System Logs via the Console

Log In to the Appliance via the Console

After the MARS Appliance boots, the console service starts and prompts the user to log in. Successful login launches a command line application (shell) that operates the CLI.

To log in to the MARS Appliance via a console connection, follow these steps:


Step 1 Establish a console connection to the MARS Appliance. For options and details, see Establishing a Console Connection, page 5-4.

Step 2 At the login: prompt, enter the MARS Appliance administrator name.

Step 3 At the password: prompt, enter the MARS Appliance password.

Result: The system prompt appears in the following form:

Last login: Tue Jul  5 05:57:31 2005 from <host>.<domain>.com

  Cisco Security MARS - Mitigation and Response System

    ? for list of commands

[pnadmin]$ 


Note There is only one set of MARS Appliance login credentials (administrator name and password) that has the console connection privilege.



Tip To exit the console connection, enter exit at the command prompt.



Reset the Appliance Administrator Password

There is always a single set of MARS Appliance administrator credentials consisting of the administrator name pnadmin and a corresponding password. Unlike other MARS administrative accounts, this unique administrative account is granted all privileges and cannot be deleted.

This procedure details how to reset the password after you log in with the existing credentials. If you do not have the existing MARS Appliance administrator login credentials with which to log in, the only method of recovery is to re-image the appliance, which resets the password to the factory defaults. For information on resetting the administrator login and password without first logging in, see Recovery Management.

To reset the MARS Appliance administrator login credentials, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 At the system prompt, type passwd and then press Enter.

Result: The MARS Appliance displays the following prompt:

New password:

Step 3 Type the new password, and then press Enter.


Note The new password should not contain the administrator account name, must contain a minimum of 6 characters, and should include at least 3 of the 4 character types (numerals, special characters, uppercase letters, and lowercase letters). Each of the following examples is acceptable: 1PaSsWoRd, *password44, Pass*word.


The MARS Appliance displays the following prompt:

Retype new password

Step 4 Type the new password again, and then press Enter.

Result: The MARS Appliance displays the command prompt, and the password is changed.
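
The password rules in the preceding note can be checked before you attempt the change. The following short Python sketch is illustrative only; the function name and the exact rule encoding are assumptions, not part of the MARS software:

```python
import re

def is_valid_mars_password(password, account="pnadmin"):
    """Check a candidate password against the documented rules:
    at least 6 characters, at least 3 of the 4 character types,
    and no occurrence of the administrator account name."""
    if len(password) < 6 or account.lower() in password.lower():
        return False
    character_types = [
        re.search(r"[a-z]", password),        # lowercase letters
        re.search(r"[A-Z]", password),        # uppercase letters
        re.search(r"[0-9]", password),        # numerals
        re.search(r"[^a-zA-Z0-9]", password), # special characters
    ]
    return sum(1 for t in character_types if t) >= 3
```

All three examples from the note (1PaSsWoRd, *password44, Pass*word) satisfy this check, while a single-case, all-letter password does not.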


Shut Down the Appliance via the Console

You can shut down an appliance remotely via a console connection. However, to power up the appliance, you must have physical access to the device. For more information on powering up the appliance, see Powering on the Appliance and Verifying Hardware Operation, page 4-8.


Caution Powering off the MARS Appliance by using only the power switch may cause the loss or corruption of data. Use this procedure to shut down the MARS Appliance.

To use the console to shut down the MARS Appliance, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 At the system prompt, type shutdown, and then press Enter.

Step 3 At the Are you sure you want to shut down? (Y/N) prompt, type Y for yes and then press Enter.

Result: The MARS Appliance powers off.


Log Off the Appliance via the Console

Logging off via the console closes the administrative session at the appliance. Good security practices recommend logging off when you are not using the console.

To log off the MARS Appliance via the console, follow these steps:


Step 1 At the system prompt, type exit.

Step 2 Press Enter.

Result: The console connection closes, and the login: prompt reappears.


Reboot the Appliance via the Console

From time to time, you may need to manually reboot the appliance. For example, if a service seems to be hung, rebooting may resolve the issue. Rebooting ensures that the services are shut down safely before the appliance restarts.

To reboot the MARS Appliance via the console, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 At the system prompt, type reboot, and then press Enter.

Result: The MARS Appliance displays the following message:

Are you sure you want to reboot? (Y/N)

Step 3 Type Y for yes and then press Enter.

Result: The MARS Appliance reboots. When the reboot is finished, the login: prompt reappears.


Determine the Status of Appliance Services via the Console

You can use the console connection to obtain system and service status information.

To determine the status of the MARS Appliance's services, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 At the system prompt, type pnstatus, and then press Enter.

The system displays the following status information:

Module		State		Uptime
DbIncidentLoaderSrv		RUNNING		01:12:18
KeywordQuerySrv		RUNNING		01:12:18
csdam		RUNNING		01:12:18
csiosips		RUNNING		01:12:18
csips		RUNNING		01:12:18
cswin		RUNNING		01:12:18
device_monitor		RUNNING		01:12:18
discover		RUNNING		01:12:18
graphgen		RUNNING		01:12:18
pnarchiver		RUNNING		01:12:18
pndbpurger		RUNNING		01:12:18
pnesloader		RUNNING		01:12:18
pnmac		RUNNING		01:12:18
pnparser		RUNNING		01:12:19
process_event_srv		RUNNING		01:12:19
process_inlinerep_srv		RUNNING		01:12:19
process_postfire_srv		RUNNING		01:12:19
process_query_srv		RUNNING		01:12:19
superV		RUNNING		01:12:20

Possible states are:

RUNNING. The service is operational.

STOPPED. The service is not running.


Note All services should be running on a Local Controller. However, a Global Controller only has three services running: graphgen, pnarchiver, and superV—all other services are stopped.
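
If you collect pnstatus output over a scripted console or SSH session, the table can be parsed mechanically. The following Python sketch is illustrative only (not part of the MARS software) and assumes the whitespace-separated layout shown above:

```python
def parse_pnstatus(output):
    """Parse pnstatus output into a {module: (state, uptime)} mapping,
    skipping the header row and any blank lines."""
    services = {}
    for line in output.strip().splitlines():
        fields = line.split()
        if len(fields) != 3 or fields[0] == "Module":
            continue
        module, state, uptime = fields
        services[module] = (state, uptime)
    return services

def stopped_services(output):
    """Return the modules that are not in the RUNNING state."""
    return [m for m, (state, _) in parse_pnstatus(output).items()
            if state != "RUNNING"]
```

On a Global Controller, for example, stopped_services() would list every module except graphgen, pnarchiver, and superV.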


Stop Appliance Services via the Console

You can stop all MARS Appliance services from the console. To list the services and their status, you can use the pnstatus command. For more information, see Determine the Status of Appliance Services via the Console.

To stop all services on the MARS Appliance, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 Type pnstop.

Step 3 Press Enter.

Result: The system immediately displays the following message:

Please Wait . . . 

When the system prompt returns, the command has completed.

Step 4 To verify the status of the services, enter pnstatus.

The superV service does not stop. This service monitors and restarts the other services as needed.


Start Appliance Services via the Console

If the services are stopped, you can manually start all MARS Appliance services from the console. To list the services and their status, you can use the pnstatus command. For more information, see Determine the Status of Appliance Services via the Console.

To start all stopped MARS services, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 Type pnstart.

Step 3 Press Enter.

Result: The system prompt disappears and then returns, indicating the services are restarted.

Step 4 To verify the status of the services, enter pnstatus.


View System Logs via the Console

This section details the procedure for running the pnlog show command. This command displays the log status and can be used by support personnel for analysis.

For more information on the pnlog command, see pnlog in Appendix A, "Command Reference." The syntax for the pnlog show command is as follows:

pnlog show <gui|backend|cpdebug>

Each option streams the running output of a particular backend log file. There are three logs that you can view: the web interface logs, the backend logs (for the processes that the pnstatus command reports on), and the Check Point debug logs. Use Ctrl+C or ^C to stop this command.

When using cpdebug, set the pnlog setlevel value to greater than 0; the default value of 0 turns off the Check Point debug messages.

To view the running output of a log file, follow these steps:


Step 1 Log in to the MARS Appliance. For more information, see Log In to the Appliance via the Console.

Step 2 Type pnlog show and the appropriate argument.

Step 3 Press Enter.

Result: The console begins scrolling the output of the executed command.

Step 4 To stop the output at any time, press Ctrl+C.

Result: The system returns to the system prompt.


Checklist for Upgrading the Appliance Software

MARS upgrade packages are the primary vehicle for major, minor, and patch software releases. As administrator of the MARS Appliance, you should check the upgrade site weekly for patch upgrades. In addition to addressing high-priority caveats, patch upgrade packages update system inspection rules, event types, and provide the most recent signature support.


Caution Never try to upgrade the hardware components of the MARS Appliance. Doing so could result in bodily injury and void support contracts. Contact Cisco for your hardware upgrade needs.

The following checklist describes the steps required to upgrade your MARS Appliance to the most recent version. Each task might contain several steps; the tasks and steps within should be performed in order. The checklist contains references to the specific procedures used to perform each task.


1. Determine whether you should upgrade or reimage the MARS Appliance.

Two scenarios exist for bringing your MARS Appliance in line with the current software release: upgrade versus reimage. The method required to get to the current release can differ greatly between these two scenarios.

Upgrade the MARS Appliance to the current release and preserve the configuration and event data. To preserve the configuration and the event data, you must perform the upgrade following the tasks in this checklist; continue with Task 2.

Reimage the MARS Appliance to the current release without preserving any configuration or event data. If you have no desire to preserve configuration and event data, you can reimage the appliance using the most recent ISO image. For information on how to reimage your appliance, see Recovery Management.

Result: You determine whether you will upgrade or reimage your MARS Appliance.

2. Determine the version that you are running.

Before you upgrade your appliance, you must determine what version you are running. You can determine this in one of two ways:

web interface. To determine the version in the web interface, select Help > About.

CLI. To determine the version from the CLI, enter version at the MARS command prompt.

The format of the version appears as x.y.z (build_number), for example, 3.4.1 (1922).

Note If you are running a version earlier than 3.2.2, please contact Cisco support for information on obtaining the appropriate upgrade files. If you are running 3.2.2 or later, follow the instructions in this checklist.

Result: You have identified the version running on your appliance and know whether you must contact Cisco support or continue with this checklist.
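
Because upgrade decisions depend on numeric comparison of versions, the x.y.z (build_number) string is easier to handle as a tuple. A short Python sketch, illustrative only (the function name is an assumption):

```python
import re

def parse_mars_version(text):
    """Split a version string such as '3.4.1 (1922)' into
    ((major, minor, patch), build); build may be None."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:\s*\((\d+)\))?", text.strip())
    if not m:
        raise ValueError("unrecognized version string: %r" % text)
    major, minor, patch, build = m.groups()
    return (int(major), int(minor), int(patch)), (int(build) if build else None)
```

Tuples compare correctly where strings do not; for example, (3, 10, 0) sorts after (3, 4, 1), whereas the string "3.10.0" sorts before "3.4.1".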

3. Determine the medium for upgrading.

Before upgrading your appliance, you must determine what medium to use. Your choice of medium determines whether you must upgrade from the CLI.

CD-ROM. Before you can upgrade, you must download the software and burn an image to a CD-ROM. You can insert this CD-ROM in the DVD drive of the MARS Appliance to perform the upgrade. If you select the CD-ROM medium, you must upgrade each appliance individually and you must use the CLI.

Internal Upgrade Server. Identify the Internal Upgrade Server to be used. Before you can upgrade, you must download the software image to an internal HTTP, HTTPS, or FTP server. It is from this internal server that you must upgrade your MARS Appliance. This server should meet specific requirements, allowing each MARS Appliance to quickly and securely download the updates. When using an Internal Upgrade Server, you can upgrade from the CLI or the web interface unless otherwise noted.

Note If you are running a version earlier than 3.4.1, you cannot use the web interface to upgrade. In versions earlier than 3.4.1, the web interface only allows for connections to the upgrade.protegonetworks.com support site, which is no longer available. To upgrade from versions earlier than 3.4.1, you must use the CLI.

Result: You have determined which medium to use for your upgrade. If you chose the Internal Upgrade Server option, you have identified and prepared your server, and you have verified that the server can be reached by each standalone Local Controller or Global Controller that you intend to upgrade. If a proxy server resides between the Internal Upgrade Server and the appliance, you must provide those settings before upgrading.

For more information, see:

Burn an Upgrade CD-ROM

Prepare the Internal Upgrade Server.

4. Understand the required upgrade path and limitations.

Upgrading from one version of the appliance software to the next must follow a cumulative upgrade path; you must apply each upgrade package in the order it was made available between the version running on the appliance and the version you want to run.

Also, a limitation exists between a Global Controller and any Local Controllers that it monitors. The Global Controller can only monitor Local Controllers that are running the same version it is. If you are attempting to monitor a Local Controller that is running an earlier software version, the Local Controller will appear offline to the Global Controller. However, MARS includes an upgrade option where the Global Controller pushes the same upgrade version to the Local Controllers that it is monitoring, allowing you to manage the upgrade process from within the Global Controller user interface.

Result: You have identified the complete list of upgrade packages that you must download.

For more information, see:

Important Upgrade Notes

Determine the Required Upgrade Path.

5. Download all required upgrade packages from the Cisco.com website.

After you have identified the upgrade packages to download, log in to Cisco.com using your Cisco.com account and download the various packages. To download upgrade packages, you must have a valid SMARTnet support contract for the MARS Appliance.

Depending on your selection in Task 3, you will either store these files on the Internal Upgrade Server or burn a CD-ROM image.

Result: All upgrade packages that are required to upgrade from the version you are running to the most recent version are located in a known path on either the Internal Upgrade Server or a CD-ROM.

For more information, see:

Download the Upgrade Package from Cisco.com.

6. Understand the upgrade approach you want to use.

Select from the following upgrade options:

Note If you are running a version earlier than 3.4.1, you must select an option that supports upgrading from the CLI.

Upgrade from an appliance that connects to the Internal Upgrade Server directly (CLI or web interface).

Upgrade from an appliance that connects to the Internal Upgrade Server through a proxy (CLI or web interface).

Upgrade a Local Controller using the Global Controller via either a proxy server or a direct connection to the Internal Upgrade Server (web interface only).

Upgrade from a CD-ROM at the command line (CLI only).

Result: You have determined the appropriate upgrade approach to use based on your selected medium and currently running version.

7. Identify any required proxy server settings.

If your appliance runs on a network that is separated from the Internal Upgrade Server by a proxy server, you must identify the proxy server settings. If you are using the web interface to upgrade, you can specify these settings on the Admin > System Parameters > Proxy Settings page. Otherwise, make note of the settings so that you can provide them at the command line during the upgrade.

Note You can specify the proxy server settings in the web interface for versions 3.4.1 and later. However, you can specify proxy server settings at the CLI for versions 2.5.1 and later.

Result: You have either specified the proxy server settings in the web interface, or you have noted the settings for later use.

For more information, see:

Specify the Proxy Settings for the Global Controller or Local Controller.

8. Upgrade the appliance to the next appropriate version, as determined by the upgrade path.

From the appliance, use the method you chose in Task 6 to apply, in order, each upgrade package identified in Task 5 until you reach the desired version.

Result: You have applied each required upgrade package.

For more information, see:

Upgrade Global Controller or Local Controller from its User Interface

Upgrade from the CLI

Upgrading a Local Controller from the Global Controller


Burn an Upgrade CD-ROM

Burning an upgrade CD-ROM does not have any special requirements. If you require more than one upgrade package, you can typically fit three upgrade packages per CD, as each package is around 200 MB.


Note You must apply the upgrade packages in sequential order, and the appliance will reboot between each upgrade. It can take 30-40 minutes for an upgrade to be applied and the system to restart before you can apply the next patch.
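
The figures above (roughly 200 MB per package, and 30 to 40 minutes per applied package) can be used to size media and the maintenance window. An illustrative Python sketch, assuming a standard 700 MB CD; none of these constants are MARS requirements:

```python
def upgrade_media_plan(num_packages, cd_capacity_mb=700,
                       package_size_mb=200, minutes_per_package=40):
    """Estimate the CDs needed and the worst-case duration for a
    sequential upgrade, using the rough figures quoted in the text."""
    per_cd = cd_capacity_mb // package_size_mb
    cds_needed = -(-num_packages // per_cd)  # ceiling division
    return {
        "packages_per_cd": per_cd,
        "cds_needed": cds_needed,
        "max_minutes": num_packages * minutes_per_package,
    }
```

For example, a four-package upgrade path would need two CDs and up to 160 minutes of reboot-and-apply time.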


Prepare the Internal Upgrade Server

The Internal Upgrade Server requirements vary based on the upgrade option you selected and the version running on your appliance.


Note MARS requires that the Internal Upgrade Server enforce user authentication. Therefore, you must specify a username and password pair to authenticate to the server whether it is accessed via HTTP, HTTPS, or FTP. In addition, if you are passing through a proxy server, that server must also enforce inline authentication.


For CLI-based upgrades of version 2.5.1 or later, the Internal Upgrade Server must be configured to meet the following requirements:

Be an FTP, HTTP, or HTTPS server

Require user authentication

Accept connections from the MARS Appliance

If connections pass through a proxy server, that proxy server must also use authentication

For web interface-based upgrades of releases 3.4.1 or later, the Internal Upgrade Server must be configured to meet the following requirements:

Be an HTTPS or FTP server

Require user authentication

Accept connections from the MARS Appliance

If connections pass through a proxy server, that proxy server must also use authentication. In addition, the proxy server settings must be configured in the web interface before the upgrade.
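
The protocol restrictions in the two requirement lists above can be summarized in a small lookup. The following Python sketch is illustrative only, not part of the MARS software:

```python
from urllib.parse import urlparse

# Schemes each upgrade method accepts, per the requirements above.
ALLOWED_SCHEMES = {
    "cli": {"ftp", "http", "https"},  # CLI-based upgrades (2.5.1 and later)
    "web": {"ftp", "https"},          # web interface upgrades (3.4.1 and later)
}

def scheme_supported(url, method):
    """Check whether an Internal Upgrade Server URL uses a scheme that
    the chosen upgrade method supports."""
    return urlparse(url).scheme in ALLOWED_SCHEMES[method]
```

For example, a plain http:// server works for CLI-based upgrades but not for web interface upgrades.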

Important Upgrade Notes

To ensure that the upgrade from earlier versions is trouble free, this section contains the notes provided in previous releases, organized by release number. Refer to the notes that pertain to the release you are upgrading from and to any releases following that one.

General Notes

The MARS Appliance performs a file system consistency check (fsck) on all disks when either of the following conditions is met:

If the system has not been rebooted during the past 180 days.

If the system has been rebooted 30 times.

The fsck operation takes a long time to complete, which can result in significant unplanned downtime when rebooting the system after meeting a condition above. For example, a MARS 50 appliance can take up to 90 minutes to perform the operation.
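
When planning reboots, the two trigger conditions can be expressed as a simple check. An illustrative Python sketch (exactly how the documented limits are compared is an assumption):

```python
def fsck_expected(days_since_reboot, reboots_since_last_fsck):
    """Return True if the next reboot is expected to trigger the file
    system consistency check, per the documented thresholds of
    180 days without a reboot or 30 reboots."""
    return days_since_reboot >= 180 or reboots_since_last_fsck >= 30
```

A scheduled reboot well before either threshold keeps the fsck, and its up-to-90-minute downtime, inside a planned maintenance window.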

Upgrade to 5.3.6

No important notes exist for the 5.3.6 upgrade.

Upgrade to 5.3.5

No important notes exist for the 5.3.5 upgrade.

Upgrade to 5.3.4

The following important note exists for the 5.3.4 upgrade:

In addressing CSCsm57453 (Incident not created for some of same events), the behavior now differs between 4.3.4 and 5.3.4. In the x.3.3 releases, incident firing was throttled at 100 incidents for event bursts from a vulnerability assessment (VA) reporting device. In 5.3.4, incident firing is throttled at just over 430 incidents; in 4.3.4, it is throttled at just over 220 incidents.

Upgrade to 5.3.3

No important notes exist for the 5.3.3 upgrade.

Upgrade to 5.3.2

The upgrade is from 5.3.1 to 5.3.2. The following important notes exist for this upgrade:

Release-Note for CSCsk19730/CSCsk12130

If you have edited a system rule on a Global Controller, you may encounter one of two conditions where the rules on the Global Controller are out of sync with those on the Local Controller.

Symptom: The edited rule in the Global Controller disappears from the list of rules on the Local Controller. (CSCsk12130)

Condition: The user edited a rule on the Global Controller, upgraded to a different version of the MARS system software, and then added a new Local Controller to the Global Controller.

Symptom: A rule that was edited on the Global Controller appears as an empty rule on the Local Controller and is inactive. (CSCsk19730)

Condition: This occurs in some cases where a Local Controller is added to a newly upgraded Global Controller.

Workarounds: If the Local Controller is deleted from and re-added to the Global Controller under x.3.2, the issue should resolve itself. However, for a large topology or many custom rules, we recommend contacting technical support for a workaround that avoids the need to delete and re-add the Local Controller.

Another possible workaround, if the number of edited rules is small, is to edit each rule, make a further change, and activate it. This should resolve the issue for that rule.

Upgrade of IOS 12.3 and 12.4 devices. In previous releases, these devices were supported under the IOS 12.2 release when defining the device type in the MARS web interface. After you upgrade to 5.3.2, the next discovery of such a device automatically updates the version to its correct value.

For example, an IOS 12.4 device added to MARS 5.3.1 as IOS 12.2 is automatically updated to IOS 12.4 when discovery occurs after the upgrade to 5.3.2. The same is true for devices running IOS 12.3. However, if you have not enabled device discovery, use the Change Version feature to change between IOS 12.2, 12.3, and 12.4.

Wireless LAN Controller Support is restricted to the 5.3.x train. To enable support for wireless access points via the Cisco Wireless LAN Controller, you must use the 5.3.2 or later software, which also restricts the appliance models that can be used.

Juniper/NetScreen IDP 3.x and 4.x support is incomplete. While device support has been added, the signature and event data support for these devices will be provided in a future release of the MARS software.

Renaming of the QualysGuard 3.x device type. During the upgrade, any QualysGuard devices defined under Security and Monitoring Devices have their device type changed from QualysGuard 3.x to QualysGuard ANY.

Upgrade to 5.3.1

Beginning with the 4.3.1 and 5.3.1 releases, dynamic IPS signature updating (if enabled) is tied to the version of software running on a MARS Appliance. Therefore, in addition to running the same MARS software versions on the Global Controller and Local Controller, the IPS signature versions must match or the communications fail.

In a Global Controller-Local Controller deployment, configure the dynamic signature URL and all relevant settings on the Global Controller. When the Global Controller pulls the new signatures from CCO, all managed Local Controllers download the new signatures from the Global Controller.

Upgrade to 5.2.8

The upgrade is from 5.2.7 to 5.2.8. No important notes exist for this release.

Upgrade to 5.2.7

The upgrade is from 5.2.4 to 5.2.7; no 5.2.5 or 5.2.6 releases exist.

Determine the Required Upgrade Path

When upgrading from one software version to another, a prerequisite version is always required. This prerequisite version is the minimum level required to be running on the appliance before you can upgrade to the most recent version.

Table 6-1 Upgrade Path Matrix for 5.x Releases

From Version    Upgrade To    Upgrade Package

5.2.4           5.2.7         csmars-5.2.7.pkg
5.2.7           5.2.8         csmars-5.2.8.pkg
5.2.8           5.3.1         csmars-5.3.1.pkg
5.3.1           5.3.2         csmars-5.3.2.pkg
5.3.2           5.3.3         csmars-5.3.3.pkg
5.3.3           5.3.4         csmars-5.3.4.pkg
5.3.4           5.3.5         csmars-5.3.5.pkg
5.3.5           5.3.6         csmars-5.3.6.pkg
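
Table 6-1 can be walked programmatically to list every package required between two versions. A short Python sketch built from the matrix above; illustrative only, not a MARS tool:

```python
# (from_version, to_version, package) rows from Table 6-1
UPGRADE_MATRIX = [
    ("5.2.4", "5.2.7", "csmars-5.2.7.pkg"),
    ("5.2.7", "5.2.8", "csmars-5.2.8.pkg"),
    ("5.2.8", "5.3.1", "csmars-5.3.1.pkg"),
    ("5.3.1", "5.3.2", "csmars-5.3.2.pkg"),
    ("5.3.2", "5.3.3", "csmars-5.3.3.pkg"),
    ("5.3.3", "5.3.4", "csmars-5.3.4.pkg"),
    ("5.3.4", "5.3.5", "csmars-5.3.5.pkg"),
    ("5.3.5", "5.3.6", "csmars-5.3.6.pkg"),
]

def upgrade_path(current, target):
    """Return the ordered list of packages to apply, one hop at a time,
    following the cumulative upgrade path."""
    next_hop = {f: (t, pkg) for f, t, pkg in UPGRADE_MATRIX}
    packages, version = [], current
    while version != target:
        if version not in next_hop:
            raise ValueError("no upgrade path from %s to %s" % (version, target))
        version, pkg = next_hop[version]
        packages.append(pkg)
    return packages
```

For example, an appliance at 5.2.4 needs eight packages to reach 5.3.6, applied in order with a reboot after each.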


Download the Upgrade Package from Cisco.com

Upgrade images and supporting software are found on the CCO software download pages dedicated to MARS. You can access these pages at the following URLs, assuming you have a valid CCO account and that you have registered your SMARTnet contract number for your MARS Appliance.

Top-level page:

http://www.cisco.com/go/mars/

And then click the Download Software link in the Support box on the right side of the MARS product home page.

Result: The Download Software page loads.

From this top-level page, you can select one of the following options:

CS-MARS IPS Signature Updates Archives

CS-MARS IPS Signature Updates

CS-MARS Patches and Utilities (supplementary files)

CS-MARS Recovery Software

CS-MARS Upgrade Packages


Note If you are upgrading from a release earlier than those posted on CCO, please contact Cisco support for information on obtaining the required images. Do not attempt to skip releases along the upgrade path.


For information on obtaining a CCO account, see the following URL:

http://www.cisco.com/en/US/applicat/cdcrgstr/applications_overview.html

Specify the Proxy Settings for the Global Controller or Local Controller

If you know that your appliance cannot directly access the Internal Upgrade Server, you can specify the proxy settings. This procedure describes how to specify the proxy settings with the assumption that you will upgrade the appliance from the user interface associated with that appliance. For information on upgrading a Local Controller from within the Global Controller user interface, see Upgrading a Local Controller from the Global Controller.

To specify proxy settings, follow these steps:


Step 1 Open the MARS user interface in your browser.

Step 2 Select Admin > System Parameters > Proxy Settings.

Step 3 In the Proxy Address and Proxy Port fields, enter the address and port used by the proxy server that sits between your appliance and the Internal Upgrade Server.

Step 4 (Optional) In the Proxy User field, specify the username that the appliance must use to authenticate to the proxy server.


Note This username and password pair is neither the Cisco.com nor the Internal Upgrade Server login and password.


Step 5 (Optional) In the Proxy Password field, specify the password associated with the username you just provided.

Step 6 Click Submit to save your changes.
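
For readers scripting their own downloads from the Internal Upgrade Server, an authenticating proxy of the kind configured above is conventionally expressed as a user:password@host:port URL. The following Python sketch is purely illustrative; MARS itself does not expose this API, and the host name is a placeholder:

```python
import urllib.request

def build_proxied_opener(proxy_host, proxy_port, user=None, password=None):
    """Build a urllib opener that routes HTTP and HTTPS requests through
    an authenticating proxy, mirroring the Proxy Address, Port, User,
    and Password fields described in the procedure above."""
    if user:
        proxy_url = "http://%s:%s@%s:%d" % (user, password, proxy_host, proxy_port)
    else:
        proxy_url = "http://%s:%d" % (proxy_host, proxy_port)
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)
```

The opener can then be used in place of urllib's default handlers when fetching packages through the proxy.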


Upgrade Global Controller or Local Controller from its User Interface


Note This procedure is valid for versions 3.4.1 and later.


To upgrade the appliance from the user interface, follow these steps:


Step 1 Open the MARS user interface in your browser.

Step 2 Select Admin > System Maintenance > Upgrade.

Step 3 In the IP Address field, enter the address of the server where the upgrade package files are stored.

Step 4 In the User Name and Password fields, enter your Internal Upgrade Server login information.


Note MARS requires that the Internal Upgrade Server enforce user authentication. Therefore, you must specify a username and password pair to authenticate to the server.


Step 5 In the Path field, specify the path where the package file is stored, relative to the type of server access used.

Step 6 Select the appropriate protocol in the Server Type box.

You can download the install package using either HTTPS or FTP.

Step 7 In the Package Name field, specify the full name of the package file that you have downloaded.

Step 8 Click Download.

Result: Depending on the size of the package, this download can take some time. After the download is complete, the Install button becomes active.

Step 9 Click Install.

Result: After you click Install, the system needs some time to process the upgrade. After the upgrade is complete, the system reboots. During the upgrade, the user interface is also restarted.


Upgrade from the CLI

You can connect to the Internal Upgrade Server and complete the upgrade using HTTP or HTTPS, or you can download the upgrade package onto an FTP server and perform the upgrade. For more information on the upgrade command, see pnupgrade, page A-50.

To upgrade using the CLI, follow these steps:


Step 1 Log in to the appliance via the console port or SSH connection.

Step 2 Enter your MARS login name and password.

Step 3 To verify that the appliance is running the prerequisite version, run the CLI command:

version

The appliance must be running the supported prerequisite version. If it is not, you must follow the upgrade path to reach that version.

Step 4 Do one of the following:


Note MARS requires that the Internal Upgrade Server enforce user authentication. Therefore, you must specify a username and password pair to authenticate to the server, whether it is accessed via HTTP, HTTPS, or FTP. In addition, if you are passing through a proxy server, that server must also enforce inline authentication.


To upgrade from a CD-ROM located in the appliance's DVD drive, run the CLI command:

pnupgrade cdrom://package/pn-ver.pkg

Where package is the path on the CD where you have stored the *.pkg file and ver is the version number of the package file to which you want to upgrade, such as 3.3.4.

To upgrade from an internal HTTP or HTTPS server, run the CLI command:

pnupgrade https://upgrade.myhttpserver.com/upgrade/packages/ 
pn-ver.pkg [user] [password]

— or —

pnupgrade http://upgrade.myhttpserver.com/upgrade/packages/ 
pn-ver.pkg [user] [password]

Where upgrade.myhttpserver.com/upgrade/packages is the server name and path where you have downloaded the *.pkg file, ver is the version number, such as 3.3.4, and [user] and [password] are your Internal Upgrade Server login name and password.

To upgrade from your FTP server after you have downloaded the file, run the CLI command:

pnupgrade ftp://upgrade.myftpserver.com/upgrade/packages/ 
pn-ver.pkg [user] [password] 

Where upgrade.myftpserver.com/upgrade/packages is the server name and path where you have downloaded the *.pkg file, ver is the version number, such as 3.3.4, and [user] and [password] are your Internal Upgrade Server login name and password.

To upgrade from the Internal Upgrade Server through a proxy server, run the CLI command:

pnupgrade proxyServerIP:proxyServerPort [proxyUser:proxyPassword] 
https://upgrade.myhttpserver.com/upgrade/packages/pn-ver.pkg [user] [password]

Where the variables are defined as follows:

proxyServerIP:proxyServerPort identifies the IP address/port pair that connects to the proxy server residing between your appliance and the Internal Upgrade Server.

proxyUser:proxyPassword identifies the username and password pair required for the appliance to authenticate to the proxy server.

upgrade.myhttpserver.com/upgrade/packages is the server name and path where you have downloaded the *.pkg file.

ver is the version number, such as 3.3.4.

[user] and [password] are your Internal Upgrade Server login name and password.

Result: A progress bar indicates the download percentage. After download is complete, the system takes some time to process the upgrade. After the upgrade is complete, the system reboots.
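The transport-specific forms above differ only in the URL scheme. As a sketch, a hypothetical helper (not part of the MARS CLI; all server names, version numbers, and credentials below are placeholder values) can assemble the command line from its parts:

```shell
# Hypothetical helper (not part of the MARS CLI) that assembles the
# pnupgrade command line; all arguments shown are placeholder values.
build_pnupgrade() {
  proto="$1"; server="$2"; path="$3"; ver="$4"; user="$5"; pass="$6"
  echo "pnupgrade ${proto}://${server}/${path}/pn-${ver}.pkg ${user} ${pass}"
}

build_pnupgrade https upgrade.example.com upgrade/packages 3.3.4 admin secret
```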


Upgrading a Local Controller from the Global Controller

When upgrading a Local Controller from within the Global Controller user interface, you need to determine whether the Local Controller resides behind a proxy server. If so, you must configure the proxy settings for the Local Controller within the Global Controller user interface. After you have specified the settings, you can upgrade the Local Controller as you normally would.


Note If Local Controller proxy information is not provided and you attempt to download an upgrade for that appliance, the Local Controller attempts to connect to the Internal Upgrade Server and fails after a period of time.


When you upgrade a Global Controller and its monitored Local Controllers, you first upgrade the Global Controller, which requires that you identify the Internal Upgrade Server information. The Global Controller then pushes this server information to the selected Local Controllers, which allows each Local Controller to locate the Internal Upgrade Server and start the download and upgrade process. The Local Controllers do not retrieve the upgrade package from the Global Controller.

Before You Begin

This procedure is valid for versions 3.4.1 and later.

Verify that each target Local Controller is running the prerequisite software version, that is, the version the Global Controller was running before its upgrade.


Note If you upgrade a Global Controller/Local Controller pair, the Local Controller may appear offline for the first 10 minutes after the appliances reboot. The scheduler wakes up and re-syncs 10 minutes after startup.

If you notice that the Local Controller appears offline, verify that at least 10 minutes have passed since the appliances rebooted. Alternatively, you can jump start the communication by navigating to Admin > Local Controller Management in the Global Controller user interface.


Specify the Proxy Settings in the Global Controller

To specify the proxy settings for a Local Controller in the Global Controller user interface, follow these steps:


Step 1 Open the MARS user interface in your browser.

Step 2 Select Admin > System Maintenance > Upgrade.

Step 3 Click Proxy Settings next to the Local Controller that you want to upgrade.

Result: The Global Controller user interface loads the Proxy Information page (Admin > System Parameters > Proxy Settings) on the selected Local Controller.

Step 4 In the Proxy Address and Proxy Port fields, enter the address and port used by the proxy server that sits between your appliance and the Internal Upgrade Server.

Step 5 In the Proxy User field, specify the username that the appliance must use to authenticate to the proxy server.


Note This username and password pair is not the Internal Upgrade Server Login and Password. MARS requires that proxy servers enforce inline user authentication. Therefore, you must specify a username and password pair to authenticate to the proxy server.


Step 6 In the Proxy Password field, specify the password associated with the username you just provided.

Step 7 Click Submit to save your changes.


Upgrade Local Controller from the Global Controller User Interface

You can upgrade any Local Controllers that are managed by a Global Controller from within the Global Controller user interface. This enables you to work your way through the list of Local Controllers without connecting to each appliance individually.


Step 1 Open the MARS user interface in your browser.

Step 2 Select Admin > System Maintenance > Upgrade.

Result: The list of Local Controllers that can be selected to upgrade appears.

Step 3 In the Login and Password fields, enter the Internal Upgrade Server login and password that you have assigned to your Internal Upgrade Server.


Note MARS requires that the Internal Upgrade Server enforce user authentication. Therefore, you must specify a username and password pair to authenticate to the server.


Step 4 Select the check box next to the Local Controller to upgrade, and click Download.

If you have specified proxy settings for the selected appliance, a popup window prompts you to verify the settings. After you verify the information, click OK. If you have forgotten to enter proxy information, click Cancel and then enter the proxy information for that Local Controller as described in Specify the Proxy Settings in the Global Controller.

Result: Depending on the size of the package, this download can take some time. After the download is complete, the Install button becomes active.

Step 5 Click Install.

Result: After you click Install, the remote system needs some time to process the upgrade. After the upgrade is complete, the remote system reboots. During the upgrade, the user interface is also restarted.


Configuring and Performing Appliance Data Backups

You can archive data from a MARS Appliance and use that data to restore the operating system (OS), system configuration settings, dynamic data (event data), or the complete system. The appliance archives and restores data to and from an external network-attached storage (NAS) system using the network file system (NFS) protocol. While you cannot schedule when the data backup occurs, the MARS Appliance performs a configuration backup every morning at 2:00 a.m. and events are archived every hour. The configuration backup can take several hours to complete.

When archiving is enabled, dynamic data is written twice: once to the local database and once to the NFS archive. As such, the dynamic data that is archived includes only the data that is received or generated after you enable the data archive setting. Therefore, we recommend that you enable archiving before configuring your appliance to receive audit events from reporting devices.

You can use the same NFS server to archive the data for more than one MARS Appliance; however, you must specify a unique directory in the NFS path for each appliance that you want to archive. If you use the same base directory, the appliances overwrite each other's data, effectively corrupting the archives.


Note For the complete list of supported NFS servers, see:

http://www.cisco.com/en/US/docs/security/security_management/cs-mars/4.3/compatibility/local_controller/dtlc43x.html


Each MARS Appliance seamlessly archives data using an expiration date that you specify. When the MARS internal storage reaches capacity, it automatically purges the data in the oldest partition of the local database, roughly 10% of the stored event and session data. The data in the NFS file share has a life span specified in days. Therefore, to keep a year's worth of data, you would specify 365 days as the value for the Remote Storage Capacity (in Days) field. All data older than 365 days is purged from the archive file.

When planning for space requirements, use the following guidance: estimate 6 GB of storage space for one year's worth of data received at a sustained 10 events/second. This estimate assumes an average of 200 bytes/event and a compression factor of 10, both realistic mean values. In addition to capacity planning, place your NFS server so that a reliable network connection capable of transmitting 10 MB/second exists between the NFS server and the MARS Appliance. Consider using the eth1 interface to avoid high-traffic networks that might introduce latency and to ensure that the backup operation does not compete with other operations in the MARS Appliance. Also, define a default route to the NFS server on the MARS Appliance, and verify that any intermediate routers and firewalls allow multi-hour NFS connections to prevent session timeouts during the backup operation.
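The 6 GB figure follows from simple arithmetic, which can be checked directly:

```shell
# Back-of-the-envelope check of the storage estimate:
# 10 events/s * 200 bytes/event * 86400 s/day * 365 days, 10:1 compression.
bytes_per_year=$((10 * 200 * 86400 * 365 / 10))
echo "$bytes_per_year bytes (~$((bytes_per_year / 1000000000)) GB) per year"
```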


Note Data archiving is local to a given appliance. When you configure data archiving on a Global Controller, you are archiving the data for that appliance; you cannot configure the Global Controller to archive data from Local Controllers that it monitors.


For more information on the uses and format of the archived data, see the following topics:

Typical Uses of the Archived Data

Format of the Archive Share Files

Archive Intervals By Data Type

Guidelines for Restoring

pnrestore, page A-43

To configure data archiving, you must perform the following procedures:

1. Configure the NFS server:

Configure the NFS Server on Windows

Configure the NFS Server on Linux

Configure the NetApp NFS Server

2. Configure Lookup Information for the NFS Server

3. Configure the Data Archive Setting for the MARS Appliance

Typical Uses of the Archived Data

While the primary use of an archive is to restore the appliance in response to a catastrophic software failure, the archived data provides the following alternate uses:

Use Admin > System Maintenance > Retrieve Raw Messages to analyze historical raw messages from periods that exceed the capacity of the local database. The data returned from raw message retrieval is simply the audit message as sent by the reporting device, such as a syslog message. For more information, see Retrieving Raw Messages.

Manually view the archived event records, which are compressed using gzip. Viewing the data in this manner is faster than retrieving raw messages from either the local database or the archive. However, the record format is more complicated than the simple raw event returned by the Retrieve Raw Messages operation. It includes all the data necessary to restore the incidents and dependent data, including the raw message and the system data required to correlate that message with the session, device type, five tuple (source IP, destination IP, protocol, source port, and destination port), and all other data points. For more information, see Format of the Archive Share Files and Access the Data Within an Archived File.

Image a standby or secondary MARS Appliance to either swap into the network in the event of a hardware failure or to access full query and report features for historical time periods. For more information, see Configuring a Standby or Secondary MARS Appliance, and Guidelines for Restoring.

Format of the Archive Share Files

The MARS archive process runs daily at 2:00 a.m., and it creates a dated directory for its data. You cannot specify a different time to archive the data.

The pnos directory is where the operating system backup is stored.

06/12/2005  11:32p      <DIR>          .
06/12/2005  11:32p      <DIR>          ..
07/09/2005  01:30a      <DIR>          pnos          <-- OS Backup Directory
07/08/2005  04:49p      <DIR>          2005-07-08    <-- Daily Data Backup Directory
07/10/2005  12:09a      <DIR>          2005-07-10
07/11/2005  12:12a      <DIR>          2005-07-11
07/12/2005  12:12a      <DIR>          2005-07-12
07/13/2005  12:16a      <DIR>          2005-07-13
07/14/2005  02:02a      <DIR>          2005-07-14
07/15/2005  02:02a      <DIR>          2005-07-15
07/16/2005  02:02a      <DIR>          2005-07-16
07/17/2005  02:02a      <DIR>          2005-07-17
07/18/2005  02:02a      <DIR>          2005-07-18
07/19/2005  02:02a      <DIR>          2005-07-19
07/19/2005  09:46p      <DIR>          2005-05-26
07/20/2005  07:16a      <DIR>          2005-05-27
07/20/2005  07:17a      <DIR>          2005-07-20
07/22/2005  12:13a      <DIR>          2005-07-22
07/21/2005  12:09a      <DIR>          2005-07-21
07/23/2005  12:15a      <DIR>          2005-07-23
               0 File(s)              0 bytes
              58 Dir(s)   4,664,180,736 bytes free

Within each daily directory, subdirectories are created for each data type. The following example identifies the directory type in the comments.

Directory of D:\MARSBackups\2005-07-08

07/08/2005  04:49p      <DIR>          .
07/08/2005  04:49p      <DIR>          ..
07/08/2005  04:49p      <DIR>          CF	<-- Configuration Data
07/08/2005  05:00p      <DIR>          IN	<-- Incident Data
07/08/2005  05:16p      <DIR>          AL	<-- Audit Logs
07/08/2005  05:16p      <DIR>          ST	<-- Statistics Data
07/08/2005  05:16p      <DIR>          RR	<-- Report Results
07/08/2005  05:49p      <DIR>          ES	<-- Raw Event Data
               0 File(s)              0 bytes
               8 Dir(s)   4,664,180,736 bytes free

The .gz filenames in the raw event data directory identify the period of time that the archived data spans, in YYYY-MM-DD-HH-MM-SS format. Each filename has the form [dbversion]-[productversion]-[serialno]_[StartTime]_[EndTime].gz. The following examples illustrate this format:

ix-5248-524-1171238692_2007-02-12-00-04-46_2007-02-12-01-04-51.gz
rm-5248-524-1171238692_2007-02-12-00-04-46_2007-02-12-01-04-51.gz

Note Files starting with "ix" are index files and those starting with "rm" contain the raw messages.
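Because underscores delimit the timestamp fields, the time span of any archive file can be recovered from its name alone; a minimal sketch:

```shell
# Extract the start and end timestamps from an archive filename of the
# form prefix_StartTime_EndTime.gz; underscores delimit the fields.
f="rm-5248-524-1171238692_2007-02-12-00-04-46_2007-02-12-01-04-51.gz"
start=$(echo "$f" | cut -d_ -f2)
end=$(echo "$f" | cut -d_ -f3 | sed 's/\.gz$//')
echo "start=$start end=$end"
```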


Directory of D:\MARSBackups\2005-07-08\ES

07/08/2005  05:49p      <DIR>          .
07/08/2005  05:49p      <DIR>          ..
07/08/2005  05:49p              34,861 es-3412-342_2005-07-08-16-49-52_2005-07-08-17-49-47.gz
07/08/2005  05:49p              31,828 rm-3412-342_2005-07-08-16-49-52_2005-07-08-17-49-47.gz
07/08/2005  06:49p              49,757 es-3412-342_2005-07-08-17-49-49_2005-07-08-18-49-40.gz
07/08/2005  06:49p              48,154 rm-3412-342_2005-07-08-17-49-49_2005-07-08-18-49-40.gz
07/08/2005  07:49p              24,420 es-3412-342_2005-07-08-18-49-45_2005-07-08-19-49-52.gz
07/08/2005  07:49p              22,346 rm-3412-342_2005-07-08-18-49-45_2005-07-08-19-49-52.gz
07/08/2005  08:50p              44,839 es-3412-342_2005-07-08-19-49-47_2005-07-08-20-50-04.gz
07/08/2005  08:50p              41,534 rm-3412-342_2005-07-08-19-49-47_2005-07-08-20-50-04.gz
07/08/2005  09:50p              58,988 es-3412-342_2005-07-08-20-49-55_2005-07-08-21-50-06.gz
07/08/2005  09:50p              54,463 rm-3412-342_2005-07-08-20-49-55_2005-07-08-21-50-06.gz
07/08/2005  10:50p             130,604 es-3412-342_2005-07-08-21-49-58_2005-07-08-22-50-08.gz
07/08/2005  10:50p              85,437 rm-3412-342_2005-07-08-21-49-58_2005-07-08-22-50-08.gz
07/08/2005  11:50p             114,445 es-3412-342_2005-07-08-22-49-55_2005-07-08-23-50-10.gz
07/08/2005  11:50p              58,240 rm-3412-342_2005-07-08-22-49-55_2005-07-08-23-50-10.gz
07/09/2005  12:50a             110,556 es-3412-342_2005-07-08-23-50-02_2005-07-09-00-50-14.gz
07/09/2005  12:50a              53,977 rm-3412-342_2005-07-08-23-50-02_2005-07-09-00-50-14.gz
              16 File(s)        964,449 bytes
               2 Dir(s)   4,664,164,352 bytes free

The following is an example of the data found in the configuration data directory.

Directory of D:\MARSBackups\2005-07-08\CF

07/08/2005  04:49p      <DIR>          .
07/08/2005  04:49p      <DIR>          ..
07/08/2005  02:02a           2,575,471 cf_2005-07-08-02-02-02.pna
               1 File(s)      2,575,471 bytes
               2 Dir(s)   4,664,164,352 bytes free

Archive Intervals By Data Type

MARS archives data either daily or in near real time, depending on the type of data. Therefore, all the data in the MARS internal storage (local database) should also be present in the NFS storage, give or take a day's worth of certain data types.

MARS data consists of four types:

1. configuration data, such as topology and device settings, which is archived daily

2. audit trails of MARS web interface activity and MARS report results, which are archived daily

3. MARS statistics, such as charts in Summary/Dashboard, which are archived hourly

4. dynamic and event data, such as events, sessions, and incidents, which are archived quickly so they do not tax the MARS Appliance's local storage.

Configuration data, audit trails, and statistical data are written to the database first; at archive time, the data is written to local files and archived from those files. Dynamic and event data, however, is written in parallel to both the database and local files. Therefore, even after the data has been archived, it is likely to still be in the database.

In other words, dynamic and event data is initially stored in two locations: the NFS archive and MARS database. Later, when the MARS database partition becomes full, the database purge operation occurs to make room for new events—but those events and incidents were archived prior to the purge operation.


Note Once data is purged from the MARS local database, it cannot be queried. Queries and reports operate only on the data in the MARS database.


To account for temporarily unavailable NFS servers, the files for all data types are stored locally on the MARS Appliance for one day before they are purged. When you enable archiving in the web interface, you must also define the parameters for retaining the data in the NFS archive. As a result, MARS performs simple data maintenance on the NFS server by purging data outside the range specified in the Remote Storage Capacity (in Days) field of the Data Archiving page. For example, if the storage capacity value is 365 days, all data older than one year is purged from the NFS server.
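Daily directory names (YYYY-MM-DD) sort lexically in date order, so the retention cutoff amounts to a string comparison; a sketch with hypothetical directory names and a hypothetical cutoff:

```shell
# Sketch of the retention purge: daily directory names (YYYY-MM-DD)
# sort lexically in date order, so names below the cutoff are expired.
cutoff="2005-07-10"
for d in 2005-05-26 2005-07-08 2005-07-10 2005-07-23; do
  if [[ "$d" < "$cutoff" ]]; then
    echo "purge $d"
  fi
done
```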

Refer to Table 6-2 for the archive interval for each type of data.

Table 6-2 Archive Interval Description (4.3.1 and 5.2.4 and later)

Archive Folder and Data Type            Archive Interval                     Max. Interval   Schedule
AL: Audit log information               Once per day at 2:00 a.m.            n/a             Daily at 2:00 a.m.
CF: Configuration information           Once per day at 2:00 a.m.            n/a             Daily at 2:00 a.m.
ES: Events, sessions, and raw messages  Every 10 minutes or when a 3 MB      10 minutes      n/a
                                        (compressed) file size is reached,
                                        whichever threshold is met first
IN: Incidents                           Immediately                          1 minute (1)    n/a
RR: Report results                      Once per day at 2:00 a.m.            —               n/a
ST: Statistical data/counters           Hourly                               —               n/a

(1) If the event rate is higher, the real-time archive interval can be shorter than the Max. Interval.


Configure the NFS Server on Windows

Windows Services for UNIX (WSU) allows an NFS mount to be created on a Windows file server. This option is convenient and is often useful in lab environments or when UNIX expertise is unavailable. The following URLs support the configuration of this complimentary download from Microsoft Corporation:

Windows Services for UNIX 3.5 Download and Resources (System Requirements, Reviewer's Guide, etc.)

http://www.microsoft.com/downloads/details.aspx?FamilyID=896c9688-601b-44f1-81a4-02878ff11778&DisplayLang=en

Performance Tuning Guidelines for Microsoft Services for Network File System

http://technet.microsoft.com/en-us/library/bb463205.aspx

To install and configure the WSU 3.5 to operate with a MARS Appliance, perform the following tasks:

Install Windows Services for UNIX 3.5

Configure a Share using Windows Services for UNIX 3.5

Install Windows Services for UNIX 3.5

To configure the NFS server on a Windows server, follow these steps:


Step 1 Log in to the Windows server using an account with either local or domain-level administrative privileges.


Note If you install the services using an account without administrative privileges, the archive process fails.


Step 2 Download the Windows Services for UNIX 3.5.

Step 3 To install the Windows Services for UNIX, double-click SFU35SEL_EN.exe.

Step 4 Enter the folder where the program files should be extracted in the Unzip to folder field, and click Unzip.

We recommend defining a new folder, not using the temp folder under the local profile. The unzip process can take several minutes.

Step 5 Open the folder where you extracted the files, and double-click SfuSetup.msi.

Step 6 Click Next to continue.

The Customer Information panel appears.

Step 7 Enter values for the User name and Organization fields, and click Next.

The License and Support Information panel appears.

Step 8 Select the I accept the agreement option, and click Next.

Step 9 Select the Custom Installation option, and click Next.

Step 10 At a minimum, you must select Entire feature (including any subfeatures, if any) will be installed on local hard drive for the following components under Windows Services for UNIX in the Components list, and then click Next:

NFS (This option includes the Client for NFS and Server for NFS subfeatures.)

Authentication tools for NFS (This option includes the User Name Mapping, Server for NFS Authentication, and Server for PCNFS subfeatures.)


Note This procedure assumes that you have selected Entire feature will not be available for all components other than NFS and Authentication tools for NFS.


The Security Settings panel appears.

Step 11 Verify that the Change the default behavior to case sensitive check box is not selected, and then click Next.

As the MARS Appliance does not use a special account for NFS authentication, you do not need to change the default settings.

Step 12 The User Name Mapping panel appears.

Step 13 Verify that the Local User Name Mapping Server and Network Information Service (NIS) options are selected, and then click Next.

A second User Name Mapping panel appears.

Step 14 Enter values for the following fields, and then click Next:

Windows domain name. We recommend accepting the default value, which is the local host name.

(Optional) NIS domain name

(Optional) NIS server name

The Installation Location panel appears.

Step 15 Enter the desired installation location and click Next.

The Installing panel appears, presenting the progress of the installation. When the installation completes, the Completing the Microsoft Windows Services for UNIX Setup Wizard panel appears.

Step 16 Click Finish to complete the installation and close the Setup Wizard.

Step 17 Reboot the computer.

You have successfully installed the required NFS components. Now you must define and configure a share to be used by the MARS Appliance for backups and archiving. For more information, see Configure a Share using Windows Services for UNIX 3.5.


Configure a Share using Windows Services for UNIX 3.5

Configuring the share involves identifying the folder to share and specifying the correct permissions and access.

To configure WSU 3.5 as an NFS server for a MARS Appliance, follow these steps:


Step 1 Start Windows Explorer on the Windows host where you installed WSU 3.5.

Step 2 Create the folder where you want the MARS archives to be stored.

An example folder is C:\MARSBackups.

Step 3 Right-click on the folder you created and click the NFS Sharing tab.

Step 4 Select the Share this folder option, and enter a name in the Share name field.

An example share name can be the same as the folder name, MARSBackups.

Step 5 Select the Allow Anonymous Access check box.

As the Windows server cannot directly authenticate the MARS Appliance, you must select this option.

Step 6 Click Permission.

The NFS Share Permissions dialog box appears.

Step 7 Select ALL MACHINES under Name, and then select No Access from the Type of Access list.

Step 8 Click Add.

Step 9 Enter the IP address of the MARS Appliance, and click OK.

Step 10 Select the IP address of the MARS Appliance, then select Read-Write from the Type of Access list. Ensure that ANSI is selected from the Encoding list.

Step 11 Click OK to save your changes and close the NFS Share Permissions dialog box.

Step 12 Click Apply to enable your changes.


Note If clicking Apply has no effect, you did not reboot the server after installing WSU 3.5. To work around this issue, reboot the server and repeat this procedure.


Step 13 From the DOS command window, enter the following commands:

cd <PathToParentOfShareFolder>

cacls <ShareFolderName> /E /G everyone:F

These commands modify the shared folder's permissions so that the Everyone group has full local file system access to the folder. Example usage:

cd C:\archive 
cacls MARSBackups /E /G everyone:F 

Step 14 Click Start > Control Panel > Administrative Tools > Local Security Policy.

Step 15 Under Local Security Policy > Security Options, double-click Network Access: Let Everyone permissions apply to anonymous users, select Enabled, and click OK.

This option equates the Anonymous user to the Everyone user.

Step 16 Configure exceptions for the required NFS ports in the Windows Firewall or other firewall application running on the server.

You have completed the NFS configuration settings for the Windows server. To enable logging for debug purposes, continue with Enable Logging of NFS Events. Otherwise, continue with Configure the Data Archive Setting for the MARS Appliance.


Enable Logging of NFS Events

For troubleshooting purposes, you can enable NFS Server logging on a Windows host that is running the Microsoft Windows Services for UNIX 3.5.

To enable NFS server logging on the Windows host, follow these steps:


Step 1 Click Start > All Programs > Services for UNIX Administration > Services for UNIX Administration.

Step 2 Under Services for UNIX, select Server for NFS.

Step 3 Specify the folder where you want the log file to appear under Log events in this file:

By default the log file appears in C:\SFU\log directory.

Step 4 Verify that all the check boxes are selected.

Step 5 Click Apply to save your changes.

Step 6 Continue with Configure the Data Archive Setting for the MARS Appliance.


Configure the NFS Server on Linux

NFS is supported natively on Linux, so any Linux server can act as the NFS server. Because a Linux file server can be built inexpensively, we highly recommend building a file server dedicated to MARS archived data.

This section presents an example configuration as guidance for configuring your NFS server to archive the data for a MARS Appliance. For each MARS Appliance that archives to a given NFS server, you must set up a directory on the server to which the appliance can read and write. The following procedure identifies the steps required to accomplish this task.

To prepare a Linux NFS Server for archiving from a MARS Appliance, follow these steps:


Step 1 Log in to the NFS server using an account with root permissions.

Step 2 Create a directory for archiving data.

For example:

mkdir -p /archive/nameOfYourMARSBoxHere
chown -R nobody.nobody /archive
chmod -R 775 /archive


Note Mode 770 works only when all MARS Appliances run the same software generation (4.x or 5.x). Use 775 to support a mixed environment of 4.x and 5.3.x software and when performing migrations from 4.x to 5.3.x. Because of UID/GID differences between the 4.x and 5.x releases, you must allow r-x so that an appliance running 5.3.x can import files exported by a 4.x appliance.
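The difference between the two modes can be verified on any Linux host; a quick sketch using a temporary directory (GNU stat assumed):

```shell
# Show why 775 (not 770) is needed in mixed 4.x/5.x environments:
# 770 leaves "other" users with no access, while 775 grants them r-x.
d=$(mktemp -d)
chmod 770 "$d"
stat -c '%a' "$d"    # prints 770: other has no access
chmod 775 "$d"
stat -c '%a' "$d"    # prints 775: other can read and traverse
rm -rf "$d"
```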



Step 3 In the /etc/exports file, add the following line:

/archive/nameOfYourMARSBoxHere MARS_IP_Address(rw) 

Step 4 Restart the NFS service.

/etc/init.d/nfs restart
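A minimal sanity check of the exports line shape added in Step 3 (the path and client address below are placeholders):

```shell
# Check that an exports line has the expected shape: an absolute path,
# one client address, and the (rw) option.
line="/archive/pnmars 192.168.1.10(rw)"
if echo "$line" | grep -Eq '^/[^ ]+ [0-9.]+\(rw\)$'; then
  echo "exports line OK"
fi
```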


Configure the NetApp NFS Server

The NetApp NFS server differs from other Linux/UNIX NFS servers in that NetApp restricts the functionality of the shell environment running on the server. As such, you must use an external UNIX/Linux administrative host to change the permissions and ownership of the exported NFS directory.

Before You Begin

To perform the tasks in this procedure, you must configure an external Linux/UNIX administrative host. For information on configuring such a host, refer to the documentation for your Network Appliance server.

To prepare the NetApp NFS server so that the MARS Appliance can archive to it, follow these steps:


Step 1 If you have not already exported a directory on the NetApp NFS appliance, perform the following tasks from the NetApp web GUI.

a. Connect to the NetApp administrative host (http://hostname/na_admin/).

b. Click FilerView, then click NFS on the menu in the left pane.

c. If the exported directory already exists, click Manage Exports under NFS. Otherwise, click Add Export under NFS.

d. Select the following options on the NFS Export Wizard page, and click Next:

Read-Write Access

Root-Access

Security

The NFS Export Wizard - Path page appears.


Note If you are using a temporary NetApp administrative host, you can disable the host's access to the exported directory. To do so, do not select the Root-Access option. This configuration disables access by the host to the exported NFS directory.


e. Enter the path to the desired export directory in the Export Path field, and click Next.

The NFS Export Wizard - Read-Write Access page appears.

f. Click Add, enter the IP address of the MARS Appliance in the Host to Add field, and click OK.

g. Click Add, enter the IP address of the NetApp administrative host in the Host to Add field, click OK, and then click Next.

The NFS Export Wizard - Root Access page appears.

h. Click Add, enter the IP address of the NetApp administrative host (or the IP address of the Linux/UNIX server serving this purpose) in the Host to Add field, click OK, and then click Next.

The NFS Export Wizard - Security page appears.

i. Select the Unix Style option, and click Next.

The NFS Export Wizard - Commit page appears.

j. Verify that the settings are correct, and then click Commit.

Step 2 To change the permissions of the exported directory, enter the following commands on the NetApp administrative host:

mount NetAppIP:/PathToExport /mnt/YourMountPoint

chown nobody.nobody /mnt/YourMountPoint

chmod 775 /mnt/YourMountPoint


Note Mode 770 works only when all MARS Appliances run the same software generation (4.x or 5.x). Use 775 to support a mixed environment of 4.x and 5.3.x software and when performing migrations from 4.x to 5.3.x. Because of UID/GID differences between the 4.x and 5.x releases, you must allow r-x so that an appliance running 5.3.x can import files exported by a 4.x appliance.


Step 3 To verify that the /mnt/YourMountPoint directory has the expected ownership and permissions, enter the following command:

ls -l /mnt
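For a more precise check than scanning the ls output, the mode and ownership can be queried directly. This sketch uses a hypothetical local directory in place of the real /mnt/YourMountPoint mount:

```shell
# Stand-in for the mounted export; on the NFS host the real path would be
# /mnt/YourMountPoint with ownership nobody:nobody after Step 2
mkdir -p /tmp/YourMountPoint
chmod 775 /tmp/YourMountPoint
stat -c '%a %U:%G' /tmp/YourMountPoint   # prints the octal mode and owner:group
```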

Step 4 To unmount the directory, enter the following command:

umount /mnt/YourMountPoint

Step 5 Configure the MARS Appliance to use this path as the archiving directory, as described in Configure the Data Archive Setting for the MARS Appliance.

Configure Lookup Information for the NFS Server


Note These common guidelines apply to NFS servers running on either Linux or Windows.


Many services on a Linux system, such as ssh and the NFS server, use nslookup to obtain the hostname of the client. If the nslookup operation fails, the connection may fail or take a long time to complete the negotiation.

For the pnarchive and pnrestore operations to succeed, the NFS server must obtain the hostname of the MARS Appliance using its IP address. You can ensure that it obtains this information by doing one of the following:

Add the NFS client (MARS Appliance) information to the /etc/hosts file on the NFS server. On Windows servers, the hosts file is located at WINDOWS\system32\drivers\etc\.

Add the MARS Appliance information to your DNS server.

During a typical restore process, the MARS Appliance is first re-imaged from the DVD, upgraded to the correct software version, and then the restore operation is performed. During the DVD re-image process, the name of the appliance is changed to the factory default, which is pnmars. If you do not want to change the name of the appliance before you attempt to restore it from the NFS server, you must add an entry for pnmars to the DNS server or to the /etc/hosts file on the NFS server so that, during the restore operation, the NFS server can perform an IP address-to-hostname lookup for the MARS Appliance.

After the restore operation completes, the MARS Appliance is restored to the name saved in the archived OS package. You should have already included this name in the DNS server or the /etc/hosts file of the NFS server. Otherwise, the archive and restore operations may not function properly.
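As a concrete illustration, the lookup entries might look like the following. The IP address and the production hostname (mars-lc1) are hypothetical placeholders, and on a real server the lines would be appended to /etc/hosts rather than to an example file:

```shell
# Write example hosts entries; 10.1.1.20 and mars-lc1 are placeholders
cat >> /tmp/hosts.example <<'EOF'
10.1.1.20    pnmars      # factory-default name used during a DVD re-image/restore
10.1.1.20    mars-lc1    # production hostname saved in the archived OS package
EOF
cat /tmp/hosts.example
```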

Configure the Data Archive Setting for the MARS Appliance

You can archive the data and the system software that is running on a MARS Appliance to a remote server. This data archival includes operating system (OS) and upgrade/patch data, system configuration settings, and dynamic data, such as system logs, incidents, generated reports, and the audit events received by the appliance. The feature provides a snapshot image of the appliance.


Note While complete system configuration data is archived, the dynamic data that is archived includes only the data that is received or generated after you enable the data archive setting. Therefore, we recommend that you enable archiving before configuring your appliance to receive audit events from reporting devices.


Using archived data, you can restore your appliance in the event of a failure, as long as the data is not corrupted. In this capacity, data archiving provides an alternative to re-imaging your appliance with the Recovery DVD.

Before You Begin

You must set up the NFS server correctly to archive the appliance's data. See Configure the NFS Server on Windows or Configure the NFS Server on Linux.

You must configure the basic network settings for the appliance.

To configure the data archive settings for a given MARS Appliance, follow these steps:


Step 1 Select Admin > System Maintenance > Data Archiving.

Step 2 In the Remote Host IP field, enter the IP address of the remote NFS server or a NAS system that supports the NFS protocol.

Step 3 In the Remote Path field, enter the export path on the remote NFS server or a NAS system where you want to store the archive files.

For example, /MARSBackups would be a valid value for a Windows host with an NFS share named MARSBackups. The forward slash is required to resolve the UNC share name.

Step 4 In the Archiving Protocol field, select NFS.

No other options are available.

Step 5 In the Remote storage capacity in Days field, enter one of the following values:

The maximum number of days for which you want the archive server to retain data. The server keeps your data for the number of days previous to the current date.

The maximum number of days of data that the archive server can retain. In other words, you are identifying the upper storage capacity of the archive server.

Step 6 Click Start to enable archiving for this appliance.


Note After starting archiving, if you see an error message such as "invalid remote IP or path," your NFS server is not correctly configured. If you receive these messages, consult Configure the NFS Server on Windows or Configure the NFS Server on Linux.


Result: A status page appears. Click Back to return to the Data Archiving page.

Step 7 If you need to change any values on this page, enter the value and click Change.


Tip To stop archiving data, return to the Data Archiving page and click Stop.



Access the Data Within an Archived File

You can access the event data in an archived file to review the events contained therein. You might want to perform this task to examine a particular time range of events or to perform post-processing on the data.


Tip For other options on accessing archived data, see Typical Uses of the Archived Data.


To access the data within an archived file, follow these steps:


Step 1 Perform the following command at the command line interface of the archive server:

cd <archive_path>

where archive_path is the remote path value specified in Configure the Data Archive Setting for the MARS Appliance.

Step 2 To select the archive to review, enter the following command:

cd <YYYY-MM-DD>

where YYYY-MM-DD is the date that the archive file was created.

Step 3 To view the list of archive files for the selected data, enter the following command:

cd ES
ls -l

Step 4 To extract the data from the archive file, enter the following command:

gunzip <filename>

where filename is the name of the file to extract. The list of available files is based on the timestamp of when each file was created.

Step 5 To view the file's contents, enter the following command:

vi <filename>

You can use any text editor or run scripts against the data in these files. However, you should not change the contents of these zipped files or leave extracted data or additional files in the archive folders. MARS cannot process new or extracted files when performing a restore operation.
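The extraction steps above can be sketched end to end. This example builds a small mock archive locally (the date, directory, and file names are hypothetical stand-ins for a real archive share) and copies the file out before extracting, so that no extracted data is left in the archive folders:

```shell
# Build a mock daily archive directory containing one gzipped event file
mkdir -p /tmp/mars-archive/2008-01-15/ES
echo "sample event record" | gzip > /tmp/mars-archive/2008-01-15/ES/es-0000.gz

# Copy the file out of the archive tree before extracting it
cp /tmp/mars-archive/2008-01-15/ES/es-0000.gz /tmp/es-0000.gz
gunzip -f /tmp/es-0000.gz     # produces /tmp/es-0000
cat /tmp/es-0000              # review the events or run scripts against them
```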


Troubleshooting Data Archiving

Table 6-3 identifies possible errors and likely causes and solutions.

Table 6-3 Error Table for Archive Server and MARS Integration

Error/Symptom

Connection to remote archive server fails! (archive server IP: <address>, exported path: /<archive_server_path>)

CS-MARS appliance cannot connect to the remote archive server that is set up for archiving the configuration and event data.

Please verify that the connection of the archive server at IP: '<address>' to the CS-MARS appliance is OK and CS-MARS appliance has the write permission to the exported path: '/<archive_server_path>' on the archive server!

Workaround/Solution

The connection between the MARS Appliance and the archive server has failed. Verify that the route between the two devices is allowed (ping the archive server from the MARS Appliance CLI) and that the archive server is running. Verify that the exported path value is correct under Admin > System Maintenance > Data Archiving. Configure exceptions for the required NFS ports in any firewall or port-blocking software running on the archive server.

Note You will receive an e-mail periodically until the NFS connection issue is resolved. For 5.3.2/5.3.3, two e-mail notifications are sent when a write problem occurs, followed by a new message every 24 hours until the issue is resolved. For 5.3.4, one e-mail is sent every two minutes until the issue is resolved.
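When diagnosing the connection failure from a Linux host, two quick checks usually narrow the problem down. The address below is a placeholder for your archive server, and showmount ships with the standard nfs-utils package:

```shell
ping -c 3 <archive_server_ip>      # is there a route to the archive server?
showmount -e <archive_server_ip>   # does the server export the expected path?
```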


Recovery Management

MARS Appliance functionality includes two procedures that you can perform using the MARS Appliance Recovery DVD-ROM. The approach you take to recover your appliance depends on whether you have archived data that you want to recover as well. Two decisions affect how you recover your MARS Appliance:

Re-Image a Global Controller or Local Controller. The procedure for recovering an appliance is unique to the role that the appliance has in the STM system. Global Controllers require an additional operation on each monitored Local Controller.

Archived Data. If you have been archiving data for the appliance that you wish to recover, there is an additional step following recovery of the appliance.


Caution The recovery process erases the MARS Appliance hard disk drive. You permanently lose all configuration and event data that you have not previously archived or backed up. If possible, write down your license key before you re-image the appliance. You must provide this license key during the initial configuration following any re-image operation, and it is not restored as part of archived data.

The procedures, detailed in this section, are as follows:

Recovering a Lost Administrative Password

Downloading and Burning a Recovery DVD

Recovering the MARS Operating System

Re-Imaging a Local Controller

Re-Imaging a Global Controller

Restoring Archived Data after Re-Imaging a MARS Appliance

Recovering a Lost Administrative Password

If you lose the password associated with the pnadmin account, you cannot recover the password. You must re-image the appliance, which resets the password to the factory defaults, as described in Re-Imaging a Local Controller, and Re-Imaging a Global Controller. If you have configured the MARS Appliance to archive data, as described in Configuring and Performing Appliance Data Backups, you can also recover the configuration and event data using the procedure in Restoring Archived Data after Re-Imaging a MARS Appliance.

Downloading and Burning a Recovery DVD

If you do not have the MARS Appliance Recovery DVD-ROM that shipped with your MARS Appliance, or you want to use a new image to expedite the post-recovery upgrade process, you can download the current recovery image from the Cisco.com software download pages dedicated to MARS. You can access these pages at the following URL, assuming that you have a valid Cisco.com account and that you have registered your SMARTnet contract number for your MARS Appliance.

Top-level download page:

http://www.cisco.com/go/mars/

And then click the Download Software link in the Support box on the right side of the MARS product home page.

Result: The Download Software page loads.

From this top-level page, you can select CS-MARS Recovery Software.

After you download the ISO image, for example, csmars-4.1.1.iso, you must burn that file onto a DVD-ROM. The files are typically 1.42 GB or larger.
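Given the file size, it is prudent to confirm that the download is intact before burning. This sketch computes a local checksum; the filename is a stand-in, and the reference digest to compare against would come from the Cisco.com download page:

```shell
# Stand-in for the downloaded image; substitute the real csmars-*.iso file
echo "placeholder image contents" > /tmp/csmars-test.iso
md5sum /tmp/csmars-test.iso   # compare this digest with the published value
```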

The following guidelines are defined:

Use DVD+R or DVD+RW (DVD-R is not supported) and the correct media for either of those standards.

Do not burn the DVD at a speed higher than 4X.

To make a bootable DVD, you must burn the *.iso file onto the DVD using the bootable ISO DVD format; simply copying the file to the DVD does not make it bootable. Most DVD burner software has a burn image function that extracts the files and makes the DVD bootable.

Recovering the MARS Operating System

For the MARS 110, 210, GC2, and their variant models, the MARS operating system (OS) is stored separately from the MARS application and event data, on a flash disk-on-module (DOM) drive in the appliance. With the OS and application separated, if the MARS application hangs due to a RAID failure, you can log in from a remote host and still retrieve log and trace data to assist in identifying the root cause of the failure.

The flash drive is considered corrupted when, for example, system libraries or executable files are missing or are the wrong size (as reported during a consistency check), or when the previous configuration is lost. When corruption occurs, you will see symptoms such as a failure to boot or to deploy the previous configuration, an inability to execute certain commands, failures during the file system consistency check, or errors reporting missing files.

If the flash drive becomes corrupted, you can restore the OS using a Recovery DVD. For information on creating a Recovery DVD, see Downloading and Burning a Recovery DVD. The recovery operation restores the MARS OS without prompting for installation option information, such as the model or role (Global Controller vs. Local Controller). The flash drive also stores the system configuration data (IP addresses, DNS configuration settings, host name, and license file). During an OS recovery, the daily backup of the configuration data is copied from the hard drive to the flash drive so that your configuration can be reapplied, eliminating the need to reconfigure or relicense the appliance.

Before You Begin

Ensure that the release number of the Recovery DVD matches the operating system running on your appliance. Issues may result if a DVD of an earlier release is used to recover an appliance running a newer release. The DVD does not check versions to prevent this issue.

During the OS recovery operation, the system configuration data is copied from the hard drive to the flash drive. The system configuration data is created as part of the daily backup operation and is created nightly at 2:00 A.M. If your appliance has not been running long enough to back up the system configuration, then the OS is restored but the configuration is not.

If you changed your system network settings (DNS, IP address, or hostname) after the last nightly backup, you must manually (using the ifconfig, hostname, and the dns commands) correct the settings once the OS recovery operation completes.

To recover the operating system for your MARS Appliance, follow these steps:


Step 1 Connect your monitor to the MARS Appliance's VGA port and your keyboard to the PS/2 keyboard port. (To view a diagram of the MARS Appliance VGA and serial ports, refer to the appropriate model in Hardware Descriptions—MARS 25R, 25, 55, 110R, 110, 210, GC2R, and GC2, page 1-4.)

Step 2 Disconnect any connected network cables from the eth0 and eth1 ports.

Step 3 Put the Recovery DVD in the MARS Appliance DVD-ROM drive.

Step 4 Do one of the following:

Log in to the MARS Appliance as pnadmin and reboot the system using the reboot command

Power cycle the MARS Appliance

Result: The following message displays on the console:

Please Choose A MARS Model To Install...
1. Distributed Mars - Local Controller
2. Distributed Mars - Global Controller
3. Mars Operating System Recovery
4. Quit

Step 5 Using the arrow keys, select 3. Mars Operating System Recovery at the Recover menu and press Enter.

Result: The OS binary download to the appliance begins. This process takes approximately 15 minutes. After the image download is complete, the Recovery DVD is ejected and the following message appears on the console:

Please remove the installation CD and press Reboot to finish the installation.

Step 6 Remove the Recovery DVD from the MARS Appliance.

Step 7 Press Enter to restart the MARS Appliance.

Result: The MARS Appliance reboots and synchronizes the configuration information between the flash drive and the hard drive.

Step 8 Reconnect any network cables to the eth0 and eth1 ports.

Because the OS recovery does not affect configuration data or event data, the system should be accessible with no further configuration requirements.


Re-Imaging a Local Controller

Use the MARS Appliance Recovery DVD-ROM to re-image the Local Controller if necessary. This operation destroys all data and installs a new image. In addition to preparing the device and later restoring any archived data, you must also perform three time-consuming appliance recovery phases:

Image downloading from the CD (about 30 minutes)

Image installation after the download (about 90 minutes)

Basic system configuration (about 5 minutes)


Caution Performing this procedure destroys all data stored on the MARS Appliance.

Before You Begin

You must provide the license file during the initial configuration following the re-image operation.

To re-image your Local Controller, follow these steps:


Step 1 Connect your monitor to the MARS Appliance VGA port and your keyboard to the PS/2 keyboard port. (To view a diagram of the MARS Appliance VGA and serial ports, refer to the appropriate model in Hardware Descriptions—MARS 25R, 25, 55, 110R, 110, 210, GC2R, and GC2, page 1-4.)

Step 2 Disconnect any connected network cables from the eth0 and eth1 ports.

Step 3 Put the Recovery DVD in the MARS Appliance DVD-ROM drive.

Step 4 Do one of the following:

Log in to the MARS Appliance as pnadmin and reboot the system using the reboot command

Power cycle the MARS Appliance

Result: The following message displays on the console:

Please Choose A MARS Model To Install...
1. Distributed Mars - Local Controller
2. Distributed Mars - Global Controller
3. Mars Operating System Recovery
4. Quit

Step 5 Using the arrow keys, select 1. Distributed MARS — Local Controller at the Recover menu and press Enter.

a. If you are re-imaging a MARS 110R or 110, the following message appears on the console. Otherwise, continue with Step 6.

Please Choose Which MARS 110 Model To Install...
1.	MARS110
2.	MARS110R
3.	Quit

b. Using the arrow keys, select the proper model based on the license you purchased and press Enter.

Result: The image download to the appliance begins. This process takes approximately 15 minutes. After the image download is complete, the Recovery DVD is ejected and the following message appears on the console:

Please remove the installation CD and press Reboot to finish the installation.

Step 6 Remove the Recovery DVD from the MARS Appliance.

Step 7 Press Enter to restart the MARS Appliance.

Result: The MARS Appliance reboots and performs several configuration tasks, including building the Oracle database. The configuration tasks that occur after the first reboot take a significant amount of time (between an hour and an hour and a half), during which there is no feedback; this is normal system behavior.

Step 8 Reconnect any network cables to the eth0 and eth1 ports.


Note After re-imaging the appliance, you must once again perform initial configuration of the MARS Appliance. For detailed instructions, see Chapter 5, "Initial MARS Appliance Configuration."


Step 9 After the initial configuration is complete, do one of the following:

Add any devices to be monitored to the Local Controller. For more information, see User Guide for Cisco Security MARS Local Controller.

Recover the previously archived data using the procedure in Restoring Archived Data after Re-Imaging a MARS Appliance


Re-Imaging a Global Controller

Use the MARS Appliance Recovery DVD-ROM to re-image the Global Controller if necessary. This operation destroys all data and installs a new image. In addition to preparing the device and later restoring any archived data, you must also perform four time-consuming appliance recovery phases:

Purge all Global Controller data from each monitored Local Controller. (See Before You Begin.)

Image downloading from the CD (about 30 minutes)

Image installation after the download (about 45 minutes)

Basic system configuration (about 5 minutes)

To re-image your Global Controller, follow these steps:


Caution Performing this procedure destroys all data stored on the MARS Appliance.

Before You Begin

You must provide the license file during the initial configuration following the re-image operation.

Before you can re-image a Global Controller, you must purge the data that the Global Controller pushed down to the Local Controllers that it monitors. For each Local Controller that is monitored by the Global Controller that you want to recover, execute the following command at the command line interface of each Local Controller.

pnreset -g

This command clears the global inspection rules and user accounts from the Local Controller, which prepares it to be managed by the re-imaged Global Controller. However, it does not remove the global user groups; instead they are renamed (appended with a date) and converted to local user groups. You can edit or delete these empty groups after the reset. Because user groups are often used as recipients for rule notifications, they are not deleted to avoid invalidating the Action definition of such rules.


Step 1 After you have executed the pnreset -g command on each Local Controller as described in Before You Begin, connect your monitor to the MARS Appliance VGA port and your keyboard to the PS/2 keyboard port. (To view a diagram of the MARS Appliance VGA and serial ports, refer to the appropriate model in Hardware Descriptions—MARS 25R, 25, 55, 110R, 110, 210, GC2R, and GC2, page 1-4.)

Step 2 Disconnect any connected network cables from the eth0 and eth1 ports.

Step 3 Put the Recovery DVD in the MARS Appliance DVD-ROM drive.

Step 4 Do one of the following:

Log in to the MARS Appliance as pnadmin and reboot the system using the reboot command

Power cycle the MARS Appliance

Result: The following message displays on the console:

Please Choose A MARS Model To Install...
1. Distributed Mars - Local Controller
2. Distributed Mars - Global Controller
3. Mars Operating System Recovery
4. Quit

Step 5 Using the arrow keys, select 2. Distributed MARS — Global Controller at the Recover menu and press Enter.

Result: The image download to the appliance begins. After the image download is complete, the Recovery DVD is ejected and the following message appears on the console:

Please remove the installation DVD and press Reboot to finish the installation.

Step 6 Remove the Recovery DVD from the MARS Appliance.

Step 7 Press Enter to restart the MARS Appliance.

Result: The MARS Appliance reboots and performs several configuration tasks, including building the Oracle database. The configuration tasks that occur after the first reboot take a significant amount of time, during which there is no feedback; this is normal system behavior.

Step 8 Reconnect any network cables to the eth0 and eth1 ports.


Note After re-imaging the appliance, you must once again perform initial configuration of the MARS Appliance. For detailed instructions, see Chapter 5, "Initial MARS Appliance Configuration."


Step 9 After the initial configuration is complete, do one of the following:


Note You cannot add or monitor a Local Controller using the Global Controller until the Global Controller is running the same MARS software version as the Local Controllers it will be used to monitor.


Add all Local Controllers back into the Global Controller. All devices and topology information are pulled up from each Local Controller into the Global Controller. For more information, see User Guide for Cisco Security MARS Global Controller.

(Recommended) Recover the previously archived data using the procedure described in Restoring Archived Data after Re-Imaging a MARS Appliance.


Restoring Archived Data after Re-Imaging a MARS Appliance

When you restore a MARS Appliance using archived data, you are restoring the system to match the data and configuration settings found in the archive. The configuration data includes the operating system, MARS software, license key, user accounts, passwords, and device list in effect at the time the archive was performed.


Caution The version of MARS software running on the appliance to be restored must match the version recorded in the archive. For example, if the data archive is for version 4.1.4, you must reimage the MARS Appliance to version 4.1.4, not older or newer, before using the pnrestore command to recover the system configuration and events.

For additional information on how the archives are restored, see Guidelines for Restoring.


Note If you choose to restore from your archived data, you must re-enter all devices on the Local Controller that are missing from the archive file. To restore existing cases, you must restore incident and session data. See pnrestore, for more information on types of data and restore modes.


If you have archived your data and you have recovered your MARS Appliance as described in either Re-Imaging a Local Controller, or Re-Imaging a Global Controller, perform the following steps:


Step 1 When the recovery process is complete, restore the MARS Appliance from the last archived data by executing the following command:

pnrestore -p <NFSServerIP>:/<archive_path>

where NFSServerIP is the value specified in the Remote Host IP field and archive_path is the value specified in the Remote Path field under Admin > System Maintenance > Data Archiving in the web interface. Identify the NFS server by IP address, followed by :/ and then the path name (NFSServerIP:/archive_path). For more information on these settings, see Configure the Data Archive Setting for the MARS Appliance.

Step 2 When the restore operation completes, you may need to delete, re-enter, and re-discover all the devices that are missing from the MARS archive file.


Upsizing a MARS Appliance

You can migrate, or upsize, to a different appliance model by following the same process and restrictions as configuring a standby or secondary appliance. The technique involves restoring a backup from the original, source appliance on the replacement appliance.

To restore to a different replacement appliance, you must restore to an appliance of the same model or higher. For example, you can restore an image from a MARS 20 to a MARS 20, MARS 50, MARS 100, or MARS 100e; however, you cannot restore a MARS 50 to a MARS 20. Restoring to a replacement appliance differs from restoring to the actual appliance that performed the archive.


Note This operation cannot be used to migrate from 4.3.x to 5.3.x, as the software version on the replacement appliance must match that of the software version used to create the backup image.


The following issues must be addressed when restoring to a replacement appliance:

You must purchase a new license key for the replacement appliance. Each license key is associated with the serial number of the appliance to which it is assigned.

You must enter that new license key on the restored image before you can log into the replacement appliance.

When restoring the image to the replacement appliance, you need to take the source appliance off the network or perform the operation behind a gateway that can perform NAT. When the replacement appliance comes up and you are on the same network, you receive an IP address conflict error, because the IP address assigned to the replacement appliance exactly matches that of the source appliance.

Because a single image of the complete system configuration data is archived and updated daily, no matter what period you select from an archive, the system configuration data includes the most recent changes. In other words, selecting a period that is 365 days old affects only the event data. The system configuration that is restored mirrors that of the most current archive.

For more guidance, see Guidelines for Restoring.

Configuring a Standby or Secondary MARS Appliance

You cannot run queries and reports or perform incident investigation over archived data directly. To perform any kind of investigation using archived data, you must restore that data to a MARS Appliance. Therefore, we recommend that you configure a secondary appliance for this purpose. The reason to use a separate appliance to study old data is that you must restore the period data to the appliance, and the restore re-images all configuration and event data based on the archive settings for the defined period.

To restore to a secondary appliance, you must restore to an appliance of the same model or higher. For example, you can restore an image from a MARS 20 to a MARS 20, MARS 50, MARS 100, or MARS 100e; however, you cannot restore a MARS 50 to a MARS 20. Restoring to a secondary appliance differs from restoring to the actual appliance that performed the archive. The following issues must be addressed when restoring to a secondary appliance:

You must purchase a new license key for the secondary appliance. Each license key is associated with the serial number of the appliance to which it is assigned.

You must enter that new license key on the restored image before you can log into the secondary appliance.

When restoring the image to the secondary appliance, you need to take the primary appliance off the network or perform the operation behind a gateway that can perform NAT. When the secondary appliance comes up and you are on the same network, you receive an IP address conflict error, because the IP address assigned to the secondary appliance exactly matches that of the primary.

Because a single image of the complete system configuration data is archived and updated daily, no matter what period you select from an archive, the system configuration data includes the most recent changes. In other words, selecting a period that is 365 days old affects only the event data. The system configuration that is restored mirrors that of the most current archive.

For more guidance, see Guidelines for Restoring.

Guidelines for Restoring

When you do restore to an appliance, keep in mind the following guidelines:

The version of MARS software running on the appliance to be restored must match the version recorded in the archive. For example, if the data archive is for version 4.1.4, you must reimage the MARS Appliance to version 4.1.4, not older or newer, before using the pnrestore command to recover the system configuration and events.


Caution The pnrestore command does not check that this version requirement is met; it will attempt the restore even when the versions do not match.

All restore operations take a long time. Time varies based on the options you select. See pnrestore.

A restore of configuration data only takes less time.

A restore operation does not allow for incremental restores of event data only. It always performs a complete re-image of the hard drive in the target appliance.

All configuration information, including the license key, IP addresses, hostname, stored certificates and fingerprints, user accounts, passwords, and DNS settings, are always restored.

If restoring to an appliance other than the one that created the archive, see Configuring a Standby or Secondary MARS Appliance.

When restoring to an appliance different from the one that archived the data, you must enter the license key assigned to the serial number of the new appliance before you access the restored data.

A restore is performed from the day you specify forward until the archive dates are exhausted. The date argument of the pnrestore command should be the name of the daily data backup directory that identifies the start of the time range to be restored. See Format of the Archive Share Files.

To restore a specific range of days, we recommend temporarily moving the unwanted days at the end of the range out of the archive folder. This technique of trimming out unwanted days can also speed up the restore, although you do lose the dynamic data from those dates.

If the data contained in the selected restore range of the archive exceeds the capacity of the local database on the target MARS Appliance, the MARS Appliance automatically purges the data in the oldest partition of the local database and then resumes the restore operation. As such, you should select a reasonable range of dates when performing the restore. Nothing is gained from restoring ranges that exceed the local database limits, and the overall restore operation is slowed by the intermittent purging of the oldest partition until the most current date is restored.

Mode 5 of the pnrestore command restores from a backup in the local database; you cannot use it to restore from an NFS archive. As such, you do not need to have archiving enabled to perform this restore operation. The configuration data is backed up every night on the appliance. Beware that if you upgrade to a newer release and attempt a restore before that configuration has been backed up, the restore will fail. See pnrestore for more information on types of data and restore modes.
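As a rough sketch, a mode-5 restore from the MARS console might look like the following. The -m mode flag shown here is an assumption based on the mode numbering above; confirm the exact syntax in the pnrestore reference before use.

```
pnrestore -m 5    (mode 5: restore configuration from the nightly local
                   database backup; flag syntax assumed, see pnrestore)
```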

If a Global Controller requires re-imaging, you should perform a pnrestore operation to recover the data after it is reimaged (assuming you have archived it). This approach is recommended because:

Global data defined on the Global Controller and propagated to the managed Local Controllers is not pushed back to the Global Controller, so restoring it from an archived configuration file is the only way to recover these configuration settings and accounts.

Incidents and report results that were pushed to the Global Controller before it was reimaged are not pushed back after reimaging. When running on a Global Controller, the archive operation only archives reports, which can be restored. However, all old incidents are permanently lost on the Global Controller, as they are not archived.

Regardless of how the Global Controller is recovered, whether by re-imaging or restoring, the Local Controllers must be cleared of Global Controller configuration data, which is accomplished by performing a pnreset -g operation on each Local Controller.

The pnreset -g operation must be completed on each Local Controller before you attempt to restore the Global Controller.
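The recovery order above can be summarized as the following console sequence. The pnreset -g command is as named above; the pnrestore arguments are deliberately left unspecified, since they depend on your archive location and restore mode (see pnrestore).

```
Step 1: On each managed Local Controller, clear the Global Controller data:
        pnreset -g

Step 2: Only after pnreset -g has completed on every Local Controller,
        restore the Global Controller from its archive:
        pnrestore ...
```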