
Cisco Small Business Network Storage Systems

Performing Troubleshooting Procedures on the NSS4000 and NSS6000


Document ID: 108848

Updated: Dec 12, 2008


Introduction

This article is one in a series to assist in the setup, troubleshooting, and maintenance of Cisco Small Business products (formerly Linksys Business Series).

Refer to Cisco Technical Tips Conventions for more information on document conventions.

Q. How can I troubleshoot various issues on my NSS4000 and/or NSS6000?

A. Several troubleshooting procedures are available for common issues on the NSS4000 and NSS6000. These are:

Preventing Data Loss

To prevent data loss, operate the unit with a proper uninterruptible power supply (UPS). A UPS can protect your unit from:

  • Momentary power spikes, fluctuations, and dropouts that can cause internal buffer loss.

  • Having to reformat and reconfigure drives after data loss.

  • Being unable to recover data or restore the RAID once the unit reports the array as DEGRADED.

Note: A redundant power supply (RPS) is also highly recommended.

Rebooting the Unit while it is Running

Rebooting the unit will erase all previous modifications to the unit's configuration. A reboot should take only a few minutes to complete.

  1. Verify that the unit is powered on.
  2. Verify that the unit is still running by checking that its LEDs are lit.
  3. Press and hold the RESET button. The POWER LED should blink after about four seconds.
  4. When the POWER LED has blinked, release the RESET button.

Resetting the Running Network Configuration of the Unit

Resetting the running network configuration enables you to reach the administration GUI and regain control of the unit.

  1. Power off the unit.
  2. Press the RESET button.
  3. Continue to hold the RESET button and plug power into the unit.
  4. Continue to hold the RESET button until the POWER LED blinks.
  5. When the POWER LED has blinked, release the RESET button.

    Note: This procedure does not perform a complete factory restore. All network connectivity settings are reset; this includes VLANs, network filters, and others. After the procedure, the unit uses the default host name http://NASPrimaryMAC, where PrimaryMAC is the primary MAC address of the LAN interface with the colons removed. The NAS configuration and previous RAID setup are retained without modification.

Getting to Know the Basic Functions of a New Unit

Before troubleshooting, check the basic functions of the unit to determine whether it is running properly, and determine the exact model number to verify the type of drives installed.

  • Determine whether the unit is installed with an RPS such as the Linksys RPS1000.

  • Only drives listed in the Approved Vendor List (AVL) specification should be used.

  • Do not assume that the pre-installed drives in a unit have never been swapped out or replaced.

  • No support is available for drives installed outside of the AVL specification. Linksys Customer Service may determine that the use of non-supported drives is sufficient to void the warranty.

  • Determine if the unit is properly powered on based on the following status LEDs:

    1. Check the LAN1 LED for network connectivity:

      • Blinking Green - Interface operating at 1000Mbps

      • Blinking Amber - Interface operating at 10/100Mbps

    2. Check POWER LED for status.

      • Briefly Amber - Momentary state while the boot loader loads the operating system.

      • Solid Green - There was an error while loading the Unix Operating System.

      • Blinking Green - The Operating System is now operational.

        Note: If the amber light does not appear during boot, the unit has failed and requires an RMA.

    3. Check DRIVE LED for status.

      Drives may be installed and spun up but may show no LED activity. There will be no LED indication until the drives are configured.

Solving Network Connectivity Issues

Determine the best method to gain admin access to the unit by following these procedures. The default configuration on a unit is to request an IP address via DHCP.

  1. It is recommended (but not mandatory) that all network cables be connected to the network switch before powering on the device; this can prevent synchronization problems.
  2. Verify that the network cable is securely connected to the network switch.
  3. Verify that the network switch port has a DHCP server available on the same subnet.
  4. Verify that the DHCP server is serving addresses. Test with a PC workstation on the network by releasing and renewing its address.
  5. Give special consideration to how the network is configured, particularly if it uses VLANs to manage an enterprise network.
  6. On non-managed (non-VLAN) switches, verify end-to-end connectivity between the NSS unit and the DHCP server across the switches.
  7. Check LAN1 LED for connectivity.
    • Solid Green - The proper network port is available.

  8. Find the unit on the network. To use the Auto-Discovery Tool, the local PC must have its internal firewall (such as XP Firewall, ZoneAlarm, or other security tools) disabled. Also check the special handling notes for the Auto-Discovery Tool later in this section. If no DHCP services are available on the network, access the unit by following these instructions:
    1. Change the IP address of the PC to an address in the 169.254.x.x subnet range to match the default link-local subnet of the NSS's NIC.
    2. Access the unit via the default host name http://NASPrimaryMAC, where PrimaryMAC is the primary MAC address of the LAN interface with the colons removed. It is not case sensitive.
    3. Determine whether the default host name has been changed. If a successful connection to the unit cannot be made, perform a unit reset.
  9. On Windows XP, My Network Places can show NSS NAS devices; on Windows Vista, View UPnP Devices shows NSS NAS storage among the available devices. UPnP must be enabled on the system for this to work. On Macintosh platforms (particularly OS X v10.2 and v10.3), Bonjour in the Safari browser can discover the devices.
  10. The Auto-Discovery Tool, available from the web support downloads, can determine the NSS unit's address for administration access.
    1. Run the Auto-Discovery Tool to find the IP address of the unit.
    2. Use a browser to access the GUI admin logon.
    3. Make the necessary IP address configuration changes to meet your network needs.

      Note: Potential conflicts exist when using the Auto-Discovery Tool.

      The tool allows you to change the unit's IP address without performing the same function through the administrative GUI. This may cause the unit to reboot unexpectedly in the middle of the change; the LAN1 LED goes out while the IP address is changing. Whenever the unit changes from a static IP address to a DHCP address, make sure it is rebooted so that it functions properly.
      • Rebooting may also be required when changing from a DHCP address to a static IP address.

      • Instead of rebooting, you can unplug the network interface cable, wait 10 seconds, then reconnect it.

      These issues will be resolved in a future upgrade of the Auto-Discovery Tool. Check the release notes to determine whether they have been resolved.
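
The default-hostname convention used in step 8 above can be sketched in a few lines. This is an illustrative helper (the name nas_default_hostname is not part of the product), assuming only what the article states: "NAS" plus the LAN MAC address with the colons removed, case insensitive.

```python
def nas_default_hostname(mac: str) -> str:
    """Build the default NSS host name: 'NAS' + MAC with separators removed.

    The article notes the name is not case sensitive; lowercase is used here.
    """
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError("not a MAC address: %r" % mac)
    return "NAS" + digits

# A unit whose LAN1 MAC is 00:1A:2B:3C:4D:5E would answer at
# http://NAS001a2b3c4d5e
```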

Solving Performance Issues

If DNS is configured, verify that it uses valid addresses; otherwise, performance will suffer.

  • Failed DNS lookups take 30 seconds to time out and impact network performance.
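
A quick way to check whether slow lookups are the culprit is to time a resolution attempt from a workstation on the same network. This sketch uses only the Python standard library:

```python
import socket
import time

def timed_lookup(hostname):
    """Time a DNS lookup and report success, the address, and elapsed seconds.

    Lookups that take tens of seconds to fail usually point to an invalid
    DNS server address in the configuration.
    """
    start = time.monotonic()
    try:
        addr = socket.gethostbyname(hostname)
        return True, addr, time.monotonic() - start
    except socket.gaierror:
        return False, None, time.monotonic() - start
```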

Simultaneous connections may reduce performance.

  • System resources will be fully utilized with five to 10 or more users.

  • Actual performance varies depending on protocol, data demand, and peak load.

There may be conditions where FTP access suffers from slow response.

  • This affects FTP writes only.

  • No workaround currently exists. Check the latest release notes for any change in this status.

  • FTP reads are not affected. If performance is slow with FTP reads, check the administrative logs for errors; a reboot may be necessary to recover.

Performance is dependent on transfer protocols used.

  • Use CIFS when speed and performance are important.

The unit may hang on boot or be unresponsive due to synchronization problems.

  • Ensure that the NTP server is properly configured.

  • Ensure that the time is properly set.

  • If the unit was previously joined to the ADS domain, a failure may occur due to authentication errors:

    • The unit was off the network for more than five minutes, which causes key synchronization problems (for example, Kerberos).

    • The unit's network settings were reset, which forces the unit back into a workgroup and requires reconfiguration.

    • Before the unit rejoins the ADS network, reset the time by synchronizing the clock with the time server.

Configuring RAID and Volumes for the NSS

Note: Hard drive support depends on the manufacturer's notes. Check the manufacturer's site for a firmware update for your particular drives.

Newer NSS firmware adds support for larger drives (1 TB+) and is available for download (registered customers only). You will need to register for an account before you can log in.

  1. Refer to the HELP file in the GUI to determine the amount of usable storage space; it factors in the physical size of the raw drives, the RAID type, and other parameters.
  2. Follow this sequence to configure RAID and volumes on the NSS:
    1. Build a RAID array.
    2. Create a volume.
    3. Create shares. There should be no spaces in the share names. Share access rights are different from file access rights.
      • Set share access rights to R or RW.

      • Set share access rights to World R or World RW.

    4. Create /home directories for local users. A window appears to remind you to create a home directory.
    5. Create users and groups. The user/group specifications apply to all protocols.

      Note: When using encrypted volumes, you must know the password or risk losing all of your information.

There is no support for RAID level migration. To convert existing data from RAID 1 to RAID 5, you must do the following:

  • Requires complete data backup to another source.

  • Perform complete format and reconfiguration of the drives.

  • Restore the backed up data from the archive source.

These RAIDs do not have redundancy and are exposed to single disk failures and data loss:

  • JBOD (Just a Bunch of Disks)

    • Operates sequentially.

    • One drive is filled up before adding data to the next drive.

    • Minimum one drive required.

    • Maximum four drives supported.

    • Format completes within several minutes.

  • RAID0

    • Minimum two drives required.

    • Maximum four drives supported.

    • Format completes within several minutes.

These RAIDs have redundancy:

  • RAID 1 or mirrored drive set

    • Requires either two or four drives.

    • A three-drive configuration is not supported.

    • Synchronization may take 10 hours.

  • RAID 5 or striped set with distributed parity

    • Minimum three drives required.

    • Synchronization may take 10 hours.

  • RAID 10 or Mirrored + Striped set

    • You cannot build with only three drives in the set.

    • Set requires four drives.

    • Synchronization may take 10 hours.
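
Standard RAID arithmetic gives a rough feel for how the levels above trade capacity for redundancy. This is a generic sketch, not the NSS's own calculation (in particular, the four-drive RAID 1 case assumes all members mirror a single copy); the GUI HELP file remains the authoritative source for usable space.

```python
def usable_capacity_gb(raid, drive_sizes_gb):
    """Approximate usable capacity for the RAID levels discussed above.

    File-system and metadata overhead reduce the real figure.
    """
    n = len(drive_sizes_gb)
    smallest = min(drive_sizes_gb)
    if raid == "JBOD":      # drives filled sequentially, no redundancy
        return sum(drive_sizes_gb)
    if raid == "RAID0":     # striped, no redundancy
        return n * smallest
    if raid == "RAID1":     # mirrored; assumes all members hold one copy
        if n not in (2, 4):
            raise ValueError("RAID 1 requires two or four drives")
        return smallest
    if raid == "RAID5":     # striped with one drive's worth of parity
        if n < 3:
            raise ValueError("RAID 5 requires at least three drives")
        return (n - 1) * smallest
    if raid == "RAID10":    # two mirrored pairs, striped
        if n != 4:
            raise ValueError("RAID 10 requires exactly four drives")
        return 2 * smallest
    raise ValueError("unknown RAID level: %s" % raid)
```

For example, four 500 GB drives yield roughly 2000 GB as JBOD or RAID 0, 1500 GB as RAID 5, and 1000 GB as RAID 10.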

The following indicate RAID integrity (or the lack thereof):

  • Drive I/O must occur before the system recognizes a problem with a drive.

  • Check the administrative GUI for status on RAID set.

  • The Drive LED of the specific drive with the integrity failure:

    • Solid Red

  • The Error Status LED (System LED):

    • Solid Yellow

  • Traps transmitted via SNMP.

    • Must be enabled and configured.

Best practices for rebuilding a RAID and replacing drives:

  • Drives should be "fresh," with no volumes or shares configured.

  • If necessary or in doubt, delete the old RAID before creating a new RAID configuration.

  • The replacement must be a drive of the same or greater capacity.

    • A 160 GB drive from one vendor may not be the same size as a 160 GB drive from another.

    • Use the same vendor and model number when possible.

    • If the drives are from different vendors or are different model numbers, the replacement drive should be larger.

  • The GUI should recognize the new drive after it has been installed.

  • Go to the RAID page.

    • The EDIT button shows the list of drives.

    • The ADD button integrates the new drive and rebuilds the array.
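
The capacity rule above can be encoded directly. This is an illustrative helper (replacement_ok is not a product function), comparing raw byte counts because two nominally equal drives can differ by a few sectors:

```python
def replacement_ok(original_bytes, replacement_bytes, same_vendor_model):
    """Apply the best-practice rule for replacement drives.

    A drive of the same vendor/model may match the original's capacity
    exactly; a different vendor or model should be strictly larger,
    since two '160 GB' drives can differ slightly in real size.
    """
    if same_vendor_model:
        return replacement_bytes >= original_bytes
    return replacement_bytes > original_bytes
```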

Replacement drives install seamlessly if they are fresh.

  • Any existing partitions and configurations are wiped by the system.

  • Drives are then integrated into specified RAID/JBOD configuration.

If a drive already contains a Linux RAID partition/configuration from another system, this impacts the integration of the drive.

  • A popup should briefly appear indicating a problem with the existing format of the drive.

  • The drive shows up as an existing RAID, or no volumes appear.

  • Solution: Delete the array before building your RAID from scratch.

Because synchronization can take as long as eight to 10 hours, it is recommended that this procedure be done overnight or during periods of low demand.

When the RAID is in degraded mode or a drive is declared failed, the potential for data loss is high if a second problem occurs before the drive can be replaced. Replace failed drives immediately to ensure data integrity.

During an initial RAID build or rebuild, the unit continues data service and permits the creation of shares, volumes, and so on. Because RAID 1, 5, and 10 have no redundancy while building or rebuilding, the potential for data loss is great if a second drive is lost during this period.

Rebuild priority can be set to low, medium or high to optimize production data access during repairs.

Configuring multiple smaller volumes as opposed to one or two larger volumes can:

  • Support multiple different sized volumes.

  • Help you maintain specific and separate quotas on specific volumes.

  • Enable one volume to be encrypted while another remains open.

  • Help limit the impact of file system corruption, since spanning file systems across multiple volumes is supported and recommended on NSS systems.

  • Prove more useful given the aggregation limits of the initial firmware releases (NSS4000 v1.08, NSS6000 v1.09):

    • Aggregation mode supports a maximum of one master.

    • Aggregation mode supports a maximum of one slave.

  • Firmware release v1.10 (NSS4000 and NSS6000):

    • Aggregation mode supports a maximum of one master.

    • Aggregation mode supports a maximum of four slaves.

  • All slave units are recommended to run with redundancy:

    • RAID 1, RAID 5, RAID 10

Mean Time Between Failure (MTBF) ratings and specifications:

  • Refer to the disk drive manufacturer's specifications for MTBFs.

    • The WD5000YS is specified at 1.2 million hours at 100% duty cycle.

  • Chassis MTBFs

    • NSS4000 at 137373 hours (15.7 years)

    • NSS6000 at 131482 hours (15.0 years)
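
The chassis figures above are simple unit conversions, which can be checked directly:

```python
HOURS_PER_YEAR = 24 * 365.25  # 8766 hours in an average year

def mtbf_years(mtbf_hours):
    """Convert an MTBF rating in hours to years of continuous operation."""
    return mtbf_hours / HOURS_PER_YEAR

# mtbf_years(137373) is about 15.7 (NSS4000)
# mtbf_years(131482) is about 15.0 (NSS6000)
```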

Assigning User Access and Privileges

Possible share issues and violations:

  • After updating firmware, the user cannot access old files.

  • All the group/user specs apply to all protocols.

  • The default CIFS File Creation Attributes apply only to CIFS.

  • Share access rights are different than setting file access rights.

    • Set share access rights to R or RW.

    • Set share access rights to World R or World RW.

  • When creating Users-Groups, specifications apply across all protocols.

Note: It is important to join the NSS NAS device to the NIS domain; otherwise, NFS file privileges and access rights will not work.

Performing Backups and Snapshot Management

The next software release will provide a cancel option when performing a backup. Refer to the latest release notes for implementation status.

  • Backups impact performance.

    • Perform backups at night or during off-peak periods.

  • Minimize the performance impact by implementing incremental backup strategies.

    • A file-level lock is used rather than a disk block-level lock.

    • The entire file must be free for a backup of it to succeed.

    • The whole file is backed up if any portion of it is updated or changed.

    • Backups are currently compressed, but will be uncompressed in later releases.
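
A minimal sketch of the whole-file incremental model described above, using only the Python standard library. The modification-time comparison here stands in for whatever change detection the NSS actually uses; any changed file is recopied in full.

```python
import os
import shutil

def incremental_copy(src_dir, dst_dir):
    """Copy only files whose modification time is newer than the backup copy.

    Mirrors the behavior described above: any change to a file causes the
    whole file to be backed up again. Returns the list of copied paths.
    """
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out = os.path.join(dst_dir, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves mtime for the next pass
                copied.append(s)
    return copied
```

Running the function twice back to back copies nothing the second time, because copy2 preserves the source's modification time on the backup copy.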

Backups can be saved:

  • Locally on a different volume on the same NSS unit.

  • Locally on a 2nd RAID on the same NSS unit.

  • To remote file servers.

    • Must be Windows compatible.

  • Distributed across another NSS unit.

WinRAR or WinZip 11.0 can read the compressed backups.

Third party backup software can be used to save PC workstation data or hard drive image to the NSS NAS device.

  • This provides centralized and single storage point for all PC administrative data operations.

Snapshot management allows users to access archives for quick data or file recovery. Access to archived backups is limited to the system administrator through the administration GUI. Restoring data from backups may take significant time.

  • Setting up snapshot management.

    • From the moment a snapshot is committed, the NSS unit starts cataloguing changes to files.

    • To preserve the integrity of the snapshot, writes to the share are blocked.

    • Blocks to share, however, are limited to a few seconds.

    • Snapshots are available only on the NSS6000.

  • Snapshot reserve

    • Create the snapshot and define it.

    • Specify how much reserve to use for the volume.

    • Reserve requirements are based on the length of snapshot interval and the number of changes that occur to files during that interval.

    • As a rough guideline, start with a 20% reserve to accommodate catalogued changes.

    • Allocated reserve space is not available for other user data.

  • Getting access to the snapshot.

    • It can be done through any share available on the NSS unit.

    • It is indicated by the display of an extra share in the list.

    • The source share shows up as the ShareName while the reserve share shows up as ShareName_snapshot.

    • You will see the share and the exact view of the snapshot when it was taken.

All snapshot sizes are 1024 MB in the administrative GUI.
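
The 20% guideline above can be turned into a starting-point calculation. The helper name plan_snapshot_reserve is illustrative; tune the fraction to your snapshot interval and how much data changes during it.

```python
def plan_snapshot_reserve(volume_gb, reserve_fraction=0.20):
    """Return (reserve_gb, user_gb) for a volume, starting from the rough
    20% guideline. Reserve space is not available for other user data.
    """
    reserve = volume_gb * reserve_fraction
    return reserve, volume_gb - reserve

# A 500 GB volume with the default 20% reserve leaves 400 GB for user data.
```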

Related Information
