The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
This document describes the recommended steps to troubleshoot a Cisco Secure Firewall Threat Defense (FTD) that is unresponsive on 1xxx, 12xx, 21xx, 31xx, 41xx, 42xx, and 93xx hardware platforms.
Cisco recommends that you have knowledge of these topics:
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, ensure that you understand the potential impact of any command.
In some cases, a Cisco FTD device can become unresponsive. Typical symptoms include:
Note that, depending on the situation, some of these symptoms may not be present. For example, transit traffic can still flow through the device while only management access is broken.
This section covers the recommended steps and actions that you need to take. You can provide this information to Cisco TAC for further analysis.
Take a video or a picture of the front panel LEDs. Here are some examples where all the LEDs are clearly visible:
In the next photo, the SYS LED indicates a device problem:
You can consult the hardware guide of your device model to get additional information about the LED, for example:
Model | LED info
---|---
1010 |
1100 |
1210CE, 1210CP, 1220CX |
1230, 1240, 1250 |
2100 |
3100 |
4110, 4120, 4140, 4150 |
4112, 4115, 4125, 4145 |
4200 |
9300 |
Take a video or a picture of the LEDs at the rear panel, for example:
If none of the power LEDs are on:
Verify if the fans in the back of the appliance are running.
Verify if there is any noise or smell coming from the device.
Make sure that the console and management ports are properly connected. If the problem is only on the management port, try to change the SFP (whenever applicable) and the network cable.
Try to ping (ICMP) the management IP of the device.
Check the port status of the adjacent devices, for example:
switch# show interface description | i FW-4215-1
Gi7/1 up up FW-4215-1 ETH1/1
Gi7/2 up up FW-4215-1 ETH1/2
Gi7/3 up up FW-4215-1 MGMT
In case of a high availability (HA) or a cluster setup, collect a troubleshoot bundle from the peer device(s).
Attach a laptop to the console port and copy any messages shown. Press the Up/Down arrow keys or Page Up to see all the messages on the screen.
With a laptop attached to the console port:
Note that if the device was operational (the front panel LEDs were on) and is not gracefully shut down, a cold reboot can cause database corruption. If the cold reboot brings up the device, collect a troubleshoot bundle and contact Cisco TAC.
If the device recovers and is managed by an FMC, navigate to System > Health > Monitor, and select the device. Focus on the highlighted graphs to understand what the status of the device was before it became unresponsive (for example, high memory, high CPU, high disk utilization, and so on).
Non-working scenario (4100):
FW4100# show server storage
Server 1/1:
    RAID Controller 1:
        Type: SATA
        Vendor: Cisco Systems Inc
        Model: FPR4K-PT-01
        Serial: JAD12345678
        HW Revision:
        PCI Addr: 00:31.2
        Raid Support:
        OOB Interface Supported: No
        Rebuild Rate: N/A
        Controller Status: Unknown
        Local Disk 1:
            Vendor: Micron
            Model: 5300 MTFD
            Serial: MSA123456AB
            HW Rev:
            Operability: N/A
            Presence: Missing <-----
            Size (MB): 200000
            Drive State: Online
            Power State: Active
            Link Speed: 6 Gbps
            Device Type: SSD
        Local Disk Config Definition:
            Mode: NO RAID
            Description:
            Protect Configuration: No
Sample output from 3100 where disk is operational:
FW3105# show server storage
Server 1/1:
    Disk Controller 1:
        Type: SOFTRAID
        Vendor: Cisco Systems Inc
        Model: FPR_SOFTRAID
        HW Revision:
        PCI Addr:
        Raid Support: raid1
        OOB Interface Supported: No
        Rebuild Rate: N/A
        Controller Status: Optimal
        Local Disk 1:
            Presence: Equipped
            Model: SAMSUNG MZQL2960HCJR-00A07
            Serial: S64FNT0AB12345
            Operability: Operable <---
            Size (MB): 858306
            Device Type: SSD
            Firmware Version: GDC5A02Q
        Virtual Drive 1:
            Type: Raid
            Blocks: 878906048
            Operability: Degraded
            Presence: Equipped
            Size (MB): 858306
            Drive State: Degraded
Sample output from 4100 where disk is operational:
FW4125# show server storage
Server 1/1:
    RAID Controller 1:
        Type: SATA
        Vendor: Cisco Systems Inc
        Model: FPR4K-PT-01
        Serial: JAD1234567
        HW Revision:
        PCI Addr: 00:31.2
        Raid Support:
        OOB Interface Supported: No
        Rebuild Rate: N/A
        Controller Status: Unknown
        Local Disk 1:
            Vendor: TOSHIBA
            Model: KHK61RSE
            Serial: 11BS1234567AB
            HW Rev: 0
            Operability: Operable
            Presence: Equipped
            Size (MB): 800000
            Drive State: Online
            Power State: Active
            Link Speed: 6 Gbps
            Device Type: SSD
        Local Disk Config Definition:
            Mode: No RAID
            Description:
            Protect Configuration: No
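As a hedged illustration (not an official tool), a saved copy of the `show server storage` output can be scanned for the fields that typically indicate a disk problem; the file name `storage.txt` and the keyword list are assumptions you can adapt:

```shell
# Sketch: flag suspicious disk fields in a saved 'show server storage' capture.
# 'storage.txt' is an assumed file name; capture the CLI output there first.
grep -E "(Presence|Operability|Controller Status|Drive State):" storage.txt 2>/dev/null \
  | grep -iE "Missing|Degraded|Inoperable|Failed" \
  && echo "Possible disk problem - collect a troubleshoot bundle and contact Cisco TAC" \
  || echo "No obvious disk problem flagged"
```

In the non-working 4100 sample above, this would flag the `Presence: Missing` line.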
If the firewall device recovers and you would like to analyze the backend logs, generate a troubleshoot bundle and check the files mentioned in this table:
File Path in the Troubleshoot Bundle | Description/Tips | Available on
---|---|---
FTD TS bundle: /dir-archives/var-log/messages* | The string ‘syslog-ng shutting down’ is shown during a graceful shutdown. The string ‘syslog-ng starting up’ is shown when the device starts. | FTD
FTD TS bundle: /dir-archives/var-log/ASAconsole.log (in case of ASA on 4100/9300, the file is also in the Module bundle under /opt/cisco/platform/logs/ASAconsole.log) | Look for errors, faults, failures, and so on. | ASA, FTD
FTD TS bundle: /dir-archives/var-log/dmesg.log | Look for errors, faults, failures, and so on. | FTD
FTD TS bundle: /dir-archives/var/log/ngfwManager.log* | Look for errors, faults, failures, and so on. This file also contains information about HA/cluster events. | FTD
FTD TS bundle: /command-outputs/LINA_troubleshoot/show_tech_output.txt | The outputs of ‘show failover history’ and ‘show cluster history’ can provide additional insight into the sequence of events. | FTD
FTD TS bundle: /command-outputs/ Filenames: · for CORE in `ls opt-cisco-csp-cores _ grep core`_ do file -opt-cisco-csp-cores-_{CORE}_ done.output · for CORE in `ls var-common _ grep core`_ do file var-common-_{CORE}_ done.output · for CORE in `ls var-data-cores _ grep core`_ do file -var-data-cores-_{CORE}_ done.output | Check for potential core files (tracebacks). | FTD
FTD TS bundle: /dir-archives/var/log/crashinfo/snort3-crashinfo.* | Check for Snort3 crashinfo files. | FTD
FTD TS bundle: /dir-archives/var/log/process_stderr.log* | Check for backtraces (for example, Cisco bug ID CSCwh25406). | FTD
FTD TS bundle: /dir-archives/var/log/periodic_stats/ | The directory contains multiple files that can provide insights for the time of the incident. | FTD
FPRM bundle: tech_support_brief | Check the ‘show fault detail’ outputs. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/kern.log | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/messages* | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/mce.log (the same file also exists in the Module bundle on 41xx/93xx) | This is the Machine Check Exceptions (MCE) file. Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/portmgr.out | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/sysmgr/logs/kp_init.log | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/ssp-pm.log (the same file also exists in the Module bundle on 41xx/93xx) | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/sma.log (the same file also exists in the Module bundle on 41xx/93xx) | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/heimdall.log | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/ssp-shutdown.log (the same file also exists in the Module bundle on 41xx/93xx) | Contains the output of ps, top, and a few lines from dmesg when a reboot or shutdown is initiated. Available on 1000/2100/3100/4200. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/sysmgr/sam_logs/svc_sam_dme.log* | Look for errors, faults, failures, and so on. | ASA, FTD
FPRM bundle: /opt/cisco/platform/logs/sysmgr/sam_logs/svc_sam_envAG.log* | Look for errors, faults, failures, and so on. | ASA, FTD
CIMC bundle (41xx, 93xx): /obfl/obfl-log* | Look for errors, faults, failures, and so on. | ASA, FTD
CIMC bundle (41xx, 93xx): /CIMC1_TechSupport.tar.gz/CIMC1_TechSupport.tar/tmp/techsupport_pid*/CIMC1_TechSupport-nvram.tar.gz/CIMC1_TechSupport-nvram.tar/nv/etc/log/eng-repo/messages* | Look for errors, faults, failures, and so on, especially CATERR. | ASA, FTD
Module bundle (41xx, 93xx): /tmp/mount_media.log/mount_media.log | Look for errors, faults, failures, and so on. | ASA, FTD
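Once the bundle is extracted, the markers from the table can be hunted with plain grep. This is a hedged sketch, not an official procedure; `BUNDLE` is an assumed extraction directory:

```shell
# Sketch: scan an extracted troubleshoot bundle for a few markers from the table.
# BUNDLE is an assumption - point it at your extraction directory.
BUNDLE=${BUNDLE:-./troubleshoot-bundle}

# Graceful shutdown/startup markers in the syslog-ng messages files:
grep -hE "syslog-ng (shutting down|starting up)" \
  "$BUNDLE"/dir-archives/var-log/messages* 2>/dev/null

# Files containing CATERR indications (CIMC logs on 41xx/93xx):
grep -rli "CATERR" "$BUNDLE" 2>/dev/null || true
```

The same pattern extends to the other files in the table (dmesg.log, mce.log, kern.log, and so on).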
If a specific interface becomes unresponsive, take captures on the firewall and the adjacent device. You can refer to this document for details:
Additionally, ensure that the ARP and CAM tables of the adjacent devices are properly populated.
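For example, on a Cisco IOS switch the ARP and CAM entries for the firewall can be checked along these lines (the IP and MAC addresses are placeholders):

```
switch# show ip arp 192.0.2.1
switch# show mac address-table address 0050.56ab.cdef
```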
In addition to the items mentioned above, it is highly recommended to also provide this information:
15a. If the device recovered, collect a troubleshoot bundle (check step 13 for details).
15b. If the device is still unresponsive, provide this information:
15c. Approximate time (date/time) when the device became unresponsive.
15d. Approximate uptime of the device before it became unresponsive.
15e. Is this a new setup or an existing one?
15f. What was the last action performed before the device became unresponsive?
15g. Firewall data plane (LINA) syslogs from the time the device became unresponsive (try to get logs starting ~5 minutes before the incident). As a best practice, it is recommended to configure syslogs at level 6 (Informational).
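For reference, data-plane logging at the Informational level can be configured along these lines (the syslog server address and interface name are placeholders; on FTD this is normally deployed through the Platform Settings policy in FMC/FDM rather than typed at the CLI):

```
logging enable
logging timestamp
logging trap informational
logging host management 192.0.2.10
```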
15h. In case you have configured a syslog server on the chassis (FXOS on 4100/9300) provide the logs (starting ~5 minutes before the incident).
15i. Syslogs from the adjacent devices from the time of the incident.
15j. Topology diagram that shows the physical connections between the firewall device and the adjacent devices.
If you connect to the console and see:
Software Error: Exception during execution: [Error: Timed out communicating with DME]
In most cases, this indicates a software problem.
Recommended Action: Contact Cisco TAC
This output is from a 4100/9300 hardware appliance where a disk-related fault is generated:
Recommended Action: Try reseating the SSD disk. If it does not help, collect chassis troubleshoot bundle and contact Cisco TAC.
Recommended Action: A power-cycle of the 4100/9300 chassis is required in order to temporarily recover from this issue. Check Cisco bug ID CSCvx99172 for details and a version that has a fix. (Field Notice: FN72077 - FPR9300 and FPR4100 Series Security Appliances - Some Appliances Might Fail to Pass Traffic After 3.2 Years of Uptime).
Low disk space on the firewall can render the device unresponsive. If the device is managed by FMC you can get health alerts like this:
Recommended Action: If you have FMC and FTD running on software 7.7.0 and higher, try to clear some disk space using the procedure documented at https://www.cisco.com/c/en/us/td/docs/security/secure-firewall/management-center/admin/770/management-center-admin-77/health-troubleshoot.html#clear-disk-space
If this is not feasible or does not help, contact Cisco TAC.
Recommended Action: Upgrade to a software release that has a fix for:
Cisco bug ID CSCwm14729: CSF 3100 series not rebooting after power outage, requiring manual power cycle.
Recommended Action: Replace the DIMM components or the security appliance.
Revision | Publish Date | Comments
---|---|---
1.0 | 17-Jul-2025 | Initial Release