This document describes how to diagnose problems with Cisco Domino Unified Communication Services (DUCS), including notification-related issues, DUC crashes and hangs, and performance issues.
There are no specific requirements for this document.
The information in this document is based on these software and hardware versions:
Cisco Unity 4.x
Cisco Unity 5.x
Cisco Unity 7.x
Cisco Unity 8.x
Refer to Cisco Technical Tips Conventions for more information on document conventions.
In order to diagnose problems with DUCS, you need to enable DUC tracing and, if it is not already enabled, console logging. Next, collect the console.log/log.nsf files that span the time from when the Domino server started to when the problem occurred. If you are diagnosing crashes, hangs, or performance issues, you also need to send the Notes System Diagnostic (NSD). NSD produces a log file that is automatically generated in the event of a server crash and is stored in the data\IBM_TECHNICAL_SUPPORT directory under your Domino install directory.
Note: Cisco Unity stores voice messages in user mail file databases on the Domino server. Domino is installed on one or more servers (never on the Cisco Unity server itself), so all subscribers have their Domino mailboxes on those servers. Every Domino server that houses Cisco Unity subscribers must have IBM Lotus Domino Unified Communications for Cisco installed.
Make sure to set UCLogLevel first in the notes.ini file.
0 - No logging (this is the same as having no UCLogLevel entry)
1 - Minimal logging: only Fatal and Error events are logged
2 - Normal logging: Fatal, Error, and Warning events are logged
3 - Informational logging: Fatal, Error, Warning, and Informational events are logged
4 - Verbose logging: Fatal, Error, Warning, Informational, and Verbose events are logged
The default is 1, but 4 is recommended for diagnosing problems.
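The edit above can be sketched as a small script. This is a minimal illustration only: the notes.ini path and its sample contents are assumptions, so adjust them to your Domino installation.

```shell
# Hypothetical location; point this at your server's real notes.ini.
NOTES_INI=./notes.ini

# Create a sample notes.ini for illustration (your real file already exists).
cat > "$NOTES_INI" <<'EOF'
[Notes]
UCLogLevel=1
EOF

# Remove any existing UCLogLevel entry, then set verbose logging (level 4).
grep -v '^UCLogLevel=' "$NOTES_INI" > "$NOTES_INI.tmp"
mv "$NOTES_INI.tmp" "$NOTES_INI"
echo 'UCLogLevel=4' >> "$NOTES_INI"

grep '^UCLogLevel=' "$NOTES_INI"   # prints: UCLogLevel=4
```

Remember that the Domino server must be restarted before the new level is picked up.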
DUC tracing allows you to see the code paths the DUC goes through. DUC traces are difficult to understand without the source code, but you can still follow the basic flow of functions, such as the creation of notifications, and search for any error messages that are present.
Set these notes.ini variables:
trace_uc_all=1
trace_uc_dir=<output files dir> (W32 only)
The Domino server must be restarted for changes to these ini variables to take effect. Take note of the name/filename of the test user, and stop the Domino server when you want to collect the files.
If you are not sure of what .out files to collect, send them all. However, verify that the .out files you collect are from the correct span of time.
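One way to verify the time span is to filter the trace directory by modification time. A minimal sketch, assuming TRACE_DIR stands in for the directory you set via trace_uc_dir (the sample files here are created only for illustration):

```shell
# Stand-in for the directory configured with trace_uc_dir (assumption).
TRACE_DIR=./duc_traces
mkdir -p "$TRACE_DIR"

# Simulated trace files: one stale, one from the current problem window.
touch -t 202001010000 "$TRACE_DIR/ucadminp_old.out"
touch "$TRACE_DIR/ucevent_now.out"

# List only the .out files modified within the last 24 hours.
find "$TRACE_DIR" -name '*.out' -mtime -1
```

Widen or narrow the `-mtime` window to match the period when the problem occurred.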
Here are some example problem type/filenames that might be generated:
Errors enabling/disabling users (send ucadminp output files)
MWI does not turn on during message delivery (router, ucevent, csumhlr, ucxmlextend)
MWI does not turn off on message open (server, ucevent, csumhlr, ucxmlextend)
Server crash/hang (send all the output files)
NSDs take a snapshot of the contents found in the memory of the Notes/Domino process. NSDs can show what processes caused a crash or hang. NSDs should fire automatically in the event of a crash, but for performance issues or hangs, manual intervention is required.
Often, the first step taken to resolve a server crash is to determine the process that crashed the server. In Domino 6 and later, the NSD file can be a good place to start. NSD gives you all current information about the state of the server, such as call stacks for all threads, memory information, and so on. In the event of a crash, an NSD log file is automatically generated by the Domino server and stored in the data\IBM_TECHNICAL_SUPPORT directory. An NSD log has a file name with a time stamp that shows when the NSD was generated. For example, Nsd_W32I_KIRANTP_2006_01_17@17_17_18.log indicates the NSD was created on January 17, 2006 at 17:17:18. When NSD runs, it attaches to each process and thread in order to dump the call stacks. This can help you determine the cause of a server or workstation crash.
The heart of an NSD file is the stack trace section. This section breaks down the code path that each thread in a process traversed to reach its current state, which is helpful when you examine hang or crash situations on a server. Also, by examining the NSD file, you can find any core files generated in a Domino data directory and perform a base-level analysis in order to trace the final stack of calls made by the process that died and left behind the core. In a complex product such as Domino, a stack trace of the same type of action on two different servers can produce different results.
In the NSD file, you can perform a word search for "fatal", "panic", or "segmentation" in order to identify the executable in the failing process. By finding the process, you can see what preceded it, and hopefully determine how the crash occurred. When neither "panic" nor "fatal" is found, sometimes a core dump contains a reference to a "segmentation fault" in a function. This indicates that the process tried to access a shared memory segment that was corrupted for some reason, and crashed without calling "fatal error" or "panic".
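The word search above can be done with grep. This sketch builds a tiny stand-in log (a real NSD log lives under data\IBM_TECHNICAL_SUPPORT) and searches it for the usual crash markers:

```shell
# A small sample in the shape of an NSD stack trace, for illustration only.
cat > ./nsd_crash_sample.log <<'EOF'
### FATAL THREAD 39/83 [ nSERVER:07c0: 2764]
Exception code: c0000005 (ACCESS_VIOLATION)
@[ 1] 0x60197cf3 nnotes._Panic@4+483 (7430016,496dae76,0,496dace8)
EOF

# Case-insensitive search for the usual crash markers, with line numbers.
grep -n -i -E 'fatal|panic|segmentation' ./nsd_crash_sample.log
```

The matching lines point you at the failing thread, from which you can read the call stack downward.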
This is a sample excerpt from an NSD file where a server process is involved in a crash:
--------------------------------------------
### FATAL THREAD 39/83 [ nSERVER:07c0: 2764]
### FP=0743f548, PC=60197cf3, SP=0743ebd0, stksize=2424
Exception code: c0000005 (ACCESS_VIOLATION)
############################################################
@[ 1] 0x60197cf3 nnotes._Panic@4+483 (7430016,496dae76,0,496dace8)
@[ 2] 0x600018a4 nnotes._OSBBlockAddr@8+148 (1153f38,2000000,743f608,1)
@[ 3] 0x6000bd92 nnotes._CollectionNavigate@24+610 (0,743fc74,f,0)
@[ 4] 0x600626cc nnotes._ReadEntries@68+2860 (4c5440e8,4cfb8dba,800f,1)
@[ 5] 0x600b9f6f nnotes._NIFReadEntriesExt@72+351 (0,4cfb8dba,800f,1)
@[ 6] 0x10032d40 nserverl._ServerReadEntries@8+1424 (0,8d0c0035,4b64b5bc,4ae46dd6)
@[ 7] 0x100191fc nserverl._DbServer@8+2284 (41b0383,cb740064,0,23696f8)
@[ 8] 0x1002b8c8 nserverl._WorkThreadTask@8+1576 (4711d68,0,3,563fb10)
@[ 9] 0x100016cb nserverl._Scheduler@4+763 (0,563fb10,0,10ec334)
@ 0x6011e5e4 nnotes._ThreadWrapper@4+212 (0,10ec334,563fb10,0)
 0x77e887dd KERNEL32.GetModuleFileNameA+465
--------------------------------------------
When the failing process has been determined, you can focus on how to troubleshoot that particular process.
Run NSD Manually for Hangs and Performance Issues
In order to access the NSD help, type nsd -help. This is the common usage:
nsd -ini <notes_ini_file> -log <nsd_output_name> -dumpandkill
Make sure the NSD contains call stacks, memory information, and notes.ini information before you send it.
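That pre-send check can be scripted as a quick grep pass over the log. This is a sketch only: the sample file and the section markers in it are assumptions, since the exact labels vary by NSD version, so treat the search strings as placeholders.

```shell
# Stand-in NSD log; real ones live under data\IBM_TECHNICAL_SUPPORT.
cat > ./nsd_check_sample.log <<'EOF'
<callstacks>
...
</callstacks>
<memorycheck>
...
</memorycheck>
<notes.ini>
UCLogLevel=4
</notes.ini>
EOF

# Flag any expected section that is missing before you ship the log.
for section in callstacks memory notes.ini; do
  grep -qi "$section" ./nsd_check_sample.log \
    && echo "found: $section" \
    || echo "MISSING: $section"
done
```

If any line reports MISSING, rerun NSD (or collect a fresh one) before sending the file.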
Tracing can be set up on the player with a registry setting. Complete these steps:
Go to the HKEY_CURRENT_USER\Software\Lotus\DUCS key.
Select Edit > New > DWORD Value.
Assign the name DebugFlags, then set the value to fff.
The output file is called LotusUC.csv and it is found in the Lotus data directory.
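The registry steps above can also be captured as an importable .reg file, which is convenient when you enable tracing on several client machines. A sketch, assuming the fff value is hexadecimal (hence dword:00000fff):

```shell
# Write the DUCS player tracing setting as a .reg file.
cat > ./ducs_trace.reg <<'EOF'
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Lotus\DUCS]
"DebugFlags"=dword:00000fff
EOF

# On a Windows client you would then run: reg import ducs_trace.reg
cat ./ducs_trace.reg
```

After the value is imported and the player restarted, look for LotusUC.csv in the Lotus data directory.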
If the player crashes, NSD should run. If the player hangs, NSD can still be invoked manually.