
Cisco Unified Intelligent Contact Management Enterprise

Best Practice to Set ICM MDS Buffer Limits

Document ID: 66960

Updated: Sep 29, 2005


Introduction

This document describes how to size the Message Delivery Service (MDS) buffer allocation registry settings to match the message load of a Cisco Intelligent Contact Management (ICM) / IP Contact Center (IPCC) Enterprise environment. This document also provides update and maintenance notes.

Note: This document does not apply to ICM 7.0 because the memory management facility has been changed.

Prerequisites

Requirements

Cisco recommends that you have knowledge of these topics:

  • Cisco ICM/IPCC Enterprise

Components Used

The information in this document is based on these software and hardware versions:

  • Cisco ICM Enterprise versions 4.6.2, 5.x, and 6.x

  • Cisco IPCC Enterprise versions 4.6.2, 5.x, and 6.x

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.

Conventions

Refer to Cisco Technical Tips Conventions for more information on document conventions.

Message Buffering

One MDS process runs on each side of the Cisco ICM Router and Peripheral Gateway (PG). The Node Manager (NM) process starts the MDS process. The MDS process provides a message switching function for the clients on its side of the system. The MDS process accepts messages that clients send, and delivers the messages to the relevant destinations. The MDS process uses an External Message Transport (EMT) connection to communicate with each client, which permits clients to reside on any node.

During normal system operation, MDS clients read and process messages as soon as the messages arrive. Unusual events, for example, process re-synchronization, can cause one or more clients to pause for an indeterminate period. During such periods, messages continue to arrive at the client and accumulate in the message queue of the client. When the client resumes reading input messages, on average, the client processes messages faster than the messages arrive. Therefore, the input queue eventually shrinks to zero.

The MDS process implements a buffer management scheme. When a message joins a queue, the total buffer count increases. When the client reads the message, the message leaves the queue, and the buffer count decreases. The queue size is 90% of the available buffers in the buffer pool. A configurable high water mark specifies the maximum number of buffers that can be allotted to queued messages. If a message that joins the queue causes the buffers to exceed the high water mark, the MDS process declares a failure and stops.

The MDS process maintains a pool of message buffers. There are three sizes of pools, namely, small, medium and large. These pools accommodate various sizes of messages. The large buffer is big enough to hold a maximum size message. The system allocates message buffers from the process global memory when necessary. When the buffers are no longer necessary, the system releases the buffers back to the process global memory.
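As an illustration, the buffering scheme described above can be modeled in a few lines. This is a hypothetical sketch, not Cisco source code: the class and method names are invented, but the behavior follows the document (a freelist-backed pool, a hard limit that corresponds to the BufferLimit key, a freelist cap that corresponds to BufferMaxFree, and a failure when the limit is exceeded).

```python
# Illustrative model (not Cisco source) of the MDS buffer scheme:
# freelist-backed buffers drawn from global memory, with a hard
# limit on the total number of allocated buffers.

class BufferPool:
    def __init__(self, buffer_limit, max_free):
        self.buffer_limit = buffer_limit  # corresponds to the BufferLimit key
        self.max_free = max_free          # corresponds to the BufferMaxFree key
        self.free = 0                     # buffers parked on the freelist
        self.allocated = 0                # buffers currently in use

    def alloc(self):
        # A queued message takes one buffer; exceeding the limit is the
        # "Buffer Pool Exhausted" failure condition described later.
        if self.allocated + 1 > self.buffer_limit:
            raise RuntimeError(
                f"Fail: Buffer Pool Exhausted "
                f"({self.buffer_limit} buffers allocated).")
        if self.free > 0:
            self.free -= 1                # reuse a freelist buffer
        self.allocated += 1

    def release(self):
        # When the client reads the message, the buffer is returned:
        # to the freelist if there is room, else back to global memory.
        self.allocated -= 1
        if self.free < self.max_free:
            self.free += 1
```

In this model, as in the document, buffers beyond the freelist cap simply return to process global memory, so a large limit does not pin memory permanently.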

Buffer Registry

MDS Process

For the MDS process, here is the navigation path for the maximum allocated buffer registry in Cisco ICM version 4.6.2:

HKEY_LOCAL_MACHINE\SOFTWARE\GeoTel\ICR\<cust_inst>\<Node>\MDS\
CurrentVersion\Process

Here is the navigation path for the maximum allocated buffer registry in Cisco ICM version 5.x and 6.x:

HKEY_LOCAL_MACHINE\SOFTWARE\Cisco Systems, Inc.\ICM\<cust_inst>\<Node>\MDS\
CurrentVersion\Process

For example, Figure 1 displays the registry key for BufferLimit and BufferMaxFree for the MDS process on PG1A in Cisco ICM/IPCC version 5.x and 6.x.

Figure 1 – MDS Process Registry for BufferLimit and BufferMaxFree

BufferLimitGuide_01.gif

MDS Client Processes

For the MDS clients, here is the navigation path for the maximum allocated buffer registry in Cisco ICM version 4.6.2:

HKEY_LOCAL_MACHINE\SOFTWARE\GeoTel\ICR\<cust_inst>\<Node>\MDS\
CurrentVersion\Clients\<Client_ID>

Here is the navigation path for the maximum allocated buffer registry in Cisco ICM version 5.x and 6.x:

HKEY_LOCAL_MACHINE\SOFTWARE\Cisco Systems, Inc.\ICM\<cust_inst>\<Node>\MDS\
CurrentVersion\Clients\<Client_ID>

For example, Figure 2 displays the registry key for BufferLimit and BufferMaxFree for the pgag process on PG1A in Cisco ICM/IPCC version 5.x and 6.x.

Figure 2 – MDS Client Process Registry for BufferLimit and BufferMaxFree

BufferLimitGuide_02.gif
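The key paths above follow a regular pattern, which can be composed programmatically. This helper is a hypothetical sketch: the function name is invented, and the hive layout (GeoTel for 4.6.2, Cisco Systems for 5.x/6.x) should be verified against your own registry before use.

```python
# Hypothetical helper that composes the MDS registry key paths listed
# above. Instance, node, and client names are examples only.

def mds_registry_path(version, cust_inst, node, client_id=None):
    if version.startswith("4"):
        root = r"HKEY_LOCAL_MACHINE\SOFTWARE\GeoTel\ICR"
    else:  # 5.x / 6.x
        root = r"HKEY_LOCAL_MACHINE\SOFTWARE\Cisco Systems, Inc.\ICM"
    path = rf"{root}\{cust_inst}\{node}\MDS\CurrentVersion"
    if client_id is None:
        return path + r"\Process"            # MDS process keys
    return path + rf"\Clients\{client_id}"   # per-client keys
```

For example, `mds_registry_path("6.0", "lab60", "PG1A", "pgag")` yields the per-client key path for the pgag process shown in Figure 2.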

Retrieve the Metering Statistics

You can use the dumplog command with the /bin argument in order to obtain the buffer statistics. Gather at least two hours of data to produce the statistics. To interpret the statistics meaningfully, collect at least one week of data that covers a high-traffic period. Here is an example of the dumplog command that you can issue to collect two hours of MDS data:

C:\icm\lab60\ra\logfiles>dumplog mds /bin /hr 2

Here is the partial output of the dumplog command:


Events from September 20, 2005:
11:51:06 ra-mds MDS Process is reporting periodic overall metering statistics. 

  *** Buffer Pool Statistics ***
  Current / High / Max Allocated Buffers = 374 / 397 / 65536
  Current / High / Max Freelist (Small) = 344 / 345 / 400
  Current / High / Max Freelist (Medium) = 10 / 10 / 10
  Current / High / Max Freelist (Large) = 5 / 5 / 5
  Buffer Allocs Small / Medium / Large / Total = 18938158 / 1043172 / 4749 /
 19986079
    Allocs from Freelist Small / Medium / Large / Total = 18937799 / 1042064 /
 4742 / 19984605
  Buffer Frees Small / Medium / Large / Total = 22322177 / 1060637 / 5161 /
 23387975
    Frees to Freelist Small / Medium / Large / Total = 18938143 / 1042074 /
 4747 / 19984964
  Dups = 3401911

  *** Synchronizer Statistics ***
  Total messages ordered = 4292869
    MDS duplicates = 308
    DMP duplicates = 0
  Local low priority input msgs / bytes = 1119811 / 107490676
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 12 / 3136
  Local high priority input msgs / bytes = 848853 / 24508284
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 2 / 148
  Local medium priority input msgs / bytes = 61373 / 3017131
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 7 / 11480
  Remote low priority input msgs / bytes = 131595 / 9598544
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 15 / 2472
  Remote high priority input msgs / bytes = 6236914 / 65565092
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 8 / 228
  Remote medium priority input msgs / bytes = 318 / 52698
    Current input queue msgs / bytes = 0 / 0
    Highest input queue msgs / bytes = 3 / 7476
  Remote low priority output msgs / bytes = 1118701 / 107385640
    Current output queue msgs / bytes = 0 / 0
    Highest output queue msgs / bytes = 8 / 3136
  Remote high priority output msgs / bytes = 4301262 / 93354648
    Current output queue msgs / bytes = 0 / 0
    Highest output queue msgs / bytes = 7 / 204
  Remote medium priority output msgs / bytes = 61289 / 3012988
    Current output queue msgs / bytes = 0 / 0
    Highest output queue msgs / bytes = 5 / 7476
  Current local low priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 16 / 3168
  Current local high priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0
  Current local medium priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 7 / 11524
  Current remote low priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0
  Current remote high priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0
  Current remote medium priority ordering queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0
  Current low priority timed delivery queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 336 / 32736
  Current high priority timed delivery queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0
  Current medium priority timed delivery queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 32 / 24416
  Clock rate fast / slow / normal = 0 / 0 / 0
  Output waits / notifies = 2641679 / 2642109

  *** State Transfer Statistics ***
  Attempts / Successful completions = 11 / 11
  Bytes received / transmitted = 383710 / 1185727

11:51:06 ra-mds MDS Process is reporting periodic per-client summary meters. 

  *** Client 128 Statistics ***
  Connects / Disconnects = 0 / 0
  Messages / Bytes received from client = 0 / 0
  Messages / Bytes sent to client = 0 / 0
  Current output queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0

  ..
  ..

11:51:06 ra-mds MDS Process is reporting periodic per-client summary meters. 

  *** Client 70 Statistics ***
  Connects / Disconnects = 0 / 0
  Messages / Bytes received from client = 0 / 0
  Messages / Bytes sent to client = 0 / 0
  Current output queue msgs / bytes = 0 / 0
    Highest msgs / bytes = 0 / 0

  ..
  ..
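The water marks discussed in the next section can be pulled out of output like this programmatically. The sketch below is hypothetical: it assumes the line format shown in the sample above remains stable across releases, and the function name is invented.

```python
import re

# Sketch that extracts the buffer water marks from dumplog output
# such as the sample above (format assumed from this document).

STAT_RE = re.compile(
    r"Current / High / Max Allocated Buffers = (\d+) / (\d+) / (\d+)")

def allocated_buffers(log_text):
    """Return (current, high, maximum) allocated buffers, or None."""
    m = STAT_RE.search(log_text)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

sample = "Current / High / Max Allocated Buffers = 374 / 397 / 65536"
# allocated_buffers(sample) -> (374, 397, 65536)
```

Running this over a week of hourly reports gives the High values that the sizing rule in the next section compares against BufferLimit.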

Water Marks

The first part of the statistics represents the water marks for the buffer allocation.

Figure 3 – Buffer Pool Statistics

BufferLimitGuide_03.gif

Here are the meanings and scope of some terms this report uses:

  • Max Allocated Buffers represents the number of buffers in use (see the pink rectangle in Figure 3).

  • Max Freelist (Small) represents buffers in use, which are allocated from the Small Freelist (see the green rectangle in Figure 3).

  • Max Freelist (Medium) represents buffers in use, which are allocated from the Medium Freelist (see the blue rectangle in Figure 3).

  • Max Freelist (Large) represents buffers in use, which are allocated from the Large Freelist (see the black rectangle in Figure 3).

This report presents a picture of the buffer allocation during the last hour. Review this report over a week or two to verify that the maximum allocated buffer registry setting is sufficient for the message load. There are two MDS buffer requirements:

  • For the MDS process

  • For the MDS clients

For ICM version 4.6.2, here is the navigation path for the maximum allocated buffer registry:

HKEY_LOCAL_MACHINE\SOFTWARE\GeoTel\ICR\<cust_inst>\<Node>\MDS\
CurrentVersion\Clients\<Client_ID>

Here are the keys:

  • BufferLimit

    BufferLimit defines the maximum allocated buffer (see arrow A in Figure 1 and Figure 2).

  • BufferMaxFree

    BufferMaxFree represents the maximum allocated freelist (see arrow B in Figure 1 and Figure 2).

The most important information in the metering statistics is the value of the High Allocated buffers (see Figure 3). The target is to keep this value between 65% and 75% of the Maximum Allocated buffers. If the value exceeds 75% at any time during the sampled period, double the BufferLimit value.

Note: The value is always a power of two.
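The sizing rule above reduces to simple arithmetic. This is a hypothetical calculator, not a Cisco tool: it applies the doubling step repeatedly until the High water mark falls inside the target band, which preserves the power-of-two property noted above.

```python
# Hypothetical calculator for the sizing rule above: if the High
# water mark exceeds 75 percent of BufferLimit at any point in the
# sample, double the limit (repeatedly, if one doubling is not enough).

def recommended_buffer_limit(high_water_mark, buffer_limit):
    limit = buffer_limit
    while high_water_mark > 0.75 * limit:
        limit *= 2  # remains a power of two when the input is one
    return limit

# From the sample statistics: High = 397 against BufferLimit = 65536
# is far below 75 percent, so no change is needed.
```

For instance, a High value of 800 against a BufferLimit of 1024 exceeds the 75% threshold (768), so the rule recommends doubling the limit to 2048.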

Buffer Allocation Error Message

When the buffer pool is empty, the process exits. The log file displays this message:

Fail: Buffer Pool Exhausted (xxxx buffers allocated).

Note: xxxx represents the number of buffers. For example, 1024, 2048, 4096 and so forth.

Use the Dumplog utility to view the log file.

Buffer Pool Exhausted: Case 1

This log provides an example of the MDS lgr process that has run out of buffers (see arrow A in Figure 4).

Figure 4 – Dumplog of MDS LGR Process

BufferLimitGuide_04.gif

Increase the current BufferLimit in order to solve the problem. Then monitor the process to ensure that the error does not recur.

Buffer Pool Exhausted: Case 2

In some cases, the error message appears, but an increase in the current BufferLimit does not solve the issue. In such cases, the error message is only a symptom. Before the MDS process stops, it saves a series of logs that report the number of buffers allocated to each MDS client. Usually, this report is enough to narrow the problem down to a specific client, rather than to buffer allocation itself.

Figure 5 – Dumplog of MDS process

BufferLimitGuide_05.gif

The example in Figure 5 indicates that there are 4085 messages queued for the Open Peripheral Controller (OPC) process, and all other clients have no buffers allocated. This example demonstrates that the OPC process is the cause of the problem, and not the maximum buffer allocation size.
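A diagnosis like the one in Figure 5 amounts to scanning the per-client summary meters for the client that holds the queued messages. The sketch below is hypothetical: it assumes the per-client log format shown in the dumplog output earlier, and the client numbers in the example are invented.

```python
import re

# Sketch that scans per-client summary meters (format assumed from
# the dumplog output shown earlier) and reports the client with the
# largest output queue, which is how a stuck client surfaces in Case 2.

CLIENT_RE = re.compile(
    r"\*\*\* Client (\d+) Statistics \*\*\*.*?"
    r"Current output queue msgs / bytes = (\d+) / (\d+)",
    re.DOTALL)

def busiest_client(log_text):
    """Return the client ID with the most queued output messages, or None."""
    queues = {int(cid): int(msgs)
              for cid, msgs, _ in CLIENT_RE.findall(log_text)}
    if not queues:
        return None
    return max(queues, key=queues.get)
```

If one client shows thousands of queued messages while every other client shows zero, investigate that client process rather than the maximum buffer allocation size.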

Update Notes

Occasionally, when you perform an upgrade or make major changes to a system, the buffer pool reaches the limit. For example, the buffer pool can reach the limit when you add peripherals. In order to prevent this issue, increase the buffer pool limits.

Before you perform an upgrade from 4.6.2 to 5.0 or 6.0, Cisco recommends that you double the BufferLimit and BufferMaxFree settings (see Figure 1). When you upgrade from 5.0 to 6.0, you do not need to double the BufferLimit settings if you already doubled them when you upgraded from 4.6.2 to 5.0. If you are not sure whether you increased the BufferLimit setting during the previous upgrade, check the buffer utilization statistics outlined in Retrieve the Metering Statistics to determine whether you must increase the buffers.

Note: Memory waste is not a concern because the buffers that BufferLimit specifies (except those on free lists) are not pre-allocated, and the buffers are eventually released back to the system heap. However, a BufferLimit that is very large compared to the available system RAM can mask underlying communication congestion and slow down the entire system. In some situations, a better solution is to let a process assert when it reaches the BufferLimit, and rely on the fault-tolerant design of the system to fail over, given possible resource restrictions.

Maintenance Notes

You can monitor the BufferLimit statistics after an upgrade or during normal system maintenance. Review these statistics before and immediately after you add capacity or components to a system. The MDS process records the buffer pool statistics periodically. If the High value of a particular buffer pool is close to the Max, double the corresponding BufferLimit setting.

Related Information
