At the base, tables
in the CISCO-CONTACT-CENTER-APPS-MIB are indexed by the Unified CCE instance
(the instance name is a unique textual identifier that relates components that
are part of the same Unified CCE system); most are secondarily indexed by the
Component index. In a hosted deployment, there may be up to 25 instances of a
particular component installed on a single server (such as a router – one for
each customer instance in a service provider solution). This is why the Unified
CCE instance is the primary index – it is the only way to distinguish one
router from another. However, in a typical Unified CCE deployment, there is
only a single instance.
Thus, to inventory a
particular server, the NMS should query the Instance table first, then query
the Component table to associate components with an instance, and lastly query
the Component Element table for the processes associated with each component.
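To make that sequence concrete, the following is a minimal sketch using pysnmp
(v4 hlapi). It assumes the CISCO-CONTACT-CENTER-APPS-MIB module has been
compiled and made available to pysnmp (for example, with pysmi); the host name
cce-host, the community string public, and the column names (which follow the
MIB's ccca naming convention) are placeholders to verify against your
environment and the compiled MIB.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, nextCmd,
    )

    def walk(column):
        """Walk a single MIB column; yield (oid, value) for each row."""
        for err_ind, err_stat, _, var_binds in nextCmd(
                SnmpEngine(),
                CommunityData('public'),                # placeholder community
                UdpTransportTarget(('cce-host', 161)),  # placeholder host
                ContextData(),
                ObjectType(ObjectIdentity('CISCO-CONTACT-CENTER-APPS-MIB', column)),
                lexicographicMode=False):               # stop at the end of the column
            if err_ind or err_stat:
                break                                   # agent error; stop the walk
            for oid, value in var_binds:
                yield oid, value

    # Step 1: the Instance table (typically a single row).
    # Step 2: the Component table, to tie each component to its instance.
    # Step 3: the Component Element table, for the processes of each component.
    for column in ('cccaInstanceName', 'cccaComponentName', 'cccaComponentElmtName'):
        for oid, value in walk(column):
            print(oid.prettyPrint(), '=', value.prettyPrint())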
Using the Instance
and Component indexes, the NMS can then drill down further, querying the
component-specific instrumentation for each installed component. Each
component-specific instrumentation table provides (where possible) links to
dependent components that are distributed within the solution (for example,
which Router a Peripheral Gateway communicates with, or which Logger is the
primary for a particular Administration Server and Real-time Data Server).
The CISCO-CONTACT-CENTER-APPS-MIB is structured as follows:
Figure 2. CISCO-CONTACT-CENTER-APPS-MIB Structure
The Instance table
is indexed by the instance number – a value ranging from 1 to 25.
The Component table
is indexed by the Instance number and a Component number that is arbitrarily
assigned by the agent; the value of the Component number could change from one
run period to another. The Component Element table is indexed by the Instance
number, the Component number, and a Component Element number, which is
arbitrarily assigned by the agent; the value of the Component Element number
could change from one run period to another.
Each component-specific instrumentation table is indexed by the Instance
number and the Component number.
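As a small illustration of how these indexes work, a row's OID is simply the
column OID with the index values appended as a suffix; the column OID below is
a made-up placeholder, not the real MIB OID.

    # Hypothetical column OID, for illustration only.
    COLUMN = (1, 3, 6, 1, 4, 1, 9, 999, 1, 1, 2)

    def row_oid(instance, component):
        """The Instance and Component numbers form the row's OID suffix."""
        return COLUMN + (instance, component)

    def row_index(oid):
        """Recover (instance, component) from a walked row OID."""
        return oid[len(COLUMN):]

    assert row_index(row_oid(1, 5)) == (1, 5)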
From an inventory
standpoint (a Network Management Station (NMS) taking inventory of the server
itself), the NMS first polls the Instance table.
Typically, for the Unified CCE, there is only one instance. From that, the NMS
polls all components that are part of this instance. Now the NMS knows what is
installed on this server and can see what is running. For example, suppose
this is a Unified CCE central controller and the NMS wants to know the inbound
call rate. Having found the Component entry for the Router, the NMS uses the
Component index of that entry to poll the cccaRouterCallsPerSec object within
the Router table (indexed by Instance number and Component index).
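As a sketch of that poll, reusing the assumptions of the earlier example; the
Instance number and Component index values below (1 and 2) are placeholders
for the values learned from the base tables.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    instance, component = 1, 2  # learned from the Instance and Component tables

    err_ind, err_stat, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public'),                # placeholder community
        UdpTransportTarget(('cce-host', 161)),  # placeholder host
        ContextData(),
        # The Router table row is addressed by the same two index values.
        ObjectType(ObjectIdentity('CISCO-CONTACT-CENTER-APPS-MIB',
                                  'cccaRouterCallsPerSec', instance, component)),
    ))
    if not (err_ind or err_stat):
        oid, value = var_binds[0]
        print('router calls/sec =', value.prettyPrint())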
A more detailed inventory can be accomplished by drilling a little deeper. For
example, assume the NMS wants to list the PIMs installed on PG4A. Again, poll
the Instance table to get the instance number. Using that, get all components
for that instance. Find PG4A and, using the component index for PG4A, get the
PG table objects for PG4A. Then get the PIM table for PG4A, which returns the
list of installed PIMs.
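A hedged sketch of that drill-down, reusing the walk() helper from the first
example; the cccaPimName column and the three-part PIM index (Instance number,
Component index, PIM number) are presumed from the MIB's naming convention and
should be verified against the compiled MIB.

    pg_instance, pg_component = 1, 7  # placeholder indexes found for PG4A

    for oid, pim_name in walk('cccaPimName'):
        instance, component, pim = tuple(oid.getOid())[-3:]
        if (instance, component) == (pg_instance, pg_component):
            print('PIM', pim, '=', pim_name.prettyPrint())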
The following figure
illustrates the MIB content for the installed application components:
Figure 3. CCCA MIB –
Component Inventory Example
Typically, for a
Unified CCE deployment, a single instance is configured. In this case, all
installed/configured components are a part of that same instance.
The Component table
comprises the list of installed Unified CCE components (for example, the
Router and Logger). The Component Element table is the list of installed
processes that should be running.
The real-time status of
each component can be monitored by polling the cccaComponentTable. The agent
derives the status of a Unified CCE component by analyzing, as best it can,
the collective status of the component's elements (its processes).
The Component Element table lists all Unified CCE processes that should be
executing, and it exposes the (operating system) process identifier and the
current status of each process. The list shown in the figure is an example
only; there can be many more processes listed in the Component Element table.
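A brief sketch of such a health check, again reusing the walk() helper; the
status column names are presumed from the MIB's ccca naming convention and
should be checked against the compiled MIB.

    # Component-level status, one row per installed component.
    for oid, status in walk('cccaComponentStatus'):
        print('component', oid.prettyPrint(), 'status', status.prettyPrint())

    # Process-level status; each row also carries the OS process identifier.
    for oid, status in walk('cccaComponentElmtStatus'):
        print('process', oid.prettyPrint(), 'status', status.prettyPrint())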