Data loss manifests as data holes: one or more missing records in a historical database.
There are two types of data loss: temporary and permanent.
- A temporary data hole can occur during the Logger recovery process. For example, Logger A goes down, then comes back up and contacts Logger B to synchronize and recover the historical data that was written while it was down. While this recovery process is in progress, the reporting database on Logger A may have temporary data holes, which are filled when the recovery process completes.
- A permanent data hole can occur during an Emergency Purge. For example, there is permanent data loss if an emergency purge deletes records on one Logger that have not yet been sent to the other Logger or to the HDS.
You can monitor and tune Unified CCE to minimize the occurrence of data loss.
To protect your system, see the information on duplexed Unified CCE
fault tolerance in the
Administration Guide for Cisco
Unified ICM/Contact Center Enterprise & Hosted.
Data Retention and Backups
Another way to safeguard against data loss is to configure the amount of time that data is stored in the Logger Central Database and in the HDS in relation to the schedule for HDS backups. The Central Database stores data for less time than the HDS. For example, you might store two weeks of data on the Logger and a year of data on the HDS.
When the HDS recovers after going offline, it
retrieves all of the data on the Logger for the interval for which data is
missing from the backup. You must manually restore the rest of the data from
the last HDS backup.
The amount of data retained on the Logger
should cover, at a minimum, the time period between HDS backups. For example,
if the Logger stores data for two weeks, then you need to back up at least
every other week to ensure that you can recover all historical data.
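The rule above can be sketched as a simple comparison. The function name and the idea of checking it programmatically are illustrative assumptions; Unified CCE does not expose such an API:

```python
# Hypothetical sketch: verify that Logger retention covers the HDS
# backup schedule, so every record survives on the Logger until at
# least one HDS backup has captured it.

def retention_covers_backups(logger_retention_days: int,
                             hds_backup_interval_days: int) -> bool:
    """Return True if no historical data can age off the Logger
    before it has been captured by at least one HDS backup."""
    return logger_retention_days >= hds_backup_interval_days

# Example from the text: two weeks of Logger retention requires
# backing up the HDS at least every other week.
assert retention_covers_backups(14, 14)      # backup every other week: safe
assert not retention_covers_backups(14, 21)  # three-week gap: data can be lost
```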
It is possible that the SQL Server process on one of the Loggers is slow because of space issues or because the SQL Server is overloaded. In this situation, the Logger with the slower SQL Server lags behind the other Logger in persisting historical data, which causes the HDS on the corresponding side to lag as well.
As a consequence, if both sides have an HDS set up and the same reports are run from both HDSs, the reports might differ. This is usually a temporary inconsistency, because the condition that slows the SQL Server process is often remedied; for example, autogrowth of the database completes or the load subsides. The Loggers and the HDSs eventually catch up and are back in sync, and running the reports later yields consistent results.
However, if the database server runs out of disk space, the situation is more serious and can leave the data out of sync for a longer duration until the problem is remedied. Permanent data loss can occur when data is purged from the peer Logger before it is ever replicated on the slower side.
Scheduled Purge and Retention Settings on the Loggers
The goal of the scheduled purge is to free up database space by purging the oldest data. There are several reasons for data loss during a scheduled purge:
- Retention settings on the Loggers: Data inconsistencies and permanent data loss can occur if the number of days to retain the data differs between the Loggers.
Assume that Logger A is set to retain 7 days' worth of data, while Logger B is set to retain 15 days' worth of data.
If Logger B is down for 6 days, a temporary data discrepancy exists when it is brought back up, until the Recovery process synchronizes the data from Logger A. However, if Logger B is down for 10 days, it can synchronize only the last 7 days' worth of data when it comes back up, based on Logger A's retention setting. Three days of data are permanently lost from Logger B. To avoid this situation, make sure that the retention settings are the same on both Loggers.
The data might not be permanently lost from the system if the historical data was copied to the HDS database associated with Logger A. Although this situation appears as a discrepancy in reports that are run from HDS servers that connect to side B, the system is functioning in a predictable manner; it can be considered an issue of perception.
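The arithmetic in the retention-mismatch example can be sketched as follows. The function is purely illustrative, not part of any Unified CCE tooling:

```python
# Hypothetical sketch of the retention-mismatch example: a Logger that
# was down can recover at most its peer's retention window, so any
# downtime beyond that window is permanently lost on that side.

def unrecoverable_days(downtime_days: int, peer_retention_days: int) -> int:
    """Days of history the recovering Logger can never get back."""
    return max(0, downtime_days - peer_retention_days)

# Logger A retains 7 days; Logger B is down for 6 or 10 days.
assert unrecoverable_days(6, 7) == 0    # temporary hole only, filled by recovery
assert unrecoverable_days(10, 7) == 3   # three days permanently lost on side B
```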
- Scheduled purge and Peripheral Gateway failure: If multiple Peripheral Gateways (PGs) are configured and one of the PGs goes down for a brief period, it is possible to lose historical data permanently.
Assume that there are three PGs in the system and
that one goes down for a day and then comes back online. When that PG comes
back online, it sends historical data for activity that occurred before it went down.
If the scheduled purge mechanism activates and
determines that the oldest one hour of data needs to be purged, it is possible
that the purge will delete data that was sent by the PG after it came online
but before it was replicated to the HDS.
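The race described above comes down to a timestamp comparison: late-arriving PG data is stamped in the past, so it can fall inside the window the purge deletes before the HDS has copied it. The following sketch is illustrative only; the names and cutoff are assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the scheduled-purge race: data a recovering PG
# sends is timestamped in the past, so it may land inside the oldest
# window that the purge deletes before the HDS replicates it.

def purged_before_replication(record_time: datetime,
                              purge_cutoff: datetime,
                              replicated_to_hds: bool) -> bool:
    """True when a record is old enough to be purged but has not yet
    reached the HDS, i.e. it is permanently lost."""
    return record_time < purge_cutoff and not replicated_to_hds

now = datetime(2024, 1, 2, 12, 0)
cutoff = now - timedelta(days=14)      # purge deletes data older than retention
pg_record = now - timedelta(days=15)   # sent late by a PG that had been down
assert purged_before_replication(pg_record, cutoff, replicated_to_hds=False)
assert not purged_before_replication(pg_record, cutoff, replicated_to_hds=True)
```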
Permanent data loss can also occur if the HDS is down and the scheduled purge on the Logger deletes data that has not yet been replicated to the HDS.
Emergency Purge
The emergency purge mechanism is triggered when the Logger Central Database becomes full or reaches a configured threshold size. Its objective is to free up space by purging data from the historical tables so that the database has more free space than the minimum threshold.
The emergency purge goes through each historical table in a predefined order, one table at a time, and purges one hour's worth of data from the table. As data is purged from each historical table, a check verifies whether the free space exceeds the minimum threshold value. Once adequate space has been recovered, the emergency purge procedure stops. Otherwise, it continues to the next historical table and keeps looping through the tables until adequate space is freed.
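The loop just described can be sketched roughly as follows. The table names, the callback-based structure, and the megabyte units are assumptions for illustration; the real purge is internal to the Logger process:

```python
# Hypothetical sketch of the emergency purge loop: walk the historical
# tables in a predefined order, deleting one hour of the oldest data
# from each, until free space rises above the minimum threshold.

def emergency_purge(tables, free_space, purge_oldest_hour, min_free_mb):
    """tables: table names in the predefined purge order.
    free_space(): returns current free space in MB.
    purge_oldest_hour(table): deletes the oldest hour of rows.
    Returns True once adequate space has been recovered."""
    while free_space() < min_free_mb:
        before = free_space()
        for table in tables:                 # one table at a time, in order
            purge_oldest_hour(table)
            if free_space() >= min_free_mb:  # check after each table
                return True                  # adequate space recovered; stop
        if free_space() <= before:
            return False                     # purging freed nothing; give up
    return True

# Toy usage with an in-memory stand-in for the database.
space = [100]                                # 100 MB free, threshold is 150 MB
def fake_free(): return space[0]
def fake_purge(table): space[0] += 30        # each purge frees ~30 MB
assert emergency_purge(["t_Agent_Half_Hour", "t_Call_Type_Half_Hour"],
                       fake_free, fake_purge, 150)
```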
Permanent loss of historical data can occur if the emergency purge removes historical data that has not yet made it to an HDS and has also not been replicated to the peer Logger that is "down" or in the recovery process.
The database used percentage is displayed as a normal status message in the replication process every few minutes. You can monitor this value occasionally to make sure that it does not grow too often or too fast.
An emergency purge occurs when the percentage used is greater than the configured value (usually 90%).
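The trigger condition reduces to a simple threshold check. The function name is hypothetical, and the 90% default merely mirrors the usual configured value mentioned above:

```python
# Hypothetical sketch: decide whether an emergency purge should fire,
# based on the database used percentage reported in the replication
# process status messages.

def needs_emergency_purge(used_mb: float, capacity_mb: float,
                          threshold_pct: float = 90.0) -> bool:
    """True when the used percentage exceeds the configured threshold."""
    used_pct = 100.0 * used_mb / capacity_mb
    return used_pct > threshold_pct

assert not needs_emergency_purge(850, 1000)  # 85% used: below threshold
assert needs_emergency_purge(920, 1000)      # 92% used: purge triggers
```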