This document describes the various cache status flags found in a datasource's cache_status table.
What do the various cache statuses mean?
In the cache_status table, the Status column indicates the state of each cached resource.
Summary of Each Status Flag
Key is actively used.
A refresh is in progress.
A cache refresh is being attempted.
Cache refresh failed for this key; data under this key should not be used.
Cache has been cleared or is being read by a Cisco Information Server (CIS) service's client session that has not signed out and released it yet.
Key generation. This is a special row in the table with the status 'K'. It does not describe any cached data; instead, it holds the next available cachekey value. The server updates this row as it consumes cachekey values, currently incrementing it by 1,000 each time.
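As a rough illustration, the key-generation row behaves like a block allocator. The sketch below is a simple in-memory model of that idea; `KeyGenerator`, `next_key`, and `BLOCK_SIZE` are illustrative names, not CIS internals.

```python
# Sketch of the key-generation ('K') row as a block allocator.
# All names here are illustrative assumptions, not CIS code.

BLOCK_SIZE = 1000  # the server reserves cachekey values 1,000 at a time


class KeyGenerator:
    def __init__(self, next_available=1):
        # Mirrors the special 'K' row: it stores the next available
        # cachekey value rather than describing any cached data.
        self.next_available = next_available
        self._pool = iter(())

    def next_key(self):
        try:
            return next(self._pool)
        except StopIteration:
            # Reserve a fresh block by advancing the persisted value,
            # so keys stay unique even if the server restarts mid-block.
            start = self.next_available
            self.next_available += BLOCK_SIZE
            self._pool = iter(range(start, start + BLOCK_SIZE))
            return next(self._pool)


gen = KeyGenerator()
first = gen.next_key()
gen.next_key()
# After the very first allocation, the persisted 'K' value has already
# jumped ahead by a full block:
print(first, gen.next_key(), gen.next_available)  # 1 3 1001
```

The point of the block reservation is that the 'K' row only needs to be updated once per 1,000 keys instead of once per key.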
General Line-By-Line Progress of a Cache Refresh Cycle
CacheRefresh: Generate Cache Key
CacheRefresh: Validate that we are the only one trying to refresh for this key
CacheRefresh: Indicate refresh in progress
CacheRefresh: Copy data
CacheRefresh: Mark cache key as active
CacheClear: Mark cache for the key as cleared
CacheClear: Garbage collection
CacheClear: Delete the data
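The refresh and clear steps above can be sketched as status transitions on a key. The status strings below are descriptive stand-ins for the single-letter flags in cache_status, and all function names are illustrative assumptions, not CIS code.

```python
# Sketch of the refresh/clear cycle as transitions in a status table.
# Status names and functions are illustrative, not CIS internals.

def refresh(table, key, copy_data):
    # Validate that we are the only one refreshing this key.
    if table.get(key) == "refresh-in-progress":
        raise RuntimeError(f"refresh already in progress for key {key}")
    table[key] = "refresh-in-progress"   # indicate refresh in progress
    try:
        copy_data(key)                   # copy data into the target table
        table[key] = "active"            # mark cache key as active
    except Exception:
        table[key] = "failed"            # data for this key must not be used
        raise


def clear(table, key):
    table[key] = "cleared"               # stage 1: mark cache as cleared


def garbage_collect(table, delete_data):
    # Stage 2: delete status rows for cleared keys and their data.
    for key in [k for k, s in table.items() if s == "cleared"]:
        delete_data(key)
        del table[key]


status = {}
refresh(status, 42, copy_data=lambda k: None)
print(status)                            # {42: 'active'}
clear(status, 42)
garbage_collect(status, delete_data=lambda k: None)
print(status)                            # {}
```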
U - Update/Insert
D - Delete
A cache can be refreshed either on demand (the user clicks the Refresh button, or a SQL query depends on the cached view) or on a schedule (trigger).
Schedule-based cache refreshes are implemented with triggers: a trigger fires on its schedule and invokes the cache refresh procedure.
Even when a cache refresh is schedule (trigger) based, if the cache has never been refreshed and a user request comes in, either a query against the cached view or an explicit refresh request, the cache will be refreshed.
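The refresh-trigger rules above can be sketched as a small decision function; the function and flag names are assumptions for illustration, not a CIS API.

```python
# Sketch of when a refresh actually runs, per the rules above.
# Names are illustrative assumptions.

def should_refresh(ever_refreshed, user_requested_refresh, query_on_cached_view):
    # An explicit user refresh request always wins.
    if user_requested_refresh:
        return True
    # A query against a never-refreshed cache forces a first refresh,
    # even when refreshes are normally schedule (trigger) driven.
    if query_on_cached_view and not ever_refreshed:
        return True
    # Otherwise a schedule-based cache waits for its trigger to fire.
    return False


# A scheduled cache that has never been refreshed is refreshed on first use:
print(should_refresh(False, False, True))   # True
# Once refreshed, an ordinary query just reads the cached data:
print(should_refresh(True, False, True))    # False
```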
Cache can be cleared either by user demand or by expiration policy.
Cache clear happens in two stages: (a) the data is marked as cleared in the status table, and (b) garbage collection deletes the entries for the cleared data from the status table and deletes the data itself from the target table.
Cache Garbage Collection Impacts
In a cluster, a cluster split can sometimes cause data that is still in use by other members to be deleted. The garbageCollectionDelaySeconds configuration parameter is designed to handle this.
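A minimal sketch of how such a delay works, assuming each cleared key carries a timestamp of when the clear happened; everything here other than the garbageCollectionDelaySeconds name is an illustrative assumption.

```python
# Sketch of a GC delay window: entries cleared too recently are kept,
# because another cluster member may still be reading them after a split.
# Names other than garbageCollectionDelaySeconds are illustrative.

def collect(cleared_at, now, garbage_collection_delay_seconds, delete_data):
    """Delete only entries whose clear is older than the configured delay."""
    survivors = {}
    for key, ts in cleared_at.items():
        if now - ts >= garbage_collection_delay_seconds:
            delete_data(key)        # safe: the delay window has elapsed
        else:
            survivors[key] = ts     # too recent; keep it for now
    return survivors


cleared_at = {"k1": 100.0, "k2": 195.0}
remaining = collect(cleared_at, now=200.0,
                    garbage_collection_delay_seconds=60,
                    delete_data=lambda k: None)
print(sorted(remaining))            # ['k2']
```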
If procedure caching is used and a significant number of variants are constantly refreshed, garbage collection can consume significant CPU and memory. A couple of parameters control this: (a) debug/maxConcurrentCacheGarbageCollectionJobs and debug/delayBetweenCacheGarbageCollectionJobs, and (b) debug/disableCacheOrphanGarbageCollection.
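The two throttling knobs can be pictured as batching GC work: run at most N jobs at a time and pause between batches. The scheduler below is a sketch under that assumption; only the two debug/* parameter names come from the text above.

```python
# Sketch of throttled GC: parameters mirror
# debug/maxConcurrentCacheGarbageCollectionJobs and
# debug/delayBetweenCacheGarbageCollectionJobs, but the scheduler
# itself is an illustrative assumption, not CIS code.

from collections import deque


def run_gc_jobs(jobs, max_concurrent, delay_between, sleep):
    """Run GC jobs at most max_concurrent per batch, pausing in between."""
    pending = deque(jobs)
    while pending:
        batch = [pending.popleft()
                 for _ in range(min(max_concurrent, len(pending)))]
        for job in batch:
            job()                   # one GC job, e.g. one procedure-cache variant
        if pending:
            sleep(delay_between)    # spread CPU/memory load between batches


ran, sleeps = [], []
run_gc_jobs([lambda i=i: ran.append(i) for i in range(5)],
            max_concurrent=2, delay_between=1.0, sleep=sleeps.append)
print(ran, sleeps)                  # [0, 1, 2, 3, 4] [1.0, 1.0]
```

Lowering max_concurrent or raising delay_between trades GC latency for a smaller CPU and memory footprint, which is the trade-off those parameters expose.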
Each instance of a cache is owned by a cluster (if Active Cluster is present) or otherwise by a server instance (if a serverid is present).