Network analytics is any process where network data is collected and analyzed to improve the performance, reliability, or security of the network.
Today, network-analytics processes can be automated, so IT staff no longer need to manually look for and troubleshoot problems.
In network analytics, a software engine compares incoming data with preprogrammed models and makes decisions that improve network operations.
The data comes from many network sources, covering metrics such as wireless congestion, data speed on a switch port, and the time it takes to access an application from a connected mobile device.
This data is fed into a model of how the network ideally should perform. When a data source shows subpar performance, the analytics engine provides insight into adjustments that can enhance performance.
When network analytics is deployed effectively, an operation can scale to many devices, clients, users, and applications, and improve overall user experience, without a substantial increase in operational costs.
The analytics engine, the software program where decisions are made, collects data from around the network and compares the current state with a model of optimal performance. The program suggests a change decision whenever it identifies a deviation from optimal.
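The comparison between current state and optimal model can be sketched in a few lines. This is a hypothetical, simplified illustration: the metric names and acceptable ranges below are invented for the example, not taken from any real analytics product.

```python
# Model of optimal performance: an assumed acceptable range per metric.
OPTIMAL_MODEL = {
    "wireless_channel_utilization_pct": (0, 60),
    "switch_port_throughput_mbps": (100, 10_000),
    "app_access_latency_ms": (0, 200),
}

def find_deviations(telemetry: dict) -> dict:
    """Return metrics whose current value falls outside the optimal range."""
    deviations = {}
    for metric, value in telemetry.items():
        low, high = OPTIMAL_MODEL.get(metric, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            deviations[metric] = {"value": value, "expected": (low, high)}
    return deviations

sample = {
    "wireless_channel_utilization_pct": 85,  # congested channel
    "switch_port_throughput_mbps": 940,      # within range
    "app_access_latency_ms": 350,            # slow app access
}
print(find_deviations(sample))  # flags utilization and latency
```

A real engine would compare thousands of telemetry streams against far richer models, but the core loop is the same: measure, compare against the model, flag deviations.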
Artificial intelligence simulates intelligent decision making in computers. Many sources confuse artificial intelligence with machine learning; machine learning is one of many applications within the broader field of artificial intelligence.
Networking engineers often debate whether network analytics should be performed remotely, in the cloud, or locally, at the customer premises.
Placing the analytics engine in the cloud offers access to much more processing power, scale, and communication with other networks. Placing it at the premises offers better insights and remediation performance, and it reduces the amount of big data that must be backhauled to the cloud, advantages that are particularly important on larger enterprise networks.
The answer to cloud versus local analytics is both. Machine learning and machine reasoning modules can be placed in the cloud to benefit from larger computing resources, while an onsite analytics engine can offer large gains in performance and significantly reduce WAN costs.
Another important variable an analytics engine considers is context: the specific set of circumstances in which a network anomaly occurs. The same anomaly under different conditions can require very different remediation, so the analytics engine must be programmed with the many variables that define context for each network type, service, and application.
Context can also include wireless interference, network congestion, service duplication, or device limitations.
The analytics engine considers the relationship among variables in the network before offering insights or remediation. The correlation among devices, applications, and services can mean that correcting one problem can lead to problems elsewhere. Correlation greatly increases the number of variables in the decision tree and adds complexity to the system.
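One way to picture correlation is as a dependency graph: before the engine recommends changing a component, it checks which services correlate with that component and could be affected. The service and component names below are hypothetical, invented purely for illustration.

```python
# Hypothetical dependency map: each service lists the network
# components its performance correlates with.
DEPENDS_ON = {
    "voice_service": ["wlan_5ghz"],
    "video_service": ["wlan_5ghz", "core_switch"],
    "erp_app": ["core_switch"],
}

def impacted_services(component: str) -> set:
    """Return services whose performance correlates with the component."""
    return {svc for svc, deps in DEPENDS_ON.items() if component in deps}

# Reconfiguring the 5 GHz WLAN could affect both voice and video.
print(impacted_services("wlan_5ghz"))
```

Every such correlation multiplies the variables the engine must weigh, which is why correlation adds so much complexity to the decision tree.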
Most analytics engines offer guidance on performance improvement through decision trees. When an analytics engine receives network data indicating subpar performance, the decision tree calculates the best network-device adjustment or reconfiguration to improve performance of that parameter.
The decision tree grows based on the number of sources for streaming telemetry and the number of options for optimizing performance at each point. Because of the complexity of processing these very large data sets in real time, analytics was previously performed only on supercomputers.
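A toy remediation decision tree makes the idea concrete. This is a minimal sketch with invented thresholds and recommendations; each internal node tests one telemetry variable, and each leaf is a suggested adjustment.

```python
# Hypothetical remediation decision tree: nodes test telemetry values,
# leaves are recommendation strings.
TREE = {
    "test": lambda t: t["channel_utilization_pct"] > 60,
    "yes": {
        "test": lambda t: t["clients_on_ap"] > 40,
        "yes": "steer clients to a neighboring AP",
        "no": "switch the AP to a less congested channel",
    },
    "no": {
        "test": lambda t: t["app_latency_ms"] > 200,
        "yes": "raise QoS priority for the application",
        "no": "no change needed",
    },
}

def decide(node, telemetry):
    """Walk the tree until a leaf (a recommendation string) is reached."""
    while isinstance(node, dict):
        node = node["yes"] if node["test"](telemetry) else node["no"]
    return node

telemetry = {"channel_utilization_pct": 75, "clients_on_ap": 12,
             "app_latency_ms": 150}
print(decide(TREE, telemetry))
# -> switch the AP to a less congested channel
```

Each additional telemetry source adds test nodes, and each additional tuning option adds leaves, which is how the tree grows so quickly in a production system.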
The analytics engine spots network anomalies, faults, and performance degradation by comparing the incoming streaming telemetry with a model of optimal network performance for each data source. That process produces insights into ways network performance and user experience can be improved.
Analytics engines can be improved by use of machine learning. With machine learning, the parameters in the decision tree can be improved based on experience (cognitive learning), peer comparison (prescriptive learning), or complex mathematical regressions (baselining).
Machine learning offers large increases in the accuracy of insights and remediation, because the decision trees are modified to meet the specific conditions of a network's configuration, its installed hardware and software, and its services and applications.
Machine learning is a subset of artificial intelligence, since it gives analytics engines the ability to automatically learn and improve from experience without being explicitly programmed.
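Baselining, the statistical flavor of learning mentioned above, can be sketched very simply: learn a per-metric baseline from historical samples, then flag new values that deviate too far from it. The latency figures and the three-standard-deviation threshold below are assumptions chosen for the example.

```python
import statistics

def build_baseline(history):
    """Learn a (mean, standard deviation) baseline from past samples."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

latency_history = [18, 20, 19, 22, 21, 20, 19, 23, 20, 21]  # ms, assumed
baseline = build_baseline(latency_history)
print(is_anomalous(95, baseline))  # far above the learned baseline -> True
print(is_anomalous(21, baseline))  # within the learned baseline -> False
```

The key property is that the threshold is not preprogrammed: it is derived from this network's own history, so the same engine adapts automatically to networks with very different normal behavior.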
When analytics engines are programmed to reason through logical steps, machine reasoning is achieved. This capability can enable an analytics engine to navigate through a number of complex decisions to solve a problem or a complex query.
Machine reasoning can enable analytics to compare multiple possible outcomes and solve for an optimal result, using the same process that a human would. This is an important complement to machine learning, when many similar outcomes are possible from many varied inputs.
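One classic way to implement stepwise reasoning is forward chaining: the engine repeatedly applies if-then rules to known facts until no new conclusions appear. The fact names and rules below are hypothetical, invented to illustrate the technique.

```python
# Hypothetical rule base: (set of required facts, conclusion).
RULES = [
    ({"high_retransmissions", "high_channel_utilization"}, "wireless_congestion"),
    ({"wireless_congestion", "many_clients_on_ap"}, "needs_load_balancing"),
    ({"wireless_congestion", "few_clients_on_ap"}, "needs_channel_change"),
]

def infer(facts: set) -> set:
    """Fire rules whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"high_retransmissions", "high_channel_utilization",
                "few_clients_on_ap"})
print("needs_channel_change" in result)  # the engine reasons in two steps
```

Note the two-step chain: the engine first concludes there is wireless congestion, and only then, combined with the client count, reaches the remediation conclusion, mirroring how a human would reason through the problem.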
The analytics engine will offer corrective actions for a given network event. This can be guided remediation, where the analytics system specifies steps to be performed by the network administrator, or closed-loop remediation, where it sends instructions directly to the automation portion of the network controller for changes to be made automatically.
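The difference between the two remediation modes comes down to where the corrective action is delivered. Here is a minimal, hypothetical dispatch sketch; the action string and mode names are invented for illustration.

```python
def remediate(action: str, mode: str = "guided") -> str:
    """Route a corrective action as guided or closed-loop remediation."""
    if mode == "guided":
        # Present the step to the network administrator for manual execution.
        return f"ADVISORY: administrator should {action}"
    if mode == "closed-loop":
        # Send the instruction directly to the controller's automation layer.
        return f"CONTROLLER: applying '{action}' automatically"
    raise ValueError(f"unknown remediation mode: {mode}")

print(remediate("switch AP-12 to channel 36"))
print(remediate("switch AP-12 to channel 36", mode="closed-loop"))
```

In practice, operators often start with guided remediation and enable closed-loop mode only after they trust the engine's recommendations.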
Streaming telemetry reduces delays in data collection; it can carry anything from simple packet-flow counts to complex, application-specific performance parameters. Systems that can stream more telemetry, from more sources and about more network variables, give the analytics engine better context in which to make decisions.