Low latency is the ability of a computing system or network to provide responses with minimal delay. Actual low latency metrics vary according to the use case.
In IT, latency is the time that elapses between a user request and the completion of that request. Even processes that seem instantaneous have some measurable delay. Reducing such delays has become an important business goal.
Many applications require low latency to improve the user experience and support customer satisfaction by running faster and more smoothly. These include cloud-hosted applications, online meeting applications, and mission-critical computation applications.
When a user, application, or system requests information from another system, that request is processed locally, then sent over the network to a server or system. There, it is processed again, and a response is formed, starting the reply transmission process for the return trip.
Along the way, in each direction, the request passes through network components such as switches and routers, undergoes protocol conversions and translations, and moves between copper cabling, fiber, and wireless transmission. At each step, tiny delays are introduced, which can add up to discernible wait times for the user.
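A simple end-to-end measurement makes these accumulated delays visible. The Python sketch below times how long a TCP connection takes to open; this captures the network round trips of the handshake but not the server-side processing described above. The host and port in the usage comment are illustrative.

```python
import socket
import time

def measure_rtt_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time, in milliseconds, to open a TCP connection to host:port.

    This reflects only the network round trips of the TCP handshake;
    server-side request processing would add further delay on top.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Illustrative usage (any reachable server works):
# print(f"{measure_rtt_ms('example.com', 443):.1f} ms")
```

Running the same measurement repeatedly against the same server shows how much latency varies from moment to moment, even on an otherwise idle network.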
As overall network traffic grows, latency increases for all users: transmissions queue up behind one another and micro-latencies accumulate. The result is high latency, a frustrating delay before a webpage even begins to load.
The geographical distance that data must travel can also have a significant effect. This is why edge computing, the practice of locating data and applications closer to users, is a well-known strategy for reducing latency. In some cases (see below), reducing this distance is a smart, effective way to lower network latency.
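The contribution of distance alone can be estimated from the speed of light in optical fiber, roughly 200 km per millisecond (about two thirds of the speed of light in a vacuum). A short Python sketch, using an illustrative New York-to-London distance:

```python
FIBER_KM_PER_MS = 200.0  # light in optical fiber covers ~200 km per millisecond

def propagation_delay_ms(distance_km: float) -> float:
    """Best-case one-way delay from distance alone, ignoring every hop."""
    return distance_km / FIBER_KM_PER_MS

# New York to London is roughly 5,600 km great-circle distance:
# 5600 / 200 = 28 ms one way, so at least ~56 ms round trip --
# a floor that no amount of equipment tuning can remove.
```

Moving the application to an edge site 100 km from the user shrinks that floor to 0.5 ms one way, which is the rationale behind edge computing.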
A low latency network is one that has been designed and optimized to reduce latency as much as possible. However, a low latency network can't improve latency caused by factors outside the network.
Latency jitters when it deviates unpredictably from its average; in other words, when it is low one moment and high the next. For some applications, this unpredictability is more problematic than consistently high latency.
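Jitter can be quantified in several ways; the sketch below uses one simple measure, the mean absolute deviation of latency samples from their average (an illustrative choice; RFC 3550, for example, defines a smoothed estimator for RTP streams).

```python
from statistics import mean

def jitter_ms(latencies_ms):
    """Mean absolute deviation of latency samples from their average."""
    samples = list(latencies_ms)
    avg = mean(samples)
    return mean(abs(x - avg) for x in samples)

steady = [20.0, 21.0, 20.0, 21.0]   # consistent latency
erratic = [5.0, 90.0, 8.0, 85.0]    # sometimes faster, but wildly variable

print(jitter_ms(steady))   # 0.5
print(jitter_ms(erratic))  # 40.5
```

The second stream is sometimes faster than the first, yet its unpredictability makes it far worse for real-time media such as voice and video.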
Ultra-low latency is measured in nanoseconds, while low latency is measured in milliseconds. An ultra-low latency system therefore delivers responses orders of magnitude faster than a low latency one.
For new deployments, latency is improved through the use of a next-generation programmable network platform built on software-defined hardware, programmable network switches, smart network interface cards, and FPGA-based software applications.
In an existing network, latency can be reduced through incremental improvements such as those described below.
Low latency is critical for any use case that involves high volumes of traffic over the network. This includes applications and data that reside in the data center, cloud, or edge where the networking path has become more complex, with more potential sources of latency.
Today's interactive video conference and online meeting applications require low latency. With a discernible delay between spoken words and live video, there is no smooth user interaction: participants talk over each other, causing misunderstandings.
In gaming, transmissions from third parties, that is, other players, must be incorporated, perhaps from the other side of the world. In addition, a rapid pace of action—and interaction—has to feel live.
Low latency trading in financial markets is the practice of leveraging millisecond advantages in network speed to receive information more quickly than other traders.
Low latency trading is a highly competitive race where trading firms compete by having the fastest infrastructure and, more importantly, locating their operations as close to the trading exchange as possible to reduce distance-based latency.
Low latency trading is often combined with algorithmic trading; the combination is known as high-frequency trading, one form of programmed trading. Entire trading firms have been founded to pursue this strategy; in fact, programmed trading is said to account for 70 percent of daily volume on the New York Stock Exchange.
Improving latency may be a matter of making small improvements in several places and accumulating the time savings, which together provide a meaningful improvement in network performance. These improvements may include:
A programmable network platform allows developers and engineers to create and deploy new feature extensions, upgrades, and custom applications to lower network latency and solve networking challenges.
SmartNICs are FPGA-programmable network interface cards that offer the flexibility and capabilities of software to support ultra-low latency, performance-based architectures.
Firmware development kits, or FDKs, provide a software development framework for adding application-specific intelligence and functionality to FPGA-based programmable switches and SmartNICs.
Field-programmable gate array (FPGA) devices and software can be programmed to serve specific use cases that reduce latency and solve today's demanding network challenges.
A storage area network, or SAN, is a subnetwork of data center storage devices shared across a high-speed network that allows users to access the storage from any location.
IT operations management (ITOM) software centralizes the tools needed to manage, provision, and monitor the performance, status, and delivery of computing, networking, and application resources.