Lower Latency

Lower latency refers to a minimal delay in the processing of computer data over a network connection; the lower the latency, the closer a connection comes to real-time access. Latency itself is the time a message takes to traverse a computer network, typically measured in milliseconds (ms). Latency below 100 ms is considered good, and below 50 ms is very good. Typical DSL or cable Internet connections have latencies under 100 ms, while satellite connections usually have latencies of 500 ms or higher. In general, LAN connections have lower latency than WAN connections.
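One simple way to get a feel for these numbers is to time a TCP handshake, which roughly approximates one network round trip. The Python sketch below does that; the host is illustrative, the measurement also includes DNS resolution time, and the 50 ms / 100 ms labels mirror the guideline above.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a rough proxy for one network round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake complete; close the connection immediately
    return (time.perf_counter() - start) * 1000.0

# Illustrative host; any reachable server on port 443 works.
ms = tcp_connect_latency_ms("example.com")
label = "very good" if ms < 50 else "good" if ms < 100 else "high"
print(f"~{ms:.1f} ms ({label})")
```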

Latency is affected by propagation delay (the time a signal needs to travel the physical distance of the link, a property of the medium), transmission delay (determined by packet size and link bandwidth) and processing delay (added at proxy servers and other network hops). The impact of latency on network throughput can be temporary (lasting a few seconds) or persistent (constant), depending on the source of the delays. Excessive latency creates bottlenecks that prevent data from filling the network pipe, decreasing throughput and limiting the effective bandwidth of a connection.
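The "network pipe" effect can be made concrete with the standard TCP bound: a sender with a fixed window can have at most one window of data in flight per round trip, so throughput is capped at window / RTT no matter how fast the underlying link is. The sketch below, assuming an illustrative 64 KiB window, shows how latency alone limits effective bandwidth.

```python
# A sender can have at most one window of data in flight per round trip,
# so throughput is bounded by window / RTT regardless of link bandwidth.
WINDOW_BYTES = 64 * 1024  # assumed (illustrative) TCP window size

def max_throughput_mbps(rtt_ms: float, window_bytes: int = WINDOW_BYTES) -> float:
    """Upper bound on throughput in Mbit/s for a given round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

for rtt_ms in (10, 100, 500):  # LAN-like, WAN-like and satellite-like RTTs
    print(f"RTT {rtt_ms:3d} ms -> at most {max_throughput_mbps(rtt_ms):6.2f} Mbit/s")
```

With this fixed window, raising the round-trip time from 10 ms to 500 ms cuts the ceiling from roughly 52 Mbit/s to about 1 Mbit/s, which is exactly the "pipe not filling" effect described above.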

Lower latency is especially important in industries that rely on real-time applications and live-streaming graphics, such as banking, diagnostic imaging, navigation, stock trading, weather forecasting, collaboration, research, ticket sales, video broadcasting and online multiplayer gaming. Cloud latency, the amount of time it takes for a cloud-based service to respond to a user's request, is an important criterion when choosing a cloud provider. It is affected by where users connect to the cloud, which cloud data center they connect to, which network provider is used, the route the network traffic takes and other factors.
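A rough way to compare cloud latency across candidate regions is to time a full request to each endpoint. The sketch below uses only Python's standard library; the endpoint URLs are hypothetical placeholders, and note that the measurement includes DNS, TLS and server processing time, not just network delay.

```python
import time
import urllib.request

# Hypothetical regional endpoints; substitute your provider's actual URLs.
ENDPOINTS = {
    "us-east": "https://us-east.example.com/health",
    "eu-west": "https://eu-west.example.com/health",
    "ap-south": "https://ap-south.example.com/health",
}

def request_latency_ms(url: str, timeout: float = 5.0) -> float:
    """Time a full HTTPS request as a coarse cloud-latency measurement."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # wait for the first byte of the response
    return (time.perf_counter() - start) * 1000.0

for region, url in ENDPOINTS.items():
    try:
        print(f"{region}: ~{request_latency_ms(url):.0f} ms")
    except OSError as exc:  # covers DNS failures, timeouts and HTTP errors
        print(f"{region}: unreachable ({exc})")
```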