How Does Network Latency Impact Application Performance?

Network latency is the time it takes a data packet to travel from its source to its destination across a network. In terms of user experience, network latency is how quickly a user’s action produces a response from the network: how fast a web page loads over the internet, or how responsive an online game is to the player’s commands.

How is network latency measured?

The measurement of network latency is called the ping rate and is typically expressed in milliseconds (ms). A ping is usually reported as either Round Trip Time (RTT) or Time to First Byte (TTFB). RTT is the total time it takes for a data packet to travel from the client to the server and back, and it is the standard reporting measurement. TTFB is the time from when a client sends a request to when it receives the first byte of the server’s response; each one-way leg of that exchange should take roughly half the RTT on the same path.
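As a rough illustration, RTT and TTFB can be approximated in a few lines of Python. A true ping uses ICMP echo packets, which require raw-socket privileges, so this sketch times a TCP handshake as a stand-in for RTT and an HTTP fetch for TTFB; the host name is a placeholder, and the TTFB figure will also include server processing time.

```python
# Approximate RTT by timing a TCP three-way handshake, and TTFB by
# timing an HTTP request to the first byte of the response. This is a
# rough stand-in for the standard ping tool, not a replacement.
import socket
import time
import urllib.request

HOST = "example.com"  # placeholder target host

def tcp_rtt_ms(host: str, port: int = 80) -> float:
    """Time a TCP handshake as a proxy for round trip time."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def ttfb_ms(url: str) -> float:
    """Time from sending an HTTP request to reading the first byte."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read(1)  # block until the first byte arrives
    return (time.perf_counter() - start) * 1000

print(f"RTT  ~ {tcp_rtt_ms(HOST):.1f} ms")
print(f"TTFB ~ {ttfb_ms('http://' + HOST):.1f} ms")
```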

How do high and low latency impact network performance?

The higher the network latency, the longer it takes data packets to reach their destinations, and the slower the network feels to its users. With exceptionally high latency, web pages load slowly or not at all. Email messages take longer to send. Large files take minutes or hours to download and upload instead of seconds. Streaming and video services become choppy or fail to load. Push notifications on phones arrive late or not at all.

Application Performance and Network Latency

Web applications, and any application that depends on a network connection, suffer when network latency rises. Paradoxically, the arrival of broadband has made latency a much bigger problem for application performance.

In the 90’s and early 2000’s, a personal internet connection offered only a tiny fraction, roughly 0.01%, of today’s broadband speeds. Files like images or web pages took far longer to download. Relatively speaking, latency was negligible because it was too small to notice against the slow download speeds. A common sight when accessing web pages was images rendering slowly, line by line.

With the advent of broadband, bandwidth grew wide enough to move large files at speeds vastly faster than in the 90’s. Instead of image files downloading line by line, entire video files can be downloaded in minutes. Effectively, broadband made poor latency more apparent by contrast. When latency interrupts a large transfer like streaming video, the user feels it more sharply, because today’s users have been conditioned to expect fast access and download times.

Indeed, companies have seen significant usage and sales drops when latency worsens. In the case of Bing, Microsoft’s search engine, a two-second slowdown in page load performance reduced per-user revenue by 4.3%.

Increasing bandwidth does not make up for poor network latency either. The physical distance between the server and the client is the top contributor to network latency, and extra bandwidth does nothing to shorten it.
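A back-of-the-envelope model makes this concrete. If fetching a small web object costs a couple of round trips plus the transfer time, the round trips dominate, and raising bandwidth barely moves the total. The numbers below are illustrative, not measurements:

```python
# Rough model of fetching a small web object: a TCP handshake plus the
# request/response each cost a round trip, then the payload streams at
# the link's bandwidth. Numbers are illustrative, not measurements.
def fetch_time_ms(size_bytes: float, bandwidth_mbps: float,
                  rtt_ms: float, round_trips: int = 2) -> float:
    transfer_ms = size_bytes * 8 / (bandwidth_mbps * 1e6) * 1000
    return round_trips * rtt_ms + transfer_ms

for mbps in (25, 100, 1000):
    t = fetch_time_ms(size_bytes=50_000, bandwidth_mbps=mbps, rtt_ms=80)
    print(f"{mbps:>5} Mbps link, 80 ms RTT: {t:.1f} ms total")

# Going from 25 Mbps to 1 Gbps saves ~16 ms of transfer time, but the
# 160 ms spent on round trips is untouched. Only reducing latency
# (e.g. moving the server closer) shrinks that term.
```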

What are the contributing factors for latency performance issues?

Three fundamental characteristics of a network impact latency. Every network is different, however, and each configuration should take these characteristics into account when addressing network performance.

Transmission Technology

Every physical point that data passes through on its way from the client to the server and back impacts network latency. Every Wi-Fi router, WAN link, switch, Ethernet cable, or fiber optic cable transfers data at a different speed depending on its design, its function, and the material the signal travels through. The technology with the greatest electrical resistance will be the slowest section of a transmission; for instance, data traversing copper cables adds more to network latency than data on high-speed fiber optic cables.
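One related, easy-to-quantify component is serialization delay: the time each device needs to clock a packet’s bits onto the wire at its line rate. Every hop adds its own serialization delay, so the slowest segment sets the floor. The link speeds in this sketch are illustrative:

```python
# Serialization delay: the time a device needs to push one packet's
# bits onto the wire at the link's line rate. Every hop on the path
# adds its own, so the slowest segment dominates. Speeds below are
# illustrative examples, not measurements of specific products.
PACKET_BITS = 1500 * 8  # a typical 1500-byte Ethernet frame

links_mbps = {
    "DSL over copper": 10,
    "Fast Ethernet": 100,
    "Gigabit fiber": 1000,
    "10G fiber backbone": 10_000,
}

for name, mbps in links_mbps.items():
    delay_us = PACKET_BITS / (mbps * 1e6) * 1e6
    print(f"{name:<20} {delay_us:8.1f} µs per packet")
```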

Propagation Delay

Closely associated with transmission technology, propagation delays are ‘delays of distance’ (propagation here means how quickly data can move to a given location). Even with all hardware delays stripped away, the speed of light still limits transmission: a cross-country fiber optic transmission should take light 15-30 ms, though a ping test shows it takes approximately 69 ms in practice. Data cannot propagate across the network faster than this physical limitation.
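This floor is easy to estimate: light in fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km/s. The sketch below uses assumed route distances to reproduce the 15-30 ms figure:

```python
# Propagation delay floor: light in fiber travels at roughly 2/3 of
# its vacuum speed (~200,000 km/s). Route distances are assumed,
# straight-line approximations for illustration.
SPEED_IN_FIBER_KM_S = 200_000  # ~0.67c

def propagation_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

for route, km in [("New York - Los Angeles", 4_000),
                  ("New York - London", 5_600)]:
    one_way = propagation_ms(km)
    print(f"{route}: {one_way:.1f} ms one way, {2 * one_way:.1f} ms RTT")

# Real pings run higher (the ~69 ms cross-country figure above)
# because cables don't follow straight lines and every hop adds
# routing, switching, and queuing delay on top of this floor.
```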

Routing, Switching and Queuing Delays

Routers and switches act as traffic controllers throughout networks, directing and optimizing the path of each data packet to prevent network congestion. When routers and switches cannot direct traffic fast enough or experience high traffic volumes, data packets are made to wait in buffers until they can be processed. This waiting time is the queuing delay.
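A minimal first-in-first-out sketch shows how queuing delay builds up when packets arrive faster than a link can drain them. The rates and sizes here are illustrative:

```python
# Minimal FIFO queue sketch: packets arrive in a burst faster than the
# link can drain them, so each one waits behind those ahead of it.
# Rates and sizes are illustrative.
LINK_MBPS = 100
PACKET_BITS = 1500 * 8
service_time_us = PACKET_BITS / (LINK_MBPS * 1e6) * 1e6  # 120 µs/packet

# Ten packets arrive 50 µs apart, but the link needs 120 µs per packet,
# so the wait grows by 70 µs with every arrival.
link_free_at = 0.0
for i in range(10):
    arrival = i * 50.0
    start = max(arrival, link_free_at)   # wait if the link is busy
    queuing_delay = start - arrival
    link_free_at = start + service_time_us
    print(f"packet {i}: waited {queuing_delay:6.1f} µs in the buffer")
```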

Related Terms

Bandwidth

Bandwidth is the amount of data that can travel across a network at one time. To illustrate: the diameter of a hose is its bandwidth; the wider the hose, the more drops of water can go in one end and out the other in one go. In the same analogy, network latency is the time it takes the first drop of water to travel from one end of the hose to the other.
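Putting the analogy in numbers: with an assumed 40 ms latency, the first byte arrives at the same moment regardless of bandwidth; only the time to drain the rest of the file changes.

```python
# The hose analogy in numbers: latency determines when the first drop
# (byte) arrives; bandwidth governs how fast the rest follow.
# The 40 ms latency and 10 MB file size are assumed for illustration.
def first_byte_ms(latency_ms: float) -> float:
    return latency_ms  # independent of bandwidth

def last_byte_ms(latency_ms: float, size_mb: float, mbps: float) -> float:
    return latency_ms + size_mb * 8 / mbps * 1000

for mbps in (50, 500):
    print(f"{mbps} Mbps: first byte at {first_byte_ms(40):.0f} ms, "
          f"10 MB done at {last_byte_ms(40, 10, mbps):.0f} ms")

# Widening the hose (more bandwidth) drains the file faster, but the
# first drop still takes the same 40 ms to arrive.
```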