Latency Special: Latency rates 2012
Feature

In a rare attempt to establish some public benchmarks, Capacity has commissioned Renesys, the internet monitoring specialist, to calculate latencies between the world’s 10 largest centres of economic activity. As public data dries up, Richard Irving reports on the findings.

If ever there were proof of the speed at which the race to zero is nearing its end game, then it can be found in the sudden paucity of public data on latency rates.

There was a time when operators would proudly vaunt their connectivity speeds. But that has led to an "arms race" scramble, as competitors fight to slice milliseconds – and increasingly, microseconds – off the routes of rival providers. Now, the dynamic is changing.

The law of diminishing returns is starting to weigh heavily on some key routes. On the one hand, the incremental improvements in latency are getting smaller – and thus easier for rivals to exceed as each kink on a route is straightened out; on the other, the investment required to leapfrog ahead of competitors is getting ever larger.

In short, to make latency rates public at this late stage in the race is to risk ruling yourself out of contention for the winners' podium. And yet the relative value of latency has never been more critical. Away from the deep-pocketed world of high-frequency trading, low latency can be a murky business.

As Jim Cowie, chief technology officer of Renesys, explains, most enterprise customers who use IP backbones are not obsessed with millisecond-perfect performance, instead viewing the internet as a utility capable of carrying their traffic worldwide for pennies on the dollar, compared to traditional leased line alternatives. "The cost savings are real, but to take advantage of them the enterprise buyer really has to keep their eye on performance", Cowie explains.

Although the race to zero is effectively defined by the ultra-low latency business vertical, Capacity specifically chose to establish benchmarks on an IP backbone because this area of the market is more relevant to most business users. As the data illustrates, internet customers might experience latencies as much as three times slower than the optimal rate implied by the speed of light and well over double those typically offered on ultra-low latency routes.

Some of the problems are endemic to any WAN project: the slower speed of light in fibre versus vacuum, for example, or the suboptimal physical paths taken by submarine cables as they wend their way around continents. Others are unique to the internet’s cheapest-to-deliver model, which can tempt users to overlook factors like variability and stability.
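The gap between theory and practice is easy to quantify. As a rough illustration – the figures below are assumptions for the sake of the arithmetic, not Renesys data – light in silica fibre travels at roughly c/1.468, so even a perfectly straight cable imposes a hard floor on round-trip times:

```python
# Back-of-the-envelope lower bound on round-trip time between two cities,
# assuming a dead-straight fibre path and a typical silica refractive
# index of ~1.468. Real cables are longer and add equipment delay.

C_VACUUM_KM_S = 299_792                      # speed of light in vacuum, km/s
FIBRE_INDEX = 1.468                          # assumed refractive index of fibre
C_FIBRE_KM_S = C_VACUUM_KM_S / FIBRE_INDEX   # ~204,000 km/s in fibre

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over a straight fibre path."""
    return 2 * distance_km / C_FIBRE_KM_S * 1000

# Assumed great-circle distance London-New York: ~5,570 km
print(f"{min_rtt_ms(5570):.1f} ms")
```

On those assumptions the London–New York floor comes out in the mid-50ms range, which makes the two- to three-fold premium observed on ordinary internet paths easy to see.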

Oversubscribed links can create unpredictable congestion, causing round trip times on a popular route to spike at the busiest times of day. Sudden shifts in latency can also occur due to instability in the underlying routing table, as providers decide to change their preferred paths to each other.

"That doesn’t add a lot of latency from a web user’s perspective – maybe 10-20ms – but for an enterprise customer who’s expecting the fastest direct path, it can be really puzzling", says Cowie.

How we calculated the data

To get these numbers, Renesys gathered end-to-end traces from more than 75 vantage points worldwide over a period of a week.

The firm picked the most popular route, defined by the number of customers using it at any given time.

By geolocating the source, destination, and intermediate hops visited by these traces, a picture emerged of some common round-trip latency patterns that might be experienced by real internet customers in the world’s major centres of economic activity.
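The aggregation step can be sketched in a few lines. This is an illustrative reconstruction only – the record format and city names are assumptions, not Renesys's actual data model – but it shows how per-pair latencies emerge from many individual trace samples:

```python
# A minimal sketch: given trace samples gathered from many vantage points,
# compute a median round-trip latency per (source, destination) city pair.
# The sample values below are invented for illustration.
from collections import defaultdict
from statistics import median

# (source city, destination city, observed round-trip time in milliseconds)
samples = [
    ("London", "New York", 74.2),
    ("London", "New York", 71.8),
    ("London", "New York", 90.5),   # congestion spike at a peak hour
    ("Tokyo", "London", 242.0),
    ("Tokyo", "London", 238.4),
]

def median_rtt(samples):
    """Group samples by city pair and take the median RTT of each group."""
    by_pair = defaultdict(list)
    for src, dst, rtt in samples:
        by_pair[(src, dst)].append(rtt)
    return {pair: median(rtts) for pair, rtts in by_pair.items()}

for (src, dst), rtt in sorted(median_rtt(samples).items()):
    print(f"{src} -> {dst}: {rtt:.1f} ms")
```

Using the median rather than the mean keeps a single congested sample from skewing the benchmark – one reason a week of samples gives a fairer picture than a one-off trace.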
