Latency Special: Investigation into the markets and technologies driving low latency
Feature

Why is faster actually more desirable? Richard Irving investigates the market conditions and technologies driving the worldwide thirst for low latency services.

How do you put a value on low latency? The speed-crazed financial sector, which relies on ultra-low latency networks to drive low-risk high-volume trading profits, reckons that a single millisecond (ms) advantage could be worth upwards of $100 million a year. But a far more interesting number comes from Amazon. The world’s biggest e-tailer estimates that every millisecond delay to its online service costs the group $4.8 million in lost sales.

Scale that up to latencies relevant to Amazon’s CloudFront content delivery network, and the number shoots up to $500 million. This dawning realisation has led the company to invest heavily in low latency initiatives. The latest will cut the time it takes to transmit dynamic web pages, including video streaming, by anywhere from around 45ms to 65ms – a saving, in lost-sales terms, of roughly $200-$300 million.
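As a back-of-envelope check, those figures hang together. The sketch below uses only the numbers quoted above; the per-millisecond cost is the estimate cited here, not an Amazon disclosure:

```python
# Sanity check on the quoted figures. COST_PER_MS is the article's
# estimated lost sales per millisecond of delay, per year.
COST_PER_MS = 4.8e6

for saving_ms in (45, 65):
    recovered = saving_ms * COST_PER_MS
    print(f"A {saving_ms}ms speed-up is worth roughly ${recovered / 1e6:.0f}m a year")

# A 45ms speed-up is worth roughly $216m a year
# A 65ms speed-up is worth roughly $312m a year
```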

It’s not just Amazon that is developing an obsession for speed. According to Kerstin Dinklage, senior VP of sales and business development at VTL WaveNet, the drive to cut latencies is coming from every corner of the wholesale market: "Over the last year, we have seen a very marked increase in the number of customers specifically looking for low latency connectivity. Three years ago, people didn’t really ask what the round trip delay on their network was. Now they want detailed Service Level Agreements (SLAs) setting out very specific tolerances. It’s a big change."

But it may not necessarily be a welcome change. In days gone by, a network was judged on its uptime, prompting engineers to build security, resiliency and redundancy into their designs to ensure that their systems ran smoothly and efficiently. Some of that redundancy has now been sacrificed in the drive to slash latency. Moreover, the often brutal "race to zero" mentality that has defined network evolution in the financial services sector is starting to rear its head in other markets, with an added complication: while customers expect lower and lower latencies, they are not necessarily willing to pay for them.

As Jonathan Wright, vice president of service provider sales at Interoute, points out, customers no longer look upon low latency as a luxury: "Low latency is always desirable. The question is whether it is financially viable. If you can offer low latency on a route and compete against slower providers at the same price points, then you will always win the business."

It is perhaps testament to the number of real-time applications cementing themselves into the fabric of everyday life that super-fast connectivity is now such an important factor in overall network performance.

In particular, the proliferation of video services such as video conferencing and telepresence – and indeed the move towards unified communications, where firms are looking to bring together real-time voice, video and data – is thrusting the issue of latency centre stage.

And connectivity speeds become even more important as companies start to think about moving these vital applications into a centralised cloud environment. "Video is driving the agenda – and the problem with video is that the more bandwidth you throw at it, the more it consumes, which naturally tends to slow everything down anyway", explains John Hammond, vice president of Business Network Services at NTT Europe.



Latency and the cloud

Measuring latency around cloud-based applications is notoriously tricky, which is perhaps one reason why network developers have shied away from any marketing push that puts ultra-fast connectivity in the spotlight. In the financial services sector, latency is relatively easy to calculate because most links are point-to-point along virtual private lines. But the end points on a cloud application may not necessarily be fixed - that is, after all, the whole point: users could be sitting on the end of a high-speed fibre line in the middle of a dense metro, or hanging tenuously on the end of a satellite uplink in the middle of the African veldt.
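To make the contrast concrete, here is a minimal sketch of how a round trip to a cloud endpoint might be estimated in practice, by timing a TCP handshake. The hostname is a placeholder, and handshake timing only approximates one network round trip; point-to-point private lines in finance are typically measured with dedicated hardware instead:

```python
# Minimal sketch: estimate round-trip latency to a cloud endpoint by
# timing TCP handshakes. The same code run from metro fibre and from a
# satellite uplink will return wildly different numbers - which is
# exactly why cloud latency SLAs are hard to write.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

print(f"median RTT: {tcp_rtt_ms('example.com'):.1f}ms")  # placeholder host
```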

Perhaps just as importantly, cloud systems tend to be shared architectures, either because they run more than one application or because they are accessed by more than one set of users – and that makes analysing latency performance a challenge.

Nevertheless, companies that take for granted that latency-sensitive business applications will work just as well in the cloud as they did on a local network do so at their peril, warns Hammond. "Bear in mind that a lot of networks were never designed with these technologies in mind, so when you start to move applications over to the cloud, you can put considerable strain on your network, especially at the access points to the cloud."

Sten Nordell, chief technology officer at Swedish network provider Transmode, agrees: "Little consideration is being given to how some of these applications behave in what is essentially a completely different environment. We know of one very popular enterprise resource planning (ERP) service which crashes every time you try to run it as a cloud service – it was designed to interface with a low latency local network and it just can’t cope with the cloud – the latency and jitter makes the application think that it has lost connectivity."
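The failure mode Nordell describes is easy to reproduce in miniature. The sketch below is illustrative only – the timeout and jitter figures are assumptions, not measurements of any real ERP product – but it shows how a keep-alive threshold tuned for a local network misreads ordinary cloud latency and jitter as a dead connection:

```python
# Illustrative only: a heartbeat timeout chosen for a <5ms LAN,
# confronted with simulated cloud latency and jitter.
import random

LAN_TIMEOUT_MS = 50   # assumed LAN-era heartbeat timeout

def cloud_rtt_ms(base: float = 40.0, jitter: float = 30.0) -> float:
    """Simulated cloud round trip: base latency plus random jitter."""
    return base + random.uniform(0, jitter)

late = sum(cloud_rtt_ms() > LAN_TIMEOUT_MS for _ in range(1000))
print(f"{late / 10:.1f}% of heartbeats exceed the LAN-tuned timeout")
# On the LAN this is ~0%; over the cloud path most replies arrive
# "late", so the application concludes it has lost connectivity.
```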

Just about every global enterprise is looking to switch mission-critical business applications to the cloud and, for them, low latency is clearly becoming a priority. Indeed, companies can alleviate some of the pressures on their creaking corporate networks by migrating high-performance applications straight to the cloud as quickly as possible.

But more importantly, argues Nordell, a new generation of companies will emerge whose business models are based entirely in the cloud, and for them, low latency will be vital: "Companies need to develop specialist cloud access networks rather than rely on best-effort Ethernet links and they need to do it now", he warns.

That means shifting the focus away from looking at how fast traffic is flowing inside the cloud – essentially from data centre to data centre - to the way in which it enters and leaves the cloud. "We’re finding that the stress points are by and large on the access side", says NTT’s Hammond. "A lot of organisations have no idea how poorly these stretches of their network are performing." But considering how much money is invested in these applications, how much a company’s staff might be dependent on them, and indeed how much revenue might be flowing through to the bottom line from them, you quickly start to appreciate just how pressing it is that companies sort these latency problems out, he adds.

In some instances, that might mean going back to the drawing board: "The key to driving down latency in cloud access networks is to get operators to realise that their ultimate focus should be on delivering the application to the end user as fast as possible, not in making sure that every node in the network has access to everything, everywhere", explains Nordell.

But in other instances, it might simply be a question of using acceleration technologies to push some business-critical applications in and out of the cloud quicker than others. "The challenge facing service providers is to provide enhanced application aware services to ensure that the end user experience is of the highest quality, both across corporate networks and cloud-based applications. We are beginning to see a lot of work of this type coming through, and that will only increase throughout the next year", says Hammond.
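One low-level building block of such application-aware handling is traffic marking. As a hedged sketch – the class assignments below are our illustration, and whether intermediate networks honour the marks is entirely down to the providers in the path – an application can set standard DiffServ code points so that routers which respect them queue latency-sensitive flows ahead of bulk traffic:

```python
# Sketch: marking sockets with standard DiffServ code points (DSCP),
# expressed here as values for the IP TOS byte. Honouring the marks is
# up to each network in the path; this only requests priority.
import socket

EF  = 0xB8  # Expedited Forwarding (DSCP 46): real-time voice/video
CS1 = 0x20  # Class Selector 1 (DSCP 8): lower-priority bulk traffic

def marked_socket(tos: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # POSIX platforms
    return sock

video_conference = marked_socket(EF)   # latency-sensitive: jump the queue
overnight_backup = marked_socket(CS1)  # latency-tolerant: take the slow lane
```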



The next generation of latency

Another key area where low latency is fast becoming critical is in the roll-out of next-generation mobile networks.

If the high-frequency trading market is worth $1 billion to wireless providers, then the deployment of LTE mobile backhaul is worth at least 10 times that. From a network infrastructure point of view, the impetus is to increase the number of small cell sites connected to the backhaul. Currently, it is estimated that there are around 300,000 small cell sites in the US.

That is expected to triple over the next two to three years as network operators provision for the explosion in mobile traffic that is set to swamp networks. Here too, the twin spectres of distance and capacity will require operators to drive latencies considerably lower. "Running fibre to cell sites might be the dream. But it is precisely that – a dream", says NeXXcom’s Jay Lawrence. "The priority is to create a faster technology that is cost effective and that requires service providers to embrace radio networks wholesale."

Many of the applications that are being designed to run on 4G phones, such as real-time gaming and video streaming, require low latencies in order to meet quality of experience benchmarks. Moreover, backhaul networks have their work cut out to match the significant improvements in connectivity that are likely to come through at the radio interface. In some instances, the round trip delay between the device and the core network can be as little as 10ms on an LTE network – one tenth that of the latency on a 3G offering.
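A rough latency budget shows why the backhaul inherits the pressure. In the sketch below, the 50ms end-to-end target is an assumed quality-of-experience figure for real-time gaming, not an operator specification; the radio round-trip numbers are those quoted above:

```python
# Back-of-envelope latency budget. E2E_TARGET_MS is an assumed QoE
# target; the radio RTTs are the figures quoted in the article.
E2E_TARGET_MS = 50

for label, radio_rtt in (("3G", 100), ("LTE", 10)):
    remaining = E2E_TARGET_MS - radio_rtt
    print(f"{label}: {radio_rtt}ms on the air interface leaves "
          f"{remaining}ms for backhaul, core and server")

# 3G:  the radio leg alone blows the budget (-50ms left over)
# LTE: 40ms remains - so backhaul and core become the bottleneck
```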

"Most of our customers who are rolling out LTE networks have significant concerns about latency on the backhaul", admits Transmode’s Nordell, "and every development in LTE will make latency a bigger issue."

The good news is that LTE networks have been designed to cope with an expected surge in demand for capacity. The bad news is that latencies will always be dictated by yet-to-be-invented applications. If mobile M2M, for example, takes off as expected and a large number of devices swamp the network with demands for real-time data, then network operators may at some point be forced to sacrifice latency in order to bolster capacity.

Moreover, LTE is a packet network, so there may eventually come a time when infrastructure providers oversell the backbone and operators implement queuing. Right now, queuing is a dirty word, but once network operators start adding delay to some packets, latency will shoot right to the top of the list of priorities.
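Basic queueing theory shows why. In the textbook M/M/1 model – a simplification, since real backhaul traffic is burstier – the mean delay through a queued link grows without bound as utilisation approaches 100%. The 1ms service time below is an assumption for illustration:

```python
# M/M/1 mean time in system: T = S / (1 - rho), where S is the mean
# service time and rho the link utilisation. SERVICE_MS is assumed.
SERVICE_MS = 1.0

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    delay = SERVICE_MS / (1 - rho)
    print(f"utilisation {rho:>4.0%}: mean delay {delay:6.1f}ms")

# utilisation  50%: mean delay    2.0ms
# utilisation  99%: mean delay  100.0ms
```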

"Right now oversubscription and queuing are not prevalent in the provider networks. But as LTE networks mature and operators become more sophisticated with traffic management, prioritising real-time low-latency traffic will become an important consideration in backhaul networks", an insider at Zayo says.
