Upgrading IP infrastructure - The winners and losers in the race to 100G
Feature


The single biggest overhaul of IP infrastructure for a generation is upon us. Richard Irving looks at the winners and losers on the road to 100G.

 


Over the next few months, Verizon will open up a new front in the billion-dollar reboot of America’s IP infrastructure when it finalises plans to roll out its trail-blazing 100G network to a clutch of key metro areas.

The move, which insiders say is scheduled for the turn of the year, builds on the telecoms giant’s ground-breaking efforts to establish a long-haul backbone capable of delivering 100 Gigabits (G) of data a second, and highlights the speed at which service providers are moving to implement a new generation of networks that can accommodate the relentless demand for bandwidth.

For now, Verizon is staying tight-lipped about details of the deployment. Speaking exclusively to Capacity magazine, Mike Millegan, president of Verizon Global Wholesale, would say only that the carrier’s metro plans are part of a wider programme that will also include upgrades to several ultra-long-haul routes: “Clearly, we’ve still got a lot of exciting work to do - this programme is very much driven by customer demand and it is that demand that will help the organisation finalise the next phase of our 100G roll-out.”

Verizon, which operates one of the largest IP networks in the world, has been something of a “poster child” for 100G technology. In 2007, it tested the first 100G connection on a live network and last year it installed the world’s first fully compatible 100G link on a 750km stretch of fibre between Paris and Frankfurt.

But rivals are catching up. Scarcely a week goes by without a service provider racking up a new milestone of sorts in the sector. 



A trickle turning to a flood


Last month, for example, it was the turn of Beltelecom, the state-owned Belarusian incumbent, to announce a new “first” - a 1,200km span linking Grodno and Vitebsk with an ultra-broad 100G connection.

The operator joins more than 50 carriers, including BT, China Telecom, France Telecom-Orange, Portugal Telecom and TelstraClear, that have announced 100G network upgrades in recent weeks. And this rush of announcements, say analysts, is but a trickle compared with the deluge expected over the next few months.

Driving this migration is a desperate need to squeeze more efficiency out of existing fibre. By pushing at the limits of coherent technology, which can pick out signals amid a wash of other background noise, network developers have been able to pack 10 times as much information onto a single fibre wavelength as they were able to before, making 100G fast, efficient and cost effective.

“Service providers decided a long time ago that coherent technology was the future”, explains Steve Alexander, chief technology officer for Ciena, the network equipment maker. “Right now, it’s just a question of figuring out how to migrate your network over to it. The age of 100G has officially arrived.”

According to Infonetics Research, sales of 100G equipment are already growing at a faster rate than sales of legacy technologies such as 40G and 10G. The specialist telecoms consultancy estimates that suppliers will sell up to 6,000 100G ports (connectors) worth an estimated $360 million this year – almost three times as many as in the whole of 2011 – and that by 2013 sales will triple again to more than $1 billion.

By 2014, the consultancy forecasts, 100G revenues will outstrip those for 40G, and by 2016 it expects 100G to be the dominant long-haul protocol. Even then, network capacity could struggle to keep pace with demand: Cisco estimates that global IP networks will be asked to carry around 60 Exabytes of video data by 2015 – equivalent to around 10 times the number of words ever spoken by humankind.

In all, says Cisco, around 1 Zettabyte of digital data, equivalent to 250 billion DVDs, will course through global networks in 2015 alone and by 2018 the figure will rise to a truly mind-boggling 7 Zettabytes.

While video content will undoubtedly drive capacity demands in the future, other users are pushing the boundaries of capacity today. The New York Stock Exchange, for example, processes 22.4 billion digital messages every trading day – around 7.5 times more than the number of internet searches that Google handles over the same period.

The stock exchange’s network can already cope with 2.87 Terabits (T) of data a second across a variety of circuits, including 13 recently upgraded 100G wavelengths, but it desperately needs more bandwidth – and fast.

Elsewhere, major developments in scientific research are also soaking up capacity. The Large Hadron Collider at Cern, for example, is churning out 25 Petabytes of data every year – and expects to generate a thousand times more in the next two to three years as scientists accelerate their efforts to unlock the mysteries of the “Big Bang”.

According to some estimates, it would take physicists at the Collider around 275 days to transfer a year’s output over a 10G Ethernet wavelength. At 100G, that drops to around 27.5 days. In the research and education sector, upgrading systems architecture is not merely desirable; it is vital to driving scientific endeavour forward.
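That arithmetic is easy to check. The short Python sketch below uses the 25 Petabytes figure quoted above and a single wavelength at its nominal line rate; the effective-throughput factor is an assumption standing in for protocol overhead and less-than-perfect utilisation, chosen so the result lines up with the quoted figures.

```python
# Rough transfer-time check for moving the LHC's annual data output over a
# single wavelength. The 25 PB figure comes from the article; the efficiency
# factor is an assumed allowance for protocol overhead and idle time.

SECONDS_PER_DAY = 86_400

def transfer_days(data_petabytes: float, line_rate_gbps: float,
                  efficiency: float = 0.84) -> float:
    """Days needed to move the data set over one wavelength."""
    bits = data_petabytes * 1e15 * 8                      # decimal petabytes -> bits
    seconds = bits / (line_rate_gbps * 1e9 * efficiency)
    return seconds / SECONDS_PER_DAY

print(f"10G:  {transfer_days(25, 10):.1f} days")    # close to the 275 days quoted
print(f"100G: {transfer_days(25, 100):.1f} days")   # close to the 27.5 days quoted
```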

Even large enterprises are beginning to consider their upgrade options, as Luc Ceuppens, vice president for product marketing at Juniper Networks, explains: “When you start looking at some of the truly global enterprises, they often operate bigger IP backbones than those run by the incumbent telcos of some smaller European countries – they are service providers in their own right and they are driven by the same economic challenges.”



The false narrative


In all, analysts estimate that network operators will spend billions of dollars on 100G upgrades over the next two to three years. But while that is a big number, it is not new money.

Andrew Schmitt, a senior analyst at Infonetics explains: “The explosion in demand for bandwidth capacity is a false narrative – it implies that revenues are growing at the same rate as demand – i.e. at the rate of 30-40% a year.”

In fact, growth in capital expenditure among the top seven incumbents is either flat or in decline, Schmitt notes. The reason carriers are embracing 100G, he argues, has less to do with the crunch in capacity and more to do with the need to squeeze greater efficiencies out of their networks. “This is all about the need to carry more traffic for the same cost per bit.” The hard lesson that the telco industry learned 10 years ago was that capex is inextricably tied to revenues, he adds. “If revenues aren’t surging, then neither is capex.

“No one is going to go out and buy excess capacity if they can’t make money on it. That’s why the reboot does not represent a huge explosion in spending – it’s existing budget that is being redirected into new technology.”

This determination to make every single dollar of capex sweat manifests itself in the way service providers are approaching the migration to 100G. “The big value proposition right now is in being able to harvest wavelengths by taking link aggregation out of the network”, Juniper’s Ceuppens explains.

Assume that IP traffic demands force a provider to aggregate eight individual 10G wavelengths into an 80G connection along a specific route: “All of a sudden, you are burning eight wavelengths at a time when spectrum is in short supply and the costs of lighting up a new fibre are prohibitive”, Ceuppens says.

By upgrading one wavelength to 100G, it becomes possible to reclaim, or “harvest” the remaining seven wavelengths for future use. The priority for carriers is to devise a strategy that allows them to invest in an upgrade incrementally and not be pushed into a root-and-branch overhaul.

By staging the investment, carriers can achieve a huge increase in spectral efficiency without the need for any lumpy capex. “You can upgrade an entire network in this way on a pay-as-you-grow basis,” Ceuppens explains.
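A minimal sketch of that harvesting arithmetic, in Python and using the eight-wavelength example above, might look like this; the traffic figure and line rates are illustrative rather than drawn from any particular network.

```python
import math

def harvest(route_traffic_gbps: float, legacy_rate: int = 10, new_rate: int = 100):
    """Wavelengths lit before and after moving one route's traffic to 100G."""
    before = math.ceil(route_traffic_gbps / legacy_rate)   # e.g. 8 x 10G for 80G
    after = math.ceil(route_traffic_gbps / new_rate)        # one 100G carries it all
    return before, after, before - after

before, after, reclaimed = harvest(80)
print(f"lit before: {before}, lit after: {after}, harvested: {reclaimed}")
# lit before: 8, lit after: 1, harvested: 7
```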

Of course, the economics only stack up provided the cost of a single 100G wavelength is competitive against that of an equivalent number of 10G wavelengths. Hitherto, telco finance directors have lived by a relatively simple rule of thumb: they will sign off an upgrade provided it delivers four times the capacity for 2.5 times the cost.

But while 100G equipment continues to command a so-called “early adopter” premium in the market, industry observers say the price of 10G equipment is falling by around 15% a year.

In the words of Geoff Bennett, a director of solutions and technology at Infinera, this dynamic makes trying to build a financial case for 100G like trying to catch a falling knife: “The most popular long-haul wavelength speed is still 10G, and analysts say that it will stay that way for a while yet.”
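To see why the capex-only case is so slippery, it helps to put rough numbers on it. In the sketch below, only the 15% annual erosion in 10G prices comes from the paragraphs above; the starting prices and the size of the early-adopter premium are invented purely for illustration.

```python
# Illustrative "falling knife" arithmetic: one 100G wavelength versus ten
# 10G wavelengths on capex per gigabit, while the 10G benchmark erodes.
# Both starting prices are invented for illustration only.

EROSION_10G = 0.15        # 10G prices falling by around 15% a year, as quoted above
price_10g = 10_000        # assumed price of one 10G wavelength today
price_100g = 120_000      # assumed 100G price, early-adopter premium included

for year in range(4):
    per_gbit_ten_by_10g = (10 * price_10g) / 100    # ten 10G waves give 100G of capacity
    per_gbit_one_100g = price_100g / 100
    print(f"year {year}: 10x10G {per_gbit_ten_by_10g:,.0f} per Gbit/s, "
          f"100G {per_gbit_one_100g:,.0f} per Gbit/s")
    price_10g *= 1 - EROSION_10G                    # the benchmark keeps falling

# On capex alone the 10G bundle just keeps getting cheaper relative to 100G:
# the falling knife Bennett describes.
```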

To truly appreciate the benefits of 100G, he maintains, service providers must have as much insight into opex as capex: “Opex dominates the long- term cost of ownership of an optical network, not capex. Buying what looks like a cheap 10G solution today could turn out to be a long-term cash drain for a service provider.”

Unless carriers have a very good handle on their opex, they may never see those savings – and so never make a solid financial case for an upgrade.
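A minimal total-cost-of-ownership sketch makes the point; every figure in it, from the capex to the yearly running costs and the planning horizon, is an assumption chosen for illustration rather than a vendor or operator number.

```python
# Minimal total-cost-of-ownership sketch: capex plus yearly opex over the
# life of the link. All figures are assumed, purely for illustration.

def total_cost_of_ownership(capex: float, yearly_opex: float,
                            years: int = 7) -> float:
    """Undiscounted cost of owning the link over the planning horizon."""
    return capex + yearly_opex * years

# Assumption: ten 10G transponders are cheaper to buy, but cost more to
# power, house and maintain than a single 100G transponder.
ten_by_10g = total_cost_of_ownership(capex=100_000, yearly_opex=30_000)
one_100g = total_cost_of_ownership(capex=120_000, yearly_opex=12_000)

print(f"10 x 10G over seven years: {ten_by_10g:,.0f}")   # 310,000
print(f"1 x 100G over seven years: {one_100g:,.0f}")     # 204,000
```

On those assumed numbers, the apparently cheaper 10G kit ends up costing half as much again over the life of the link – the kind of comparison Bennett says carriers can only make if they measure their opex properly.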



Land grab


Perhaps the biggest impact of the 100G reboot will be felt among equipment vendors. So far, the market has largely been the domain of Alcatel-Lucent and Ciena, and this has arguably deterred some big carriers, such as AT&T, from fully embracing 100G.

By the end of the year, however, more than 10 systems vendors – including Fujitsu, Huawei, NEC and NSN – will have off-the-shelf 100G platforms ready to go. This will drive prices down and help 100G gain further traction.

The stakes are particularly high for equipment vendors, because the transition to coherent technology marks a once-in-a-generation shift that will prompt many carriers to rethink their relationships with suppliers at a grass roots level. “Essentially, Alcatel-Lucent and Ciena have an opportunity to make a huge land grab,” says Infonetics’ Schmitt. “They are considerably ahead of the competition and they are using this to press home their advantage.

“We may only be talking about a few 100G ports for now, but this is a huge play that will lead to substantially greater business in the future.” For some vendors, Schmitt adds, the march to 100G could prove to be the end of the road: none of the equipment manufacturers are making much money, he warns, and yet there is still a lot of work to be done to take the industry beyond 100G.

“If you look at the router market as a guide, there are roughly twice as many optical equipment suppliers as the market can reasonably support. We need to see some pretty hefty rationalisation to get to the point where component suppliers have the necessary scale to make big investments in R&D.”

Alcatel-Lucent and Ciena are leading the field, the analyst says, while Cisco, Infinera and Huawei are making good ground from a late start. From there on, Schmitt warns, the outlook is uncertain. “There are a lot of old dinosaurs roaming the network equipment landscape and 100G is a meteor heading their way.”



Intelligent networks


The laws of physics notwithstanding, network engineers are confident they can divide up channels in a fibre to deliver far greater bit rates still. When carriers might need them – or afford them – is another matter.

As Dominic Elliot, service provider chief technology officer at Cisco says, the financial metrics of 100G are very persuasive: “As a line rate, I think 100G is going to be around for a very long time – long term, the economics are very good and history tells us that they get better every day.”

If the pressing problem of the last few years has been how to make more capacity, then the issue for the current crop of network architects is how to make that capacity more intelligent so that it works better in today’s highly-connected world.

As Ciena’s Steve Alexander points out, processors are getting faster, storage is getting denser and capacity is getting bigger: “Carriers are just starting to get the idea that they can be more than a facilitator that connects point A to point B.”

If you can devise your network as a programmable platform so that you can ramp up capacity, or storage, or processing power as and when a customer needs it, then all of a sudden you are not selling a connection, but a slice of infrastructure, he says. “That’s the big game-changer, because that’s the sort of ecosystem that you can build new companies off.”  
