Thinking ahead

The capacity of backbone networks must scale fast to meet future demand. We examine the challenges and opportunities of moving to a 100G world.

The world of data communications is changing fast, with the demands of the information age driven by a new generation of corporate and consumer applications and services. These surging bandwidth needs are creating major challenges for service providers and raising questions for their customers.

Until recently, a backbone network supporting 10G optical and Ethernet ports would have seemed well set for all eventualities. After all, it’s not so very long ago that a 1G Ethernet connection over MPLS was the cutting edge in data transport speeds. But service providers are already having to think well beyond 10G, and to formulate strategies now so that they can continue to satisfy customer requirements, reduce the cost per bit, improve network performance and augment backbone capacity. Today, many service providers are evaluating vendors of 40G and 100G equipment with a view to aggressively upgrading the network core in the near future.

There are voices around the industry that favour an intermediate step of 40G today, moving to 100G when needs dictate. In some cases we see 40G being used within a data centre, with 100G finding its way into the backbone network infrastructure. Services based on 40G may be entering a growth phase, but they already appear insufficient for the network core of tomorrow. The issue with deploying 40G even on a limited basis is that, while it appears to fill the gap before the world is fully 100G-ready, it actually creates an additional replacement cycle where one is neither desirable nor needed. A move from 10G to 40G to 100G in short order would mean too much replacement of key infrastructure in too short a timescale for operators, and would also put pressure on vendor product development cycles.


“The upgrade path must make sense: if the 4x10G model is more cost effective, many service providers will forgo 40G backhaul network deployments.”
Lucy Cross, Director, Product Development, Shaw Business

By no means do all equipment vendors have 100G product ranges ready to go, and the standards required to make a 100G world work seamlessly were only recently finalised and have just started to be adopted.

A preferable stopgap measure, one that gives effective 40G capacity without replacing endless routers, is a 4x10G model. A complaint heard around the industry is that 40G is more expensive than 4x10G; the hope is that 100G will be better equipped to compete with 10G deployment. The upgrade path must make economic sense: if the 4x10G model is more cost effective, many service providers will forgo 40G backhaul network deployments. A big incentive to upgrade to 100G is to drive the cost per bit down, and at the transponder level service providers are evaluating a move to 100G if it costs no more than six times a 10G transponder. A preferable step for many is to work on standardising the various flavours of 100G currently in existence, with 40G deployed only for limited niche uses until 100G is commercially available on a widespread basis. In the short term, where possible, it is better to maximise the potential of 10G throughputs and allow energies to be focussed on making 100G a reality.
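
To see why that six-times figure matters, here is a minimal cost-per-bit sketch in Python. The transponder prices are hypothetical, normalised placeholders; the only number taken from the discussion above is the 6x10G threshold for a 100G transponder.

```python
# Hypothetical, normalised transponder prices -- illustration only, not
# figures from the article. One 10G transponder costs 1 unit.
options = {
    # name:   (capacity in Gbit/s, assumed cost in 10G-transponder units)
    "10G":    (10,  1.0),
    "4x10G":  (40,  4.0),   # four bonded 10G ports
    "40G":    (40,  5.0),   # assumed: a native 40G port priced above 4x10G
    "100G":   (100, 6.0),   # the six-times-10G break-even point cited above
}

for name, (gbps, cost) in options.items():
    print(f"{name:>6}: {cost / gbps:.3f} cost units per Gbit/s")

# At six times the 10G price, a 100G port carries ten times the traffic,
# so its cost per bit is roughly 40 per cent lower than 10G or 4x10G.
```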

Role for the future 

So whose job is it to make this 100G world a reality, and what are the main challenges and considerations involved in getting there? “We’re already seeing moves from 10G to 40G and 100G, and it won’t be long before standards bodies are looking to the next evolution in data rate,” says Lucy Cross, Director, Product Development, Shaw Business. “It’s our job to develop our high-speed networks and support communications to a world that is hungry to share information. Service providers not only build high bandwidth transmission links for internal use, they offer and deliver such services to large enterprise, government and other carrier network providers.”

Wavelengths are an option for enterprises requiring 10G or 100G from their service provider, adds Cross: “One reason for this is that a carrier’s shared MPLS network, which is likely to be supporting cloud services, private data services, telephony, internet, video, VPN and a number of other services in aggregate, might be running at 10G or 100G in the backbone.”


So what is driving all this bandwidth demand?

It is customary to think of consumers with their iPhones and iPads, IPTV and social networking, as the big drivers of bandwidth demand, putting pressure right the way up the value chain on the backbone networks that have to ferry this data over wide areas. But enterprises too are now imposing major demands on backbone infrastructures.

“Organisations everywhere are looking at off-site storage and aggregation of backend computing into a data centre with very high availability,” says Cross of Shaw. “They are also running converged networks in their efforts to extend all corporate applications to remote offices, unifying all the different streams of communication in the office over the carrier networks. Video conferencing will also consume a lot of bandwidth, which is typically dedicated and high priority in nature. With 30 minutes of video content, a network is being asked to handle the same volume of data that it would once have experienced over 30 days. In addition, analysts estimate that business’ video demand will experience 7x growth over the next five years.”
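
As a rough sanity check on that 30-minutes-versus-30-days comparison, the sketch below works through the arithmetic with assumed figures; the 4 Mbit/s HD conferencing bitrate and the resulting volumes are illustrative, not numbers from Shaw.

```python
# Assumed figures for illustration only: one HD video-conferencing stream
# at roughly 4 Mbit/s.
bitrate_mbps = 4
seconds = 30 * 60                                   # 30 minutes of video

video_bytes = bitrate_mbps * 1e6 / 8 * seconds
print(f"30 min of video      ~ {video_bytes / 1e9:.1f} GB")          # ~0.9 GB
print(f"spread over 30 days  ~ {video_bytes / 30 / 1e6:.0f} MB/day") # ~30 MB/day

# Around 30 MB per day is in the territory of an office exchanging mainly
# email and documents -- a plausible older traffic profile for the comparison.
```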

Other major bandwidth consumers include links between disaster recovery sites and head offices running Fibre Channel over Ethernet (FCoE). Another is the centralising of computing infrastructure at a single location so that it can be managed by IT staff, with that site becoming a major hub within the enterprise network.

An IT manager for a company centred in Calgary, says Cross, will typically be looking for a centralised control point for their applications: “By centralising in this way, they need a 10G connection, or faster, to interface with other locations. They also need to back up data to a data centre. Storing backups of everything that’s on head office servers remotely involves a tremendous amount of data capacity between sites.”
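
To put that inter-site capacity requirement in perspective, here is a small sketch using a hypothetical 10 TB backup volume and ignoring protocol overhead; only the standard Ethernet line rates are taken as given.

```python
# Hypothetical example: how long a 10 TB off-site backup takes at standard
# Ethernet line rates (raw line rate, ignoring protocol overhead).
backup_tb = 10
bits = backup_tb * 1e12 * 8

for gbps in (1, 10, 40, 100):
    hours = bits / (gbps * 1e9) / 3600
    print(f"{gbps:>3}G link: {hours:5.1f} hours to move {backup_tb} TB")

# Roughly 22 hours at 1G versus just over 2 hours at 10G -- the difference
# between a backup window that fits overnight and one that does not.
```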

The sheer number of corporate applications on a network is a key driver in itself, as is the evolution of server capacity and computing capabilities to match this application growth: “Servers not long ago were somewhat rudimentary by today’s standards,” says Cross. “But with the rapid adoption of new applications comes the requirement for servers to be very powerful and multi-functional, driving in turn the need for bandwidth capacity.”

Also important is the way in which carriers are strategically deploying large bandwidth links between data centres to accommodate the intensive nature of data transport to such sites.

“Carriers will deploy large bandwidth within their backbone networks; they are the early adopters,” says Cross. “A natural progression then takes place where pricing and availability of networking and computing infrastructure comes to a reasonable level, and the adoption rate goes through the roof. This happened with 10Mb, 100Mb and Gig-E. In 15 years, we went from 1.5Mb being a significant commercial connection to a remote site, or to the internet, to Gig-E being commonly available.”

The debate over the upward migration of data transmission speeds is not something happening only at network operator level. IT managers are under pressure to increase speeds within their own network infrastructures, driven by applications deployments with high bandwidth needs. Now they are beginning to look at something different – connecting their head office servers over high bandwidth links to data centres, and making links between those data centres and remote offices.

“Not all that long ago, enterprises thought they had enough bandwidth to last for years, but not with demand going through the roof,” says Cross. “That’s because of the increase in the sheer number of applications that a corporate runs, with plenty containing high-resolution imagery or large quantities of data with stringent requirements around latency. Now enterprises are looking for a partner to help deploy these over their network. They want, for example, to combine a storage area network together with a WAN and LAN on the one link, where once they used to be separate.”

Shaw, she says, launched a 10G solution many years ago, has recently carried out limited deployment of 40G, and is already trialling 100G: “Our 10G capability has been in place for a long time, and now demand is there for it at the commercial level. End users are starting to ask for it as a service. We’re working with both data centres and corporations. Now it’s a matter of bridging the two worlds.”






Contact
Lucy Cross, Director, Product Development, Shaw Business

Phone +1 403 716 6033

lucy.cross@sjrb.ca

www.shaw.ca/sbs 
