The architecture debate
Feature

The re-engineering of next-generation networks could lead to flatter and more intelligent networks. It could also have a big impact on the cost of ownership.

Telco networks are the nervous system of modern society: after energy, they are the second most critical element supporting the way we live today, according to Stephan Scholtz, CTO at Nokia Siemens Networks (NSN). That should be good news for carriers: owning and operating a network is a valuable business. The bad news is that this global nervous system is expected to carry an ever-increasing amount of traffic for which modern society is prepared to pay an ever-decreasing amount of money.

NSN estimates that, by 2015, the world’s networks will be carrying a zettabyte of IP traffic a year, but intense and growing competition will ensure that the price paid for network bandwidth continues to decline. “We expect to see a significant lowering in the rate of price decline but there will continue to be pressure on pricing,” comments John Hayduk, CTO and SVP operations and engineering at Tata Communications.

Capacity-hungry IP services

At the same time, operators are rolling out and supporting new capacity-hungry IP services which need to be delivered with an excellent and seamless customer experience across different network and device types. For an IPTV roll-out, for example, 10Mbps access is too slow when members of a single household may want to view multiple HD channels simultaneously, while also using voice and internet access services. Operators need to invest in “an order of magnitude higher capacity out to the edge and the impact of this ripples through the entire network,” points out Glen Hunt, Current Analysis’ principal analyst, carrier infrastructure.

In addition to upgrading their networks on all fronts to support explosive growth in bandwidth demand, operators also need to invest in managing the proliferating variety of service types, both fixed and mobile, with a finer-grained quality of service. And all this against a backdrop of uncertainty: telcos are battling for position in the next-generation services value chain and it is unclear how far the services revenue they secure will cover the ongoing investments they need to make in their networks. While finding new service revenues is an important part of any operator’s next-generation network strategy, another imperative is “removing cost by a factor of several hundredfold over the next five to seven years,” Scholtz suggests.

“It’s not so much about growing capacity as about growing capacity profitably,” Hayduk remarks. Tata is seeing a 60% to 70% year-on-year growth in IP transit traffic. Its network is already up to 1.2Tb and the carrier needs to add 500Gb to 600Gb of capacity each year to keep up with demand. “It’s becoming more and more difficult to manage upgrades in a cost-effective manner with the same number or fewer people. We need more easily managed, multi-functional and power-efficient boxes with denser line rates so we don’t have to add an endless amount of equipment to the network and manage it,” Hayduk says.
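The compounding at work in Hayduk’s figures is worth spelling out: adding 500Gb to 600Gb a year to a 1.2Tb network corresponds to capacity growth of roughly 40% to 50% a year, and because each year’s addition is a percentage of an ever-larger base, the absolute amount of new capacity that must be installed keeps rising. A minimal sketch, with an assumed 45% annual capacity growth rate chosen purely for illustration:

```python
def yearly_additions(start_tbps, growth_rate, years):
    """Return the capacity (in Tbps) that must be added in each successive
    year to keep up with a constant percentage growth rate."""
    additions = []
    capacity = start_tbps
    for _ in range(years):
        added = capacity * growth_rate  # this year's required build-out
        additions.append(added)
        capacity += added  # next year's growth applies to a larger base
    return additions

# Illustrative: a 1.2Tbps network growing ~45% a year, as the Tata figures suggest.
adds = yearly_additions(1.2, 0.45, 3)
```

The first year’s addition is about 540Gb, in line with the quoted range, but by year three the same growth rate demands more than a terabit of new capacity, which is why per-box density and manageability, not just raw capacity, dominate Hayduk’s concerns.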

Falling capital costs

The capital cost of upgrading a next-generation network is tumbling thanks to silicon economics and supply chain efficiencies. NGN technology is becoming cheaper both to buy and to implement. As Interoute’s Jonathan Wright, director, wholesale products, points out, at the end of 2009, the company deployed an 8,000km network consisting of 160 10Gb waves in four months. Nine years ago, Wright was working for another carrier which took 18 months to deliver a very much smaller – both in terms of size and footprint – pan-European network. Interoute can buy as much capacity on two opto-electronic chips today as was available on a full rack two years ago. Since 10Gb ports are standard on backbone routers, the carrier is able to manage higher amounts of traffic with the same piece of kit, bringing down the amount of capex needed per unit of bandwidth.

Hayduk confirms that 40Gb and 100Gb optical transmission will help cut costs and Level 3 is looking forward to 100Gb wavelengths to overcome load balancing and resiliency costs associated with managing a network built on 10Gb waves. Between its two largest city pairs, Level 3 has 30 10Gb wavelengths running in parallel as one bundle of capacity. “We will be able to drop the number of links in our bundles by a factor of 10 when 100Gb wavelengths become available,” points out Andrew Dugan, Level 3’s SVP network architecture and engineering. Higher density line cards will also help: Alcatel-Lucent recently announced the industry’s first 100Gb line card for its multi-service aggregation platform aimed at reducing the cost of managing high volumes of video traffic.
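The link-count arithmetic behind Dugan’s point is simple but has real operational weight, since every parallel wavelength in a bundle is another link to balance traffic across and another potential failure to engineer around. A small sketch, using the 30-wave city pair from the article as the illustrative demand:

```python
import math

def links_needed(demand_gbps, wave_gbps):
    """Number of parallel wavelengths needed to carry a given demand."""
    return math.ceil(demand_gbps / wave_gbps)

# Level 3's example: ~300Gb between its two largest city pairs.
links_10g = links_needed(300, 10)    # 30 parallel 10Gb waves in one bundle
links_100g = links_needed(300, 100)  # the same demand on 100Gb waves
```

Moving to 100Gb wavelengths cuts the bundle from 30 links to three, a tenfold reduction in the number of links to load-balance, monitor and protect.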

Is it enough?

But Moore’s Law-based technology improvements may not be enough to bail operators out as the tidal wave of IP-based services keeps coming. Human resources and ongoing operational costs set the floor for how far operators can drop their prices and cheaper, more capable kit doesn’t necessarily translate into being able to run a highly profitable network. “It costs the same amount of money whether an employee is testing an E1 circuit or a 10Gb service,” Wright points out. “We need to move the basis for our pricing from an allocated capex model to an operational cost model and reduce operational costs.”

Stu Elby, Verizon’s VP of network architecture, agrees: “Over the past 10 years, we’ve come to the conclusion that the upfront capital cost of the network is almost not important. Sure, we want to negotiate the lowest price with our vendors but when it comes to the total cost of ownership of the network, only a small piece of this is capex. We need to reduce the cost and complexity of the network and its ongoing operational cost over a longer time period – the five to 20 years a piece of equipment will be in the network.”

Back to basics

Which is why operators are going back to the drawing board to re-engineer IP networks that Luc Ceuppens, senior director at Juniper Networks, describes as “now around 10 years old and designed for a time when no one expected them to grow so fast”. This fundamental rearchitecting of the NGN has two basic goals, both concerned with increasing the manageability of very high scale networks and the services that run over them, thereby decreasing opex. The first goal is the more sophisticated and cost-effective distribution of intelligence within the network; the second is the “flattening” of the network, stripping out unnecessary levels of equipment, both between the IP and transport layers and within the IP layer itself. While there is overall agreement about the desirability of these objectives, there are, unsurprisingly, differences of opinion as to how to achieve them. For a number of reasons, 2010 is likely to be the year that the IP versus packet optical transport (POT) debate, which has been simmering under the surface within many carrier and vendor organisations, takes centre stage as telcos seek to establish the optimal packet optical transport architecture for the next evolution of the NGN.

Packet optical transport

Views of what the packet optical transport network will look like are heavily influenced by whether a vendor or carrier organisation specialises in the transport or IP layer. Some carriers’ transport departments are resisting the idea of bringing Layer 3 complexity into the transport layer, including traffic awareness and flexible support for IP services that range from the very small – much smaller than private line services – to the very large. But Juniper and carriers themselves argue that this is inevitable. Hunt comments: “The same intelligence will need to exist in the optical backhaul and the routed network. We’re already seeing Ethernet interfaces on optical gear so that the optical network is traffic-aware and can support classes of service. There also needs to be common operations, administration and maintenance (OAM) between the two domains so they can be managed together, end-to-end.” As Ceuppens points out, the separation between IP and the transport layer is merely an accident of history. “In over 25 years, the separation between transport and services hasn’t changed. We’ve always run packet networks inefficiently, from the days of X.25 over TDM. Now we’ve moved on to run MPLS over WDM the same way but we need to find a way for the two to co-exist more efficiently.”

Some telcos, notably KPN and TDC, have already dismantled the historic separation between their transport and IP engineering groups; some Tier 2 and Tier 3 players have never had separate organisations. Vendor organisations that play in both worlds are also reorganising to support packet optical transport evolution. However, Ceuppens expects different flavours of POT technology to emerge depending on a vendor’s background, which he characterises as “Big O, Little P” or “Big P, Little O”. Stephan Rettenberger, VP marketing for Adva Optical, exemplifies the former approach, arguing that the network needs to become more automated with more data traversing it at wavelength level, “not consuming resources in the more expensive, processing layer of the network stack”.

A common transport layer

The argument for bringing packets and photons together in a common transport layer is one of managing very high volumes of traffic cost-effectively. Verizon has carried out studies that show that power consumption is one of the company’s largest operational costs and that routers are the hungriest users of power in terms of watts/Gbps. “There is a big step function down to L2 Ethernet and the optical switch has the lowest power consumption,” Elby says. “So it is advantageous to us if we can do more at the optical layer on a lower cost, lower complexity platform. We know that in general, 25% to 30% of the traffic across our backbone routers is transit traffic and we don’t want that tying up an expensive, power-hungry box. If it doesn’t need to use the capabilities of the IP router, it shouldn’t be there – this is traffic we are trying to push down to our ROADM-enabled optical layer, using an intelligent optical routing protocol to push the right traffic at this level from New York to Los Angeles, for example.”
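Elby’s argument can be reduced to a back-of-the-envelope power calculation: if a fixed fraction of the traffic crossing a core node is transit traffic that never needs Layer 3 processing, diverting it to the optical layer saves the difference in watts per Gbps between the two platforms. The figures below are illustrative assumptions only, not Verizon’s numbers; the article says only that routers are the hungriest users of power, L2 Ethernet is a big step down, and the optical switch is lowest.

```python
# Assumed watts-per-Gbps figures per switching layer (illustrative only).
WATTS_PER_GBPS = {"ip_router": 10.0, "l2_ethernet": 3.0, "optical_switch": 1.0}

def node_power(total_gbps, transit_fraction, bypass=False):
    """Estimated power draw at a core node. With bypass=True, transit
    traffic is pushed down to the optical layer instead of the router."""
    transit = total_gbps * transit_fraction
    local = total_gbps - transit
    if bypass:
        return (local * WATTS_PER_GBPS["ip_router"]
                + transit * WATTS_PER_GBPS["optical_switch"])
    return total_gbps * WATTS_PER_GBPS["ip_router"]

# 1Tbps node with 30% transit traffic, per the article's quoted range.
before = node_power(1000, 0.3)              # everything through the router
after = node_power(1000, 0.3, bypass=True)  # transit pushed to the ROADM layer
```

Under these assumed figures, bypassing 30% of the traffic cuts the node’s power draw by about 27%, which is why even a modest transit fraction makes optical bypass attractive at scale.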

In order to decide which traffic should be routed at which level, Alcatel-Lucent says operators need an intelligent unified management plane. “The network will need smarts in the converged services edge which is tightly coupled to the IP and optical layers,” explains Manish Gulyani, VP portfolio strategy, for Alcatel-Lucent’s high leverage networks initiative. “IP boxes should be able to signal to the optical layer using a standard protocol asking it to set up bandwidth.”

A unified management plane

Alcatel-Lucent’s unified management plane will be based on the MPLS-TP standard due to be ratified in mid 2010. MPLS-TP is a cut-down version of MPLS designed to keep transport departments happy. MPLS-TP provides commonality between the IP and optical domains from an OAM perspective (errors, alerts, alarms), with common management and provisioning to follow. Juniper, however, argues that to continue with MPLS at the IP layer and MPLS-TP at an optical layer alongside the optical data plane, OTN, merely complicates network management and duplicates functionality. Juniper suggests that MPLS and OTN need to merge and that MPLS should become the new transport layer with an IP services data plane on top at Layer 3. Verizon is implementing IMS at this level and Elby says of Verizon’s packet optical transport architecture – whatever it ends up looking like – “There will be investment and learning round putting this in place but ultimately, operating the transport layer will become like operating the IP layer and the IP layer itself will become a pretty automated network.”

Not surprisingly, Cisco is fighting a rearguard action in support of the router, especially at the edge of the network, where Mike Capuano, director of service provider marketing for Cisco comments: “There are misconceptions round router pricing. The pricing for Ethernet edge routers is dropping fast and with 40Gbit and 100Gbit options coming on stream, the gap will close rapidly.” Large carriers may not want to push IP transit traffic through large and expensive core routers but if that traffic consists of content, and especially video content, on its way to eyeballs at the edge, they would rather it didn’t touch the core network at all. This means putting more intelligence – read more routers – at the edge, although ideally those routers should be multi-functional to keep costs low.

Distributed intelligence

Cisco argues that unless operators distribute intelligence effectively in their networks they won’t be able to participate in the potentially lucrative opportunity for QoS-differentiated delivery of services to customers: the so-called two-sided business model, in which service developers pay operators to deliver services with high quality and operators also get a cut of the customer revenue. Cisco is forecasting that 90% of all IP traffic will be video-based by 2013, so cost-effective management of the network will depend on being able to handle video appropriately. Cisco has integrated a video monitoring capability into its line cards so an Ethernet box with 16x10GigE ports can also monitor and manage the video traffic flowing through it, Capuano points out. “You won’t get that level of visibility in the optical layer,” he says.

Cisco is also putting caching and processing capabilities into its edge router to support operators’ content delivery and cloud services. “We’ve always advocated that if customers want private line wavelengths or sub-wavelengths, it makes sense to bypass the router and go through ROADMs or OTN switches. But for IP services, carriers such as NTT are aggressively distributing intelligence in their networks as close to the user as they can get,” Capuano says.

Verizon and BT are heading in the same direction as NTT, although they see the provisioning of intelligence at the network edge as a means of flattening the network elsewhere. Verizon has an edge router at every head end in its FTTH network but “we are being judicious about where we invest in expensive routers in the two or three aggregation tiers between the edge and the core,” Elby says. “This is where the optical bypass strategy comes into play. The edge routers are not autonomous but will be managed by a ‘master brain’ in the core using IMS. In the future, we will have a more centralised traffic management and control infrastructure that speaks over standard IMS interfaces to policy enforcement points at the edge.”

And flatter networks

BT agrees that growing demand for bandwidth in the access network means “hauling traffic back to the core doesn’t make sense,” comments Tim Hubbard, head of wholesale data solutions at BT. BT is distributing intelligence – switching, content caching and local turnaround – “as near to the edge of its network as is economically viable,” according to Hubbard, so as to curtail router growth in the core. In BT’s case, this means network intelligence will sit out in its 1,100 aggregation nodes, rather than in its 5,500 very small sites or local exchanges. These aggregation nodes may be consolidated into a smaller number of sites over time.

Hubbard maintains that cheaper, multi-functional equipment is making such an architecture possible – for example, Ethernet and broadband remote access server (BRAS) functions are now appearing together in the same box, so instead of implementing BRAS in 20 core sites and backhauling traffic to them, BRAS function can co-exist in the 800 Ethernet PoPs BT plans to have rolled out by March 2010. Similarly, the ability to cache content close to the edge is the basis for BT’s content delivery network offer due in 2010. BT will start by providing caching facilities in a few aggregation sites but can extend this to all sites as customer demand grows. The same principle will be used to handle the rise in mobile data traffic anticipated as UK mobile operators roll out LTE. “The key thing is to have a network architecture that can flex and scale, taking advantage of the new technologies in the industry as they come along so that we can maintain our cost-competitiveness and provide a positive customer experience,” Hubbard says.

Light at the end of the tunnel?

Because there are already new technologies queueing up on the horizon. BT is using GPON technology in its access fibre roll-out but already next-generation optical access (NGOA) is on its radar. NSN has stopped investing in GPON as it believes NGOA is the more sustainable technology longer-term. Due in 2012/2013, NGOA will support 1Gbit dedicated access bandwidth in both directions over a distance of up to 100km without the need for DSLAMs or aggregation points. “The fixed access network is going to disappear. In major cities, there will only need to be one central point for network intelligence,” Scholtz predicts.

The NGOA story supports Adva Optical’s vision of the future. “As more and more fibre is available close to end points and a scalable transport technology provides longer reach, operators can bypass local exchanges and consolidate their central offices,” Rettenberger says. “This is a long-term evolution but large European incumbents have already started down this road.” Rettenberger anticipates carriers using separate wavelengths in future to ensure physical security between customers, rather than needing to multiplex wavelengths, eliminating many of the arguments round the management of packet optical transport networks that are bubbling up today. The future of networks, and potentially the salvation of carriers, is light – and lots of it.
