INNOVATION BUSINESS BRIEFING 2013: Bigger, Better, Faster, More

06 December 2013 | Richard Irving


What might life be like for network providers in the 2020s? With only two US presidential elections, seven Super Bowls and 12 total solar eclipses to go before we find out, Richard Irving asks where innovation is taking the wholesale sector.



The year 2020 promises to be somewhat busy. Depending on whom you choose to believe, 5G will hit the airwaves (Huawei), there will be 50 billion connections tapped into global networks (Ericsson) and everyone on earth will be connected to the internet. Yes, everyone, including the 60% currently not able to go online (Google).

But if the beginning of a new decade in telecoms, still seven years off, brings with it the burden of expectation, consider this: it is still a little less than seven years since the late Steve Jobs lit the touchpaper to the data revolution with the launch of the iPhone.

In less than the time it takes to qualify as an architect, the iDevice has evolved from a first-generation phone with a screen resolution of just 320x480 pixels to a suite of gadgets – the iPad and iPad mini included – that typically boast twice as many pixels as a fully provisioned High Definition television.

And this is just the beginning. Later this year, Panasonic is expected to launch a 22-inch tablet that will support Ultra-HD technology, or so-called 4K resolution (equivalent to 2,160 lines of 4,096 pixels), though this in turn could be swiftly usurped by Super Hi-Vision, an 8K format that is roughly 16 times sharper than plain old HD. There are around 33.2 megapixels in a Super Hi-Vision screen and that, as engineers trialling the technology at the BBC’s Natural History unit are finding, can swallow up an awful lot of content.

HD is in the eye of the beholder
In essence, display technology is locked into a do-or-die race to encapsulate virtual perfection. This is important to network operators, because the frenetic pace of innovation in a segment that unquestionably drives growth in video traffic may ultimately be defined only by the limitations of the human eye. Cisco, the network equipment maker, does not venture forth as far as 2020 with its crystal ball, but its widely trusted traffic growth forecasts, published every year, suggest that video traffic, fuelled in part by the rapid development of new screen technologies, will continue to redefine network demands.

Cisco already notes that video accounts for a shade more than half of all consumer internet traffic, but by 2017 that figure – including video-on-demand, internet video and peer-to-peer file sharing – could surge to as much as 90% of total volume. Every second, the firm forecasts, around one million minutes of video content will flicker across the world’s networks.

In total, Cisco estimates that more than 1.4 zettabytes of data will pulse through global internet infrastructures by 2017. To put that figure in context, a zettabyte is equivalent to one trillion gigabytes and represents enough digital information to fill 110 billion iPads. Placed one on top of another, those iPads would stretch to the moon and back, still leaving enough to circumnavigate the earth.
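Cisco's imagery survives a quick sanity check. The sketch below uses assumed figures that are not in the article – an iPad thickness of roughly 9.4mm, a mean Earth-moon distance of 384,400km and an equatorial circumference of 40,075km:

```python
# Rough sanity check on Cisco's zettabyte comparison. Assumed figures
# (not from the article): an iPad is ~9.4 mm thick, the mean Earth-moon
# distance is 384,400 km and the Earth's circumference is 40,075 km.

ZETTABYTE_GB = 1e12                   # 1 ZB = one trillion gigabytes
traffic_gb = 1.4 * ZETTABYTE_GB       # Cisco's 2017 forecast
ipads = 110e9                         # 110 billion iPads

implied_gb = traffic_gb / ipads       # storage each iPad must hold
stack_km = ipads * 9.4e-6             # 9.4 mm expressed in kilometres
leftover_km = stack_km - 2 * 384_400  # after the trip to the moon and back
laps = leftover_km / 40_075           # spare laps around the equator

print(f"Implied capacity per iPad: {implied_gb:.1f} GB")
print(f"Stack height: {stack_km:,.0f} km; spare laps of Earth: {laps:.1f}")
```

On these assumptions the stack tops a million kilometres, comfortably covering the roughly 769,000km round trip to the moon with several laps of the equator to spare, and the implied 12.7GB per device sits close to the usable space on a 16GB iPad of the day.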

Put another way, Cisco estimates that by 2017, networks will have to handle as much information in a single month as they carried during the whole of 2007, the year that the iPhone was born. Mindboggling as these numbers are, they are relevant because they underline the sheer scale of the immediate challenge facing those in the telecoms industry charged with driving innovation.

By 2017, Cisco estimates that half of the world’s projected population, or 3.6 billion people, will be online, and that there will be as many as 19 billion connections tapping into global networks, up from 12 billion today.

That in itself is a modest forecast. An equally well-regarded study by rival equipment manufacturer Ericsson suggests that the number of connections – fuelled by an explosion in machine-to-machine technology such as smart metering and connected cars – could push closer to 50 billion.

Think of a number
These are, of course, just forecasts, and subject to the same “un-forecastable” disruptive events that gave us the iPhone in the first place. But they form the basis of an industry dialogue that recognises, if nothing else, that demand for bandwidth is more likely than not to continue growing at the current frenetic rate of around 30% a year.
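Compounded over the rest of the decade, that growth rate is anything but modest. A quick sketch, assuming (our assumption, not the industry's) that the 30% rate holds steadily from 2013 to 2020:

```python
# Compound growth sketch: 30% a year from 2013 to 2020 (assumed endpoints).
rate = 0.30
years = 2020 - 2013               # seven years out
multiple = (1 + rate) ** years    # cumulative traffic multiple

print(f"Traffic multiple by 2020: {multiple:.1f}x")
```

Seven years of 30% growth multiplies demand more than sixfold – which is why the forecasts of tens of billions of connections do not strike the industry as fanciful.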

The challenge is how to meet this phenomenal demand and still make money. Hossein Moiin, EVP of innovation and technology and a member of the executive board of Nokia Solutions and Networks (NSN), thinks he has the answer. Or rather, he has a vision.

“Once you understand the dynamics driving demand, you can get down to the business of defining the requirements of future networks,” he explains. By 2020, he says, mobile broadband network operators should be looking to deliver 1GB of personalised data to each and every one of their customers – and deliver it profitably.

That equates to roughly 60 times the average daily data consumption per user that we currently see in most mature mobile broadband markets. It sounds like a big number, but ever more powerful devices are spawning a plethora of new applications guaranteed to test the limits of the network.

The iPhone is a case in point. Mobile web browsing only emerged as an application in itself as recently as 2007, with the launch of Apple’s first-generation handset. By last year, it accounted for more than 50% of all video traffic.

Moreover, Moiin’s forecasts chime with some official expectations. For example, Ofcom – the UK regulator not known for its wildly bullish predictions – believes that personal mobile data usage could surge even beyond 2020, increasing 80-fold between now and 2030.

“By 2020, people might well demand mobile networks that will allow them to broadcast live video feeds from their glasses to thousands of other users in real time,” suggests Moiin.

That is just one application that we currently dare to imagine, thanks to the work of Google’s highly secretive laboratory at a brace of red brick buildings about half a mile from the search engine’s main campus in Mountain View, California. Factor in Cisco’s expectations for a near-doubling in connections in the near-to-medium term, as the machine-to-machine market gathers momentum, along with the applications that the so-called “internet of things” might ultimately foster, and the prospect of capacity constraints looms like an insurmountable barrier.

Moreover, many applications of the future are likely to be cloud-based, or at least rely on content stored in the cloud. Even assuming that as little as one third of all digital information is stored in the cloud by 2020, the strain on networks trying to provide access to and from all that data will be vast.

“Clearly, we need to find ways to radically push the capacity and data rates of mobile networks into new dimensions if we are going to be able to handle such traffic predictions,” Moiin concedes.

Don’t Panic!
That is going to require considerable creative thinking. Until now, says Moiin, end users have learned how to use mobile broadband networks to enrich their lives. But in the future, the boot will be on the other foot. “End users will effectively teach networks and devices how to evolve according to their lifestyles,” he says.

As a provider of network equipment, the newly revamped NSN obviously has an interest in talking up the infrastructure needs of network operators. But there is considerable substance, both to Moiin’s aspirations for the digital world of the 2020s, and his recommendations as to how we might get there.

The NSN futurist has unveiled a six-point roadmap to help articulate the industry’s efforts to rise to the demands being placed on it. These include: bolstering capacity 1,000-fold; cutting network latencies to single-digit milliseconds; teaching networks to be self-aware; personalising the network experience; reinventing the business of telephony for the cloud; and doing all this without boosting energy requirements.

Network capacity is undeniably the highest priority, and the trickiest problem to solve. After all, would not a 1,000-fold increase in capacity stretch Shannon’s Law – the rule that caps how much data a channel of a given bandwidth and signal-to-noise ratio can carry, a single strand of fibre included – way beyond breaking point?
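The limit in question is the Shannon–Hartley theorem, C = B·log₂(1 + SNR). A minimal sketch – using an illustrative 50GHz optical channel width and sample signal-to-noise ratios of our own choosing, not figures from the article – shows why the ceiling is so hard to raise:

```python
import math

# Shannon-Hartley limit: C = B * log2(1 + SNR).
# Illustrative figures only: a 50 GHz channel (a common DWDM grid slot)
# at a few sample signal-to-noise ratios.

def shannon_capacity_gbps(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum error-free data rate, in Gbit/s, for the channel."""
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

BANDWIDTH_HZ = 50e9  # 50 GHz channel
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)  # decibels to linear ratio
    print(f"SNR {snr_db} dB -> {shannon_capacity_gbps(BANDWIDTH_HZ, snr):.0f} Gbit/s")
```

Because capacity grows only logarithmically with signal power, each extra 10dB of SNR buys roughly the same modest increment – brute-force power alone cannot deliver a 1,000-fold jump, which is why the industry looks to more fibre, more spectrum and smarter modulation instead.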

The need for speed
Not necessarily, says Infinera’s Geoff Bennett. At least, not if you have faith in the next generation of physicists and their ability to push back the boundaries of optronics still further.

“Based on the technologies that we have now, fibre capacity is bound to run out sooner or later. But if the evolution of coherent detection systems has taught us anything, it is that there is plenty of scope to develop new technologies that will help us squeeze more capacity out of the fibre that we currently have,” he suggests.

Coherent detection systems essentially allow engineers to cram up to ten times as many signals onto the same piece of fibre by exploiting the phase and polarisation of the light as well as its amplitude. And a good thing too: service providers have not forgotten how much blood was spilled in the “build-it-and-they-will-come” fibre roll-outs at the turn of the current century, and they will resist any move to rip it all out of the ground and start afresh with a new technology until every fibre alternative has been well and truly exhausted.

So once engineers reach the limits of what they can currently do with optronics, they might well start to explore new technologies in electronics.

“We are already developing even more sophisticated modulation techniques to get more and more capacity on the fibre, but we are losing reach on the signal as a result,” explains Bennett.

Essentially network developers are looking for better electronic switching equipment that can cost-effectively regenerate the signal, so that what they lose in distance with the new modulation techniques, they can gain back through better amplification.

Of course, there is also the option to scalp more spectrum from other users. NSN is testing a new system that mops up assigned but unused spectrum from other players, such as broadcasters, the defence sector and the police and emergency services. The spectrum can be quickly clawed back on an as-needed basis.

The next generation
Then there is the whole issue of 5G, which could well be live by 2020, even though no one is yet quite sure what exactly it might look like. A clutch of big industry players, including Telefónica, Samsung and AIRCOM, are backing a £35 million research effort at the University of Surrey in an effort to lay the groundwork, while Huawei – still smarting from political moves to squeeze it out of the US telecoms infrastructure market on grounds of national security – is devoting more than 200 scientists to a project to deliver 5G by 2020.

Huawei CEO Ken Hu has gone further than most of his peers in defining 5G as a network technology that can offer mobile broadband speeds of anything more than 100 times that of 4G. “Like innovation, acceleration is a key characteristic of the digital society,” he says.

Tong Wen, the man charged with leading Huawei’s 5G development, says that it is one of the group’s top priorities, adding that the company is working with developers from 20 of the world’s top research institutes on the project. “5G won’t be a replacement for 4G, or even 3G,” he says. Instead it will attempt to address the internet of things, to embrace what he calls “immersive connectivity” in the network.

“There’s still a long way for 5G to go. It’s not a single technology, it’s an entire ecosystem,” he explains.

Ultra-fast and ultra-broad capacity is one thing, but significant innovation in other parts of the network is also required. For one thing, latencies will need to drop drastically. According to data published recently by TeliaSonera, the majority of mobile networks today run at latencies of between 200 and 500 milliseconds (ms). But real-time voice and video links require latencies of 100ms or less, while advanced applications – like cloud gaming or Google’s new driverless car – could need latencies of less than 10ms.

Close to the edge
Because latency is a function of the distance that a signal travels, it is likely that time-critical services and applications of the future will need to be shifted closer to the edge of the network. In teaching networks to be self-aware and capable of personalisation, Moiin is in effect embracing the principles of software defined networking (SDN), a philosophy that seeks to pull all the intelligence out of network hardware and place it in the hands of an all-powerful computer “controller” that is limited in scope only by the imaginations of the developers writing the code to drive it.

Most network operators appear content to sit on the sidelines while the industry wrangles over standards and platform protocols. So while sweeping deployments of SDN are few and far between, there is perhaps scope to embrace its principles on a less ambitious level. As Infinera’s Bennett explains, speed is just one piece of the puzzle – network operators have to be able to carry all this data around their networks at the lowest possible cost per bit, and they have to be able to respond rapidly to bring new capacity into play as and when their end customers need it.

“It’s all about how you provision connectivity at the lowest possible cost. Service providers need to be able to do that horizontally – from carrier to carrier – and vertically, from the access layer, through the aggregation layer, right down the stack to the optical layer, and back up again at the other end. If they can automate that provisioning process, they will prove themselves to be both cost-effective and responsive, and those are two key cornerstones on which future innovation will rest.”