The new network analytics: Making the unpredictable predictable again

21 November 2018 | Kevin Macaluso


Kevin Macaluso, General Manager, Deepfield, Nokia


In the not-too-distant past, network operators had the insight to understand what was traversing their networks. This not only helped them better engineer and plan for capacity, but it also gave them a plethora of data which helped them create service packages to encourage upgrades and ultimately reduce churn.

It was possible, of course, because these networks, whether voice, data or cable television, were dedicated to specific kinds of content and traffic. This gave operators enough insight to stay ahead of bottlenecks with higher capacity for peak periods, or to get creative with custom packages based on the known patterns of their customers.

However, as we move increasingly to converged networks, especially with the move to 5G, what’s actually traversing the network is more of an unknown. This is compounded by the fact that up to 80% of traffic is over-the-top video, and a large percentage is also encrypted. Even the sources of content are constantly changing. News and video sites spring up every day, while gamers and other users host their own streams on platforms such as Twitch, which attracts hundreds of thousands of viewers.

Such an enormous range of unpredictable demand makes capacity planning extremely challenging. And if you add in escalating DDoS attacks and the fast growth of IoT, the potential exists for bandwidth contention and a deluge of customer complaints. Dedicating more bandwidth can work, but it’s hit and miss, and very expensive. And it doesn’t solve the challenge of how to build more attractive service offerings based on the behaviour of customers.

What is required is an analytics capability that gives the operator knowledge of what is happening on its network and the power to do something about it. Deep packet inspection (DPI) has been one proposed solution, but it can’t help with encrypted content and, for low-level usage data, it’s too expensive to deploy at scale. Fortunately, a lack of data is not the real issue. The challenge is gathering all the different sources of data, then sorting and analysing them to generate actionable insights.

It is now possible to map out the entire internet using only publicly available data of the same type that Google collects. Data from DNS requests and telemetry from internet endpoints can be correlated with a host of other data sources. Over time, analytics programs can catalog the development of everything happening on the network, mapping flows from source to destination and creating historically rich analysis. These techniques are able to identify up to 90% of the traffic, encrypted or not.
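The idea of correlating DNS data with flow telemetry can be illustrated with a minimal sketch. This is not Deepfield’s actual pipeline; all names, IP addresses and category rules below are hypothetical, and real systems correlate many more data sources. The key point it demonstrates is that a flow can be labelled from its destination IP and the DNS answer that preceded it, with no payload inspection, which is why encryption is not an obstacle.

```python
from collections import namedtuple

Flow = namedtuple("Flow", "src dst bytes")

# Passive DNS cache: map resolved IPs back to the domains that
# returned them (illustrative entries only).
dns_cache = {
    "203.0.113.10": "video.example-cdn.com",
    "198.51.100.7": "live.example-stream.tv",
}

# Simple domain-suffix -> service category rules (illustrative).
CATEGORIES = {
    "example-cdn.com": "OTT video",
    "example-stream.tv": "live streaming",
}

def classify(flow):
    """Label a flow by joining its destination IP against passive DNS."""
    domain = dns_cache.get(flow.dst)
    if domain is None:
        return "unknown"
    for suffix, category in CATEGORIES.items():
        if domain.endswith(suffix):
            return category
    return "other"

flows = [
    Flow("192.0.2.1", "203.0.113.10", 5_000_000),
    Flow("192.0.2.2", "198.51.100.7", 750_000),
    Flow("192.0.2.3", "192.0.2.99", 10_000),
]

labels = [classify(f) for f in flows]
# The flow to an IP with no DNS context stays "unknown" — in practice,
# accumulating more data sources over time shrinks that bucket.
```

Accumulated over weeks of observations, this kind of join is what builds the historically rich, source-to-destination map described above.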

With this holistic view of the entire network in real time, operators can adjust their resources more precisely. Do they need to buy or exchange capacity with a peering network? Are their peering points optimally situated? Are content caches efficiently distributed, and are there enough of them? Operators can also troubleshoot other service-impacting issues, ranging from traffic blackholing to simple configuration errors. Having a historical baseline is critical to spotting anomalies.
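Why the historical baseline matters can be shown with a deliberately simple sketch: flag a reading as anomalous when it strays too many standard deviations from past observations. Production systems use far richer seasonal models; the data and threshold here are purely illustrative.

```python
import statistics

def is_anomaly(history, current_gbps, threshold=3.0):
    """Return True if the current reading deviates from the historical
    baseline by more than `threshold` standard deviations."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current_gbps != mean
    return abs(current_gbps - mean) / stdev > threshold

# A week of hourly peak readings for one link (synthetic data, in Gbps).
baseline = [40, 42, 41, 43, 39, 40, 44, 41]

print(is_anomaly(baseline, 42))   # an ordinary evening peak
print(is_anomaly(baseline, 95))   # a sudden surge worth investigating
```

Without the baseline, the 95 Gbps reading is just a number; with it, the same reading is immediately recognisable as an outlier.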

There are other advantages to having detailed, holistic analytics. Being able to identify subscribers who have cut the cord allows operators to make competitive offers to encourage users to sign up again. Keeping track of which applications are trending also provides operators with the opportunity to market micro-services to those users. And if the operator has the ability to understand the traffic patterns of specific content platforms, they can market premium QoS services to gold-level users.

Some heavy enterprise users might buy this kind of analytic capability for their own operations, but many may prefer to get this information as part of a larger managed service offering. These kinds of analytics can give insight into branch office performance, how well private and public cloud services are performing and a host of other information that could help operators customise and upsell new enterprise service offerings.

On the security front, understanding the traffic flows across the network in granular detail also makes it possible to recognise and respond to DDoS attacks more efficiently. Understanding IP flows allows operators to instruct edge routers to block up to 90% of the problem traffic even before it enters the network, instead of sending all suspected traffic to expensive scrubbers. In this way, they can also affordably offer security to all customers and not only to a few very large enterprise customers.
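A toy sketch of that flow-based approach: aggregate suspect flows per source, then emit drop rules for the heaviest hitters so edge routers can discard the attack before it enters the network. All addresses and thresholds are hypothetical, and in practice such rules would be distributed via mechanisms like ACLs or BGP Flowspec rather than printed.

```python
from collections import Counter

VICTIM = "198.51.100.50"

# (src_ip, dst_ip, packets) flow samples — synthetic data.
flows = [
    ("203.0.113.5", VICTIM, 900_000),
    ("203.0.113.6", VICTIM, 850_000),
    ("192.0.2.20", VICTIM, 1_200),        # a legitimate user
    ("203.0.113.5", "198.51.100.9", 300), # unrelated traffic
]

def drop_rules(flows, victim, pkt_threshold=100_000):
    """Return edge-router drop rules for heavy hitters targeting the victim."""
    per_source = Counter()
    for src, dst, pkts in flows:
        if dst == victim:
            per_source[src] += pkts
    return [
        f"deny ip host {src} host {victim}"
        for src, pkts in per_source.most_common()
        if pkts > pkt_threshold
    ]

for rule in drop_rules(flows, VICTIM):
    print(rule)
```

Only the two flooding sources are blocked at the edge; the legitimate user’s traffic, and the residual suspect traffic, can still be steered to scrubbers, which is what makes the economics work.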

The need for these new data analytics capabilities is only growing as IoT traffic rapidly expands. And with 5G, the convergence of fixed and mobile networks will only increase the unpredictability. Knowing precisely what is traversing the network will put operators back in control and help them offer better services and more compelling personalised offers to both enterprise and consumer customers. Fortunately for operators, the technology exists to help them get back to the business of meeting customer demand and maintaining a healthy bottom line.