Securing the Edge

As organisations continue to embrace edge computing, a new approach to cybersecurity is needed.

If enterprises are to take full advantage of the benefits that edge computing offers, they need to identify, assess and prevent the new threats that arise. This might mean applying the same security principles to a new environment, or it might mean implementing something completely new.

But before any discussion on edge computing, it’s first helpful to define exactly what “the edge” is. While the definition has evolved over the years, Capacity's sources all agreed in principle that the edge is a location where computing power sits closer to the end device.

Liz Green, EMEA advisory & cyber lead at Dell, makes the distinction between near edge and far edge.

The far edge is remote: perhaps a device in the field, such as a network substation, a car-charging facility or a piece of equipment in a factory. The near edge is still referred to as edge, but tends to be a smaller data centre, such as one supporting a warehouse, a large retail site or a manufacturing plant.

How does edge computing change the security landscape?

“Edge computing comes with significant security concerns. These primarily stem from the novel attack surfaces edge topologies create, as well as the fact that every edge device connected to a system creates a wider attack surface,” Anthony Leigh, manager, systems engineering – channel at cybersecurity solutions and services firm Fortinet, tells Capacity.

Rather than securing a single central location, security teams must now contend with the rise of remote work, new far-edge deployments of IoT infrastructure and distributed computing power, all of which mean far more physical devices connecting to the network that need to be protected.

This has required a shift in the way that these teams think about security, from single-point network security to securing everything.

“Traditional security focuses on securing an organisation’s headquarters, data centres, branch offices and cloud,” says Deryck Mitchelson, global CISO at cyber security firm Check Point.

Accessing applications from anywhere, when those applications are distributed across public and private cloud or delivered as software as a service, requires a new way of thinking about security.

“You should be able to access your services anywhere from any device, because the security protocols you have in place should be able to authenticate at a user level,” Mitchelson explains.

Distributed computing power also means more distributed data, which can create a challenging environment for an organisation to operate within.

“If someone is attempting to distribute an application all over the world, they're having to manage the data sprawl across all those resources and manage all the operating systems if they're deploying hardware,” Tom Gorup, vice president of security services at content delivery network Edgio tells Capacity.

“It's one thing to make it highly available and highly distributed. It's another thing to make it secure, because that software also becomes another avenue of approach,” Gorup says.

This isn’t something a traditional security team will necessarily have the expertise, or even the technology, to deal with.

Previously, a far-edge industrial site could be protected with a ring-fence firewall, and that would be enough.

“As more and more devices in these edge environments have become connected in their own right to their own systems for predictive maintenance or other use cases, it's become much more complicated and the attack surface area has increased,” Green says.

The more complicated, distributed landscape generated by edge computing is coinciding with, or directly creating, an increase in the threat level that organisations face.

While some of these threats are new, many existing challenges are prevalent as well.

Repeat Threats

“It almost feels like, as an industry, we like to punish ourselves with the same problems over and over again,” Nathan Howe, global vice president, innovations at cloud security firm Zscaler tells Capacity.

One way this manifests is through inter-routable networks, which Howe says are becoming increasingly popular.

“We add a new edge service and then need to open up an inbound listener and allow anyone to connect to it via a network route. After realising it's an insecure path, the response is to put firewalls in place to protect that listener. This is not an efficient way of dealing with security,” Howe explains.

The more threats arise and the more complex those threats become, the more efficiency matters. For example, having a consolidated view of the different endpoints of the network can help organisations monitor, identify and respond to threats in one place.

As data is more distributed across a wider network, this consolidated view is essential to understanding where you are being attacked.

“Edgio has developed a platform that makes sure data is highly distributed, but is all sent back to a central location,” Gorup explains.

“This way they're not having to worry about which location is being attacked because it’s clearly defined for them, and it can even suggest the mitigating steps that can be taken to help protect.”

Without this data being monitored in one place, Gorup says, organisations may not know that a remote location is even being attacked.
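As a rough illustration of that “one place” idea, the sketch below shows edge nodes tagging security events with the site they came from and forwarding them to a single collector, so a location under attack is immediately visible. The names (SecurityEvent, CentralCollector) and the simple threshold are illustrative assumptions, not Edgio's actual platform.

```python
# Minimal sketch of centralised edge telemetry: each edge node forwards its
# security events to one collector, tagged with the site they came from, so
# analysts can see which location is under attack from a single view.
# All names here are illustrative, not any vendor's API.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SecurityEvent:
    site: str          # which edge location raised the event
    category: str      # e.g. "waf_block", "auth_failure"
    detail: str
    timestamp: datetime


class CentralCollector:
    """Aggregates events from every edge site into one queryable store."""

    def __init__(self):
        self.events_by_site = defaultdict(list)

    def ingest(self, event: SecurityEvent) -> None:
        self.events_by_site[event.site].append(event)

    def sites_under_attack(self, threshold: int = 10) -> list[str]:
        # A site is flagged when its event count crosses a simple threshold.
        return [site for site, events in self.events_by_site.items()
                if len(events) >= threshold]


def edge_node_report(collector: CentralCollector, site: str,
                     category: str, detail: str) -> None:
    """Called on each edge node: push the event back to the central collector."""
    collector.ingest(SecurityEvent(site, category, detail,
                                   datetime.now(timezone.utc)))
```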

Once the data is stored and viewable in one place, an additional level of efficiency can be achieved by deploying AI and machine learning techniques.

This is especially important for Check Point’s Mitchelson:

“One of the threats I see is edge computing generating so much data, it's saturating security operations teams.”

The volume of false positives and alerts being generated makes it hard for security operations teams to identify and contextualise threats without these tools in place. And with bad actors able to use AI to generate more sophisticated and frequent DDoS, phishing and zero-day attacks, such tooling is fast becoming non-negotiable.
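The sketch below is a deliberately simple stand-in for that kind of triage tooling: rather than a trained model, it uses a basic statistical baseline (an assumption purely for illustration) to rank alerts so analysts review only the most anomalous ones instead of the full flood.

```python
# Toy illustration of alert triage: score each alert against a simple
# historical baseline so only the most anomalous ones reach an analyst.
# Real deployments would use trained models; the field names, thresholds
# and a non-empty history are assumptions for this sketch.
from statistics import mean, stdev


def triage(alerts: list[dict], history: list[float],
           max_reviewed: int = 20) -> list[dict]:
    """Rank alerts by how far their volume deviates from the baseline."""
    baseline = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    for alert in alerts:
        # Z-score: how many standard deviations above normal this alert sits.
        alert["score"] = (alert["volume"] - baseline) / spread if spread else 0.0
    return sorted(alerts, key=lambda a: a["score"], reverse=True)[:max_reviewed]
```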

Zero-day attacks in particular, where vulnerabilities in code are exploited before the developer is aware of them, are especially dangerous in an edge environment.

Zero-day threats are resolved and prevented by deploying patches. “But patching is difficult when you have to write new code and deploy it all over the world,” Gorup says.

This is another reason why a consolidated data platform is useful for an organisation trying to protect itself: virtual patching can use a web application firewall to block the exploit while the real patch is being deployed.
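A minimal sketch of that virtual-patching idea follows. The exploit pattern, rule ID and function names are hypothetical; a production WAF would apply vendor-maintained signatures rather than a hand-written filter like this.

```python
# Sketch of virtual patching: until the real fix reaches every edge location,
# a WAF-style filter in front of the application blocks requests that match
# the known exploit pattern. The rule below is hypothetical, not a real
# signature.
import re

VIRTUAL_PATCHES = [
    # (rule id, pattern matched against the request path + query string)
    ("VP-0001", re.compile(r"/admin/export\?template=.*\{\{.*\}\}")),
]


def waf_filter(method: str, path_and_query: str) -> tuple[int, str]:
    """Return (status, reason) for an incoming request."""
    for rule_id, pattern in VIRTUAL_PATCHES:
        if pattern.search(path_and_query):
            return 403, f"blocked by virtual patch {rule_id}"
    return 200, "allowed"


if __name__ == "__main__":
    print(waf_filter("GET", "/admin/export?template={{7*7}}"))   # blocked
    print(waf_filter("GET", "/admin/export?template=invoice"))   # allowed
```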

From Zero Day to Zero Trust

Zero trust is another core change in the approach to cyber security that is being driven, in part, by edge computing and its IoT use cases.

At a high level, zero trust simply means removing the implicit trust of legacy networks, so that clients and servers no longer sit together on the same trusted network segment.

As many IoT devices are not high-performance computing systems, they are incapable of implementing their own security controls.

“The notion of a zero trust solution means that any workload can be connected with any initiator, in any edge, so long as both sides of the connection are verified, the risk controlled, and then only ephemerally connected,” Howe explains.
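A minimal sketch of that pattern, with illustrative class names, thresholds and fields rather than any vendor's API, might look like this: a broker only connects an initiator to a workload once both identities are verified and within risk policy, and the resulting session is short-lived rather than a standing network route.

```python
# Sketch of a zero trust connection broker: verify both sides, check risk,
# then grant only an ephemeral session. Names and thresholds are assumptions
# for illustration.
import time
from dataclasses import dataclass


@dataclass
class Identity:
    name: str
    verified: bool      # e.g. via certificate or identity-provider check
    risk_score: float   # 0.0 (low risk) to 1.0 (high risk)


def broker_connection(initiator: Identity, workload: Identity,
                      max_risk: float = 0.3,
                      ttl_seconds: int = 300) -> dict | None:
    """Connect two parties only if both are verified and within risk policy."""
    if not (initiator.verified and workload.verified):
        return None                       # either side failed verification
    if max(initiator.risk_score, workload.risk_score) > max_risk:
        return None                       # risk controls not satisfied
    # Ephemeral: the session carries an expiry rather than a standing route.
    return {"from": initiator.name, "to": workload.name,
            "expires_at": time.time() + ttl_seconds}
```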

“Using zero trust principles is a must,” Fortinet's Leigh agrees.

Leigh argues that the endpoint has never been more important, so protecting these areas is critical.

For example, using solutions like network access control can prevent lateral movement inside a network once a threat actor has been discovered.

This shrinks the attack surface, making it harder for threat actors, and automates how a system responds to reduce the window of exposure.
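As a rough sketch of that automated containment, assuming hypothetical segment names and a simplified policy model, a network access control layer might move a flagged device into a quarantine segment so it can no longer reach its peers:

```python
# Sketch of automated containment via network access control: once a device
# is flagged, it is moved into a quarantine segment so it cannot move
# laterally. Segment names and the policy shape are assumptions for
# illustration.
QUARANTINE_SEGMENT = "vlan-quarantine"


class NetworkAccessControl:
    def __init__(self):
        self.device_segment: dict[str, str] = {}   # device id -> segment

    def admit(self, device_id: str, segment: str) -> None:
        self.device_segment[device_id] = segment

    def flag_compromised(self, device_id: str) -> None:
        # Automated response: quarantine immediately, without waiting for a
        # human operator, to shrink the window of exposure.
        self.device_segment[device_id] = QUARANTINE_SEGMENT

    def can_reach(self, src: str, dst: str) -> bool:
        # Devices may only talk to peers in the same segment; quarantined
        # devices can reach nothing.
        src_seg = self.device_segment.get(src)
        dst_seg = self.device_segment.get(dst)
        if QUARANTINE_SEGMENT in (src_seg, dst_seg):
            return False
        return src_seg is not None and src_seg == dst_seg
```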

“Zero trust is really a list of principles that organisations are trying to adopt, and they’re never going to be able to do everything at once,” Dell’s Green says on the matter.

While Green believes most organisations have made good strides towards device and user verification, in other areas of zero trust they still have some way to go.

"When it comes to identity and access management, network segmentation, understanding what the key services and data sets are and making sure they can stay available, there’s still work to be done.”

A zero trust approach is being implemented in new deployments from inception, but many IoT and far-edge applications deployed today are still running on legacy networks. As a result, efforts are being made to transition them to this new way of working.

As the edge continues to evolve and data continues to be generated further from core networks, security will no doubt continue to evolve.

But the key will always be visibility of data and working as efficiently as possible.
