Cybersecurity and the duality of AI

Artificial intelligence (AI) presents a unique proposition to the cybersecurity industry: it can be used by cybercriminals, but also by enterprises to combat that very threat.

According to Matan Hart, VP of research at Tenable, the industry is in an arms race and the power dynamic will swing back and forth.

“Both good and bad will be able to speed their processes to build and adopt new tools and ultimately gain an advantage over the other,” he says.

Generative AI advances the entire cybersecurity space, he adds, but if past technological breakthroughs are any guide, bad actors will continue to adapt.

“Defenders must acknowledge that those changes are inevitable, and they must adopt the technology and embrace new work practices to stay relevant.”

Generative AI is highly effective at translating extremely complex machine code into understandable sentences. That, in Hart’s view, is both good and bad for cyber defenders.

ChatGPT could also be used against threat actors. Suppose, for example, that ‘cybercriminal A’ (as we will refer to them here) uses ChatGPT-generated content in a spam email.

“Cyber do-gooders can take that content to ChatGPT and the tool will confirm it created the content on behalf of cybercriminal A,” Hart says.

“Some cybercriminals may shy away from the tool because of this, but most will attempt to find a workaround to protect themselves.”

For security teams and organisations, this is the same as any other new technology that enters the arena.

Hart says there is a learning curve, but ultimately the same rules apply. As defenders, the industry must work to understand data infrastructure to determine where the greatest risks lie and then take steps to reduce the risk.

Advanced intelligence

Dan Shiebler, head of machine learning at Abnormal Security, believes that given the sophistication of cybercriminals’ tactics, cybersecurity solutions must advance in intelligence as well.

Cyberattack detection systems can benefit from incorporating large language models, learning the normal behaviour of users in the organisation and then detecting deviations from the norm, which may indicate a threat actor’s social engineering attempts.

“Improved detection models based on AI can help organizations out-innovate cybercriminals and offer the best possible defence against even the most sophisticated attacks,” he says.
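The behavioural-baselining idea Shiebler describes can be illustrated with a minimal sketch: learn each user’s normal activity level from history and flag large deviations. The data, the feature (emails sent per day) and the z-score threshold below are illustrative assumptions, not a description of Abnormal Security’s product.

```python
# Minimal sketch of behavioural baselining: learn each user's normal
# activity level, then flag deviations that may indicate account takeover
# or social engineering. The data, feature and threshold are illustrative.
from statistics import mean, stdev

# Hypothetical history: emails sent per day, per user
history = {
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob":   [3, 4, 2, 5, 3, 4, 3],
}

# Today's observed counts (hypothetical)
today = {"alice": 14, "bob": 41}

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the user's mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

for user, count in today.items():
    if is_anomalous(count, history[user]):
        print(f"ALERT: unusual activity for {user}: {count} emails today")
```

In this toy run only “bob” is flagged, since his count sits far outside his learned baseline; a real detection system would combine many such signals rather than a single count.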

Hart echoes this view, believing generative AI to be a “supercharger for cyber defenders”.

For him, generative AI is like Google Translate: how you use it will determine the outcome.

“Generative AI doesn’t necessarily offer new protection, rather it augments the security team’s efforts by speeding response time, enabling faster decision making and tackling mundane, repeatable tasks.”

For example, he says, generative AI is expected to be implemented in tools to quickly review code, accelerate incident response and extract security analytics.

Hart says researchers are currently working to make the world safer by finding ways to harness AI tools for good, whether that is analysing malicious code, assisting in playbook creation for incident response or providing the commands and queries necessary to assess a broad swathe of digital infrastructure.
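As a rough illustration of the kind of workflow Hart describes, the sketch below asks a large language model to explain a suspicious command in plain language using the OpenAI Python client. The model name, prompt wording and sample snippet are assumptions, and any output would still need to be verified by an analyst.

```python
# Sketch: asking a large language model to explain a suspicious command in
# plain language, one way defenders might "translate" malicious code.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY set in the
# environment; the model name, prompt and sample snippet are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_snippet = """
powershell -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA...
"""  # truncated, hypothetical sample

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Explain what the given "
                    "command appears to do and why it may be malicious."},
        {"role": "user", "content": suspicious_snippet},
    ],
)

print(response.choices[0].message.content)
```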

“For the past several months, Tenable’s research team has been analysing how large language models can be used in both offensive and defensive capabilities as detailed in our latest report ‘How Generative AI is Changing Security Research’.”

Banning AI?

Italy’s data protection authority recently banned ChatGPT, setting a precedent that could see other countries follow suit.

Companies such as Samsung have already taken a similar decision, with the South Korean vendor banning the use of generative AI tools after staff were found misusing the technology.

However, Dr Ilia Kolochenko, founder of ImmuniWeb and a member of the Europol Data Protection Experts Network, believes banning AI would be a “pretty bad” idea.

“First, your business competitors may outperform you by taking advantage of modern AI technologies – namely generative AI like ChatGPT – to intelligently automate various tasks and processes, to reduce their operational costs, and to eventually offer more competitive products and services on the global market,” he says.

“Second, as many researchers have demonstrated, by restricting or banning something, you may merely increase people’s interest in trying the forbidden fruit.”

Kolochenko says that after several years of the pandemic, countless employees still have uncontrolled access to sensitive corporate data from their personal or so-called “hybrid” devices that cannot be monitored by the employer.

Such devices will likely be used to silently access ChatGPT, and even to purposely enter confidential or regulated data in order to test how the system behaves.

Therefore, he believes, corporations should train their employees and educate them about the risks and opportunities presented by generative AI.

“Acceptable use policies (AUPs) shall be developed and promulgated across the employees, while monitoring of third-party AI services can be implemented by corporate data loss prevention (DLP) systems or cloud access security brokers (CASBs) already widely deployed for other purposes.”

An uncertain AI future

Looking forward, Shiebler says that as “perfection is paramount” in several professions, we are likely to see many jobs change substantially as a result of increased AI use, rather than disappear altogether.

Tasks involving moving data from one location or format to another are likely to become automated. Tasks that require face-to-face interaction and personal accountability are likely to remain in human hands.

Although AI systems have improved dramatically over the last two decades, they have historically been constrained by their reliance on massive amounts of data.

“Teaching an AI system to play Go or chess requires showing the system enormous numbers of games, far more than a human would ever see,” Shiebler says.

He adds that recent advances in foundation models are changing this. Modern AI systems like GPT-4 can adapt their base understanding of the world to new challenges based on a few examples or instructions.

“However, these systems lack the ability to learn continuously from their environment and this kind of memory plasticity is what enables living beings to accomplish long and complex tasks, something that modern AI systems struggle with.”

For example, Shiebler says, selling a product requires communicating with multiple people, remembering the context of their responses over days, weeks or months and synthesising this into a long-term strategy.

AI systems today would struggle with this, but within the next 10-20 years we are likely to see AI systems grow to manage more and more abstract challenges.

“Unfortunately, this will also allow AI systems to carry out more nefarious tasks, such as online fraud and scams,” Shiebler says.

Cybercriminals are already very sophisticated when it comes to things like business email compromise, vendor invoice fraud, and executive impersonations – with AI in their arsenal, we can expect them to become even more criminally savvy.

As a result, cybersecurity systems will need to become more sophisticated to combat fully autonomous adversaries that can change and adapt to overcome defences.
