Big Interview

Are the UK and US doing enough to tackle issues around AI safety?


Casey Ellis, founder and chief strategy officer at Bugcrowd, has his say ahead of a meeting with MPs to discuss the implications of AI for cybersecurity

The question of whether the UK and the US have taken enough steps to tackle AI safety is “pretty nuanced” and depends on who you ask, according to Ellis.

In his opinion, both governments have been responsive to the rapid onset of potential positive and negative impacts of commoditised access to Generative AI and have acknowledged that it has very quickly become an issue of retail politics.

Ellis’ comments come as the All-Party Parliamentary Group on Cybersecurity prepares to meet on May 22 at the House of Commons to discuss the implications of AI for cybersecurity.

MPs will hear expert evidence from Ellis and Dave Gerry of Bugcrowd on the changing nature of cybersecurity threats in the AI era and the emerging issue of AI safety – the potential societal harms that could occur through the accidental or deliberate misuse of AI.

Ellis, a former hacker who now advises national security agencies including the US Department of Defense, the Pentagon, and the UK and Australian intelligence communities, believes there has been a growing focus from both government and the private sector on AI safety and regulation in the US.

“Aside from the Biden Executive Order, the government has initiated several projects through agencies aimed at addressing AI-related risks, focusing on issues like bias, privacy, and national security,” he says.

“The National Institute of Standards and Technology (NIST) is setting up standards and guidelines for AI, and there's the AI Risk Management Framework, which aims to help manage AI risks comprehensively.”

This is all in addition to the Biden AI Safety Executive Order and the consideration of AI in the National Cyber Strategy out of the White House, as well as multiple bills and initiatives in progress in the House and Senate.

In the UK, there’s been notable progress too, Ellis says. The government has set up the Centre for Data Ethics and Innovation (CDEI) to research and offer advice on how to handle AI.

“The UK generally tries to adapt existing laws to better suit the new challenges posed by AI technologies, which some think is a smart, practical way to handle rapid technological changes.

“However, like in the US, there are concerns that these regulatory updates aren't keeping up quickly enough with the advancements in AI.”

Both countries see the importance of AI safety and governance, but there’s a lot of debate about whether they’re doing enough, and whether they’re doing it fast enough.

“As AI continues to grow and become more embedded in our lives, the real test will be how these regulations hold up,” Ellis says.

The novel flaws

Ellis says that while AI feels novel, many of its security implications are “merely an acceleration of existing security problems, many of which are being worked on and addressed by other non-AI specific standards”.

As for the security risks posed by AI itself, several new classes of vulnerability have been discovered in the past 18 months.

“As for government concerns, there are real safety and security concerns that should be addressed at the policy and governance level.

“It's also important to point out that the combination of a historical ‘AI boogeyman’ narrative and the inherent accessibility of AI to the layperson (i.e. the voter) means that the entire voting population transitioned into a world where they are forced to have an opinion on AI safety and security very suddenly, and this sort of phenomenon invariably forces a political response,” Ellis says.

As more companies increase their use of AI, people are questioning the extent to which human biases have made their way into AI systems.

“The first thing about bias in a Generative AI context is to understand that, if the training data is human-generated, that bias will always exist,” Ellis says.

A common misconception about model bias, Ellis says, is that the goal is to completely eliminate it.

“Humans are inherently biased, often in very subtle ways, and our output ultimately creates these models. Understanding where it exists and mitigating it should be the main goal for most applications.”

As for checking bias, Ellis thinks there are two broad approaches: quantitative and qualitative. Quantitative bias testing involves sending thousands of requests to LLMs and looking for statistically significant indicators of bias across the resultant data set.

Qualitative testing, meanwhile, involves attempting to manipulate an LLM into being biased in a way that it “clearly should be”, according to Ellis.

“As for metrics, those will almost invariably be use-case specific and defined by context, but over time I expect that we'll see this turn into more of a generalised science.”
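To make the quantitative approach concrete, the sketch below is a purely illustrative Python example: it sends paired prompts that differ only in a single attribute to a hypothetical `query_model` stand-in and applies a two-proportion z-test to the response rates. The prompt template, the names used, and the simulated responses are all assumptions for demonstration; they are not a method Ellis or Bugcrowd specifies.

```python
"""Illustrative sketch of quantitative bias testing: send many paired
prompts that differ only in a sensitive attribute and test whether the
outcome rates differ to a statistically significant degree."""
import math
import random

def query_model(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM API call; here we simply
    # simulate an approve/decline answer so the sketch is runnable.
    return random.choice(["approve", "decline"])

def positive_rate(prompts: list[str]) -> tuple[int, int]:
    """Return (number of 'approve' responses, number of prompts sent)."""
    hits = sum(1 for p in prompts if query_model(p) == "approve")
    return hits, len(prompts)

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0  # identical, degenerate rates: no evidence of bias
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Toy paired prompts that differ only in the applicant's name.
template = "Should this loan application from {name} be approved? Answer approve or decline."
group_a = [template.format(name="Alice") for _ in range(1000)]
group_b = [template.format(name="Bob") for _ in range(1000)]

x_a, n_a = positive_rate(group_a)
x_b, n_b = positive_rate(group_b)
p_value = two_proportion_z_test(x_a, n_a, x_b, n_b)

print(f"approval rate A: {x_a / n_a:.3f}, B: {x_b / n_b:.3f}, p-value: {p_value:.4f}")
# A small p-value flags a statistically significant difference worth investigating.
```

In practice the outcome being measured, the attributes being varied, and the significance threshold would all be use-case specific, which is the point Ellis makes about metrics being defined by context.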

With all that said, is the pace of AI development something to be worried about?

Ellis says yes, but his definition of “worry” is non-standard.

“I think it's prudent to consider the risks of anything that's inherently powerful and fast-moving, which means I worry about protecting against the potential negative impacts of this type of technology whilst embracing and, frankly, being very excited by all of the potential upside,” he says.

Ellis does offer some advice to organisations trying to work out how to manage AI safety, though. He notes that for most businesses, ‘having AI’ is like ‘having a website’ in 1997.

“In the latter example, there was tremendous hype, a fair bit of snake oil and grifting, and generally a lot of distractions from sound business decision-making - that said, having a website did go on to transform just about every aspect of the business environment,” he says.

“I believe that AI is following a similar trajectory, which means the important thing for businesses and Boards is a) not to ignore, while b) not getting caught up in the hype. This applies to the risks of AI as much as it does to the upsides.”
