The agreement says both countries will work together on developing "robust" methods for evaluating the safety of AI tools and the systems that underpin them.
It is the first bilateral agreement of its kind. Michelle Donelan, the UK’s tech minister, labelled AI "the defining technology challenge of our generation" and added that the agreement builds upon commitments made at the AI Safety Summit held at Bletchley Park in November 2023.
Capacity rounds up industry reaction to the agreement.
Eleanor Watson, IEEE member, AI ethics engineer and AI Faculty at Singularity University (and one of the first signatories of the Future of Life Institute’s Open Letter on AI), said: “Hopefully, this will provide a chance to build upon the foundations already laid.
Watson believes that as ethical considerations surrounding AI become more prominent, it is important to take stock of where the recent developments have taken us and to meaningfully choose where we want to go from here.
“The responsible future of AI requires vision, foresight and courageous leadership that upholds ethical integrity in the face of more expedient options.
“Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies impact more sectors of society. That’s because both regulators and the public will demand the ability to contest algorithmic decision-making. While these subfields offer exciting avenues for technical innovation, they also address growing societal and ethical concerns surrounding machine learning.”
Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre, said that AI has evolved significantly in recent years, with applications in almost every business sector, noting that the market is expected to see a 37.3% annual growth rate from 2023 to 2030.
“However, there are some barriers preventing organisations and individuals from adopting AI, such as a lack of skilled individuals, complexity of AI systems, lack of governance and fear of job replacement.
“AI is growing faster than ever before – and is already being tested and employed in sectors including education, healthcare, transportation and data security.
“As such, it’s time that the Government, tech leaders and academia work together to establish standards for the safe, responsible development of AI-based systems. This way, AI can be used to its full potential for the collective benefit of humanity."
Kevin Cochrane, chief marketing officer at Vultr, believes the signing is a welcome development.
"Back in the dawn of digital advertising, the world moved too slowly to ensure the safe and responsible use of personal data," he says.
According to Cochrane, this agreement is a proactive effort to avoid making the same mistake with AI while allowing the head and body of AI to move in unison. He adds that the UK is home to some of the world’s foremost AI specialists, while the US is home to the companies themselves that operate at the bleeding edge of innovation.
"It’s much more than a simple marriage of convenience."
“This agreement effectively gives CISOs a framework to operate from that ensures the safe, responsible and secure use of personal data, laying the groundwork to protect individual rights while creating guidelines that foster innovation safely.
“The dollar amounts dedicated to these institutions are a bit of a red herring. What this boils down to is a quality versus quantity debate: it is far more important to have a governing body composed of members with the correct expertise than to reach an arbitrary number of experts.
“On the question of whether this is the right way forward or whether stronger legislation is more prudent, ultimately AI as a technology moves too quickly to legislate effectively, at least in its current form. This agreement allows for the creation of guidelines which can be updated as developments emerge, rather than attempting to labour through legislative bureaucracy with every development.”