UK and US agree to collaborate on AI safety tests

The UK and US have agreed to jointly develop AI safety tests, following previous commitments to collaborate on the rapidly developing technology.

The two countries have signed a memorandum of understanding, solidifying an agreement to build a common approach to AI safety tests.

The tests, which will be developed by the AI Safety Institutes of the two nations, will inform policymakers on when and how to enact legislation to regulate AI.

The respective AI safety bodies will remain separate. However, the UK government has confirmed its intention to perform at least one joint testing exercise on a publicly accessible model.

The agreement will also see resources and expertise shared between the UK and the US.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” said Technology Secretary Michelle Donelan.

“Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”

The memorandum, signed by Donelan and US Commerce Secretary Gina Raimondo, follows the signing of the Bletchley Declaration in November. The declaration was signed by the UK and the US, along with 26 other states, at the AI Safety Summit as a joint commitment to collaborate on mitigating the risks of the technology.

“This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” Raimondo said.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

Earlier this year, the UK signed a set of agreements with Canada to collaborate on AI research and compute infrastructure development.