BLETCHLEY, England – Under a new agreement, “like-minded governments” will be able to test eight leading technology companies’ artificial intelligence models before they are launched, British Prime Minister Rishi Sunak said on Thursday.
At the conclusion of the two-day AI Safety Summit at Bletchley Park on Thursday, Sunak announced the agreement, signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, South Korea, Singapore, the United States and the United Kingdom, to test leading companies’ artificial intelligence models.
“So far, the only people testing the safety of new AI models are the companies developing them. That has to change,” Sunak said to a room full of journalists.
“Today, governments and like-minded AI companies reached a historic agreement. Together we will work to test the safety of new AI models before they are launched… This is made possible by the decision that Vice President Kamala Harris and I took, on behalf of the US and British governments, to create world-leading AI safety institutes, giving the public sector the capability to test the most advanced frontier models.”
Sunak said the companies – Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI – had agreed to “deepen” the access already given to the Frontier AI Taskforce, the flagship of the new institute. Access is currently granted on a voluntary basis, although under its executive order the US government has established binding requirements for the sharing of certain safety information.
Sunak also announced further details of the agreement reached with the participating countries a day earlier to establish an international advisory panel on frontier AI risks.
Modeled on the Intergovernmental Panel on Climate Change (IPCC), the panel will be made up of representatives from the 28 countries participating in the summit. The British government said it would provide secretariat support to the panel.
The panel will also support the academic Yoshua Bengio in producing a “State of the Science” report on the risks and capabilities of frontier AI. The report will not make policy recommendations but is designed to inform international and national policy-making. It will be published before the next safety summit, to be held in South Korea in the first half of next year.