The Biden administration has confirmed plans to convene a global summit on artificial intelligence (AI) safety, as it seeks to advance international cooperation on the secure development of emerging technologies.
The meeting will take place on 20-21 November in San Francisco, with Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken hosting the event. This will be just after the US Presidential election on 5 November.
The news follows Raimondo’s announcement of the launch of the International Network of AI Safety Institutes during the AI Seoul Summit in May, with the initiative aiming to accelerate the advancement of the science of AI safety.
The summit will bring together officials and technical experts from the International Network of AI Safety Institutes, who will gather to discuss key focus areas and how to foster collaboration on global AI security challenges.
The summit is expected to lay the groundwork for deeper technical cooperation ahead of the AI Action Summit in Paris, which is scheduled for February 2025. The Departments will also invite experts from international civil society, academia, and industry to join portions of the event to help inform the work of the Network and ensure a robust view of the latest developments in the field of AI.
“AI is the defining technology of our generation. With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever. That includes close, thoughtful coordination with our allies and like-minded partners,” Raimondo said.
Members of the network include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States.
“We’re still trying to figure out exactly who else might come in terms of scientists,” Raimondo said of the attendee list. Countries at the forefront of AI development, including China, do not currently appear to be represented at the event.
The initiative comes amid growing concern about generative AI, whose capabilities have sparked fears of misuse, particularly in disinformation campaigns and cybersecurity threats, fuelling calls for further regulation.
Last week, the US Commerce Department introduced stricter reporting requirements for advanced AI developers and cloud computing providers, to ensure safety remains a priority as the technology advances.
The first-ever AI Safety Summit was held at Bletchley Park in the UK in November 2023, with attendees including UK Prime Minister Rishi Sunak, US Vice President Kamala Harris, and other high-profile figures, national delegations, and AI organisations.
The UK Summit resulted in the ‘Bletchley Declaration’, the first international declaration on AI, in which attendees agreed that AI posed a potentially catastrophic risk to humanity and proposed a series of steps to mitigate those risks.
The second AI Safety Summit took place in South Korea in May 2024, with 16 companies at the forefront of AI development agreeing to commit to a more secure technology roll-out.
Raimondo recently told the Associated Press that the steady rise of AI-generated forgeries, and the technological guardrails needed to ensure security, are among the most pressing topics for discussion.
“We’re going to think about how we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors,” Raimondo told the news service. “Because if we keep a lid on the risks, it’s incredible to think about what we could achieve.”
“I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism,” she said. “Every country in the world ought to be able to agree that those are bad things, and we ought to be able to work together to prevent them.”