The AI Security Institute, a research organisation within the UK Government’s Department for Science, Innovation and Technology (DSIT), has formed an international coalition to ensure the safe development of AI technology.
The project, which will tackle issues around human control, security and ensuring AI systems behave predictably as designed, comes as the government warns that today’s methods for controlling AI are likely to be insufficient for the more capable systems expected as the technology develops.
Science, innovation and technology secretary Peter Kyle said that advanced AI systems are already exceeding human performance in some areas, and the responsible development of AI needs a co-ordinated global approach.
“AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests,” he added. “This is at the heart of the work the Institute has been leading since day one – safeguarding our national security and ensuring the British public are protected from the most serious risks AI could pose as the technology becomes more and more advanced.”
The project will see the institute work with a number of partners including the Canadian AI Safety Institute, Canadian Institute for Advanced Research (CIFAR), Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).
The government said that industry and academia regard AI alignment as a “crucial” area of AI research, describing it as one of the “most urgent” technical challenges the world currently faces.
The project will fund cutting-edge research into AI alignment, including ways to ensure AI systems continue to follow their intended goals as the technology becomes more capable, and techniques to keep AI systems transparent and responsive to human oversight.
The coalition will be guided by an advisory board including Yoshua Bengio, full professor at Université de Montréal and founder and scientific advisor of Mila – Quebec AI Institute; Zico Kolter, professor and head of the machine learning department at Carnegie Mellon University; and Sydney Levine, research scientist at Google DeepMind.
The project is backed by a fund of over £15 million from the government, which it said will position the UK as a world leader in AI.
Through the project, researchers across a variety of disciplines, from computer science to cognitive science, can apply for grant funding of up to £1 million, while AWS has allocated up to £5 million in cloud computing credits to enable technical experiments beyond typical academic reach.
In addition, private funders will provide investment to accelerate commercial alignment solutions.