Four of artificial intelligence’s most influential players – OpenAI, Microsoft, Google DeepMind and Anthropic – have created a new industry body to oversee “safe and responsible” development of AI tech.
The group, dubbed the Frontier Model Forum, said it aims to advance AI safety research to promote the responsible development of frontier models and minimise potential risks; identify safety best practices for frontier models; share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and support efforts to leverage AI to address society’s biggest challenges.
Membership of the body is open to organisations that develop and deploy frontier models, demonstrate a strong commitment to frontier model safety – including through technical and institutional approaches – and are willing to contribute to advancing the Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative.
Commenting on the body’s creation, Brad Smith, vice chair and president of Microsoft, said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
The news comes at a time of heightened scrutiny for companies in the AI space, and after big tech companies – including the Forum’s four founding members – met with US President Joe Biden and agreed to new AI safeguards. These included a commitment to watermarking AI-generated content, which will make it easier for people to spot misleading material such as deepfakes.
The European Union, meanwhile, is introducing an AI Act, which represents the first comprehensive legal framework for artificial intelligence technologies.