ChatGPT developer and leading AI firms agree to voluntary AI safety rules

Seven leading AI firms, including ChatGPT developer OpenAI, Amazon, Google, Meta, and Microsoft, have agreed to a set of voluntary rules designed to reinforce the safety, security, and trustworthiness of AI technology.

While the process has been coordinated by President Biden and the White House, OpenAI described the move as an important step in advancing meaningful AI governance "around the world".

The rules come after EU tech chief Margrethe Vestager said in June that the US and EU should press the AI industry to adopt a voluntary code of conduct "within months" to provide safeguards as new laws are developed.

Other companies that have agreed to the commitments include San Francisco-based AI safety and research company Anthropic and personal AI chatbot start-up Inflection.

Under the voluntary guidelines, the companies are committing to internal and external security testing of their AI systems before release.

They have also agreed to share information across the industry and with governments, civil society, and academia on managing AI risks, as well as to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

Additionally, the seven firms have pledged to facilitate third-party discovery and reporting of vulnerabilities in their AI systems, as well as to develop "robust technical mechanisms" to make sure users know when content is AI-generated, such as a watermarking system.
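
To illustrate the general idea, the sketch below shows one simple provenance-marking approach: the generator attaches a keyed tag to its output that downstream tools can verify. Everything here, including the key handling and the tag_content and verify_content helpers, is invented for illustration and does not reflect any scheme the signatory firms have published.

```python
import hashlib
import hmac

# Illustrative only: a keyed provenance tag for AI-generated text.
# The key, tag format, and helper names are hypothetical; none of the
# signatory firms have published this particular scheme.
SECRET_KEY = b"provider-held-signing-key"

def tag_content(text: str) -> str:
    """Append an HMAC-based tag marking the text as AI-generated."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[ai-generated:{tag}]"

def verify_content(tagged: str) -> bool:
    """Return True if the tag matches, i.e. the text is unmodified AI output."""
    text, sep, tag_line = tagged.rpartition("\n[ai-generated:")
    if not sep or not tag_line.endswith("]"):
        return False
    claimed = tag_line[:-1]
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

sample = tag_content("Example model output.")
print(verify_content(sample))                               # True
print(verify_content(sample.replace("Example", "Edited")))  # False
```

Watermarking approaches discussed in the research literature go further, for example by embedding a statistical signal directly in the model's token choices so the mark survives copy-and-paste without any visible metadata.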

Further to this, the companies will publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use. They have also said they will prioritise research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy.

Finally, the firms have promised to develop and deploy advanced AI systems to "help address society’s greatest challenges".

“Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion,” said Anna Makanju, vice president of global affairs at OpenAI. “This announcement is part of our ongoing collaboration with governments, civil society organisations and others around the world to advance AI governance.”

The new rules come amid a slew of concerns about the speed at which generative AI technology is being developed.

Earlier this year, billionaire Elon Musk advocated for a pause of at least six months on the training of "powerful AI systems" in a bid to curtail the potential risks to society.

A letter issued by the Future of Life Institute, which garnered over 1,000 signatures, including those of Apple co-founder Steve Wozniak, philosopher Yuval Noah Harari, and Emma Bluemke of the Centre for the Governance of AI, who holds a PhD in engineering from the University of Oxford, warned against the risk of economic and political disruption by “human-competitive” AI systems.

But an open letter published this week by BCS, The Chartered Institute for IT, and signed by more than 1,300 people, said that government and industry should recognise AI as a "force for good" rather than an "existential threat to humanity".
