Tech leaders convened in Washington this week for a closed-door meeting described as an AI safety forum.
The meeting was a dialogue between lawmakers and representatives of big tech firms, with attendees including Meta chief executive Mark Zuckerberg; Sundar Pichai, chief executive of Google’s parent company Alphabet; and Elon Musk, owner of Twitter, now known as X.
According to Reuters, the discussion focused on how AI could be developed in a responsible manner, with Musk arguing that “it’s important to have a referee” when it comes to AI safety.
A regulator would "ensure that companies take actions that are safe and in the interest of the general public”, he added.
During a recent interview with US news channel MSNBC, Democratic senator Elizabeth Warren lambasted the behind-closed-doors nature of the meeting.
She said its privacy could serve as a breeding ground for biased legislation and favour the interests of the tech elite who aim to "continue to dominate and make money".
OpenAI, Microsoft, Google, Anthropic and DeepMind recently created a new industry body to oversee “safe and responsible” development of AI tech.
Dubbed the Frontier Model Forum, the group said it aims to advance AI safety research to promote responsible development of frontier models and minimise potential risks; identify safety best practices for frontier models; share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and support efforts to leverage AI to address society’s biggest challenges.