Numerous current and former staff from prominent artificial intelligence companies, including OpenAI and Google DeepMind, have penned an open letter highlighting concerns about safety oversight within the industry.
The letter, signed by 13 individuals and first reported by The New York Times, advocates for a "right to warn about artificial intelligence" and urges increased protection for whistleblowers.
The signatories argue that AI companies possess substantial non-public information regarding the capabilities, limitations, and potential risks associated with their systems. However, they assert that these companies currently have minimal obligations to share such information with governments or civil society organisations. The letter states, "We do not think they can all be relied upon to share it voluntarily."
It calls for greater transparency and accountability measures, including provisions that would allow employees to anonymously report concerns to board members without fear of retribution.
OpenAI defended its practices, stating that it provides avenues for reporting issues and does not release new technology without appropriate safeguards. Google has yet to comment on the letter.
The letter acknowledges that concerns over the potential harms of AI have existed for decades, but the recent boom has intensified these fears. While AI companies have publicly committed to developing the technology safely, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create new ones.
The signatories argue that, without effective government oversight, current and former employees are among the few who can hold AI companies accountable to the public. However, they claim that broad confidentiality agreements often prevent them from voicing their concerns.
The open letter follows recent high-profile resignations at OpenAI, with a former safety researcher alleging that the company had abandoned a culture of safety in favour of "shiny products".