OpenAI's secret AI designs stolen in 2023 hack, new report reveals

A hacker infiltrated OpenAI's internal messaging systems in early 2023, stealing details about the design of the company's artificial intelligence technologies, according to a report by The New York Times.

The breach, which was not publicly disclosed at the time, has raised concerns about the security of cutting-edge AI technology and the potential for foreign adversaries to gain access to sensitive information.

The incident, reportedly revealed to employees during an all-hands meeting in April 2023, involved a hacker accessing an online forum where OpenAI staff discussed the company's latest technologies. While the intruder did not penetrate the systems where OpenAI houses and builds its AI, they obtained design details from those employee discussions.

OpenAI executives, believing the hacker to be a private individual with no known ties to foreign governments, decided against informing law enforcement agencies or making the breach public. This decision was based on the fact that no customer or partner information had been compromised.

However, the incident reportedly sparked internal debates about the company's security measures and the potential risks posed by foreign adversaries such as China. Leopold Aschenbrenner, a former OpenAI technical programme manager, raised concerns about the company's ability to protect its secrets from theft by foreign actors. Aschenbrenner, who claims he was dismissed for leaking information, sent a memo to OpenAI's board arguing that the company was not doing enough to prevent such security breaches.

The revelation comes amid growing concerns about the national security implications of advanced AI technologies. While current AI systems are primarily used for work and research, there are fears that they could eventually pose risks to national security if obtained by hostile foreign powers.

OpenAI has since taken steps to bolster its security measures. The company recently established a Safety and Security Committee to explore risk management for future technologies. Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command, has been appointed to both this committee and OpenAI's board of directors.

Matt Knight, OpenAI's head of security, stated, "We started investing in security years before ChatGPT. We're on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience."


