OpenAI's secret AI designs stolen in 2023 hack, new report reveals

A hacker infiltrated OpenAI's internal messaging systems in early 2023, stealing details about the design of the company's artificial intelligence technologies, according to a report by The New York Times.

The breach, which was not publicly disclosed at the time, has raised concerns about the security of cutting-edge AI technology and the potential for foreign adversaries to gain access to sensitive information.

The incident, reportedly disclosed to employees at an all-hands meeting in April 2023, involved a hacker gaining access to an online forum where OpenAI staff discussed the company's latest technologies. While the intruder did not breach the systems where OpenAI houses and builds its AI, they were able to glean details about the design of the company's AI technologies from those discussions.

OpenAI executives decided against informing law enforcement agencies or making the breach public, reasoning that the hacker appeared to be a private individual with no known ties to foreign governments and that no customer or partner information had been compromised.

However, the incident reportedly sparked internal debates about the company's security measures and the potential risks posed by foreign adversaries such as China. Leopold Aschenbrenner, a former OpenAI technical programme manager who claims he was dismissed for leaking information, sent a memo to OpenAI's board arguing that the company was not doing enough to protect its secrets from theft by foreign actors and to prevent such security breaches.

The revelation comes amid growing concerns about the national security implications of advanced AI technologies. While current AI systems are primarily used for work and research, there are fears that they could eventually pose risks to national security if obtained by hostile foreign powers.

OpenAI has since taken steps to bolster its security measures. The company recently established a Safety and Security Committee to explore risk management for future technologies. Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command, has been appointed to both this committee and OpenAI's board of directors.

Matt Knight, OpenAI's head of security, stated, "We started investing in security years before ChatGPT. We're on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience."


