Former employees of OpenAI are urging the attorneys general of California and Delaware to intervene as the organization seeks to convert from a nonprofit into a for-profit enterprise. Their chief concern is what the shift could mean if OpenAI succeeds in building artificial intelligence that surpasses human capabilities while operating without accountability to its original public mission.
Page Hedley, a former policy adviser at OpenAI, expressed his worries about who will ultimately control and own this powerful technology. He, along with nine other ex-employees and backed by notable advocates including three Nobel Prize winners, sent a letter to California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings. They are asking these officials to protect OpenAI’s charitable mission and halt the impending structural changes.
OpenAI has responded that any changes are aimed at ensuring its AI technology serves the public good. The company plans to maintain a nonprofit division while also creating a public benefit corporation, a structure used by other AI companies in the industry, and says the success of its for-profit arm will continue to support its nonprofit goals.
The letter is the second appeal to state officials this month; the earlier one came from a group of labor leaders and nonprofits focused on safeguarding OpenAI’s considerable charitable assets. Attorney General Jennings has indicated her willingness to review any significant transactions to protect public interests, while Bonta’s office has sought further information on the matter.
OpenAI was founded as a nonprofit by its current CEO Sam Altman and Elon Musk, among others, with a mission to responsibly advance artificial general intelligence (AGI) for the benefit of humanity. Even with OpenAI now valued at roughly $300 billion and its ChatGPT product attracting around 400 million users weekly, challenges to its governance structure persist, including a lawsuit from Musk, who claims the organization has strayed from its founding principles.
The letter’s signatories include respected economists and AI pioneers. Some of them have shown support for Musk’s legal action, while others are cautious, given his own competing AI venture. They emphasize the importance of OpenAI adhering to its mission rather than focusing solely on enriching investors.
Conflicts regarding OpenAI’s mission have been ongoing for years, leading to notable departures within the company. Hedley, who worked there in its early days, recalled that safety considerations seemed to be diminishing, especially as the AI landscape became more competitive. He expressed concern that the new corporate structure might intensify pressures to prioritize rapid development over safety.
Former employees fear that without nonprofit oversight, critical safeguards could be lost, including a clause requiring OpenAI to assist any other organization that comes close to creating more capable AI. They warn that the company could develop technologies posing significant risks to society without appropriate accountability.
In the eyes of these former employees, retaining control under a nonprofit model is essential for ensuring that any advances in AI serve the best interests of humanity rather than simply generating profits.


