A coalition of former OpenAI employees, backed by Nobel laureates and AI experts, is urging the Attorneys General of California and Delaware to block the ChatGPT maker's planned for-profit restructuring. They fear the shift would compromise the company's founding mission to develop AI for the benefit of humanity and could leave the technology in the hands of entities that prioritize profit over public safety.
Concerns Over Safety and Accountability
The former employees are concerned that OpenAI's ambition to create AI surpassing human capabilities, combined with a shift away from its nonprofit mission, could have severe consequences. Page Hedley, a former policy and ethics advisor at OpenAI, expressed concern about who will control the technology once it is created. The group alleges that OpenAI has been cutting corners on safety testing and rushing product releases to stay ahead of competitors, a trend they find increasingly worrying as the technology grows more powerful. Steven Adler, an OpenAI safety researcher who recently departed, cited "dangerous capability evaluations" as a reason for his exit, echoing broader concerns about the rapid pace of AI development and the absence of solutions for AI alignment.
Undermining Governance Safeguards
The open letter from the former employees argues that the restructuring into a Public Benefit Corporation (PBC) would dismantle the governance safeguards that OpenAI originally championed. They highlight that the proposed restructuring would transfer control away from the nonprofit entity, whose primary fiduciary duty is to humanity, to a for-profit board whose directors would be partly beholden to shareholder interests. The authors detail specific safeguards currently in place that would be undermined or eliminated. The group also believes that current and former employees are hampered in speaking about their concerns because of "broad confidentiality agreements". They say companies face almost no pressure to share information with governments or the public, effectively leaving world-changing technology in the hands of a few people.
OpenAI's Response and Counterarguments
OpenAI has responded to these concerns by stating that "any changes to our existing structure would be in service of ensuring the broader public can benefit from AI". The company says its for-profit arm will become a public benefit corporation, a structure also used by other AI labs such as Anthropic and Elon Musk's xAI, while the nonprofit is preserved. OpenAI argues that this structure will ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling them to achieve their mission. CEO Sam Altman has stated that the board decided to keep the for-profit arm under the control of its nonprofit parent organization. Critics remain skeptical, however, arguing that a for-profit structure would inevitably prioritize shareholder returns over the public good.
Calls for Investigation and Regulatory Action
The former OpenAI employees are asking the Attorneys General of California and Delaware to use their authority to protect OpenAI's charitable purpose and block its planned restructuring. They urge the AGs to investigate the proposed changes and ensure that governance structures prioritizing public benefit over private gain remain intact. California Attorney General Rob Bonta's office has previously sought more information from OpenAI regarding its business plans. Delaware Attorney General Kathy Jennings said her office would "review any such transaction to ensure that the public's interests are adequately protected". A coalition of California nonprofits, foundations, and labor groups has also raised concerns about OpenAI's plans, urging Attorney General Bonta to halt the restructuring.
Industry Context and Safety Concerns
OpenAI's internal conflicts over safety have been documented before. Critics contend that the company has prioritized product development and commercial success over a cautious approach to AI safety, and some reports indicate that safety researchers have left for that reason. Recently, OpenAI indefinitely delayed the release of its anticipated open-source AI model, citing unresolved safety concerns and the need for further risk assessments. CEO Sam Altman stated that the model will not be released until additional safety tests are carried out and high-risk areas are thoroughly reviewed. The decision reflects an ongoing industry debate over the balance between open access and responsible AI development.