A coalition of former OpenAI employees is urging the attorneys general of California and Delaware to intervene and halt the ChatGPT maker's transition to for-profit status. Citing concerns about the company's commitment to AI safety and its public mission, they argue that prioritizing profits could undermine essential safeguards against harm from its technology.
Ten former OpenAI staff members, backed by three Nobel Prize winners and other AI experts, have sent a letter to California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings asking them to block the planned restructuring. The group fears what could happen if OpenAI succeeds in building AI that surpasses human capabilities while no longer being accountable to its original mission of preventing its technology from causing serious harm.
Page Hedley, a former policy and ethics advisor at OpenAI, stated his concern about who will ultimately own and control the technology once it is created. He observed that OpenAI, driven by the success of ChatGPT, has been increasingly cutting corners on safety testing and rushing new products to market to stay ahead of competitors. Hedley believes that under the new structure, the incentives to prioritize speed over safety will increase, potentially leaving no one to effectively challenge these decisions.
Anish Tondwalkar, a former technical team member and software engineer at OpenAI, voiced concerns about the elimination of necessary safeguards, such as the "stop-and-assist clause." He warned that if OpenAI becomes a for-profit entity, these safeguards and the company's duty to the public could disappear overnight. Nisan Stiennon, an AI engineer who worked at OpenAI from 2018 to 2020, expressed the risk more bluntly, stating that OpenAI might one day develop technology that could get us all killed, emphasizing the importance of nonprofit control with a duty to humanity.
OpenAI, originally established as a nonprofit research lab in 2015 with the mission of developing artificial general intelligence (AGI) safely and for the benefit of humanity, later shifted to a hybrid structure to secure funding for its ambitious projects. In 2019, CEO Sam Altman announced the creation of a for-profit arm to attract investment, though that arm was to remain fully controlled by the nonprofit parent and under no obligation to prioritize profits over the mission. In recent years, however, OpenAI has faced internal tension between pursuing profits and adhering to its founding purpose.
The proposed restructuring would convert the for-profit arm into a public benefit corporation (PBC) while retaining a nonprofit arm. OpenAI argues that the change would enable it to deliver broader public benefits, but critics remain skeptical. They worry that the shift to a for-profit model could compromise OpenAI's mission and create conflicts of interest, leading to less transparency and a tendency to prioritize profitable projects over those aligned with the public good.
The attorneys general are being asked to investigate whether alternative models were considered during the restructuring process and whether any board members stand to benefit personally from the changes. The outcome could significantly shape the future of AI development, influencing how other AI firms approach funding, control, and public accountability, and setting a precedent for how nonprofits operate in the rapidly evolving AI landscape.