OpenAI is implementing a mandatory ID verification process for developers seeking access to its most advanced AI models through its API. The policy shift aims to enhance platform security, curb misuse, and address growing concerns about intellectual property theft and the exploitation of AI tools for harmful activities. Under the new system, called "Verified Organisation" status, developers must furnish a government-issued ID from one of the countries supported by OpenAI's API. Each ID can verify only one organization every 90 days, and OpenAI specifies that not all organizations will qualify for verification.
The primary goal of this initiative is to ensure the responsible use of AI tools and prevent malicious actors from exploiting the platform. OpenAI has stated that a small minority of developers have intentionally misused its APIs in violation of its usage policies, and that the ID verification process is intended to mitigate unsafe AI usage while keeping advanced models available to the broader developer community. The move also responds to rising global tensions over the misuse of AI. OpenAI had previously blocked ChatGPT accounts with potential ties to China and North Korea, alleging they leveraged the chatbot for covert surveillance and disinformation campaigns, and the company withdrew its AI services from China in 2024.
Concerns about intellectual property theft have also played a significant role in OpenAI's decision. OpenAI recently accused Chinese AI startup DeepSeek of violating its terms of use, investigating in late 2024 whether the startup had exfiltrated vast amounts of data through its API to train DeepSeek's own models. Microsoft, OpenAI's partner, reportedly blocked accounts believed to be associated with DeepSeek last year after suspecting the same terms-of-service violation.
The introduction of ID verification follows repeated attempts to misuse OpenAI's technologies. Besides the DeepSeek incident, groups allegedly linked to North Korea have tried to exploit OpenAI's models for unauthorized purposes, and in February 2025 OpenAI banned several Chinese accounts for using ChatGPT in activities related to social media monitoring. These incidents underscore the company's concern that its platform could be turned to harmful ends.
While the move towards mandatory ID verification has been welcomed by those advocating for increased security, it also raises questions about accessibility and user privacy. The privacy implications remain uncertain because OpenAI has not explained key details of the verification process, including how long uploaded IDs will be stored on its servers. Balancing the need for safety with the principles of open access and innovation remains a complex challenge for the AI industry.
OpenAI has partnered with identity verification firm Persona to facilitate the ID verification process, enabling automated screenings across 225 countries and territories with minimal latency. The company has not specified a timeline for the full rollout of the ID verification requirement but indicates that it will be a prerequisite for accessing its most advanced models. As OpenAI continues to develop and release more powerful AI models, the company emphasizes its commitment to responsible deployment and user accountability. The implementation of identity verification is a step towards ensuring that its technologies are used ethically and in accordance with established guidelines.