Microsoft President Brad Smith has confirmed that the company bars its employees from using the Chinese AI model DeepSeek, citing serious data security concerns and the potential for the spread of Chinese propaganda. The decision highlights growing tensions in the global artificial intelligence landscape and marks a significant step in how major tech companies approach AI tools developed by foreign entities, particularly those with ties to China.
Smith announced the ban during a Senate hearing, explaining that Microsoft does not allow the DeepSeek application on its internal systems or in its app store. He emphasized the risk that sensitive data could end up in the hands of a foreign government, a risk Microsoft is unwilling to take, and specifically cited concerns about data flowing back to China. DeepSeek's privacy policy states that user data is stored on servers in China, where the government can request that technology companies hand over data.
The concerns extend beyond data privacy to the possibility that DeepSeek's responses are influenced by Chinese propaganda. The worry is that AI systems like DeepSeek may reflect or amplify state-aligned narratives, especially when they are developed in countries with strong governmental oversight of the technology sector. DeepSeek has been observed to avoid or suppress responses that are critical of China.
Microsoft's decision aligns with a broader trend of caution surrounding AI tools with international connections. Several organizations and government bodies have already banned DeepSeek, including NASA, the U.S. Navy, and the Pentagon, typically citing similar concerns about data privacy, censorship, and potential security vulnerabilities. New York State, for example, banned DeepSeek from all government devices, citing serious concerns about privacy and censorship.
Despite banning employee use of the DeepSeek app, Microsoft has not cut ties with the company entirely. It offers the DeepSeek R1 model through its Azure cloud service, where the model runs on Microsoft's own infrastructure rather than on DeepSeek's servers in China. The company says the model went through rigorous red teaming and safety evaluations, and was modified to remove harmful side effects, before being made available.
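For teams that do use the Azure-hosted version, access typically goes through a standard Azure AI inference endpoint rather than DeepSeek's own service. The sketch below is an illustration only: it assumes a deployment named "DeepSeek-R1" behind an Azure AI Foundry endpoint and uses the azure-ai-inference Python SDK; the endpoint URL and key are placeholders, not values from the article.

```python
# Minimal sketch: calling an assumed DeepSeek R1 deployment hosted on Azure
# via the azure-ai-inference SDK. Endpoint, key, and the "DeepSeek-R1"
# deployment name are placeholders/assumptions.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Requests go to Microsoft's Azure infrastructure, not to DeepSeek's servers.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="DeepSeek-R1",  # assumed deployment/model name
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize the key risks of running third-party AI models."),
    ],
)

print(response.choices[0].message.content)
```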
Microsoft's move underscores the importance of securing AI systems, a task the company acknowledges will never be complete. Its AI Red Team, a specialized group dedicated to stress-testing AI models for security vulnerabilities, biases, and adversarial threats, plays a central role in keeping Microsoft's AI systems safe and used ethically. The company also adheres to its Responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Microsoft is also addressing AI security through Security Copilot, a generative AI-powered assistant for cybersecurity professionals that combines OpenAI's GPT-4 models with Microsoft's security expertise, global threat intelligence, and portfolio of security products. The company has also released cybersecurity-focused AI agents that autonomously assist security teams with tasks such as phishing and alert triage, vulnerability monitoring, and threat intelligence curation. These tools are built in line with Microsoft's responsible AI principles and within the guardrails of its Secure Future Initiative.