Microsoft President Restricts Employees from Using Chinese AI Model Deepseek Over Security Concerns.
  • 485 views
  • 2 min read

Microsoft President Brad Smith has restricted employees from using the Chinese AI model DeepSeek, citing serious data security concerns and the potential for the spread of Chinese propaganda. This decision highlights the growing tensions in the global artificial intelligence landscape and marks a significant step in how major tech companies are approaching AI tools developed by foreign entities, particularly those with ties to China.

Smith announced the ban during a Senate hearing, explaining that Microsoft is not allowing the DeepSeek application on its internal systems or its app store. He emphasized the risk of sensitive data potentially ending up in the hands of a foreign government, a risk Microsoft is unwilling to take. Smith specifically mentioned concerns about data going back to China. DeepSeek's privacy policy indicates that user data is stored on servers within China, where the government can request data sharing from tech companies.

The concerns extend beyond data privacy to include the potential for DeepSeek's responses to be influenced by Chinese propaganda. There are worries that AI systems like DeepSeek might reflect or amplify state-aligned narratives, especially when developed in countries with strong governmental oversight of their technology sectors. DeepSeek has been observed to avoid or conceal responses that are critical of China.

Microsoft's decision aligns with a broader trend of increasing caution surrounding AI tools with international connections. Several organizations and governments have already banned DeepSeek, including NASA, the U.S. Navy, the Pentagon, and various government agencies. These bans often cite similar concerns about data privacy, censorship, and the potential for security vulnerabilities. For example, New York State banned DeepSeek from all government devices, citing serious worries about privacy and censorship.

Despite the ban on employee use of the DeepSeek app, Microsoft has not entirely cut ties with the company. Microsoft offers the DeepSeek R1 model on its Azure cloud service, but this offering is carefully controlled to eliminate security concerns. The company says that all content was carefully reviewed to avoid any risks.

Microsoft's move underscores the importance of securing AI systems, a task that the company acknowledges will never be complete. The company's AI Red Team, a specialized group dedicated to stress-testing AI models for security vulnerabilities, biases, and adversarial threats, plays a critical role in ensuring the safety and ethical use of Microsoft AI. Microsoft adheres to its Responsible AI Principles, which include fairness, accountability, inclusivity, transparency, and privacy.

Microsoft is also addressing AI security concerns through its Security Copilot, a generative AI-powered assistant for cybersecurity professionals. Security Copilot combines advanced GPT-4 models from OpenAI with Microsoft's expertise, global threat intelligence, and comprehensive security products. Microsoft has also released cybersecurity-focused AI agents that autonomously assist security teams with tasks such as phishing and security alert triage, vulnerability monitoring, and threat intelligence curation. These tools are built following responsible AI principles and with the Microsoft Secure Future Initiative framework's guardrails.


Rohan Sharma is a seasoned tech news writer with a knack for identifying and analyzing emerging technologies. He possesses a unique ability to distill complex technical information into concise and engaging narratives, making him a highly sought-after contributor in the tech journalism landscape.

Latest Post


The relentless march of artificial intelligence (AI) continues to reshape industries, redefine possibilities, and spark both excitement and apprehension about the future. As of mid-2025, AI's influence is no longer confined to the realm of science fi...
  • 240 views
  • 2 min

Social media algorithms, the intricate codes that govern what users see on platforms like Facebook, Instagram, TikTok, and X (formerly Twitter), have become an increasingly pervasive force in shaping online experiences. While designed to enhance user...
  • 484 views
  • 3 min

The digital age has brought unprecedented convenience and connectivity, but it has also ushered in an era of eroding digital privacy. As we increasingly rely on technology for communication, commerce, and entertainment, our personal data is constantl...
  • 237 views
  • 3 min

The proliferation of Artificial Intelligence (AI) tools across various sectors is undeniable. From AI-powered agents automating mundane tasks to sophisticated algorithms driving scientific breakthroughs, the technology's rapid growth is transforming ...
  • 301 views
  • 2 min

About   •   Terms   •   Privacy
© 2025 techscoop360.com