Microsoft President Restricts Employees from Using Chinese AI Model DeepSeek Over Security Concerns

Microsoft President Brad Smith has restricted employees from using the Chinese AI model DeepSeek, citing serious data security concerns and the potential for the spread of Chinese propaganda. This decision highlights the growing tensions in the global artificial intelligence landscape and marks a significant step in how major tech companies are approaching AI tools developed by foreign entities, particularly those with ties to China.

Smith announced the ban during a Senate hearing, explaining that Microsoft does not allow the DeepSeek application on its internal systems or in its app store. He emphasized the risk that sensitive data could end up in the hands of a foreign government, a risk Microsoft is unwilling to take, and specifically cited concerns about data flowing back to China. DeepSeek's privacy policy states that user data is stored on servers within China, where the government can compel tech companies to share data.

The concerns extend beyond data privacy to include the potential for DeepSeek's responses to be influenced by Chinese propaganda. There are worries that AI systems like DeepSeek might reflect or amplify state-aligned narratives, especially when developed in countries with strong governmental oversight of their technology sectors. DeepSeek has been observed to avoid or conceal responses that are critical of China.

Microsoft's decision aligns with a broader trend of increasing caution surrounding AI tools with international connections. Several organizations and governments have already banned DeepSeek, including NASA, the U.S. Navy, the Pentagon, and various government agencies. These bans often cite similar concerns about data privacy, censorship, and the potential for security vulnerabilities. For example, New York State banned DeepSeek from all government devices, citing serious worries about privacy and censorship.

Despite the ban on employee use of the DeepSeek app, Microsoft has not entirely cut ties with the company. Microsoft offers the DeepSeek R1 model on its Azure cloud service, but the offering is tightly controlled: the company says the model was carefully reviewed to mitigate security risks before being made available.

Microsoft's move underscores the importance of securing AI systems, a task that the company acknowledges will never be complete. The company's AI Red Team, a specialized group dedicated to stress-testing AI models for security vulnerabilities, biases, and adversarial threats, plays a critical role in ensuring the safety and ethical use of Microsoft AI. Microsoft adheres to its Responsible AI Principles, which include fairness, accountability, inclusivity, transparency, and privacy.

Microsoft is also addressing AI security concerns through its Security Copilot, a generative AI-powered assistant for cybersecurity professionals. Security Copilot combines advanced GPT-4 models from OpenAI with Microsoft's expertise, global threat intelligence, and comprehensive security products. Microsoft has also released cybersecurity-focused AI agents that autonomously assist security teams with tasks such as phishing and security alert triage, vulnerability monitoring, and threat intelligence curation. These tools are built following responsible AI principles and with the Microsoft Secure Future Initiative framework's guardrails.


Writer - Rohan Sharma
Rohan Sharma is a seasoned tech news writer with a keen knack for identifying and analyzing emerging technologies. He's highly sought-after in tech journalism due to his unique ability to distill complex technical information into concise and engaging narratives. Rohan consistently makes intricate topics accessible, providing readers with clear, insightful perspectives on the cutting edge of innovation.

© 2025 TechScoop360