South Korean Intelligence Agency Alleges Excessive Data Collection by DeepSeek

South Korea's National Intelligence Service (NIS) has recently flagged the Chinese AI application DeepSeek for excessive data collection, raising significant privacy and security concerns. The NIS alleges that DeepSeek gathers an extensive amount of personal data from its users, including chat logs and keystroke patterns, which are then stored on servers in China. This data, according to the NIS, is accessible to advertisers without providing users an opt-out option and could potentially be accessed by the Chinese government under local laws.

The NIS's concerns extend beyond the volume of data collected. The agency also points to discrepancies in the app's responses to sensitive questions depending on the language used. For example, when asked about the origin of kimchi in Korean, DeepSeek reportedly identifies the dish as Korean, but when asked in Chinese, it attributes the dish's origin to China. The agency further claims that DeepSeek censors political topics, such as the 1989 Tiananmen Square crackdown, by suggesting users change the subject.

These allegations have prompted several South Korean government ministries and municipal offices to block access to DeepSeek, citing security concerns. The NIS has issued an official notice urging government agencies to take security precautions when using the AI application, and South Korea has suspended new downloads of the app. Australia and Taiwan have taken similar steps, restricting or warning against its use.

The concerns raised by the South Korean intelligence agency highlight a broader debate surrounding the data practices of AI applications, particularly those originating from countries with different legal and political systems. The NIS emphasizes that, unlike other generative AI services, DeepSeek collects keyboard input patterns that can identify individuals and communicates with servers run by Chinese companies. This level of data collection, the agency argues, is unnecessary for a simple chatbot and raises the risk of misuse.

In response to these concerns, China's foreign ministry has stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. However, critics argue that Chinese tech firms operate under the shadow of state influence and are obligated to hand over data to the government upon request. This raises the possibility of data being weaponized, used to build profiles on foreign officials, business leaders, journalists, and dissidents, or to manipulate public opinion.

DeepSeek has emerged as a significant player in the AI landscape, challenging established US tech giants with its low-cost, high-performance language models. Founded in 2023, the company has rapidly gained international attention. DeepSeek's AI models have been praised for their efficiency and ability to compete with models like OpenAI's GPT-4, while using fewer resources. The company's focus on research and open-source models has made AI technology more accessible to developers and businesses. DeepSeek offers various services for its models, including a web interface, mobile application, and API access.
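For developers, the API access mentioned above typically resembles other hosted chat-model services. As a minimal sketch only (the article does not describe DeepSeek's API, so the endpoint, model name, environment variable, and OpenAI-style request format below are assumptions for illustration), a request might look like this:

```python
# Minimal sketch of calling a hosted chat-completion API such as DeepSeek's.
# The base URL, model name, and payload shape are assumptions for illustration;
# consult the provider's documentation for the actual interface.
import os
import requests

API_URL = "https://api.deepseek.com/chat/completions"  # assumed endpoint
API_KEY = os.environ["DEEPSEEK_API_KEY"]  # hypothetical environment variable

payload = {
    "model": "deepseek-chat",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize today's AI news in one sentence."}
    ],
}

# Send the request and print the model's reply.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```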

However, this rapid rise has also brought increased scrutiny. In Europe, Italy's data protection authority has questioned DeepSeek's data practices, and regulators in Belgium, France, and Ireland are also examining the company. These investigations focus on GDPR compliance, data handling procedures, and the potential transfer of user data to servers in China.

The DeepSeek situation underscores the complex challenges of balancing technological innovation with data privacy and national security. As AI continues to evolve and become more integrated into daily life, governments and organizations must carefully consider the risks and benefits of using these technologies, particularly when dealing with companies operating under different legal frameworks. Transparency, ethical data practices, and robust security measures are essential to ensure that AI is used responsibly and does not compromise individual privacy or national security.


Aditi Sharma is a seasoned tech news writer with a keen interest in the social impact of technology. She is known for her ability to connect technology with the human experience and provide readers with valuable insights into the social implications of the digital age.
