South Korean Intelligence Agency Alleges Excessive Data Collection by DeepSeek

South Korea's National Intelligence Service (NIS) has recently flagged the Chinese AI application DeepSeek for excessive data collection, raising significant privacy and security concerns. The NIS alleges that DeepSeek gathers an extensive amount of personal data from its users, including chat logs and keystroke patterns, and stores it on servers in China. According to the NIS, this data is made available to advertisers without giving users an opt-out and could potentially be accessed by the Chinese government under local laws.

The NIS's concerns extend beyond the volume of data collected. They also point to discrepancies in the app's responses to sensitive questions depending on the language used. For example, when asked about the origin of kimchi in Korean, DeepSeek reportedly identifies it as Korean, but when asked in Chinese, it attributes its origin to China. Furthermore, the agency claims that DeepSeek censors political topics, such as the 1989 Tiananmen Square crackdown, by suggesting users change the subject.

These allegations have prompted several South Korean government ministries and municipal offices to block access to DeepSeek over concerns about potential security breaches. The NIS has issued an official notice urging government agencies to take security precautions when using the AI application, and South Korea has suspended new downloads of the app. Australia and Taiwan have taken similar steps, restricting or warning against its use.

The concerns raised by the South Korean intelligence agency highlight a broader debate surrounding the data practices of AI applications, particularly those originating from countries with different legal and political systems. The NIS emphasizes that, unlike other generative AI services, DeepSeek collects keyboard input patterns that can identify individuals and transmits this data to servers operated by Chinese companies. The agency deems this level of data collection unnecessary for a chatbot and warns that it raises the risk of misuse.

In response to these concerns, China's foreign ministry has stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. However, critics argue that Chinese tech firms operate under the shadow of state influence and are obligated to hand over data to the government upon request. This raises the possibility that such data could be weaponized: used to build profiles on foreign officials, business leaders, journalists, and dissidents, or to manipulate public opinion.

DeepSeek has emerged as a significant player in the AI landscape, challenging established US tech giants with its low-cost, high-performance language models. Founded in 2023, the company has rapidly gained international attention. DeepSeek's AI models have been praised for their efficiency and ability to compete with models like OpenAI's GPT-4, while using fewer resources. The company's focus on research and open-source models has made AI technology more accessible to developers and businesses. DeepSeek offers various services for its models, including a web interface, mobile application, and API access.

However, this rapid rise has also brought increased scrutiny. In Europe, Italy's data protection authority has questioned DeepSeek's data practices, and regulators in Belgium, France, and Ireland are also examining the company. These investigations focus on GDPR compliance, data handling procedures, and the potential transfer of user data to servers in China.

The DeepSeek situation underscores the complex challenges of balancing technological innovation with data privacy and national security. As AI continues to evolve and become more integrated into daily life, governments and organizations must carefully consider the risks and benefits of using these technologies, particularly when dealing with companies operating under different legal frameworks. Transparency, ethical data practices, and robust security measures are essential to ensure that AI is used responsibly and does not compromise individual privacy or national security.


Writer - Aditi Sharma
Aditi Sharma is a seasoned tech news writer with a keen interest in the social impact of technology. She's renowned for her unique ability to bridge the gap between technological advancements and the human experience. Aditi provides readers with invaluable insights into the profound social implications of the digital age, consistently highlighting how innovation shapes our lives and communities.