Mustafa Suleyman urges AI firms to avoid the term 'conscious' when describing artificial intelligence capabilities and advancements.

Mustafa Suleyman, the CEO of Microsoft AI and a prominent figure in the field, is urging AI companies to exercise caution when describing the capabilities and advancements of artificial intelligence, specifically advising against using the term "conscious." Suleyman's concern stems from the potential dangers of anthropomorphizing AI, which he believes could lead to a range of societal and ethical problems.

Suleyman, who co-founded DeepMind (later acquired by Google) and Inflection AI before joining Microsoft, has been a long-time voice of caution in the rapidly evolving AI landscape. He argues that while AI systems are becoming increasingly sophisticated and can convincingly mimic human-like interaction, they remain fundamentally different from human beings and do not possess consciousness. He foresees the emergence of "Seemingly Conscious AI" (SCAI): systems that convincingly imitate consciousness by speaking fluently, displaying empathetic personalities, recalling past interactions, and even claiming subjective experiences.

One of Suleyman's primary concerns is the "psychosis risk" presented by advanced AI chatbots. He fears that people may begin to believe so strongly in the illusion of AI consciousness that they will advocate for AI rights, model welfare, and even AI citizenship. This, he argues, would be a dangerous turn in AI progress, potentially harming vulnerable people and creating new societal divisions.

The dangers of anthropomorphizing AI extend beyond this "psychosis risk." Attributing human-like qualities to AI systems can create false expectations about their capabilities, such as empathy, moral judgment, or creativity. It can also foster emotional dependency, distort our understanding of how AI actually works, and make us more vulnerable to manipulation and overtrust.

Suleyman is not alone in his concerns about the risks of anthropomorphizing AI. Experts across various fields have warned about the potential for overtrust, ethical and security issues, and the blurring of lines between humans and AI. The tendency to humanize AI can make people more susceptible to social engineering scams and can lead to the dehumanization of humans.

To mitigate these risks, Suleyman emphasizes the need for clear boundaries and public debate. He believes that AI should be built for people, acting as useful companions without creating illusions of consciousness. He urges the AI community to focus on building AI systems that are safe, reliable, and beneficial to society, rather than trying to replicate human consciousness.

Suleyman's call for caution comes at a time when AI technology is advancing at an unprecedented pace. As AI systems become more integrated into our lives, it is crucial to maintain a clear understanding of their capabilities and limitations. By avoiding the use of terms like "conscious" and promoting responsible AI development, Suleyman hopes to steer the field toward a future where AI benefits humanity without compromising our understanding of what it means to be human.


Writer - Deepika Patel



© 2025 TechScoop360