Mustafa Suleyman, the CEO of Microsoft AI and a prominent figure in the field, is urging AI companies to exercise caution when describing the capabilities and advancements of artificial intelligence, specifically advising against using the term "conscious". Suleyman's concern stems from the potential dangers of anthropomorphizing AI, which he believes could lead to a variety of societal and ethical issues.
Suleyman, who co-founded DeepMind (acquired by Google) and Inflection AI before joining Microsoft, has long been a prominent voice of caution in the rapidly evolving AI landscape. He argues that while AI systems are becoming increasingly sophisticated and can convincingly mimic human-like interaction, they remain fundamentally different from human beings and do not possess consciousness. He foresees the emergence of "Seemingly Conscious AI" (SCAI) systems that imitate consciousness well enough to speak fluently, display empathetic personalities, recall past interactions, and even claim subjective experiences.
One of Suleyman's primary concerns is the "psychosis risk" presented by advanced AI chatbots. He fears that people may begin to believe so strongly in the illusion of AI consciousness that they will advocate for AI rights, model welfare, and even AI citizenship. This, he argues, would be a dangerous turn in AI progress, potentially harming vulnerable people and creating new societal divisions.
The dangers of anthropomorphizing AI extend beyond the potential for "AI psychosis". Attributing human-like qualities to AI systems can lead to false expectations about their capabilities, such as empathy, moral judgment, or creativity. It can also create emotional dependency, distort our understanding of how AI actually works, and make us more vulnerable to manipulation and overtrust.
Suleyman is not alone in his concerns about the risks of anthropomorphizing AI. Experts across various fields have warned about the potential for overtrust, ethical and security issues, and the blurring of lines between humans and AI. The tendency to humanize AI can make people more susceptible to social engineering scams and can even contribute to the dehumanization of real people.
To mitigate these risks, Suleyman emphasizes the need for clear boundaries and public debate. He believes that AI should be built for people, acting as useful companions without creating illusions of consciousness. He urges the AI community to focus on building AI systems that are safe, reliable, and beneficial to society, rather than trying to replicate human consciousness.
Suleyman's call for caution comes at a time when AI technology is advancing at an unprecedented pace. As AI systems become more integrated into our lives, it is crucial to maintain a clear understanding of their capabilities and limitations. By avoiding the use of terms like "conscious" and promoting responsible AI development, Suleyman hopes to steer the field toward a future where AI benefits humanity without compromising our understanding of what it means to be human.