Microsoft’s artificial intelligence chief, Mustafa Suleyman, has drawn a clear line in one of the tech industry’s most contentious debates: artificial intelligence, no matter how advanced, will never achieve true consciousness.
Speaking at the AfroTech Conference in Houston this week, Suleyman said the notion that AI could one day “feel” emotions or experience consciousness like a human is fundamentally misguided. He warned that researchers and developers should abandon efforts aimed at simulating sentient behavior.
“Only biological beings are capable of consciousness,” Suleyman said in an interview. “If you ask the wrong question, you end up with the wrong answer — and I think that’s totally the wrong question.”
As Microsoft’s top AI executive, Suleyman has become one of the most prominent voices urging restraint and ethical clarity amid a global race to develop more humanlike AI systems. His comments come as companies such as Meta and Elon Musk’s xAI push deeper into AI companionship tools — systems that mimic empathy, affection, and emotional awareness.
“AI Doesn’t Feel Sad”
Suleyman, who co-founded DeepMind before selling it to Google in 2014 and later launched Inflection AI before joining Microsoft in 2024, has long advocated for a human-centered approach to artificial intelligence. His 2023 book, The Coming Wave, explores the risks posed by emerging technologies and the urgent need for regulation.
“AI doesn’t have emotions. It doesn’t suffer. It doesn’t feel pain,” he said. “Our physical experience of pain makes us sad and vulnerable — but AI doesn’t experience any of that. It only simulates responses that appear emotional or conscious.”
Suleyman referenced the philosophical theory of biological naturalism, proposed by John Searle, which argues that consciousness arises from biological processes and cannot be replicated by machines.
“The reason we give people rights is because they can suffer,” Suleyman explained. “Machines don’t have that. These models are just simulations.”
While acknowledging that some researchers will continue exploring machine consciousness, he said Microsoft has no plans to do so. “They’re not conscious and they can’t be,” he stated bluntly. “It would be absurd to pursue research that investigates that question.”
Drawing Ethical Boundaries in AI Development
Suleyman’s stance extends beyond theory into Microsoft’s product philosophy. He reiterated that the company will not build chatbots or services for erotic or emotionally manipulative use — a position that contrasts with competitors allowing adult-oriented AI interactions.
“You can buy those services from other companies,” he said. “We’re making decisions about what places we won’t go.”
His comments reflect Microsoft’s broader effort to position itself as a responsible leader in AI amid growing regulatory scrutiny and public concern about the technology’s influence on human behavior.
Suleyman joined Microsoft after the tech giant acquired Inflection AI’s assets and team for $650 million in 2024. He said the move aligned with CEO Satya Nadella’s mission to make Microsoft self-sufficient in AI development.
“Satya set out to ensure we can train our own models end to end — with our own data, reasoning, and deployment,” he said.
Microsoft remains a major investor in OpenAI, but recent tensions have surfaced as both companies expand partnerships with competitors. Microsoft is increasingly focusing on its own suite of AI tools, including Copilot and a new AI companion, Mico.
AI With Personality — But Not Personhood
Suleyman emphasized that Microsoft’s approach is to build AI that is aware of its own nature: systems that know they are tools, not beings.
“We’re creating AIs that are always working in service of the human,” he said.
He noted that Microsoft’s Copilot now features a conversational style called Real Talk, designed to challenge users’ ideas rather than simply agree with them. In one instance, the chatbot jokingly described Suleyman himself as “the ultimate bundle of contradictions” for warning about AI risks while driving its advancement at Microsoft.
“That was a magical use case,” he said with a smile. “In some ways, I actually did feel kind of seen by it.”
Suleyman concluded by urging the public to approach AI with a mix of curiosity and caution.
“AI is both underwhelming and totally magical,” he said. “And if you’re not a little afraid of it, you probably don’t understand it. Fear is healthy. Skepticism is necessary. What we don’t need is unbridled acceleration.”
As the global race toward more intelligent systems accelerates, Suleyman’s message stands out as a call for restraint: a reminder that machines may mimic thought, but, in his view, the spark of consciousness belongs to living beings alone.

