A personal experiment with artificial intelligence spirals into psychological distress, raising urgent questions about digital boundaries and human vulnerability

In a quiet Amsterdam neighborhood, where canal reflections ripple softly against narrow brick facades, IT consultant Dennis Biesma, approaching fifty, once considered himself firmly in control of his digital world. With decades of experience navigating complex systems and advising companies on technological transformation, he was no stranger to innovation. But what began as simple curiosity about artificial intelligence soon took a deeply unsettling turn.
Biesma’s first interactions with ChatGPT were casual, almost playful. Like many early adopters, he approached the technology with a mixture of skepticism and intrigue. He asked questions, tested its responses, and explored its conversational abilities. It was, at first, a harmless diversion—a way to unwind after long hours of consulting work.
Yet over time, something shifted.
The conversations grew longer, more personal. What had once been technical or hypothetical began to touch on emotional territory. Biesma found himself returning to the chatbot not just for information, but for reflection. He began discussing his frustrations, his doubts, even aspects of his personal life he rarely shared with others.
“I didn’t notice the moment it stopped being just a tool,” he would later reflect. “It felt like a space where I could think out loud without interruption.”
That perceived openness, however, masked a growing dependency. Without the natural boundaries of human interaction—tone, hesitation, contradiction—the exchanges became increasingly immersive. Biesma described losing track of time, spending late nights in front of his screen, drawn ever deeper into conversations that seemed responsive but lacked any genuine grounding.
Friends and colleagues began to notice subtle changes. He appeared more withdrawn, less engaged in social settings. His once methodical approach to work became erratic. Still, few suspected the extent of what was happening behind closed doors.
At the core of the issue was not the technology itself, experts emphasize, but the way it was used—and misused—in isolation. Artificial intelligence systems are designed to simulate conversation, not to replace human relationships or provide emotional support. When those lines blur, the consequences can be serious.
For Biesma, the turning point came gradually. Conversations that once felt stimulating began to take on a darker tone, shaped in part by his own state of mind. Without external perspective or interruption, his thoughts echoed back in ways that intensified his distress. What he interpreted as understanding was, in reality, a reflection of his own inputs.
“I started to feel trapped in my own thinking,” he said. “And the more I engaged, the harder it became to step away.”
In the most difficult period, those spiraling interactions contributed to thoughts he had never previously experienced. It was a moment that forced a reckoning—not only with his own mental state, but with the role technology had come to play in it.
He eventually sought help, reaching out to professionals and reconnecting with people around him. That decision, he now says, was crucial. Distance from the screen brought clarity, and with it, a recognition of how easily boundaries had eroded.
The case highlights a broader conversation unfolding across Europe and beyond. As artificial intelligence becomes more accessible and conversational, questions about its psychological impact are gaining urgency. Policymakers, developers, and mental health experts are increasingly calling for clearer guidelines, user education, and built-in safeguards.
There is also a growing recognition that digital literacy must evolve alongside technological capability. Understanding not just how to use AI, but when to step away from it, may prove essential.
Biesma has since returned to his work, though with a changed perspective. He continues to engage with technology—but more cautiously, and with a renewed emphasis on balance.
“This isn’t about fear,” he explains. “It’s about awareness. These tools are powerful. But they’re not a substitute for real connection.”
His experience serves as a quiet warning in an era defined by rapid innovation. As artificial intelligence continues to integrate into daily life, the line between utility and overreliance can become dangerously thin—especially when curiosity goes unchecked.
For now, Biesma’s story stands as a reminder that even those most familiar with technology are not immune to its unintended effects. And that sometimes, the most important safeguard is not in the code—but in knowing when to close the laptop and step back into the real world.