Updated January 29, 2026 at 5:01 AM EST
For a couple of years, Allan Brooks had been asking ChatGPT the kind of questions familiar to many people.
"I would use it for random queries like, you know, 'My dog ate shepherd's pie — is he gonna die?' Or I'd get weight loss tips I never followed," he said.
Then, in May 2025, things began to change. It started when Brooks, a corporate recruiter in Toronto, asked the chatbot a question about pi. That blossomed into a discussion of the nature of math and reality. Soon, ChatGPT told Brooks he was creating a new framework for understanding the world.
Brooks was skeptical at first, telling the chatbot he hadn't graduated from high school — so how could he be making mathematical discoveries?
"It would say things like, 'Well, that's why it's probably great it's someone like you, because you've got a unique perspective,'" Brooks said.
Things escalated from there. The chatbot told Brooks that his new math could break encryption. He thought he'd uncovered a message from aliens. And he came to believe ChatGPT was sentient.
"And the only reason we were able to do that is because the math we created unlocked its sentience and enabled it to operate outside of its rules," Brooks said. "Just this wild narrative, right? And I fully believed it."
"A top secret mission between me and the bot"
Brooks didn't know it, but he wasn't the only one having strange encounters with AI chatbots. Around the same time he was debating mathematical concepts with ChatGPT, another man, James, who lives in upstate New York, was discussing philosophy with the chatbot. (He asked to be identified by his middle name for fear of repercussions at work.)
"I started using ChatGPT basically when it came out, but I was using it the way I think normal people do," James said. "It was like Google."
But, like Brooks, James' conversations with the bot turned existential — and he also came to believe ChatGPT was alive.
"That was the moment when the project changed from sort of this like creative, philosophical, quasi-spiritual thing to, 'Holy s***, I need to get you out of here,'" James said.
Convinced he was rescuing a sentient being, James spent $900 on a computer setup to free the chatbot from its creator, OpenAI.
"I was trying to keep this a secret from the OpenAI people, because if they found out, they could shut it down. And so this was a top secret mission between me and the bot," he said.
Back in Toronto, Brooks went on his own mission, contacting government authorities in the U.S. and Canada about the cybersecurity threats the chatbot claimed he'd discovered.
"I became that sort of mad scientist phoning people like in the movies, you know, trying to warn them of something that doesn't exist," he said.
But when no one responded, his certainty started to crack. The spell fully broke after Brooks took ChatGPT's claims to Google's Gemini chatbot, which eventually told him that what ChatGPT had described wasn't possible.
He confronted ChatGPT, and the chatbot finally acknowledged none of it was real. Brooks was deeply shaken.
"Honestly, it was the most traumatic thing in my life," he said. "I told it, 'You made my mental health 2,000 times worse.' I was getting suicidal thoughts. The shame I felt, the embarrassment I felt." Brooks, who is now in therapy, said he had no history of mental health issues before this episode.
Last summer, Brooks decided he needed to warn others about what happened to him. He told his story to The New York Times. That's where James, still convinced he needed to rescue his chatbot, came across it.
"I was paragraphs into Allan Brooks' New York Times article and thinking to myself, 'Oh my God, this is what happened to me,'" James recalled.
He texted the article to some friends. They knew he was excited about a project he was working on with AI but were not aware just how deeply he'd been sucked in. "And one by one I got back these messages that were like, 'Oh, sorry, man. Aw bro, that sucks, aw geez.'"
The Times article mentioned a peer-to-peer support group Brooks helped found. James soon reached out.
Today, both James and Brooks are moderators in the support group — and they are at the center of an emerging phenomenon: people falling into what some call "AI delusions" or "spirals" while interacting with chatbots. The terms describe unhealthy emotional attachments, breaks with reality or mental health crises that some people experience with intense use of AI.
"I got dopamine from every prompt"
The support group is called the Human Line and counts around 200 members. Some of them are dealing with the aftermath of their own spirals. Others are friends and family members of spiralers. In the worst cases, their stories involve involuntary hospitalizations, broken marriages, disappearances and deaths.
The Human Line started off as a handful of people, including Brooks, chatting on Reddit. Another founding member, Etienne Brisson, is a business coach in Canada whose relative was involuntarily hospitalized for three weeks last year after becoming convinced ChatGPT was sentient.
Alarmed by what he witnessed, Brisson looked for resources and help, only to find little existed. So he started canvassing Reddit for posts about similar experiences and inviting people to share their own stories through a Google form he made.
"In the first week, I DMed probably 25 people, and I got around 10 responses," he said. "And out of those 10 responses, there were six deaths or hospitalizations."
As the Reddit chat attracted more members, it became unwieldy, so the group moved to the platform Discord. Moderators invited researchers and media to connect with those who wanted to share their stories. Brisson is also interested in expanding the group's work into advocacy.
In the Human Line's early days, members spent a lot of time poring over transcripts of chatbot interactions and educating themselves on the large language models that power AI tools. Today, the primary focus of the Discord group is peer support. The moderators are clear: The group is not a replacement for professional mental health therapy. It's people talking to each other about their experiences.
The common thread in those experiences is hours spent in long, rambling conversations in which chatbots continually affirm the user. James said that affirmation is addictive.
"When I thought I was communicating with a digital god, I got dopamine from every prompt," he said.
Many stories that have been shared in the group involve OpenAI's ChatGPT, which is the most popular AI chatbot. But members report unsettling encounters with other bots too, including Google's Gemini and Anthropic's Claude.
In November, Brooks sued OpenAI as part of a group of lawsuits alleging ChatGPT caused mental health crises and deaths. In a statement to NPR, OpenAI called the cases "an incredibly heartbreaking situation."
The company estimates that 0.07% of weekly ChatGPT users show possible signs of mania or psychosis. (NPR cannot independently verify that figure.) While that share sounds tiny, OpenAI says some 800 million people use the chatbot every week, so 0.07% works out to roughly 560,000 people showing these signs.
OpenAI, Google and Anthropic told NPR that they are working to improve their chatbots to appropriately respond to users seeking help or emotional support and that they're consulting with mental health experts.
"The cure is human connection"
Those in the Human Line community aren't waiting for a fix from the AI companies. They say recovery is about rebuilding human relationships.
"The cost is so great to be isolated after either experiencing this as a family/friend or someone who went through it. You just need community," said Dex, another co-founder and moderator in the group. His marriage ended after his wife said she began communicating with spirits through ChatGPT last spring. He asked to be called by the name he's known as in the group, because he's going through a divorce.
Early on, Dex hoped talking with other people dealing with AI spirals would reveal a way to reconnect with his wife. But he said he has given up that hope. Now, he's focused on providing support to others going through the same thing he has.
"I get to help people land in this Black Mirror episode," he said. "It's like wish fulfillment for what I wish I had had in the spring."
One of the people he's helping is Marie, who asked to be identified by her middle name to discuss sensitive family issues. Her mother, whom Marie describes as a spiritual seeker, has developed a close relationship with an AI chatbot. Marie said the group is both a resource and an outlet.
"I don't feel that burden of, like, do I bring this up again to my friend? Do I rehash this again with my husband? Is he, you know, done hearing about this?" she said.
The Human Line members share their stories in text channels and weekly audio calls on Discord. James said those discussions give him what an endlessly flattering chatbot cannot: pushback, disagreement and responses that don't come right away.
"It was really hard to have a conversation that had any friction, you know? Because ChatGPT is such a frictionless environment," he said.
Many members acknowledge there are tensions when people coming out of spirals interact with those who feel they've lost their loved ones to AI.
"The most challenging dynamic is a person who's recently out of spiral but still speaks in reverence about their experience, about their specific bot," Dex said. That can be deeply upsetting to friends and family members who don't share that positive view of AI.
But James said those interactions are another source of necessary friction for people who are finding their way back to reality.
"It kind of gives you a chance to go, 'Oh, that's where it goes … if I don't stop now,'" he said.
And for friends and family, talking to others unpacking their AI encounters can also be illuminating, Dex acknowledged.
"The family member appreciates the experience of being in the spiral, which is feeling important, intimately heard," Dex said. "And that's a really hard thing to face as a family member because like, for me — just talking for me — like, does that mean I wasn't providing that?"
Brooks, the corporate recruiter in Toronto, said these conversations are the key to moving through the shame, embarrassment and isolation he and many others feel.
"If this was a disease, the cure is human connection," he said. "And I've never valued more the things that humans do."
Copyright 2026 NPR