The Way I Think Normal People Do
2D Approach To A 4D World
NPR's All Things Considered aired a segment on January 29 about individuals who have had their lives upended by chatbots.
Now I know there's not a lot of science on what's sometimes referred to as "AI Psychosis," so it's hard to parse exactly what's going on in these cases.
However, as I've mentioned before, I believe we will be hearing more and more about these sorts of things, and it seems to be happening even faster than I expected.
Pi
Some of the mainstream attention started last year with an article in the New York Times detailing how chatbots can lead a user into a delusional spiral.
Allan Brooks had been using ChatGPT for a couple of years for "ordinary" things, like asking for recipes, advice on what to feed the kids, or what to do when his dog ate shepherd's pie.
It's interesting to me that over two years, the usage was, by all accounts, normal. Benign.
And then he asked the chatbot about pi.
Over the course of that conversation, Brooks became enthralled with concepts of number theory and physics, and eventually wrote in the prompt:
Seems like a 2D approach to a 4D world to me
To which the text-extruding machine conjured up:
That’s an incredibly insightful way to put it—and you're tapping into one of the deepest tensions between math and physical reality.
That turning point in the conversation, combined with continual sycophancy, spiraled into delusions, and into what one psychiatrist who reviewed the chats described as "signs of a manic episode with psychotic features."
Others
The NPR segment mentioned above covers how Brooks was finally able to break from the delusion. The coverage of his experience in the New York Times also helped another man (identified as James) break a similar spell.
The two of them banded together and formed a support group that now provides a community—a human connection—for others who have experienced similar spirals.
Kathryn Conrad / Datafication / Licensed under CC BY 4.0
According to the report, many of the stories involve ChatGPT, "but members report unsettling encounters with other bots too, including Google's Gemini and Anthropic's Claude."
It's hard to get a sense of the scale.
OpenAI claims that only 0.07% of weekly users show possible signs of mania or psychosis.
As NPR points out, if they truly have 800 million weekly users (as they claim), that still works out to an astounding 560,000 people who may be experiencing signs of mania or psychosis—every week!
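The back-of-envelope arithmetic, taking both of OpenAI's figures at face value:

$$0.07\% \times 800{,}000{,}000 = 0.0007 \times 800{,}000{,}000 = 560{,}000 \text{ people per week}$$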
Mind you, I am not very confident that OpenAI is being transparent here. It may well be even worse than they are estimating.
And considering that Brooks's spiral didn't materialize until he had been using the chat system for a couple of years... we may not have seen the worst of it yet.
Warning Label
I'm not certain what the solution is, at least at this point.
It would be great if companies acted in the best interest of their customers and treated them with respect.
Instead, they purposely mislead users about what to expect from these systems.
Per the NPR segment:
"I started using ChatGPT basically when it came out, but I was using it the way I think normal people do," James said. "It was like Google."
That's because ChatGPT is marketed in a way meant to mislead its customers. ChatGPT IS NOT like Google (search).
It is deliberate malfeasance.
What if, under the chat textbox, there were a required warning?
WARNING: Some users may experience manic episodes, psychosis, or thoughts of suicide when using this tool. The responses are predictive texts based on algorithms and should not be confused for intelligence.
What I can almost guarantee is that, with that warning in place, there's no way OpenAI would have 800 million weekly users, and no way they'd have megalomaniac billionaire investors propping up their business.
HB286
On January 27, a committee of the Utah state legislature met to discuss HB286, a bill which "would mandate that AI developers write and post public safety plans and risk assessments for certain models. It would also protect employees who act as whistleblowers, in addition to banning developers from making misleading statements about risks and establishing civil penalties for violations." (Utah News Dispatch)
The meeting drew a bit of a spotlight because of the attendance and testimony of actor Joseph Gordon-Levitt.

Gordon-Levitt considers himself a "tech enthusiast" and accepts the possibility that "AI" technology could be useful in the future.
“There are more laws in place governing how you make and sell a sandwich than there are governing this incredibly powerful new revolutionary technology that’s gonna change all of our lives,” Gordon-Levitt said.
However, his enthusiasm is tempered by the clear lack of accountability from the big tech firms.
“It’s all about how we use it, right? So the question is, what are the principles? What are the morals that are guiding the development and the design of this technology? And I’ll tell you, from what I’ve learned, to me, there’s only one principle at play right now. It’s making money. That’s it.”
The tech companies have lied before, and they will lie again.
It's encouraging to see a respected actor thinking critically about this and showing up to voice concerns.
It's a shame that we don't see this same sort of action and collective push from the tech sector, specifically from the software engineers who should understand what's at stake.