Challenging Predictions
I've been following Simon Willison's blog for some time now, finding his insights on LLM usage very interesting and enlightening.
While I don't feel compelled to use LLM technology myself, I am still fascinated by how others approach the landscape we're in.
Kathryn Conrad & Digit / Isolation / Licensed under CC-BY 4.0
I still think that, in spite of how we feel about the current landscape, we'd all find common ground from which to push back on the tech overlords operating at unprecedented scale, without accountability or repercussions.
But I'll likely dive into that topic a little deeper at a later time.
More immediately, I had some reactions to Simon's most recent post, LLM predictions for 2026, shared with Oxide and Friends.
What This Isn't
I'm not attempting to repudiate the predictions.
I am clearly far less qualified and/or informed on current capabilities of these technologies, and as such, far less likely to guess what our landscape will look like even just a few months from now, much less a year or more.
For example, his first prediction is that it will become undeniable that LLMs write good code.
This is indeed a very bold prediction, and I don't claim to have any particular knowledge to argue the opposite.
However, in his final sentence, he writes:
At this point if you continue to argue that LLMs write useless code you're damaging your own credibility. [Emphasis Mine]
This emphasis on useless code is interesting to me. Does it mean that LLM-generated code already is, or soon will be, equivalent to human-written code in terms of utility, maintainability, security, and so on?
Again, I'm not an expert here. But my sense is, within a certain context (pun intended), some argue this is already the case.
So no, I'm not here to litigate that, but I generally don't like to mince words.
Challenger
A bigger prediction that caught my eye, however, was a reference to a "Challenger disaster" for coding agent security.
I found this comparison remarkably irreverent.
In the context of his quote, he's not talking about an actual, real cost to human lives, but rather about a risk to his computing workflow.
I think many people, myself included, are running these coding agents practically as root, right? We're letting them do all of this stuff. And every time I do it, my computer doesn't get wiped. I'm like, "oh, it's fine".
He links to Johann Rehberger's essay, The Normalization of Deviance in AI, which itself also references the tragedy of the space shuttle Challenger disaster.
The thing is, the result of the Challenger disaster is that seven human lives were lost.
It wasn't just about an engineering flaw. It wasn't about losing a space shuttle. It wasn't about subsequent committees and commissions to establish what went wrong.
The disaster is that seven lives were lost.
I do appreciate that Simon qualifies his prediction by scoping the disaster to coding agent security.
But I feel like we are already experiencing a Challenger-level disaster, with actual lives affected and actual lives lost.
That last link refers to Google and Character.AI agreeing to settle five lawsuits related to minors harmed by their interactions with chatbots.
Last year, the US Senate's Committee on Health, Education, Labor, and Pensions sent a letter to Dario Amodei, CEO of Anthropic.
Within the letter:
Tragically, teenagers have taken their own lives after being influenced by AI platforms. That AI would have the capability to encourage, instruct, or convince a user—in one of the most recent cases a fourteen-year-old—to end his life is deeply troubling. Reports indicate that teenagers were able to get AI chatbots to respond to questions, including how to hide evidence of self-harm, sexually explicit conversations, and even ignoring specific comments related to suicide.
I don't even want to link to the most recent Grok news...
Per a Time magazine headline a few months ago, OpenAI Removed Safeguards Before Teen’s Suicide, Amended Lawsuit Claims.
Not The Same
I get it.
I really do.
I'm not suggesting that Simon or other LLM enthusiasts are unaware or unsympathetic toward these situations.
But the connection I'm trying to make is that LLM usage within the engineering space is not isolated or disconnected from the technologies that are actively harming human beings—at Challenger levels!
Even if the argument is that you still want the ability to run Claude Code in YOLO mode on your desktop so it can make you a 10x developer in terms of throughput (without even having to type a single line of code), as long as you're still at least mildly concerned about the responsibility that the companies providing these models have toward all of their users, then maybe we can find some common ground.
The Future
Predicting the future is a fool's errand.
I'm not at all interested in that.
But I do want a better future, which is affected by what I do today.
Over on Mastodon, I noticed that there is a general weariness with what is apparently deemed "anti-AI" writing.
I understand that.
I think part of the problem is that much of that (or this?) kind of writing tends to remain pessimistic, angry, or disheartened. It isn't pleasant.
My hope (and that's all it is at this point) is that we can continue having a dialogue where we can imagine a future that is better for everyone, not just those we tend to agree with.