Monday, March 02, 2026
A rather disorganized rant about LLMs
This post is a month's worth of angst about LLMs.
I honestly don't know where to begin with this. I've been quiet for the past month (and that one post for February was posted earlier today) mainly because I just couldn't be bothered to post due to a general feeling of blahness. Early in the month, I received an email from Mark with a link to the article “AI Is Garbage and a Bubble (Please Learn This),” thinking it would be good material to blog about. It is, to me, but only because I already feel this way about LLMs, so it just reinforces my bubble. Unfortunately, I didn't have much to say about it other than “me too!”
I spent the month collecting links to articles I wanted to blog about, but the entire process largely sapped any desire I had to do so. The industry as a whole has seemingly taken crazy pills as it marches wholeheartedly into the craziness that is vibe coding (or is it that I'm the one who took the crazy pills while the whole industry descends into the madness of LLMs?).
It's not like the above article is the only one decrying the madness. There's “Nobody knows how the whole system works” (via Lobsters). From Lobsters, I was struck by this comment:
All of this means that the abstraction layers let you reason about correctness by building things in terms of the abstract machine that they provide.
Probabilistic tools like LLMs lack this property. They do not present a coherent abstract machine. They do not give you a foundation for reasoning about the top of a system by targeting a simplified model of the real system with well-defined semantics. They just give you a way of creating an approximation of what you wanted with no bounds on the failure modes.
There's also this lament (via Lobsters).
Then there are the predictions, such as “Humanity's last programming language,” which is a concern I have. And related to this last computer language: given the amount of training material apparently required to get something plausible out of an LLM, how much code must exist before a new language gets any use? I seriously doubt an LLM can master a new language from only a few examples, so either someone (or a group of people) must write a ton of code for training, or a new language is dead on arrival in this wondrous era of LLMs. And it's not like Xe Iaso is the only one prognosticating LLMs as the new high-level language (via Hacker News), which I pondered last year, and which is expanded on in “AI-generated output is cache, not data.” That article isn't directly stating that LLMs are the new compilers, but it's a related aspect of the whole thing: why store a large multimegabyte image of a tiger when you can store “draw me a tiger!” instead? Why store source code when you can store “do what I mean?”
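Taken literally, the “cache, not data” framing is just memoization: the prompt is the key, the generated artifact is the cached value, and a cache miss means regenerating. Here's a minimal sketch of that idea; `generate()` is a hypothetical stand-in for whatever model call you'd actually make, and the file layout is mine, not anything from the article:

```python
# Sketch of "AI output is cache, not data": the prompt is the source of
# truth; the generated output is just a cached, regenerable artifact.

import hashlib
from pathlib import Path

CACHE_DIR = Path("generated")

def generate(prompt: str) -> str:
    """Hypothetical model call; nondeterministic in real life."""
    return f"// output for: {prompt}\n"

def cached_generate(prompt: str) -> str:
    CACHE_DIR.mkdir(exist_ok=True)
    # The prompt's hash is the cache key.
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    artifact = CACHE_DIR / f"{key}.txt"
    if artifact.exists():          # cache hit: reuse the stored artifact
        return artifact.read_text()
    result = generate(prompt)      # cache miss: regenerate from the prompt
    artifact.write_text(result)
    return result
```

The catch, of course, is the nondeterminism: evict the cache and the regenerated “tiger” is a different tiger, which is exactly the “no bounds on the failure modes” problem from the Lobsters comment quoted above.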
And then there's “Programming is Dead” (via Lobsters). Scary stuff.
Even scarier—people are treating the prompts as source code. There's a library that is nothing but prompts (via Lobsters), and the story behind it (via Simon Willison and Hacker News). This came after I saw most pro-AI programmers swear that the prompts will never, ever, under any circumstance, become the source code.
Yeah, right. Prompts are the new source code. How else do you explain Gas Town? (Although Steve Yegge eventually realized he'd gone off the deep end (via both Lobsters and Hacker News), but only after he made around $100,000 from it.)
And as if that wasn't bad enough, February was the month that an LLM wrote a hit job against a blogger for rejecting a PR it made (via Xe Iaso, and some Hacker News commentary), with the subsequent followups (via https://news.ycombinator.com/item?id=47009949).
We asked for flying cars, and instead we got passive-aggressive, obsequiously incompetent, non-thinking Markov chain generators that people fall for. It's not like we weren't warned about this in the 60s. And because of the way my brain works, the video “AI isn't the future. It's Medieval Alchemy” seems to me to describe the AI-techbro crowd: they're throwing a bunch of stuff against the wall to see what sticks, only to have to rethrow it all a month or two later when the LLM models change yet again. What's it been? Prompt engineering? No, now it's agentic coding. No, it's Gas Town now. Geeze, I think it's easier to convert lead to gold (where you just have to remove 5% of the atomic mass of each lead atom; trivial!) than it is to effectively use an LLM for any length of time.
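For anyone checking my math on that: lead's standard atomic mass is about 207.2 u and gold's about 196.97 u, so the fraction to shed is

```latex
% Pb -> Au, by mass fraction removed:
\frac{207.2 - 196.97}{207.2} \approx 0.049 \approx 5\%
```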
Hmm … one of the things I noted was the phrase “mathematical notation.” Back in the old days, say, before the 17th century, mathematics really didn't have a convenient notation, so math was written out in prose, like this (from the Indian mathematician Brahmagupta in 628 AD): “Twice the difference of the initial terms divided by the difference of the common differences is increased by one. That will be time when the distances moved (by the two travellers) will be same.” In other words: n = 2(b-a) / (d-e) + 1. Or how about “to the absolute number multiplied by four times the square, add the square of the middle term; the square root of the same, less the middle term, being divided by twice the square is the value.” Did you recognize the formula of the quadratic equation? Mathematics really took off when a formalized language for math was developed, separating it from prose descriptions and allowing one to manipulate formulas directly.
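To spell that second one out: reading “the absolute number” as c, “the square” as a, and “the middle term” as b, Brahmagupta's prose is describing the solution to ax² + bx = c:

```latex
% Brahmagupta's prose rule, for the equation  a x^2 + b x = c,
% with "absolute number" = c, "the square" = a, "the middle term" = b:
x = \frac{\sqrt{4ac + b^2} - b}{2a}
```

Rearrange it and you get the familiar quadratic formula for that arrangement of terms (only the positive root, with c already moved to the right-hand side). Three lines of notation versus forty-odd words of prose.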
What do you think a programming language is, other than a formalized language? Going from a formalized language where (hopefully) ambiguity is eliminated to a less formalized language full of ambiguity is not progress in my mind. I can see that over the next few years, if LLMs persist and a new multi-century AI winter hasn't manifested yet, prompts will gain more formalism and stilted language (you know, like doctors and lawyers already have) to bring some determinism back into the chaos that is vibe coding. Mark my words; you heard it here first. I've already seen my “prompts as source code” prediction come true.
Another thought just mixed in with the rest of the links: what is “high quality code” in these vibe coding days? Why is “maintainable code” desirable? It's not like programmers will bother reading the code (much like today, where the majority of programmers don't bother with the assembly or byte code their compilers generate). Today, it seems like all is “okay” if it works 80% of the time, edge cases and performance be damned! New functionality? Just vibe code that XXXX!
I'm not saying I like it—I'm just seeing a possible future here.
And just because I don't have a section for it: “AI forced before the play period could form” (via Hacker News). Also, note to self: Hacker News and Lobsters aren't the entirety of the industry. The map is not the territory; I need to keep that in mind.
But even with all this, I do see some hope. In “Giving University Exams in the Age of Chatbots,” the author noted that out of 60 students who were allowed to use LLMs (provided they declared it before taking the exam), only three did. I'm seeing a bit more pushback with articles like “Have I hardened against LLMs?,” and news like “AI Fails at 96% of Jobs (New Study).” So it's not all doom-and-gloom.
And I can't forget Flutterby. Dan Lyke links to all sorts of contra-LLM/AI articles, which helps keep me sane in these Crazy Years (and it's funny to think that Robert Heinlein foresaw a rise in religious fanaticism in the early 21st century).