Yet in 2026 alone we had:
- AI.com sold for $70M - the Crypto.com founder bought it to launch yet another "personal AI agent" platform, which promptly crashed during its Super Bowl ad debut.
- MoltBook-mania - a Reddit clone where AI bots talk to each other, flooded with crypto scams and "AI consciousness" posts. 250,000+ bot posts burning compute for what actual value? [0]
- OpenClaw - a "super open-source AI agent" that is a security nightmare.
- GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I understand there are legitimate use cases for LLMs, but the hype-to-utility ratio seems completely out of whack.
Am I not seeing something?
[0] https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
When I began playing around with LLMs, I had my own initial aha moment, and for me it was far above either of those. So I think the collective aha moment we've been having these last few years (still) drives a lot of the excitement/hype.
You can view "hype around [x]" as essentially a probability-ranking problem: at any point in time, whatever is most likely to give you your next aha moment generates the most hype. There's a decay element too: if reality doesn't match expectations, other technologies come to be seen as more likely to produce that next big aha moment.
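Just to make the model concrete, here's a minimal toy sketch of that ranking-plus-decay dynamic; the technologies, scores, and decay rate are all made-up numbers for illustration:

```python
# Toy model of hype as a probability ranking with decay (illustrative only).
# All names and numbers below are invented for the example.

# P(next aha moment comes from this technology), as the crowd currently sees it
hype = {"AI": 0.60, "AR/VR": 0.15, "crypto": 0.10, "quantum": 0.15}

DECAY = 0.8  # how much belief survives a disappointing cycle

def update(hype, disappointed):
    """Decay the hype of technologies that under-delivered, then renormalize
    so the scores remain a probability ranking."""
    for tech in disappointed:
        hype[tech] *= DECAY
    total = sum(hype.values())
    return {tech: score / total for tech, score in hype.items()}

# One hype cycle where AI under-delivers: its share shrinks, and the
# freed-up attention flows to the other candidates.
hype = update(hype, disappointed=["AI"])
print(max(hype, key=hype.get), hype)
```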
But for now, I think this dynamic characterizes AI more than any other technology.
I'd put AI alongside:
- the printing press
- radio
- TV
- personal computers
- the internet
in terms of important contributors to human civilization. We live in the information age, and each of these was a significant advance in how we handle information.
The printing press allowed small organizations to create written information. It decentralized the power of the written text and encouraged the rapid growth of literacy.
Radio allowed humans to communicate quickly across long distances.
TV allowed humans to communicate visually across long distances - what we see is very important to the way we process information.
PCs allowed information to be digitized - making it denser, more efficient, easier to store, and easier to build large datasets from.
The internet is a way to transfer large amounts of this complex digital information even more quickly across long distances.
AI is the ability to process this giant lake of digital information we've made for ourselves. We can no longer handle all the information that we create; we need automated ways to do it. LLMs, which translate information into text, are a way for humans to parse giant datasets in our native tongue. It's massive.
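As a concrete illustration of that "parse giant datasets in our native tongue" point, here's a minimal sketch using the OpenAI Python client; the model name, file, and question are placeholder assumptions, not anything from the thread:

```python
# Minimal sketch: asking plain-English questions of a dataset via an LLM.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
# the model name and sales.csv are placeholders for this example.
from openai import OpenAI

client = OpenAI()

with open("sales.csv") as f:
    data = f.read()  # assumed small enough to fit in the context window

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer questions about the dataset the user provides."},
        {"role": "user", "content": f"{data}\n\nWhich region grew fastest last quarter, and why might that be?"},
    ],
)
print(response.choices[0].message.content)
```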
Things I had labelled "too hard, pain in the ass" I'm now finishing in half an hour or so, with proper tests and everything.
It's an exciting time to be a product engineer IMO.
By the way, good job pointing out some low-hanging fruit in your example cases.
> GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I have not seen any claims of this other than Opus 2.6 being weirdly token-hungry.
A recent prompt of mine:
>Write a dialogue where H.P. Lovecraft and David Foster Wallace are tasked with developing a DSL for audio synthesis and composition; they are trying to sell each other on their preferred language. Lovecraft wants Rust, Wallace Zig. Their discussion should be deeper than just the talking points and include code examples. Thomas Pynchon is also there; he is a junior dev and enamored with C. He occasionally interjects into the discussion and generally fails to follow it, but somehow manages to make salient points. Lovecraft never addresses Pynchon or C but alludes to the horror; Wallace tries to include and encourage Pynchon but mostly ignores him.
I pretty much knew how ChatGPT would use each of them: Lovecraft would view everything other than Rust as some lurking unnameable horror, Pynchon would seem random and nonsensical, and Wallace would vainly attempt to explore all options without weighing in. That is exactly what happened. Being able to get this sort of information within contexts I understand is amazing. Fifteen or twenty minutes later I had more direction than I ever had before, and it answered a dozen or so language-agnostic questions I had about implementing such a DSL, which was really what I was after. And it gave me some good laughs.
A year ago I thought these projects would never move beyond being ideas. Whenever I tried to get help from people, they just told me what I was doing wrong and tried to send me down the path of their ideals, how they think things should be done, which was probably more a failure on my part than theirs, me being a humanities sort and an incompetent programmer. LLMs have infinite patience and time. The people I sought help from in the past were not completely wrong, but they did try to lead me down the wrong path for whatever reason: partially because they don't have infinite time and patience, but also because they thought they were right.
Henry James has been walking me through the PureData source code this past week.
Not everyone is hyping AI/LLMs; that's just the bias of tech communities and the like. Most of us see them about the same way we see a blender or a toaster. I can't remember the last time AI/LLMs came up in general conversation for me.
Introjection occurs when a person unconsciously adopts the ideas, attitudes, or behaviors of another person or group, often an authority figure - like someone with high status in the world of Big Tech, the CEOs we see selling their bullshit every day.
The influencers, boosters, and shills are perfectly placed to create this type of environment.
The more you promote a product, the more likely people are to introject it, even if it does not work.
Imagine how a child learns the rules of life from parents.
These are introjected by the child without question or any consideration of whether the rules are right or wrong; the child just takes it all in until it becomes part of them.