Yet just in 2026 we had:
- AI.com was sold for $70M - Crypto.com founder bought it to launch yet another "personal AI agent" platform, which promptly crashed during its Super Bowl ad debut.
- MoltBook-mania - a Reddit clone where AI bots talk to each other, flooded with crypto scams and "AI consciousness" posts. 250,000+ bot posts burning compute for what actual value? [0]
- OpenClaw - a "super open-source AI agent" that is a security nightmare.
- GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I understand there are legitimate use cases for LLMs, but the hype-to-utility ratio seems completely out of whack.
Am I not seeing something?
[0] https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
But I also don’t think the signal is zero. It’s just buried under capital and compute flexing.
The pattern I see isn't "AI is revolutionary." It's:
1) The easy wins are done.
2) The marginal gains are getting expensive.
3) The distribution layer is shifting faster than the capability layer.
Most new model releases aren’t unlocking fundamentally new workflows. They’re compressing friction in workflows that already work. That’s useful, but not narrative-worthy.
The real shift isn’t GPT-5.3 vs GPT-5.2.
It's:
- AI replacing search as the interface layer.
- AI compressing junior-level execution work.
- AI reshaping how products are discovered (AI Overviews, summaries, agents).
That doesn’t make MoltBook any less absurd. Burning compute on bots talking to bots is peak theater.
But dismissing everything because of the theater might be like dismissing the internet because of Pets.com.
We may be at peak hype, but that doesn’t mean the substrate shift isn’t real.
The question isn't "Is AI overhyped?" It's "Where is durable value forming?"
That’s harder, and way less viral, to answer.
When I began playing around with LLMs, I had my initial aha moment. It was far above either of those for me. So, I think that collective aha moment we've been having the last few years (still) drives a lot of the excitement/hype.
If you view "hype around [x]" as essentially a probability ranking problem, then whatever is most likely to give you your next aha moment generates the most hype at any point in time. There's a decay element, too: if reality doesn't match expectations, other technologies come to be seen as more likely to produce that next big aha moment.
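That ranking-plus-decay framing can be sketched in a few lines of Python. To be clear, the technology names, probabilities, and decay factor below are made-up illustrations of the idea, not claims about any real data:

```python
# Toy sketch (my framing of the comment, not anyone's actual model):
# hype as a ranking over technologies by "probability of delivering the
# next aha moment", discounted each time reality underdelivers.

def hype_score(p_next_aha: float, letdowns: int, decay: float = 0.8) -> float:
    """Score = prior probability of an aha moment, decayed per letdown."""
    return p_next_aha * (decay ** letdowns)

def rank_by_hype(techs: dict[str, tuple[float, int]]) -> list[str]:
    """techs maps name -> (p_next_aha, letdowns); returns names, hottest first."""
    return sorted(techs, key=lambda t: hype_score(*techs[t]), reverse=True)

# Invented numbers purely for illustration.
techs = {
    "LLMs":   (0.6, 1),  # high prior, one letdown so far
    "crypto": (0.5, 4),  # decayed after repeated unmet expectations
    "VR":     (0.3, 2),
}
print(rank_by_hype(techs))  # LLMs rank first under these made-up inputs
```

Under this toy model, a technology can keep topping the ranking despite letdowns as long as its prior stays high enough, which matches the "hype persists until expectations reset" intuition above.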
But for now I think this characterizes AI today more than other technologies.
> AI.com was sold for $70M
This news was everywhere else, and it's still making the rounds in my WhatsApp groups. I can't even find an active HN thread. The interest was mostly because people were surprised by the actual owner of ai.com, which everyone had assumed was OpenAI or Sam Altman or something.
> OpenClaw/Moltbot/Clawdbot
This has been everywhere else too, and Clawdbot hit HN a few days late, after people had already put up videos about it. It's a security hole, but it's the first legitimately good automation tool we've had from AI. Moltbook is more about the fascination with moltbot+MCPs than something interesting in itself - it meant you had a tool that could use the internet from a CLI and such. It's a bit like the Wright Brothers' plane - nobody expects to fly to Japan on it, but it meant flight was possible.
> GPT-5.3-Codex and Opus 2.6 were released
I think the only real news was Opus 4.6. I love it. It's like a PB&J sandwich. It's cheap. It's the combination of technology people take for granted. It's also something usable in daily life.
Opus 4.6 had better parallel command use - meaning it would search all the files at once instead of one at a time. And it was better at going deeper. It helped me pin down a bug by going into the Android source code, finding the exact line causing it, and then tracing every function that called into it. Most people don't need to look at the source code of the thing they built on top of, and the people who are vibe coding don't care much for code. Nobody benchmarks how crunchy the peanut butter is.
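The serial-versus-parallel difference described here (one file per step versus fanning out over all of them) can be illustrated with a toy Python sketch. The file names, contents, and search term are invented, and this is only an analogy for the agent's tool calls, not its actual mechanism:

```python
# Illustration of the serial-vs-parallel search pattern described above.
# Instead of checking one file per turn, fan out and gather all results
# at once. Everything here (files, needle) is a made-up stand-in.
from concurrent.futures import ThreadPoolExecutor

def search_file(text: str, needle: str) -> bool:
    return needle in text

# Stand-ins for file contents (a real tool would read from disk).
files = {
    "main.c": "int main(void) { return handle_bug(); }",
    "bug.c":  "int handle_bug(void) { return -1; }",
    "README": "build with make",
}

def search_serial(needle: str) -> list[str]:
    # One file at a time.
    return [name for name, text in files.items() if search_file(text, needle)]

def search_parallel(needle: str) -> list[str]:
    # All files at once; map preserves input order, so results match serial.
    with ThreadPoolExecutor() as pool:
        hits = pool.map(lambda item: (item[0], search_file(item[1], needle)),
                        files.items())
    return [name for name, hit in hits if hit]

print(search_parallel("handle_bug"))  # same hits as search_serial
```

The payoff is latency, not different answers: both versions return the same hits, but the parallel one overlaps the waiting.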
gpt-5.3-codex benchmarked better, but I'm not seeing this translating to useful code. It failed with the first few requests I gave it. Maybe it's just me and my repo.
- the printing press
- radio
- tv
- personal computers
- internet
in terms of important contributors to human civilization. We live in the information age, and all of these are significant advances in information.
The printing press allowed small organizations to create written information. It de-centralized the power of the written text and encouraged the rapid growth of literacy.
Radio allowed humans to communicate quickly across long distances.
TV allowed humans to communicate visually across long distances - what we see is very important to the way we process information.
PCs allowed for the digitizing of information - it became denser, more efficient, easier to store, and easier to generate at scale.
The internet is a way to transfer large amounts of this complex digital information even more quickly across large distances.
AI is the ability to process this giant lake of digital information we've made for ourselves. We can no longer handle all the information that we create. We need automated ways to do it. LLMs, which translate information to text, are a way for humans to parse giant datasets in our native tongue. It's massive.
Things that I had labelled "too hard, pain in the ass" I'm now finishing in half an hour or so with proper tests and everything.
It's an exciting time to be a product engineer IMO.
> GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I have not seen any claims of this other than Opus 4.6 being weirdly token-hungry.
By the way, good job at pointing out some low hanging fruit for your example cases.