Coldbrew's law states that if an article is written in praise of AI and abstrusely written, then it's probably AI slop, written by someone who holds their own ideas in high enough regard to publish but holds their audience in low enough regard that they won't bother to edit it.
Edit: there it is. `vibe-coded and deployed with Claude Code`
Y'know, you (the general 'you', not you specifically, coldbrewed) feel bad about your writing or blog because of the odd spelling error, or grammar issue, or repeated language, or maybe your points aren't clear enough, or maybe you're talking in the wrong tone...
...and then you read something like this, and realize, "Yeah, no, I have room for improvement, sure, but thank f*** I'm not like this."
The entire post reads like someone high on their own supply. Just when I think they're getting to a point, they pull out every fifty-dollar word and concept they possibly can (explaining none of it, nor linking to any Wikipedia articles to help readers understand) to ostensibly sound smarter-than-you and therefore entitled to authority.
I'm sure there's a law/rule/principle for this concept somewhere, but if you can't explain your point simply, you don't understand the topic you're trying to communicate. This one-off, vibe-coded (RETCH), slop-slinger is a prime example of such.
Pay no attention to the charlatan cosplaying as tenured academia.
Again, this piece was not written with GPT; feel free to ask any GPT. Ironically, maybe I should have, to increase the appeal of my ideas. I chose my words carefully to communicate my ideas precisely.
GPTs historically aren't great at identifying their own work; if they could, AI-based cheating wouldn't be the problem it is at present.
Assuming you're the OP, and this is your blog, let me give you some feedback:
* Choosing your words carefully and communicating your ideas clearly are separate skills. You may have chosen the most precise language, but your ability to communicate ideas to as wide an audience as HN is lacking (judging by the comments)
* If you're going to invoke half a dozen rules, principles, laws, and/or proofs in the span of two pages, then you'd better link the associated Wikipedia articles for folks to follow along, at least until you've established a readership baseline. People read blogs for learning or entertainment, and if you're trying to teach a perspective, then you need to include copious links to this material; otherwise, your readership is going to turn into an echo chamber of similar academics (or people cosplaying as such, which is dangerous)
* Your narrative structure leaves a lot to be desired. Are you sharing an opinion piece about a potential AI-energized future? Or are you mocking AI detractors? Or are you digging up old memes? Maybe you're getting into the philosophy angle of capitalism and entrepreneurship? Or perhaps making a judgement about the perceived lack of "startup spirit" of modern workers? I honestly can't tell, because at times it feels like this single piece is touching upon all of them, but not going into anything more than surface-depth about any of them
* As far as reads go, it's a strugglebus. Your blog gives no insight into the author as a person, but the piece reads as if we should already know you and respect your authority on the topic because of credentials. Its sentences meander far too long before stopping, as if you're trying to consolidate complex thoughts that demand a paragraph of context into a single, lengthy, dense sentence - and leaving readers to figure it out on their own time, like a University Professor with tenure.
* Personal nitpick here, but your application of the Pareto Principle to human labor within corporations betrays your inexperience (at best) or your absence of empathy (at worst). More than likely, it displays a profound level of distance from work "in the trenches", and the associated lack of understanding of why corporations are formed, grow, function, struggle, wither, collapse, and die. Talk to more workers, and not just ones at your present company/title/rank/experience level/demographic brackets. Humans are messy creatures, not machines, and assuming they will behave as machines inside other machine-like structures is ignoring the inherent chaos of existence.
I got so annoyed with the “trying to be smart” writing that I summarized it with ChatGPT.
The article “The World Is Ending. Welcome to the Spooner Revolution” from Aethn delves into the transformative impact of advanced AI models on the global socio-economic landscape.
The author critiques the belief in a static “end of history,” suggesting that recent advancements in AI, particularly large language models (LLMs), are catalyzing a profound shift in how work and economic structures function. These AI models have evolved beyond previous limitations, enabling automation of tasks across various knowledge-based professions without extensive fine-tuning.
This technological leap is diminishing the traditional value of wages, as individuals can now leverage AI to perform tasks that previously required entire teams. Consequently, there’s a growing trend toward self-employment and entrepreneurial ventures, echoing the ideals of 19th-century thinker Lysander Spooner, who advocated for a society where individuals operate independently of wage-based systems.
The article posits that this “Spooner Revolution” will lead to a surge in small enterprises, a decline in traditional corporate structures, and a reevaluation of educational and economic institutions to accommodate this new paradigm.
Fascinating read, very unique writing style. I’m not sure I agree that it will make people more entrepreneurial. People desire the stability a wage provides; it is incredibly difficult to set out on your own. Most people don’t have the financial security to start one company, so the fact that they can now trial 20 in the same time doesn’t change that. I do think it enables rapid prototyping via fast access to experts or code generation. But if you’re starting a biotech, you probably still need a lab, which is expensive or tied to an institution, so AI can only provide so much there.
I’m not sure I understand your argument for early-stage venture going away. There’s more proven before funding with AI, so the round has to be bigger, since it’s inherently later-stage by the time it reaches VC?
For some industries, such as chemistry and biotech, little change is to be expected.
Those aspiring to self-employment can now rapidly develop their ideas while still employed, reducing their risk and cost.
Per VC, that's right; secondarily, it's more difficult to justify earlier funding when one no longer needs to hire a substantial team. I expect the current major VC firms to get even bigger.
I don't, because such a model presumes every human has similar entrepreneurial capabilities and spirit. Neurodivergent humans may lack the social skills necessary for success in entrepreneurship, some humans may place a higher valuation on relationships than they could effectively trade upon via entrepreneurship, and others still may be of a persecuted group for whom entrepreneurship brings more risk than would be offset through income gains.
Even if we did restructure society around everyone being their own business owner (and man, are the powers that be trying to do so for a whole slew of awful, worker-screwing reasons), the level of competition OP tries to describe (of sole proprietors competing 1:1 against corporate behemoths, as if corporations somehow wouldn't also benefit from such AI-amplification of output) is highly unlikely. The driver of massive AI investment at present is to lessen the impact of workers in an attempt to drive down wages and consolidate power within the hands of the few, prime corporate players; such a future filled with self-starting entrepreneurs would still be limited to those who can afford the tokens or subscription plans to the largest AI models (or have the hardware and expertise at home to roll their own), and so you'd just have a global gig economy for workers with three or four companies hoovering up most of the excess for themselves through AI costs.
Look, I'd love nothing more than to run my own little IT Services firm with a sociable partner to handle the social networking aspect, while I get to tinker, build, and make people smile all day long at how cool technology can be. AI, however, ain't it. If anything, it threatens the survival of corporate knowledge workers and entrepreneurs both, by grossly devaluing their labor at a time of massive wealth inequality and severe employment precarity.
I try to address this in the article. I think the same could have been expected of plumbing, electrical work, landscaping, etc., but through a different means: standardization. The corporations simply don't have a good position there.
In this world the neurodivergent are empowered: they are no longer charged with persuading a team or corporation of their ideas. They can build their ideas themselves and form a limited partnership with someone who has the talents they lack.
I make it a point to describe the new economic order without regard to its desirability. That said, other authors on the subject (e.g. Lysander Spooner, Rothbard) would be pleased by the development in terms of its social welfare.
This reads like an example from Orwell's "Politics and the English Language". Which on its face leads me to wonder what sort of semantic shell game the author is up to.
> The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few
> Even with that, there are obvious limitations described by Amdahl's law, which states there is a logarithmic maximum potential improvement by increasing hardware provisions.
I don't know why so many people are obsessed with Amdahl's law as some universal argument. The quoted section is not only 100% incorrect, it sweeps the blatantly obvious energy problem under the rug.
Imagine going to a local forest and pointing at a crow and shouting "penguin!", while there are squirrels running around.
What Amdahl's law says is that given a fixed problem size and infinite processors, the parallel section will cease to be a bottleneck. This is irrelevant for AI, because people throw more hardware at bigger problems. It's also irrelevant for a whole bunch of other problems. Self driving cars aren't all connected to a supercomputer. They have local processors that don't even communicate with each other.
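For readers new to the law, a toy numeric sketch (illustrative numbers only, not from the article) contrasts Amdahl's fixed-size bound with Gustafson's scaled-speedup view, which is the "bigger problems" regime described above:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when a fraction p of the
    work (at a FIXED problem size) is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's law: scaled speedup when the problem grows with n,
    so the parallel part expands to fill the added processors."""
    return (1.0 - p) + p * n

# With 95% parallel work, Amdahl caps the speedup at 1/0.05 = 20x no
# matter how many processors are added; the scaled view keeps growing.
for n in (10, 100, 10_000):
    print(n, round(amdahl_speedup(0.95, n), 2), round(gustafson_speedup(0.95, n), 2))
```

Under Amdahl's fixed-size assumption the speedup asymptotes at 20x, while the scaled-problem view keeps climbing, which is closer to how AI training workloads are actually provisioned.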
> The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few.
> And yet these perinatal automatons are totally eviscerating all knowledge based work as the relaxation of the original hysterics arrives.
These two sentences contradict each other. You can't eviscerate something and only mostly "replace" it.
This is a very disappointing blog post that focuses on wankery over substance.
We would see neither squirrels nor crows since these criticisms miss the forest for the trees. But we can address them.
> This is irrelevant for AI, because people throw more hardware at bigger problems
AGI is a fixed problem, namely Solomonoff induction. Further, Amdahl's law is restricted neither to software nor to supercomputers.
Both inference and training rely on parallelization; LLM inference has multiple serialization points per layer. Végh (2019) quantifies how Amdahl's law limits success in neural networks[1], and further states:
"A general misconception (introduced by successors of Amdahl) is to assume that Amdahl’s law is valid for software only". It applies to a neural network just as it does to the problem of self-driving cars.
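To make the serialization-point argument concrete, here is a toy accounting sketch (the op counts are hypothetical, not figures from Végh or the linked papers):

```python
def speedup_cap(serial_ops_per_layer, parallel_ops_per_layer, n_layers):
    """Amdahl-style upper bound (infinite processors) for a forward pass
    in which every layer contains a serial section, e.g. reductions and
    norms that act as synchronization points."""
    serial = serial_ops_per_layer * n_layers
    total = (serial_ops_per_layer + parallel_ops_per_layer) * n_layers
    return total / serial  # = 1 / serial fraction

# With 1 serial op per 50 total ops in each layer, the cap is 50x,
# and it is the same whether the network has 12 layers or 96:
print(speedup_cap(1, 49, 12), speedup_cap(1, 49, 96))  # 50.0 50.0
```

The cap depends only on the per-layer serial fraction, not on depth, which is why adding layers or processors alone doesn't escape the bound.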
> These two sentences contradict each other
There is no contradiction, only a misunderstanding of what "eviscerating" means; and even granting that incorrect definition and the threshold test it implies, the claim still remains applicable.
I am new to Amdahl's law, but wouldn't a rearchitecture make it less relevant? For example, instead of growing an LLM that has more to do in parallel, separate it into agents (maybe a bit like areas of the brain?). Is Amdahl's law just a limit for the classic LLM architecture?
I don't think it can ultimately be escaped, but the cited Végh paper proposes exactly that (bioinspiration) as a means to surpass those limitations.
However, in this article I contend that those limitations have posed little adversity in the field given the success of the latest models. As a result, it may be a bit premature to be concerned about it.
This is GPT’s take after I prompted it for his opinion on the crux of the piece:
It’s a bold, sharp take on AI’s coming shake-up.
• Core claim: New GPT-powered tools let one person match a small team’s output. That erodes big firms’ value and shifts us toward solo, AI-driven gigs.
• Strength: It sees how low coding, design or analysis costs plunge entry barriers—e.g. an indie writer using GPT to research, draft and polish entire articles.
But it skips some real-world frictions.
• Go-to-market still needs sales, trust and networks. AI can build a prototype, not always sell it.
• Risk and capital haven’t vanished: legal, data, infrastructure, marketing still demand teams or funding.
My take: AI will empower many more solo ventures and cut rote jobs. Yet wages and firms won’t crumble overnight. We’ll get hybrids—small outfits plus AI—before a full “Spooner” world arrives.
I wanted to avoid leaning on my experience, but I worked at FAANG and helped create a multi-billion-dollar corporation (from a handful of people to 1,500 people).
I doubt that it's true, though; it really sounds like AI writing.
We can do better than that.
Nothing interesting, they're just trying to vibe code their way into being a thought leader. Which is both hilarious and sad.
"Replaces". Uh-huh.
1. https://pmc.ncbi.nlm.nih.gov/articles/PMC6458202/
Further reading on Amdahl's law w.r.t LLM:
2. https://medium.com/@TitanML/harmonizing-multi-gpus-efficient...
3. https://pages.cs.wisc.edu/~sinclair/papers/spati-iiswc23-tot...