> I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it’s just going to do it right. It’s not going to mess that up. You have it add automated tests, you have it add documentation, you know it’s going to be good.
I feel like this is just not true. A JSON API endpoint also needs several decisions made (a rough sketch follows the list):
- How should the endpoint be named
- What options do I offer
- How are the properties named
- How do I verify the response
- How do I handle errors
- What parts are common in the codebase and should be re-used.
- How will it potentially be changed in the future.
- How is the query running, is the query optimized.
…
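To make those concrete, here's a rough sketch of the kind of endpoint being discussed. Express and Postgres are my assumptions, as is every name in it; each comment marks one of the decisions above:

```typescript
// Hypothetical Express + Postgres endpoint; all names here are made up.
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection/pooling config: yet another decision

// Decision: endpoint naming (and verb)
app.get("/api/orders", async (req, res) => {
  // Decision: which options to offer (filters, paging, defaults, limits)
  const limit = Math.min(Number(req.query.limit ?? 50), 200);
  try {
    // Decision: how the query runs, whether it's indexed, what can be reused
    const { rows } = await pool.query(
      "SELECT id, customer_id, total_cents FROM orders ORDER BY created_at DESC LIMIT $1",
      [limit]
    );
    // Decision: property naming and response shape, and how it can evolve later
    res.json({ orders: rows });
  } catch (err) {
    // Decision: error handling and what the caller is allowed to see
    res.status(500).json({ error: "internal_error" });
  }
});

app.listen(3000);
```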
If I know the answer to all these questions, wiring it together takes me LESS time than passing it to Claude Code.
If I don’t know the answer the fastest way to find the answer is to start writing the code.
Additionally, whilst writing it I usually notice additional edge cases, optimizations, better logging, observability, and so on.
The author clearly stated the context for this quote is production code.
I don’t see any benefits in passing it to Claude Code. It’s not that I need 1000s of JSON API endpoints.
Like writing code to me is not slower than writing text?
When I write code every character I type in my computer has less ambiguity than when I write it in human language? I also have the help of LSPs, Linters and Auto-completes.
It's not much to go on, but I kinda feel ya. I think one exception I'd perhaps make is doing a large mechanical refactor. I find them incredibly daunting. So, I'll just ask AI for that. I mean, it probably takes me a similar time to do, but it feels less daunting.
I've been trying to get into agentic coding and there are non-refactoring instances where I might reach for it (like any time I need to work on something using Tailwind; I'm dyslexic and I'd get actual headaches, not exaggerating, trying to decipher Tailwind gibberish while juggling their docs before AIs came around).
I use JetBrains features for that usually; it has great tools for that.
Let's say on that JSON API I want to extract part of the logic into a repository file. I Ctrl+W the function, then I have almost all of my shortcuts on left Alt + two-character combos. So once it's marked I do LAlt + E + M for Extract Method, which puts me in an intermediate step to rename the function,
and then LAlt + M + V for MoVe, and then it puts me in an interface to name the function.
Once you're used to it, it's like a gamer doing APMs, and it's deterministic and fast. I also have R+N (rename), G+V (generate vitest), Q+C (query console), Q+H (query history) and many more. Really useful. Probably also doable with other editors.
Yeah, I can and I’ve done it, and for a fun project it's fun and cool. But it's like using templates to build your website. You'll be annoyed, and at some point your project goes into the endless graveyard of abandoned projects.
I don’t want every verb implemented, and I also don't want an IETF standard. I want as little as possible, so I have to worry about as little as possible in the future.
Use-cases differ; you described a complete REST API, which can be as much of a problem as too little.
By the time it has explored the codebase, asked me follow-up questions, suggested the code change, and incorporated my fixes (after losing time on the context switch, plus the extra time I need to learn the mental model when somebody requests a change in 3 months), I'm way faster just writing it myself (mental model included).
Vibe Coding (and LLMs) did not create undisciplined engineering organizations or engineers. They exposed and accelerated them.
Plenty of engineers have loose (or no!) standards and practices over how they write code. Similarly, plenty of engineering teams have weak and loose standards over how code gets pushed to production. This concept isn't new; it's just a lot easier for individuals and teams who have never really adhered to any sort of standards in their SDLC to produce a lot more code and flesh out ideas.
Bad engineers continue being bad, good engineers continue being good.
I personally don’t know any colleagues who were good engineers just because they wrote code faster. The best engineers I know were ones who drew on experience and careful consideration and shared critical insights with their team that steered the direction of the system positively.
> Claude, engineer a system for me, but do it good. Thanks!
>> Bad engineers continue being bad, good engineers continue being good.
I don't know if good engineers can necessarily continue to be good. There is a limit to how much careful consideration one can give if everything is on an accelerated timeline. Good or not, there is a limit on how much influence you have in setting those timelines. The whole playing field is changing.
It's deeper. We used to mock architects that stepped back and stopped coding, because they generated trash.
There's a cycle that is needed for good system design. Start with a problem and an approach, and write some code. As you write the code, you reify the design and flesh out the edge cases, learning where you got the details wrong. As you learn the details, you go back to the drawing board and shuffle the puzzle pieces, and try again.
Polished, effective systems don't just fall out of an engineer's head. They're learned as you shape them.
Good engineers won't continue to be good when vibe-coding, because the thing that made them good was the learning loop. They may be able to coast for a while, at best.
Good engineers are also capable of managing expectations. They can effectively communicate with stakeholders what compromises must be made in order to meet accelerated timelines, just as they always have.
We’ve already had conversations with overeager product people about the ramifications of introducing their vibe-coded monstrosities:
- Have you considered X?
- Have you considered Y?
Their contributions are quickly shot down by other stakeholders as being too risky compared to the more measured contributions of proper engineers (still accelerated by AI, but not fully vibe-coded).
If that’s not the situation where you work, then unfortunately it’s time to start playing politics or find a new place to work that knows how to properly assess risk.
Yep, validation is key. The smartest thing I've heard on this, which has reoriented how I think about this is that the objective function of a piece of software is now more important to get right than the implementation.
> the objective function of a piece of software is now more important to get right than the implementation
That has always been the case. That is why weeks or even months of programming and other project busy work could replace a couple of days of time getting properly fleshed out requirements down.
I estimate that I'm now spending about 10 to 30 hours less time a week in the mechanical parts of writing and refactoring code, researching how to plumb components together, and doing "figure out how to do unfamiliar thing" research.
All of those hours are time that can now be spent doing "careful consideration" (or just being with my family or at the gym or reading a book, which is all cognitively valuable as well).
Now, I suppose I agree that if timelines accelerate ahead of that amount of regained time, then I'm net worse off, but that's not the current situation at the moment, in my experience.
Maybe we do different things. Not that you are wrong about spending less time on things that you don't care about, but at the same time all those mechanical things help you build a really good mental model of your product, from high-level design to individual classes. If I already have a good mental model of that, I can direct AI to make really good changes fast; if I don't, I will get things done ... but it does end up with less-than-ideal changes that compound over time.
What you said, "figure out how to do unfamiliar thing", is correct, and will get things done, but overall quality, maintainability, or understanding how individual pieces work... that's what you don't get. One can argue who cares about all that, as AI can take care of it or already can. I don't think that's true today, at least.
Yeah! I mean, who needs to LEARN how to do these things properly when you can just let an autocorrect on steroids hallucinate the closest thing to “barely working”. Right?
10 to 30 hours saved on not learning new things! Hurray!
in my experience this is because there are very very very very few thoughtful designers and engineers, especially compared to people that are cranking out code.
> I personally don’t know any colleagues who were good engineers just because they wrote code faster
Same, if anything, the opposite seems to be true, the ones that I'd call "good engineers" were slower, less panicked when production was down and could reason their way (slowly) through pretty much anything thrown at them.
Opposite experience: I've sat next to developers trying their fastest to restore production and then making more mistakes that make it even worse, or developers who rush through the first implementation idea they had for a feature, failing to consider so many things, and so on.
This is true. But I find AI tools to be a huge help for all of this. Not to do any of it faster, but to remove a bunch of the tedium from the process of testing ideas and iterating on them. Instead of "I wonder if the problem is..." requiring half an hour of research, now I can do an initial check of that theory in less than a minute, and then dig further, or move onto the next one. Or say I estimate it's gonna take me an hour or more to test an idea, I might just decide I don't have time to invest in that. Well now maybe I can get a tentative answer on that by spending a minute laying out the theory and letting an agent spend ten or twenty minutes on it in the background. In this way I can explore space I just would have determined was not worth the effort previously.
To me, none of this feels like "going faster", it feels like "opening up possibilities to try more things, with a lot less tedious work".
> Same, if anything, the opposite seems to be true, the ones that I'd call "good engineers" were slower
Unfortunately, a lot of workplaces are ignoring this, believing their engineers are assembly line workers, and the ones who complete 10 widgets per minute are simply better than the ones who complete 5 widgets per minute.
It isn't just that they believe this - they want a business model where this is how it works. For a big company a star coder is a liability - they have strong labor power, they can leave and they are hard to replace, etc.
Companies want workflows that work with mediocre programmers because they are more like interchangeable parts. This is the real secret to why AI programming will work in a lot of places. If you look at the externalities of employing talented people, shitty code actually looks better than great code.
To these kinds of companies, what's even better than a rack of mediocre programmers? AI agents that you can just conjure up and prompt. They take up no facility space, don't require lunch breaks or vacations, obey all commands and direction, and produce a predictable and consistent amount of output per dollar.
This is the earworm the leaders of these companies have allowed into their minds. Like Agent Mulder, they Want To Believe in this so badly...
> This is the earworm the leaders of these companies have allowed into their minds. Like Agent Mulder, they Want To Believe in this so badly...
If you assume they are not idiots and analyze the FOMO incentives via a little game-theory, it becomes clear why.
Assuming the competition has adopted AI, leadership can ignore it, or pursue it. If they adopt it, then they are level with the competition whether AI actually succeeds or fails - they get to keep their executive job.
If leadership ignores AI, and it actually delivers the productivity gains to the competition, they will be fired. If they ignore AI and it's a bust, they gain nothing.
Glad I find myself employed under a division called Research and Development. Poaching and retaining highly compensated individuals is the entire purpose.
> I personally don’t know any colleagues who were good engineers just because they wrote code faster.
However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out. It's precisely that speed that enables a process like "let's try X, hmm, how about Y, no... ok, Z is nice; ok team, here are the tradeoffs...". Then they remember their experience with X, Y, and Z, and use it to shape their thinking going forward.
Meanwhile, other engineers have gotten X to finally mostly work and are invested in shipping it because they just want to be done. In my experience, this is how a lot of coding agents seem to act.
It's not obvious to me how to apply the expert loop to agentic coding. Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier...
Or: depth-first search of the solution space vs breadth-first (or balanced) search of the solution space.
> Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier
The ideal solution increasingly seems to be encoding everything that differentiates a good engineer from a bad engineer into your prompt.
But at that point the LLM isn’t really the model as much as the medium. And I have some doubts that LLMs are the ideal medium for encoding expertise.
> However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out.
That's not my experience... mostly it's about first interrogating the actual problem with the customer and conditions under which it occurs. Maybe we even have appropriate logging in our production application? We usually do, because you know, we usually need to debug things that have already happened.
(If it's new/unreleased code, sure fine, let's find a debugger.)
> However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out
The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem analytically (whiteboard, deep thinking on a sofa), or you get fast at trying out stuff AND keeping the good bits.
Yeah, a lot of people came of age with a "we'll fix it when it's a problem" mindset. Previously their codebases would start to resist feature development, you'd fix the immediate bottlenecks, and then you could kick the can down the road a bit until you hit the next point of resistance. You kinda refactor as you do features. The frontier models have pushed the "it's a problem" moment further back. They can kinda work with whatever pile of code you give them... to a point. So it manifests as the LLM introducing extra regressions, or dropping more requirements than it used to, but it's not really manifesting as the job being harder for you. It's just not as smooth as it was from an empty repository. Then you hit the point where it just breaks too much and you need to fix it. And the whole codebase is just fractal layers of decisions that you didn't make. That's hard to untangle. And you're not editing the code yourself, so you don't have that visceral "adding this specific thing in this specific way has a lot of tension" reaction that allows you to have those refactoring breakthroughs.
This is the sharpest observation in the thread. The "tension" you describe is proprioception for code — you feel where the abstractions leak, where the seams don't align, through the act of writing and refactoring. It's not a visual signal. You can't get it from reading a diff.
The risk isn't that agents write bad code. It's that developers lose the sense that tells them where code is bad. Code review is perception. Writing code is proprioception. They're different senses and one doesn't substitute for the other.
The question for the agent era isn't "is the code good enough to ship" — it's "do I still have enough coupling to the codebase to know when it isn't?"
> Vibe Coding (and LLMs) did not create undisciplined engineering organizations or engineers.
Loss of discipline can be a result of panic or greed.
Perhaps believing that your own costs or your competitors' costs are suddenly becoming 10x lower could inspire one of those conditions?
(Also for greenfield projects specifically, it can plausibly be an experiment just to verify what happens. Some orgs are big enough that of course they can put a couple people on a couple-month project that'll quite likely fall flat.)
This is very true. I've found these tools that I am highly encouraged to use very hit and miss, which they are by nature. After using Matt Pocock's skills, I've come around to the idea that LLMs' main utility is to act as the ultimate rubber ducky. The `grill-me` feature is honestly the most useful, not for guiding the follow-up writing of code, but for making me write down and explore the idea I have more quickly. Its guesses at questions to ask are generally pretty good. I don't believe there is any 'understanding', so I feel the rubber ducky analogy works quite well. This isn't anything you couldn't do before with some discipline, but at least I find it helps me be more consistent.
The first time I used LLMs, it was to try and refactor behind a solid body of tests I trusted.
I figure if it can't code when it has all of the necessary context available and when obscure failures are easily detected, then why would I trust it when building features and fixing bugs?
Can’t wait for the next stage of escalation when teams start to feel code review is keeping them from vibe coding utopia. It’ll probably be “AI review only, keep your human opinions to yourself” just so they can continue to check the “all changes are reviewed” box on security checklists.
Vibe-coded apps with barely any tests, invariants, etc. No wonder it turns into spaghetti. You can always refactor code, force agents to write small modular pieces and files. Good engineering is good engineering whether an agent or human wrote the code. Take time to force agents to refactor, explore choices. Humans must at least understand and drive architecture at this point still. Agents can help and do recon amazingly and provide suggestions.
I can’t understand this. The first thing I do with a new agent-driven project is set up quality checks: linters, test frameworks, static analysis, etc… Whatever I would expect a developer to do, I would expect an agent to do. All implementation has to go through build success and mixed agent reviews before moving on.
I might not do this with initial research/throwaway prototype, but once I know what direction to go and expect code to go to production it is vital to set guard rails.
> The first thing I do with new agent driven project is set up quality checks. Linters, test frameworks, static analysis, etc
I do this too, but then I sit and observe how the agent gets very creative about going around all of these layers just to get to the finish line faster.
Say, for example, I needlessly pass a mutable reference and the linter screams at me. I know either the linter is wrong in this case, or I should listen to it and change the signature. If I make the lazy choice, I will be dissatisfied with myself, I might even get scolded, or even fired if I keep making lazy choices.
LLM doesn't get these feelings.
LLM will almost always go for silencing it because it prevents it from reaching the 'reward'. If you put guardrails so that LLM isn't allowed to silence anything, then you get things like 'ok, I'll just do foo.accessed = 1 to satisfy the linter'.
Same story with tests. Who decides when it's the test that should be changed/deleted or the implementation?
I can generate a lot of tests amounting to assert(true). Yeah, LLM generated tests aren't quite that simplistic, but are you checking that all the tests actually make sense and test anything useful? If no, those tests are useless. If yes, I don't actually believe you.
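To be concrete about what "amounting to assert(true)" looks like in practice, here's a sketch (vitest, and parsePrice is a made-up module, not anything from a real codebase):

```typescript
// Hypothetical vitest example; parsePrice is a made-up module for illustration.
import { describe, it, expect } from "vitest";
import { parsePrice } from "./parsePrice";

describe("parsePrice", () => {
  // The kind of test I mean: it runs the code but can essentially never fail,
  // so it pads the test count while pinning down nothing.
  it("works", () => {
    const result = parsePrice("12.50");
    expect(result).toBeDefined(); // morally assert(true)
  });

  // A test that actually constrains behaviour has to commit to the answer.
  it("parses a decimal string into integer cents", () => {
    expect(parsePrice("12.50")).toBe(1250);
  });
});
```

The first one passes no matter what the implementation does; only the second pins down real behaviour, and someone has to read them to know which kind they got.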
It's the typical 10 line diff getting scrutinized to death, 1000 line diff: Instant LGTM.
Lead engineer says something is not workable? The PM overrides, saying that Claude Code could do it. Problems are found months later at launch, and now the engineers are on the hook.
New junior onboardee declares that their new vision is the best and gets management onto it cuz it’s trendy -> broken app.
It’s made collaboration nearly unbearable as you are beholden to the person with the lowest standards.
I hate how correct you are.
Working at a company with only two engineers and a few sales and marketing people, the amount of "hey, I made that feature with Claude, when can we ship it for the customer? I showed them and they really like it", only to look at the code and find out that it doesn't adhere to any of our standards and isn't of good quality either. But if you point that out, then it's "yeah, but everyone is AI shipping now and we cannot be the ones not doing it, as we will lose customers..." Yeah, but now we are losing maintainability and understanding of our codebase, and making ourselves dependent on LLM providers who are getting more expensive every week.
Perhaps I've missed a few weeks worth of progress, but I don't think that AIs have become more trustworthy, the errors are just more subtle.
If the code doesn't compile, that's easy to spot. If the code compiles but doesn't work, that's still somewhat easy to spot.
If the code compiles and works, but it does the wrong thing in some edge case, or has a security vulnerability, or introduces tech debt or dubious architectural decisions, that's harder to spot but doesn't reduce the review burden whatsoever.
If anything, "truthy" code is more mentally taxing to review than just obviously bad code.
> I don't think that AIs have become more trustworthy, the errors are just more subtle.
Honest question: what about the counter-argument that humans make subtle mistakes all the time, so why do we treat AI any differently?
A difference to me is that when we manually write code, we reason about the code carefully with a purpose. Yes we do make mistakes, but the mistakes are grounded in a certain range. In contrast, AI generated code creates errors that do not follow common sense. That said, I don't feel this differentiation is strong enough, and I don't have data to back it up.
One answer, as another person pointed out, is that LLM mistakes are just different. They are less explicable, less predictable, and therefore harder to spot. I can easily anticipate how an inexperienced engineer is going to mess up their first pull request for my project. I have no idea what an LLM might do. Worse, I know it might ace the first fifty pull requests and then make an absolutely mind-boggling mistake in the 51st one.
But another answer is that human autonomy is coupled to responsibility. For most line employees, if they mess up badly enough, it's first and foremost their problem. They're getting a bad performance review, getting fired, end up in court or even in prison. Because you bear responsibility for your actions, your boss doesn't have to watch what you're up to 24x7. Their career is typically not on the line unless they're deeply complicit in your misbehavior.
LLMs have no meaningful responsibility, so whoever is operating them is ultimately on the hook for what they do. It's a different dynamic. It's probably why most software engineers are not gonna get replaced by robots - your director or VP doesn't want to be liable for an agent that goes haywire - but it's also why the "oh, I have an army of 50 YOLO agents do the work while I'm browsing Reddit" is probably not a wise strategy for line employees.
Obviously, the measure isn’t mistakes per day, it’s mistakes per LOC. And that’s not the whole story either - AI self-corrects in addition to being corrected by the operator. If the operator’s committed bugs/LOC rate is as low as the unaugmented programmer’s bugs/LOC, you always choose the AI operator. If it’s higher, it might still be viable to choose them if you care about velocity more than correctness. I’m a slow, methodical programmer myself, but it’s not clear to me that I have a moat.
This is like having a coworker who's as skilled as you if not more skilled, but also an alien.
Their mental model doesn't map cleanly enough to yours, and so where for a human you'd have some way to follow their thought patterns and identify mistakes, here the alien makes mistakes that don't add up.
Like the alien has encyclopedic knowledge of op codes in some esoteric soviet MCU but sometimes forgets how to look for a function definition, says "It looks like the read tool failed, that's ok, I can just make a mock implementation and comment out the test for now."
Nope, I tried my best to be really detailed and already knew these replies would come flooding.
I'm an engineer's engineer: I get that the job isn't LOC but being able to communicate and translate meatspace into composable and robust systems.
So I mean an alien when I say an alien.
Not human.
Not in the cute "oh that guy just hears what everyone else hears and somehow interprets it entirely differently like he's from a different planet" alien way, but in the, "it is a different definition of intelligence derived from lacking wetware" alien way.
Intelligence is such a multidimensional concept that all of humanity, as varied as we are, can fit in a part of the space that has no overlap with an LLM.
-
Now none of that is saying it can't be incredibly useful, but 99% of the misuse and misunderstanding of LLMs stems from humans refusing to internalize that a form of intelligence can exist that uses their language but doesn't occupy the same "space" of thinking that we all operate in, no matter how weird or unique we think we are.
I have no strong idea why people can't accept that intelligence formed separately of a human brain can truly be alien: not in the hyperbolic sense of "that person is so unique it's like they're a different species", but "that thing does not have a brain, so it can have intelligence that is not human-like".
A human without a brain would die. An LLM doesn't have a brain and can do wondrous things.
It just does them in ways that require first accepting that no homo sapiens thinks like an LLM.
We trained it on human language so often times it borrows our thought traces so to speak, but effective agentic systems form when you first erase your preconceived notions of how intelligence works and actually study this non-human intelligence and find new ways to apply it.
It's like the early days of agents when everyone thought if you just made an agent for each job role in a company and stuck them in a virtual office handing off work to each other it'd solve everything, but then Claude Code took off and showed that a simple brain dead loop could outperform that.
Now subagents almost always are task specific, not role specific.
I feel like we could leap ahead a decade if people could divorce themselves from "we use language, and it uses language, so it is like us", but I think there's just something really challenging about that, because it's never been true.
Nothing had this level of mastery over human language before that wasn't a human. And funnily enough, the first times we even came close (like Eliza) the same exact thing happened: so this seems like a persistent gap in how humans deal with non-humans using language.
I know there are good uses of LLMs out there. I do. But.
The current fever pitch mandates from above seem to want it applied liberally, and pushing back against that is so discouraging and often career-limiting as to wear the fabric of one's psyche threadbare. With all the obvious problems being pointed out to people, there are just as many workarounds; and these workarounds, as is often revealed shortly thereafter, have their own problems, which beget new solutions, ad infinitum.
At some point it genuinely seems like all this work is for the sake of the machine itself. I suppose that is true: The real goal has become obscured at so many firms today, that all that remains is the LLM. Are the people betting the farm and helping implement the visions of those who have done so guaranteed a soft exit to cushion them from the consequences, or is rationality really being discarded altogether?
Sure, sound engineering principles can help work around these problems, but what efficiency is truly gained, in terms of cognitive load, developer time, money, or finite resources? Or were those ever an earnest concern?
It’s an absolute game changer, and it can now multiply your productivity fivefold if it’s a solo greenfield project.
Maybe half a year ago it was as you said. You had to wait for the agent to finish, you had to review carefully, and often the result was not that great. You did not save a lot of time.
Now I can spin up 3+ parallel conversations in Codex, each in a git worktree. My work is mainly QA testing the features, refining the behavior, and sometimes making architectural decisions.
The results are now undeniable. In the past I could not have developed a product of that scope in my free time.
That is what is possible today. I suspect many engineers have not yet tried things that became feasible over the last months. Like parallel agents, resolving merge conflicts, separating out functionality from a large branch into proper PRs.
"many engineers have not yet tried things that became feasible over the last months"
I have heard this statement every single day for 2 years and yet we still have no companies compressing 10 years into 1 year thus exploding past all the incumbents who don't "get it".
which is a pretty large caveat. Anecdotally, I've found my side projects (which are solo greenfield projects, and don't need to be supported to the same standards as enterprise software) have gained the boost the GP was talking about.
At work, it's different, since design, review, and maintenance is much more onerous.
If you want an example of a project that condensed 5 years into 6 months and exploded past the competition I suggest looking at OpenClaw.
The first line of code was written on November 25th. It achieved adoption in the "personal agents" space that far exceeded the other companies that had tried the same thing.
(Whether or not you trust the quality of the software you can't deny the impact it had in such a short time. It defined a new category of software.)
Ideally, the given example would be something not adjacent to the presently white-hot category of "AI agents".
Like, look at e.g. YC minus the AI and AI-adjacent companies. Are those startups meaningfully more impressive or feature-rich compared to a couple years ago?
OpenClaw is definitely not a "5 years" project pre-AI though. That was more like a month of greenfield work compressed into a weekend -- which is still really impressive, don't get me wrong! -- but I think the point is we're not seeing mature, legacy codebases get outcompeted by new, agile, AI-driven codebases; we're seeing greenfield projects get spun up faster. Which, again, is still impressive and valuable.
If agents could really compress 10 years of development into 1 year, you'd see people making e.g. HFT platforms and becoming obscenely rich, not making a fun open-source project and getting hired by OpenAI as an employee.
It's trash vibe-coded markdown files around pi. This exemplifies well what the OP is saying. We are at the ICO stage of LLMs. Hopefully there won't be an NFT one.
As much as I love to hate on AI: even the bad apples still produce something that one can reasonably work with.
Cryptocurrencies? Barely any other use than money laundering, buying drugs and betting on the outcome of battles in war. And NFTs? No use at all other than money laundering and setting money ablaze.
The degenerate side is clueless upper management and fad-driven engineering. We have talked extensively about this.
There is a more rational side to it that I've seen in my org: some engineers absolutely refuse to use AI and as a consequence they are now, clearly and objectively, much less productive than other engineers. The thing is, you still need to learn how to use the tool, so a nontrivial percentage of obstinate engineers need to be driven to use this in the same way that some developers have refused to use Docker or k8s or whatever.
Ah yes, we must force these obstinate engineers to the right path! Only after getting everyone to see the light will they understand and thank us for boundless productivity!! /s
Perhaps these “obstinate” engineers have good reason in their decision. And it should be their decision!
To be so confident in what is “the right way (TM)” and try to force it onto others is... revealing.
You would be absolutely shocked how many software projects are still run, to this day, without source control at all. Or automated (or manual) testing. And how many hand crafted artisanal servers are running on AWS, never to be recovered if their EC2 instance is killed for some reason.
Sure, but that’s a small and shrinking market. Not a source of economic security or growth for its employees, nor for most of its companies (though some have defended niches).
I've seen growing companies running multiple million ARR through systems like that. It's way more common than you'd think if you're a professional software developer.
I seriously don't see how version control and LLMs are comparable. A deterministic way to track code changes over time, versus an essentially non-deterministic statistical code generator that might get you what you want, and might do it in a reasonable time frame, and that might not land you in a minefield of short-term-good/long-term-bad design points.
I ran the statistics myself and my company is spending 40% less time doing feature development since AI agents began to be used en masse and pushing 50% more tickets without any noticeable increase in regressions.
After 18 months the hard evidence is in place. And much like replacing bare-metal servers for many use cases, taking on the burden of k8s where the evidence justified it, or substituting Terraform for shell scripts, it's time to move on.
I don't really see a place for no AI usage in line-of-business software apps anymore.
You can direct LLMs to do test-driven development, though. Write several tests, then make sure the code matches them. And also make sure the agent organizes the code correctly.
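A minimal sketch of what "tests first" can look like (vitest, and slugify plus its expected behaviour are assumptions for illustration; the module doesn't exist yet, and the agent's only job is to make these pass without touching the test file):

```typescript
// Sketch of writing the spec as tests before any implementation exists.
import { describe, it, expect } from "vitest";
import { slugify } from "./slugify"; // to be written by the agent

describe("slugify", () => {
  it("lowercases and replaces spaces with dashes", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("drops characters that are not alphanumeric", () => {
    expect(slugify("Rock & Roll!")).toBe("rock-roll");
  });

  it("collapses repeated separators", () => {
    expect(slugify("a  --  b")).toBe("a-b");
  });
});
```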
"I don't trust this giant statistical model to generate correct code, so to fix it, I'm going to have this giant statistical model generate more code to confirm that the other code it generated is correct."
This has generally been the case, though. As mentioned in the post, "You want solutions that are proven to work before you take a risk on them" remains true and will be place where the edges are found.
If I get pwned because my AI agent wrote code that had a security vulnerability, none of my users are going to accept the excuse that I used AI and it's a brave new world. I will get the blame, not Anthropic or OpenAI or Google but me.
The same goes for if my AI generated code leads to data loss, or downtime, or if uses too many resources, or it doesn't scale, or it gives out error messages like candy.
The buck stops with me and therefore I have to read the code, line-by-line, carefully.
It's not even a formality. I constantly find issues with AI generated code. These things are lazy and often just stub out code instead of making a sober determination of whether the functionality can be stubbed out or not.
You could say "just AI harder and get the AI to do the review", and I do this a lot, but reviewing is not a neutral activity. A review itself can be harmful if it flags spurious issues where the fix creates new problems. So I still have to go through the AI generated review issue-by-issue and weed out any harmful criticism.
First of all, building a system that constrains the output of the AI sufficiently, whether that's typing, testing, external validation, or manual human review in extremis. That gets you the best result out of whatever harness or orchestration you're using.
Secondly, there's the level at which you're intervening, something along the hierarchy from "validate only usage from the customer perspective" to "review, edit, and validate every jot and tittle of the codebase and environment". I think for relatively low-importance things, reviewing at the feature level (all code, but not interim diffs) is fine, but if you're doing a network protocol you'd better at least validate everything carefully with fuzzing and prop testing or something like that.
And then you've got how you structure your feedback to the LLM itself - is it an in-the-loop chat process, an edit-and-retry spec loop, go-nogo on a feature branch, or what? How does the process improve itself, basically?
I agree with you entirely that the responsibility rests on the human, but there are a variety of ways to use these things that can increase or decrease the quality of code to time spent reviewing, and obviously different tasks have different levels of review scrutiny, as well.
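As one concrete shape of the "fuzzing and prop testing" end of that hierarchy, here's a sketch with fast-check; encodeFrame/decodeFrame are hypothetical names standing in for whatever the agent wrote, not a real library:

```typescript
// Property test: instead of reading every line of the generated encoder and
// decoder, pin down the round-trip invariant and let fast-check hammer it.
import fc from "fast-check";
import { test, expect } from "vitest";
import { encodeFrame, decodeFrame } from "./framing";

test("decode(encode(payload)) round-trips for arbitrary byte payloads", () => {
  fc.assert(
    fc.property(fc.uint8Array({ maxLength: 4096 }), (payload) => {
      // If the encoder and decoder disagree on any input, fast-check finds
      // and shrinks a counterexample instead of me spotting it in a diff.
      expect(decodeFrame(encodeFrame(payload))).toEqual(payload);
    })
  );
});
```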
> My nonexistent backend isn’t going to be pwned if there is a bug in the thumbnail generation.
Hmm. Historically image editing was one of the easier to exploit security holes in many systems. How do you feel about having unknown entities having shell inside your datacenter or vpc?
I feel pretty good about the odds of attackers exploiting security holes in image editing functions my app does not have, in order to enter my also nonexistent datacenter or vpc.
> If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn’t.
It is so embarrassing that LOC is being used as a metric for engineering output.
LOC is useful here not because it's a metric for output but because it's a metric for _understandability_. Reviewing 200 lines is a very different workload than reviewing 2000.
That's assuming the 200 lines are logical and consistent. Many of my most frustrating LLM bugs are caused by things that look right and are even supported by lengthy comments explaining their (incorrect) reasoning.
The point is that LOC is never a good metric for any aspect of determining the quality of code or the coder because it ignores the nuance of reality. It's impossible to generalize because the code can be either deceptively dense or unnecessarily bloated. The only thing that actually matters is whether the business objective is achieved without any unintended side effects.
> The only thing that actually matters is whether the business objective is achieved without any unintended side effects.
Objectives change; timeliness matters. The speed at which you deliver value is incredibly important, which is why it matters to measure your process. Deceptively dense is what I’d call software engineers who can’t accept that the process is actually generalizable to a degree and that lines of code are one of the few tangible things that can be used as a metric. Can you deliver value without lines of code?
> Objectives change; timeliness matters. The speed at which you deliver value is incredibly important, which is why it matters to measure your process.
This assumes that shorter code is faster to write. To quote Blaise Pascal, "I would have written a shorter letter, but I did not have the time."
> Can you deliver value without lines of code?
No, but you can also depreciate value when you stuff a codebase full of bloated, bug-ridden code that no man or machine can hope to understand.
You seem determined to misinterpret. I’m not talking about LOC as a measure of productivity. The ratio of LOC needing review to the capacity of reviewers (using how many LOC can be read/reviewed over the sampling period) is what’s being discussed. Agentic AI/vibe coding has caused that ratio to increase and shows a bottleneck in the SDLC. It’s a proxy metric, get over yourself.
“All models are wrong, some are useful”. What’s not useful is constantly bitching about how there’s no way to measure your work outside of the binary “is it done” every time process efficiency is brought up.
Yes, reading this back, I definitely veered off-topic. I apologize. I still don't think that you can say how much time it will take to review code based on how many lines of code are involved, but my argument was not well crafted. I just hope that others can learn something from our discussion. Thank you for being patient with me, and I hope you have a good day! :)
I have worked with code where 1000s of lines are very straightforward and linear.
I’ve worked on code where 100 lines is crucial and very domain specific. It can be exceptionally clean and well-commented and it still takes days to unpack.
The skills and effort required to review and understand those situations are quite different.
One is like distance driving a boring highway in the Midwest: don’t get drowsy, avoid veering into the indistinguishable corn fields, and you’ll get there. The other is like navigating a narrow mountain road in a thunderstorm: you’re 100% engaged and you might still tumble or get hit by lightning.
The number of bugs tends to be linear to lines of code written meaning fewer lines of code for the same functionality will have fewer bugs.
So I’m pretty skeptical that reviewing 2000 lines of code won’t take any more time than reviewing 200 lines of code.
Furthermore how do you know the AI generated lines are the open highway lines of code and not the mountain road ones? There might be hallucinations that pattern match as perfectly reasonable with a hard to spot flaw.
> The number of bugs tends to be linear to lines of code written meaning fewer lines of code for the same functionality will have fewer bugs.
It depends on the code. If you’re comparing code of the same complexity then, sure, 2000 lines will take longer than 200.
I was comparing straight linear code to far more complex code. The bug/line rate will be different and the time to review per line will be different.
> Furthermore how do you know the AI generated lines are the open highway lines of code and not the mountain road ones?
Again, it depends on the code. Which was my point.
Linear code lacks branches, loops, indirection, and recursion. That kind of code is easy to reason about and easy to review. The assumptions are inherently local. You still have to be alert and aware to avoid driving into the cornfields.
It’s a different beast than something like a doubly-nested state machine with callbacks, though. There you have to be alert and aware, and it’s inherently much harder to review per line of code.
LoC is perfectly fine as a metric for engineering output. It is terrible as a standalone measure of engineering productivity, and the problems occur when one tries to use it as such.
It's still useful, however, because that is the only metric that is instantly intuitively understandable and comparable across a wide variety of contexts, i.e. across companies and teams and languages and applications.
As we know, within the same team working on the same product, a 1000 LoC diff could take less time than a 1 line bug fix that took days to debug. Hence we really cannot compare PRs or product features or story points across contexts. If the industry could come up with a standard measure of developer productivity, you'd bet everyone would use it, but it's unfeasible basically for this very reason.
So, when such comparisons are made (and in this case it was clearly a colloquial usage), it helps to assume the context remains the same. Like, a team A working on product P at company C using tech stack T with specific software quality processes Q produced N1 lines of code yesterday, but today with AI they're producing N2 lines of code. Over time the delta between N1 and N2 approximates the actual impact.
(As an aside, this is also what most of the rigorous studies in AI-assisted developer productivity have done: measure PRs across the same cohorts over time with and without AI, like an A/B test.)
I experimented with vibe coding (not looking at the code myself) and it produced around 10k LOC even after refactors etc.
I rewrote the same program using my own brain and just using ChatGPT as google and autocomplete (my normal workflow), I produced the same thing in 1500 LOC.
The effort difference was not that significant either, tbh, although my hand-coded approach probably benefited from designing the vibe-coded one, so I had already thought of what I wanted to build.
Sounds like a great opportunity to understand your own development process and codify it in such detail that the agent can replicate how you work and end up with less code doing the same thing.
My experience was the same as you when I started using agents for development about a year ago. Every time I noticed it did something less-than-optimal or just "not up to my standards", I'd hash out exactly what those things meant for me, added it to my reusable AGENTS.md and the code the agent outputs today is fairly close to what I "naturally" write.
The charitable interpretation here is obviously that the LoCs are equivalent in quality, in which case it is a very useful metric in the context that was presented. The inability to infer that should be embarrassing.
I deleted 75,000 lines of code from my codebase in the last 2 months, and that was tremendously more useful to my business than the 75,000 the AI had written in the 2 months before...
Is it? The whole point of the article is that the rate of output for writing code has surpassed the rate at which it can be reviewed by humans. LOC as an input for software review makes a lot of sense, since you literally need to read each line.
Because not everyone is just out after earning the most money; some people also want to enjoy the workplace where they work. Personally, the quality of the codebase and infrastructure matters a lot for how much you enjoy working in it, and I'd much rather work in a codebase I enjoy and earn half than in a codebase made by just jerking out as many LOC as possible and earn double.
Although this requires you to take pride in your profession and what you do.
All of human agency must prop up the vanity of you. Of all people.
Got it.
...ok fine; lack of political action to put us all on the hook for your healthcare is your choice to take a gamble on a paycheck. It's a choice to say your own existence is not owed the assurance of healthcare.
So I will honor your choice and not care you exist.
Do you reject all stats that treat the number of people involved (eg. 2 million pepole protested X) as "embarrassing" ... because they lump incredibly varied people together and pretend they're equal?
I read somewhere that measuring software engineering output by LoC is like measuring aerospace engineering by pounds added to the plane and I thought that was an apt comparison.
Totally. I thought Simon was wiser than this; even he couldn't resist getting swept up by breathless hype. The moment you start typing "LOC as a metric", alarm bells should go off in your head.
This was a podcast, not a pre-scripted talk. I suggest listening to the audio version - it makes it more clear that this was thinking out loud, not carefully considering every word.
I see, fair point. Sorry for taking a dig at you. Please know that I do appreciate a lot of work that you do. I was just worried for a moment when just reading that bit.
Have you noticed that the coding agents get really close to the solution on the first one-shot and then require tons of work to get that last 10% or 5%?
If we shift the paradigm of how we approach a coding problem, the coding agents can close that gap. Ten years ago every 10 or 15 minutes I would stop coding and start refactoring, testing, and analyzing making sure everything is perfect before proceeding because a bug will corrupt any downstream code. The coding agents don't and can't do this. They keep that bug or malformed architecture as they continue.
The instinct is to get the coding agents to stop at these points. However, that is impossible for several reasons. Instead, because it is very cheap, we should find the first place the agent made a mistake and update the prompt. Instead of fixing it, delete all the code (because it is very cheap), and run from the top. Continue this iteration process until the prompt yields the perfect code.
Ah, but you say, that is a lot of work done by a human! That is the whole point. The humans are still needed. The process using the tool like this yields 10x speed at writing code.
This was often true when writing code manually to be fair.
You could get to "something that works" rather fast but it took a long time to 1) evaluate other options (maybe before, maybe after), 2) refine it, 3) test it and build confidence around it.
I think your point stands but no one really knows where. The next year or so is going to be everyone trying to figure that out (this is also why we hear a lot of "we need to reinvent github")
When I hire fresh out of college… I can see them coming in and not having the slightest comprehension of the difference of the things that they did in school to get a grade and never touch it again versus a product that is supposed to exist and work for 10+ years.
The problem of life in general is the last 5-10% is always the hardest. And it makes no economic sense in many cases to invest in trying to make that last part mechanised.
I believe the llm providers went with the wrong approach from the off - the focus should’ve been on complementing labour not displacement. And I believe they have learned an expensive lesson along the way.
I tend to get something working and refactor my way out, which does work and you can use a coding agent to do it, but it takes time. Maybe starting over would have been better, but I didn’t know what I wanted the architecture to look like at the beginning.
That will not work as cleanly as you described once a lot of code has been committed to the code base. You cannot just blow away an entire working code base and start over just because an LLM is struggling to make a feature work with existing architecture.
This happened on every single greenfield project that I've started with AI, no matter how rigorous a process I had defined.
And it's not just easier because it's cheap, it's easier because you're not emotionally attached to that code. Just let it produce slop, log what worked, what didn't, nuke the project and start over.
For me the distinction is the quality and rigor of your pipeline.
Vibe coding: one shot or few shot, smoke test the output, use it until it breaks (or doesn't). Ideal for lightweight PoC and low stakes individual, family or small team apps.
Agentic engineering:
- You care about a larger subset of concerns such as functional correctness, performance, infrastructure, resilience/availability, scalability and maintainability.
- You have a multi-step pipeline for managing the flow of work
- Stages might be project intake, project selection, project specification, epic decomposition, story decomposition, coding, documentation and deployment.
- Each stage will have some combination of deterministic quality gates (tests must pass, performance must hit a benchmark; see the sketch after this list) and adversarial reviews (business value of proposed project, comprehensiveness of spec, elegance of code, rigor and simplicity of ubiquitous language, etc.)
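A rough sketch of one such deterministic gate, assuming a Node/TypeScript repo; the commands, file names, and the 250ms budget are illustrative assumptions, not prescriptions:

```typescript
// gate.ts - run as the exit criteria for a pipeline stage; a non-zero exit
// blocks the agent's branch from advancing to the next stage.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" }); // throws on non-zero exit
}

// Deterministic gates: these either pass or they don't.
run("npm run lint");
run("npm run typecheck");
run("npm test");

// Performance gate: assumes the test run wrote bench-results.json with a p95 in ms.
const bench = JSON.parse(readFileSync("bench-results.json", "utf8"));
if (bench.p95Ms > 250) {
  console.error(`p95 ${bench.p95Ms}ms exceeds the 250ms budget`);
  process.exit(1);
}
console.log("gate passed");
```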
And it's a slider. Sometimes I throw a ticket into my system because I don't want to have to do an interview and burn tokens on three rounds of adversarial reviews, estimating potential value and then detailed specification and adversarial reviews just to ship a feature.
If your slider only goes between vibe coding or agentic engineering you're missing an entire range of engineering where the human is more involved.
I've been using Opus, GPT-5.5, and some lesser models on a daily basis, but not having them handle entire tasks for me. Even when I go to significant effort to define and refine specs, they still do a lot of dumb things that I wouldn't allow through human PR review.
It would be really easy to just let it all slide into the codebase if I trusted their output or had built some big agentic pipeline that gave me a false sense of security.
Maybe 10 years from now the situation will be improved, but at the current point in time I think vibe coding and these agentic engineering pipelines are just variations of a same theme of abdicating entirely to the LLM.
This morning I was working on a single file where I thought I could have Opus on Max handle some changes. It was making mistakes or missing things on almost every turn that I had to correct. The code it was proposing would have mostly worked, but was too complicated and regressed some obvious simplifications that I had already coded by hand. Multiply this across thousands of agentic commits and codebases get really bad.
Next time give it the context required for the task, eg an explanation of why you have those hand coded simplifications, and be amazed at how proper use of a tool works better than just assuming your drill knows what size bit to pick.
I agree, vibe coding does not have quality gate checks at each stage, while agentic engineering does. Dev teams get into trouble when they try to build without a proper process of design, tests, and reviews. This was true before agentic coding, but it's especially true now. The teams that understand how to leverage agents in this process are the ones that will be most successful.
Let's assume AI is 10x better than humans in accuracy, produces 10x fewer bugs, and increases the speed by 1000x compared to a very capable software engineer.
Now imagine this:
A car travels on a road that has 10x more bumps, but it is traveling at a 1000x slower pace, so even though there are 10x the bumps, your ride will feel less bumpy because you're encountering them at a far lower pace.
Now imagine a road that has 10x fewer bumps, but you're traveling at 1000x the speed. Your ride would be a lot more bumpy.
That's agentic coding for you. Your ride will be a lot more painful. There's lots of denial around that, but as time progresses it'll be very hard to deny.
Lastly - vibe coding is honest, but agentic coding is snake oil [0], and the arguments about harnesses with dozens of memory, agent, and skill files, pages and pages of rules sprinkled through them, are absolutely wrong as well. Such a paradigm assumes that LLMs are perfectly reliable, super-accurate rule followers, and that the only problem we have as an industry is not being able to specify enough rules clearly enough.
Such a belief could only be held by someone who hasn't worked with LLMs long enough, or by a totally non-technical person not knowledgeable enough to know how LLMs work; holding on to such a wrong belief system in a highly technical community is highly regrettable.
You're speaking straight from my soul. Thank you. Great example. I have been grinding AI extensively, 14 hours a day, on my own project for months. I've been using AI since GPT-2.
I maxed out the Claude Max $200 subscription, and before that I justified spending $100/day.
And it was worth it, not because it wrote me such good code, but because I learned the lessons of software engineering fast. I had the exact ride you are describing. My software was incredibly broken.
Now I see all the cracks, lies, and "barking up the wrong tree" issues clearly.
NOW I treat it as an untrustworthy search engine for domains I'm behind on. I also use predict-next-edit and auto-complete, but I don't let AI do any edits on my codebase anymore.
What an excellent article by a smart, humble, still-learning person!
Favorite quote:" There are a whole bunch of reasons I’m not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you’re doing, you can run so much faster with them. [...]
I’m constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we’re trying to achieve here is still really difficult. [...]"
it’s sad that i had to triple-read this to determine you weren’t being sarcastic. sad for whom? i don’t know. but the amplifier take is exactly the right one.
I think all coding will become vibe coding, but it will be no less an engineering discipline.
Note: I still review pretty much every line of code that I own, regardless of who generates it, and I see the problems with agents very clearly... but I can also see the trends.
My take: Instead of crafting code, engineering will shift to crafting bespoke, comprehensive validation mechanisms for the results of the agents' work, such that it is technically (maybe even mathematically) provable as far as possible, and any non-provable validations can be reviewed quickly by a human. I would also bet the review mechanisms will be primarily visual, because that is the highest bandwidth input available to us.
By comprehensive validations I don't mean just tests, but multiple overlapping, interlocking levels of tests and metrics. Like, I don't just have an E2E test for the UI, I have an overlapping test for expected changes in the backend DB. And in some cases I generate so many test cases that I don't check for individual rows, I look at the distribution of data before and after the test. I have very few unit tests, but I do have performance tests! I color-code some validation results so that if something breaks I instantly know what it may be.
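A minimal sketch of what such a distribution-level check might look like in a Python test harness; the `fetch_order_totals` / `run_bulk_import` names and the tolerance are illustrative stand-ins, not the commenter's actual suite:

```python
# Sketch of a distribution-level validation: instead of asserting on individual
# rows, compare summary statistics of a backend table before and after the
# change under test. Names and data are hypothetical.
import statistics

def fetch_order_totals(db):
    return [row["total"] for row in db]

def run_bulk_import(db):
    # Stand-in for the real operation being validated.
    db.extend({"total": t} for t in (9.99, 20.00, 14.50))

def test_bulk_import_preserves_distribution():
    db = [{"total": t} for t in (10.00, 19.90, 15.10, 12.30)]
    before = fetch_order_totals(db)
    run_bulk_import(db)
    after = fetch_order_totals(db)

    # The mean should stay within a tolerance, and rows should only be added.
    assert abs(statistics.mean(after) - statistics.mean(before)) < 0.2 * statistics.mean(before)
    assert len(after) > len(before)

test_bulk_import_preserves_distribution()
```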
All of this is overkill to do manually but is a breeze with agents, and over time really enables moving fast without breaking things. I also notice I have to add very few new validations for new code changes these days, so once the upfront cost is paid, the dividends roll in for a long time.
Now, I had to think deeply about the most effective set of technical constraints that give me the most confidence while accounting for the foibles of the LLMs. And all of this is specific to my projects, not much can be generalized other than high-level principles like "multiple interlocking tests." Each project will need its own custom validation (note: not just "test") suites which are very specific to its architecture and technical details.
So this is still engineering, but it will be vibe coding in the sense that we almost never look at the code, we just look at the results.
This is complete insanity for anyone that actually works on production-grade, hundred billion dollar systems that are critical to the function of the global economy.
Other than for your own pet projects, almost nothing you described has a place in "vibe engineering" or "vibe coding" on serious software engineering products that are needed in life-and-death situations.
That may be true for highly critical systems, but those are a tiny, tiny, tiny minority of all software projects. I mean, how many engineers work on aviation or automotive or X-ray machine or other life-and-death code compared to pretty much anything else?
And not all "production-grade, hundred billion dollar systems" are that critical. Like, Claude Code as we all know is clearly vibe-coded and is already a 10-billion (and rapidly increasing!) dollar system. Google Search and various Meta apps meet those criteria and people are already using LLMs on that code, and will soon be "vibe coding" as I described it.
AWS meets those criteria and has already had an LLM-caused outage! But that's not stopping them from doing even more AI coding. In fact, I bet they will invest in more validation suites instead, because those are a good idea anyway. After all, the cloud providers had been having outages long before the age of LLMs.
The thing most people are missing is that code is cheap, and so automated validations are cheap, and you get more bang for the buck by throwing more code in the form of extensive tests and validations at it than human attention.
Edited to add: I think I can rephrase the last line better thus: you get more bang for the buck by throwing human attention at extensive automated tests and validations of the code rather than at the code itself.
There are people who write software for hedge funds, quant firms, aviation and defense systems, data center providers, major telecom services used by hospitals and emergency services, semiconductor firms, and the big oil and energy companies. That is NOT "almost no-one", and these companies see and make hundreds of billions of dollars a year on average.
This is even before me mentioning big tech.
Perhaps the work most people on this site are doing is not serious: it can be totally vibe-coded, it's toy projects, and it brings in close to $0, so the company doesn't care.
What I am talking about is software that is the core revenue driver of the business and is also mission critical.
> It’s not just the downstream stuff, it’s the upstream stuff as well. I saw a great talk by Jenny Wen, who’s the design leader at Anthropic, where she said we have all of these design processes that are based around the idea that you need to get the design right—because if you hand it off to the engineers and they spend three months building the wrong thing, that’s catastrophic.
This is spot on. I think the tooling is evolving so much, particularly on the design side, that it's not worth the "translation cost" to stay (or even be) on the Figma side anymore.
The distinction between 'vibe coding' and 'agentic engineering' is important. In my experience, the key difference is whether you're reviewing and understanding the code the agent produces. When I use coding agents for non-trivial tasks, I always review the diff before committing — that's the engineering part. The danger is when people skip that step and just trust the output.
When I was in grad school I graded homework for first year math classes, and the thing about math homework is that the perfect homework takes almost no time to grade.
It's the bad, semi-coherent submissions that eat up your time, because you do want to award some points and tell students where they went wrong. It's the Anna Karenina principle applied to math.
Code review is the same thing. If you're sure Claude wrote your endpoint right, why not review it anyway? It's going to take you two minutes, and you're not going to wonder whether this time it missed a nuance.
Typically in engineering you don't know what you're doing. If you're sure of what it should look like going in, you're more of a technician. I think most people coding have no idea what they're doing to a large extent- not many people can do the same rote work for years straight.
The scary part is that codebases are accumulating layers of AI complexity, and it's going to cost $$$ to have the latest model decipher them and make changes, since no human can understand the code anymore.
Pretty soon there is no code reuse and we're burning money reinventing the wheel over and over.
I genuinely think it's part of a psyop. If we bloat all codebases and eventually start printing the models on chips to reduce inference costs by 50-100x they'll take in massive profits from 5M line codebases instead of 350k
Prior to the advent of LLMs, I had this concept of the 'complexity horizon' - essentially a [hand built] software system will naturally tend to get more and more complex until no-one can understand it - until it meets the complexity horizon. And there it stays, being essentially unmaintainable.
With LLMs, you can race right for that horizon, go right through, and continue far beyond! But then of course you find yourself in a place without reason (the real hell), with all the horror and madness that that entails.
> The scary part is that codebases are getting layers of AI complexity, that it's going to cost $$$ to have the latest model decipher
Isn't this a bit like IDE-heavy languages such as old Java/C#? If you tried to make Android apps back in the early days, you HAD to use an IDE; writing the ridiculous amount of boilerplate it took to display a "Hello World" alert after clicking a button was soul-destroying.
The difference is that the complexity to achieve “Hello World” was the same for everyone, and more or less well-understood and documented. With AI, you get some different random spaghetti slop each time.
Claude often does things in more detail, and even better, than I would, in the first pass. But I don't understand how anybody can stand comments generated by an LLM.
It's seriously the thing that worries (and bothers) me the most. At a minimum, I almost never let unedited LLM comments pass.
Most of the time, I use my own vibe-coded tool to run multiple GitHub-PR-review-style reviews, and send them off to the agent to make the code look and work fine.
It also struggles with doing things the idiomatic way for huge codebases, or sometimes it's just plain wrong about why something works, even if it gets it right.
And I say this despite the fact that I don't really write much code by hand anymore, only the important ones (if even!) or the interesting ones.
Also, don't even get me started on AI-generated READMEs... I use Claude to refine my Markdown or automatically handle dark/light-mode, but I try to write everything myself, because I can't stand what it generates.
I find that the best thing about generating documentation with LLMs is that it gets me angry enough to rewrite it correctly.
"Ugh, no! Why would you say it like that? That's not even how it works! Now, I need to write a full paragraph instead of a short snippet to make sure that no future agents get confused in the same way."
This is a timely observation and feels right to me. I needed to get a relatively simple batch download -> transform -> api endpoint stood up. I wrote a fairly detailed prompt but left a lot of implementation details out, including data sources.
Opus 4.7 built it about 90% the same way I would, but had way more convenience methods and step-validations included.
It's great, and really frees me up to think about harder problems.
This is my experience too. I'm primarily a python dev, but have been routinely using other backend languages (rust, go, etc) that I'm familiar with but not at the same level.
Just having ~13yrs experience heavily weighted in one language with some formal studying of others makes directing llms a lot simpler.
Learning syntax, primitives, package managers, testing, etc isn't that much of a lift compared to how I used to program.
Was helping a non-dev colleague who's using claude cowork/code to automate reporting the other day. They understand the business intelligence side well, but were struggling with basic diction to vibe code a pyautogui wrapper to pull up RDP and fill out a MS Access abstraction on a vendor DB.
Think we'll be fine for another 5-10 years as a profession
Repeat after me: most software spends the majority of its lifetime in the maintenance phase.
Repeat after me: it follows that most of the money the software makes occurs during the maintenance phase.
Repeat after me: our industry still does not understand this after almost 100 years of being in existence.
Alan Kay was 100% right when he said that the computer revolution hasn't occurred yet. For all of our current advancements all tools are more or less in the Stone Age.
My great hope is that AI will actually accelerate us to a point where the existing paradigm fully breaks beyond healing and we can finally do something new, different, and better.
So for now - squeee! - put a jetpack on your SDLC with AI and go to town!!! Move fast and break things (like, for real).
From the podcast episode they talk about the idea of using an LLM for training by disallowing the model to write code. I've been experimenting with exactly that in conjunction with a proof checker (Agda) to help me learn some cubical type theory and category theory.
I find the LLM as interactive tutor reviewing my work in a proof checker to be a really killer combo.
The real paradigm shift is not here yet, but not very far away. I'm talking about the single unified codebase. Agents building a unique codebase for all your software needs.
Because most of the complexity in software comes from interfacing with external components, when you don't need to adapt to this you can write simpler and better code.
Rather than relying on an external library, you just write your own and have full control and can do quality control.
The Linux kernel is 30,000,000 LOC. At 100 tokens/s, say 1 LOC per second produced on a single 4090 GPU, one year of continuous running is 3600 * 24 * 365 = 31,536,000 seconds, so everyone could have their own OS.
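A quick back-of-the-envelope check of that arithmetic (taking the stated assumption of roughly 1 line of code per second on a single GPU at face value):

```python
# Sanity check of the parent's numbers under the stated assumption of ~1 LOC/s.
kernel_loc = 30_000_000
seconds_per_year = 3600 * 24 * 365        # 31,536,000
years = kernel_loc / seconds_per_year
print(f"{years:.2f} years of continuous generation")   # ~0.95 years
```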
It's the "Apps" story all over again : there are millions of apps, but the average user only have 100 max and use 10 daily at most.
Standardize data and services and you don't need that much software.
What will most likely happen is that one company with a few million GPUs will rewrite a complete software ecosystem, and people will just use that and stop writing software, because anything can be produced on the fly. Then all compute can be spent on consistent quality.
There are techniques for improving our confidence in our software: unit testing, integration testing, fuzz testing, property-based testing, static analysis, model checking, theorem proving, formal methods, etc. The LLM is not only a tool for generating lines of code. It can also generate lines of testing. The goal is that the tests are easier to audit by the humans than the code.
I've found that one of the areas I used to enjoy least is now what I spend a lot of my time on: testing!
Property-based testing in particular has uncovered a number of invariants in every code base I've introduced it to.
tbf depending on the agent/model a lot of the tests end up being thrown out so it's possible I _should_ handwrite more tests, but having better prompts and detailed plans seems to mitigate that somewhat
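For readers unfamiliar with the technique mentioned above, here is a minimal property-based testing sketch using the Hypothesis library; the function under test is illustrative, not from the commenter's codebase:

```python
# Sketch of the kind of invariant property-based testing surfaces: Hypothesis
# generates many input lists and checks that the stated properties always hold.
from hypothesis import given, strategies as st

def dedupe_preserving_order(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

@given(st.lists(st.integers()))
def test_dedupe_invariants(xs):
    result = dedupe_preserving_order(xs)
    assert len(result) == len(set(xs))    # no duplicates survive
    assert set(result) == set(xs)         # nothing is lost
    # the relative order of first occurrences is preserved
    assert result == [x for i, x in enumerate(xs) if x not in xs[:i]]
```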
> The thing that really helps me is thinking back to when I’ve worked at larger organizations where I’ve been an engineering manager. Other teams are building software that my team depends on.
> If another team hands over something and says, “hey, this is the image resize service, here’s how to use it to resize your images”... I’m not going to go and read every line of code that they wrote.
The distance between an output and the producer accountable for it is an important metric. Who will be held accountable for which output: that's important to maintain, so no one has to feel the "guilt".
So organizations will need to focus on building better and more granular incentive and punishment mechanisms for large-scale software projects.
I agree somewhat, but I do still think there is a decently sized separation between true vibe coding (the typical "make me an app...fix this bug") and actual AI assisted development. I personally think that if you are a dev and you simply trust the AI's output, that is still vibe coding.
I am not a developer and have very basic code knowledge. I recently built a small and lightweight Docker container using Codex 5.5/5.4 that ingests logs with rsyslog and has a nice web UI and an organized log storage structure. I did not write any code manually.
Even without writing code, I still had to use common sense in order to get it to a place I was happy with. If I truly knew nothing, the AI would have made some very poor decisions. Examples: it would have kept everything in main.go, it would have hardcoded the timezone, the settings were all hardcoded in the Go code, the crash handling was nonexistent, and a missing config would have prevented startup. And that is on a ~3000 line app. I cannot imagine unleashing an AI on a large, complex codebase without some decent knowledge and reviewing.
But using an agentic LLM to complete boilerplate is attractive simply because we've created a mountain of accidental and intentional complexity in building software. It's more of a regression to the mean of going back to the cognitive load we had when we simply built desktop applications.
Strong agree. Most orgs will stay tangled in the mess they hand-coded over the years, a few greenfield teams will pull ahead, but until some LLM-fuelled startup displaces a strong incumbent I'm skeptical that we're on the cusp of anything other than a K-shaped transition. I see already low quality software and orgs getting flushed to make room for some new ideas now that the barrier to entry is slightly lower (but far from free). I just wish the transition was done with more humanity.
It's already the case that you get much better results out of LLMs by forcing agents using them to go through additional layers of planning, design & review.
The future is going to dynamically budget and route different parts of the SDLC through different models and subagents running in the cloud. Over time, more and more of that process will be owned by robots, and a level of economic thinking will be incorporated into what is thought of today as "software engineering." At some point vibe coding _is_ coding, and we're maybe closer to that point than popularly believed.
Given the rapidly deteriorating quality of (at least) Claude Code output, agentic coding use may decrease. It is insane how bad the results of background agents are now: constant hallucinations, nonsensical outputs.
The heavy users of Claude at my job disagree (me included): our work gets shipped and the quality has increased by all metrics. Are you talking about enterprise or consumer Claude subscriptions? I think they're serving drastically different quality depending on how much $ you fork up.
I don't see much sense in using HN as a support thread, but here are quotes from a single Claude investigation session, and this happens in every Claude Code session that I have, especially with 4.7
* The first agent's claim that was 3.x-only was wrong
* is nice-to-have but doesn't target our exact case as cleanly as the agent claimed.
* The agent's "direct fix for yyy" is overstated.
* not 57% as the earlier agent claimed
etc etc etc
And I've lost count of how many times my session with Claude starts with: did you read my personal CLAUDE.md and use background agents for long-running operations?
I use enterprise subscription, max effort, was with both 4.6 and 4.7.
And please refrain from comments like "you're using it wrong", as the drop in output quality is very clear and noticeable.
I think I'm just too opinionated to go there. If I see something that works fine, but isn't the way I'd do it, it doesn't matter if a human or an LLM wrote it I'm still in there making it match my vision.
I concur, and I think that is one of the most difficult aspects of reviewing another's code. It's difficult for me to sometimes differentiate between what is acceptable vs. what I would have done. I have to be very conscious to not impose my ideals.
So you are going to waste everyone's time getting another developer to write code the way you want? This resonates with me, because at my company I get this all the time. At that point, you might as well close my PR and do it yourself, whatever way you want. I really like the advice from the book Zero to One: assign different areas of responsibility to people so that there is no conflict.
The current state of the technology is that you must read at least some of the code, but everyone keeps shipping tools that are focussed on churning out more and more stuff without giving you any affordances to really understand the output.
Claude Code in particular seems really uninterested in this aspect of the problem, and I've stopped using it entirely because of this.
the discourse around "code quality" has always attracted the least nuanced minds, ones who see the world and the phenomenon of life as nothing but territory to be divided up by the latest buzzwords. the worst ones insist that we narrow the discussion even further, to focus on the conflicts between these buzzwords. whenever i have to sit through such discussions, i try to meditate on the irony of mother nature weaving the most functionally brutal, ruthlessly redundant poetry that is the genetic code, only for the resulting creatures to deny themselves the power of the principles inherent in their own construction.
I am experimenting with writing an entire TypeScript compiler[1] with an AI assistant. I've spent 4 months on it already. It might not be successful at the end of the day, but my thinking is that if LLMs are going to write a lot of the code, I had better learn how this can and cannot work. I've learned a lot from this project already. I think we're still in charge of design and big ideas even if all of the code is written by AI.
I'm also experimenting with it more and more. Now I'm trying to create a 2D side-scrolling shooter with it, running in the browser. When it was relatively small, it did a good job. As the codebase and the docs/ files that I'm using get larger, it starts hallucinating, especially when the context gets to about 50% usage (Codex w/ gpt5.5). As in, it'll literally forget to update parts of the code.
E.g., I change the velocity of the player to '200' and of bullets to '300', and it only updates the bullet velocity. Then it tells me the player was already 'at the correct value' even though it was set to 150. Things like that.. :)
For me, unless there is a concrete way of proving work is correct you can't rely on AI coding. tsz has super strict tests around correctness, performance and architectural boundaries
If I understood you correctly, I think I'm less extreme than that. Most code written by humans is also not provably correct. But I'm assuming you mean provably correct like Lean: https://lean-lang.org/, and not just "passes tests".
If you mean 'passes tests', that can be tackled by AI. Although AI writing its own tests and then implementing its own code is definitely not a foolproof strategy.
Multiple computers and each multiple Claude Code or Codex sessions. It had lots of ups and downs. Now I have a good enough test harness that makes it easier to iterate faster
The problem with vibe coding is that the agentic output has a very plasticky, samey feel, unless you work with something that makes it unique or can pass a template through it.
> I’m starting to treat the agents in the same way. And it still feels uncomfortable, because human beings are accountable for what they do. A team can build a reputation. I can say “I trust that team over there. They built good software in the past. They’re not going to build something rubbish because that affects their professional reputations.”
The most important part, and why slop isn't the same as code written by someone else. The model doesn't care; it just produces whatever it is asked to produce. It doesn't have pride, it doesn't have ego, it doesn't have artisanal qualities, it doesn't have ownership.
Honestly, I think the idea that devs will still be needed is total copium. The progress made in two years is astounding, and in two years' time they will be better at programming than 99% of programmers. It's incredible what they can do now. No, it's not perfect, but imagine where we'll be in 5 or 10 years.
Correct me if I’m wrong Simon, but weren’t you highly optimistic about llm’s and agentic-use of them?
I believe this is a common fault of not being able to zoom out and look at what trade offs are being made. There’s always trade-offs, the question is whether you can define them and then do the analysis to determine whether the result leaves you in a net benefit state.
I think you kind of answered this in the post though. "I want somebody to have used the thing" is dogfooding. and it's probably the only quality signal left that can't be generated in 30 minutes.
Yes. I do "agentic engineering," primarily using Cline as it allows me to gas-and-brake the AI and review what it's doing on a granular level. So, think pair programming but my #2 is an LLM. I routinely reject turns when a given model goes off into space. I also routinely make hot edits to its changes before advancing, several times per day.
You can use these tools wisely without letting it run unverified carelessly.
I think this is what people mean when they say LLMs are a higher level of abstraction. We still need to consider edge cases and have tests. We still need to sweat the architecture, understand how the pieces fit together, and have a mental map of the codebase. But within each bottom node of that architecture we don't sweat the details. Anything obvious gets caught right away. Most subtle, interaction-based issues occur at the architecture level. Anything that bypasses those filters is a weird bug that is no worse or different from a normal bug fix: an edge case that was hit in a real-world scenario and gets flagged by a user or logged as an error.
There are certain codebases and pieces of code we definitely want every line to be reasoned and understood. But like his API endpoint example, no reason to fuss with the boilerplate.
This has definitely been my shift over the past few months, and the advantage is I can spend much more time and energy on getting the code architecture just right, which automatically prevents most of the subtle bugs that have people wringing their hands. The new bar is architecting the code so it is as well defined as an API endpoint -> service structure, so you can rely on LLMs to paint by numbers for new features/logic.
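A minimal sketch of that endpoint -> service shape, with illustrative names: the human pins down the boundary types and the service interface, and the agent fills in handlers and implementations "by numbers":

```python
# Illustrative sketch of an "endpoint -> service" boundary (names are made up).
from dataclasses import dataclass
from typing import Protocol

@dataclass
class WidgetQuery:
    owner_id: int
    limit: int = 50

@dataclass
class Widget:
    id: int
    name: str

class WidgetService(Protocol):
    # The human-defined contract the agent implements against.
    def list_widgets(self, query: WidgetQuery) -> list[Widget]: ...

def list_widgets_endpoint(service: WidgetService, raw_params: dict) -> list[dict]:
    # Thin handler: validate and clamp input, delegate to the service, shape the response.
    query = WidgetQuery(
        owner_id=int(raw_params["owner_id"]),
        limit=min(int(raw_params.get("limit", 50)), 200),
    )
    return [{"id": w.id, "name": w.name} for w in service.list_widgets(query)]
```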
Vibe coding is just coding now. Writing assembly used to be a thing too, until higher and higher level languages were created. LLMs are like that, except they compile English to code. This understandably scares a lot of professionals.
> I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it’s just going to do it right. It’s not going to mess that up.
> Claude Code does not have a professional reputation!
That's a wild statement to me. Even with spending significant time making plans with Opus 4.7 and GPT 5.5 on xhigh, I still find lots of poor decisions made when it actually goes to implement it. I find the quality of PRs hasn't dramatically changed either way because the better engineers will spot the issues whereas others will find what the AI is doing acceptable.
Every time I do deep work and think through solutions to a complex problem, I have the opportunity to ask Claude to implement a sub-par AI slop solution instead.
Do this enough times, and I will have forgotten how to think.
Or, you just explain the solution and save some typing and get the same thing. I find it refreshing to be able to just talk to Claude and have it generate the same thing I would have built.. It gives me more time to articulate and solve complex problems, and less time with the mundane writing, test loops etc.
I feel like an outlier in all of this. But isn't this just more AI slop? How is this different from text generation or image generation?
Like many people, I have used AI to generate crap I really don't care about. I need an image. Generate something like, whatever. Great, hey, a good-looking image! Now that's done, I can move on to something I find more interesting.
But it's slop. The image does not fit the context. Its just off. And you can tell that no one really cared.
People in the future are going to wonder what the hell we were thinking when, 30 years down the line, everything is a hot mess of billions of lines of LLM-generated code that no human has read almost any of, and that is no longer possible for anyone to maintain, with or without LLMs. And the LLM-generated garbage will have drowned out all of the good quality code that ever existed, and no one will be able to find even human-generated code on the internet anymore.
Makes me want to just give up programming forever and never use a computer again.
I think it's a mistake to think that we will be blindly going in this direction for many years and then suddenly, collectively wake up and realize what we have done. It's a great filter and a great opportunity.
If LLMs stop improving at the pace of the last few years (I believe they are already slowing down), they will still manage to crank out billions of lines of code which they themselves won't be able to grep and reason through, leading to a drop in quality and lost revenue for the companies that choose to go all-in on LLMs.
But let’s be realistic - modern LLMs are still a great and useful tool when used properly so they will stay. Our goal will be to keep them on track and reduce the negative impact of hallucinations.
As a result software industry will move away from large complex interconnected systems that have millions of features but only a few of them actively used, to small high quality targeted tools. Because their work will be easier to verify and to control the side effects.
> If LLMs stop improving at the pace of the last few years (I believe they already are slowing down)
Depending on how you measure "improvement" they already have or they never will :-/
Measuring capability of the model as a ratio of context length, you reach the limits at around 300k-400k tokens of context; after that you have diminishing returns. We passed this point.
Measuring capability purely by output, smarter harnesses in the future may unlock even more improvements in outputs; basically a twist on the "Sufficiently Smart Compiler" (https://wiki.c2.com/?SufficientlySmartCompiler=)
That's the two extremes but there's more on the spectrum in between.
300k-400k isn't the current limit if you create modules and/or organize the code reasonably, for the same reason we do this for humans: it allows us to interact with a component without loading the internals into our context.
you can also execute larger tasks than this using subagents to divide the work so each segment doesn’t exceed the usable context window. i regular execute tasks that require hundreds of subagents, for example.
in practice the context window is effectively unlimited or at least exceptionally high — 100m+ tokens. it just requires you to structure the work so it can be done effectively — not so dissimilar to what you would do for a person
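A rough sketch of that divide-and-dispatch idea; `run_subagent` is a hypothetical placeholder for whatever harness call actually launches an agent, and the per-agent budget is an assumption, but the point is that each call only ever sees one module's files rather than the whole repo:

```python
# Rough sketch only: split a repo into per-module chunks and hand each chunk to
# its own agent run. `run_subagent` and the budget are hypothetical.
from pathlib import Path

CONTEXT_BUDGET_CHARS = 200_000  # rough per-agent budget (assumption)

def chunk_by_module(repo_root: str):
    groups: dict[Path, list[Path]] = {}
    for path in Path(repo_root).rglob("*.py"):
        groups.setdefault(path.parent, []).append(path)
    return groups.values()

def run_subagent(task: str, files: list[Path]) -> str:
    # Placeholder for the real harness call that launches an agent with
    # `task` plus only these files in its context.
    return f"report covering {len(files)} files"

def fan_out(task: str, repo_root: str) -> list[str]:
    reports = []
    for files in chunk_by_module(repo_root):
        if sum(f.stat().st_size for f in files) > CONTEXT_BUDGET_CHARS:
            continue  # in practice this chunk would need a further split
        reports.append(run_subagent(task, files))
    return reports
```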
ok "series of context windows spread across many agents".. sure much clearer.
Doesn't change my point: the amount of code the agent can operate on is very large, if not unlimited, as long as you put even a little bit of thought into structuring things so it can be divided along a boundary.
If you let the codebase degrade into spaghetti, then the LLM is going to have the same problem any engineer would have with that. The rules for good code didn't disappear.
I keep getting surprised that people who are all-in on this (" i regular execute tasks that require hundreds of subagents ") don't have any idea of what is happening even a single layer below their interface to the LLM ("in practice the context window is effectively unlimited or at least exceptionally high — 100m+ tokens.")
I looked at that response by GP (rgbrenner) and refrained from replying because if someone is both running hundreds of agents at a time AND oblivious to what "context window" means, there is no possible sane discourse that would result from any engagement.
Maybe I am unlucky but I had worked with too many developers who couldn't make a good decision if their life depended on it. LLMs at least know how to convince you of their decisions with strong arguments.
30 years down the line a human will wake up in his climate controlled bed in an idyllic large scale people-zoo, think about what information he wants, and immediately his 900TB ferroelectric compute-in-memory exobrain will read his thoughts via his brain-computer-interface, and render a custom 3d visualization of that information floating in front of him. There will be no separate code stage, just neural rendering of data to pixels.
> custom 3d visualization of that information floating in front of him.
Eh, what a waste. Can't we just stimulate the optic nerve? Or better yet, whatever region of the brain is responsible for me being able to 'see' anything? And perhaps we can finally get smell-o-vision too.
The only people who are going to put in the time, are people who care enough to. The problem is you have people who didn’t care before who were equipped with a garden hose. Now that they have a fully pressurized fire hose they can make more of a mess faster.
As an author of fine literature, these million monkeys on typewriters simply upset my sense of dignity. And to imagine the impoverished prose so many readers shalt forthwith be perusing!
Maybe. But it depends on the metric. It seems like orgs are focused on PR count and token usage. Issues caused by poor code are often lagging indicators so it’s asymmetrical in that aspect.
Write lots of code now and statistically look great, while the impact won’t be felt for a much larger range of time.
With the job search and whatnot then yeah, caring becomes a lot more important. That’s true.
Hard disagree. LLMs are fantastic for fixing bad architecture that's been around for a decade because nobody was willing to touch it. I can have it write tons and tons of sanity checks and then have it rewrite functionality piece by piece with far more verification than what I'd get from most engineers.
It's not immediate, it still takes weeks if you want to actually do QA and roll out to prod, but it's definitely better than the pre-LLM alternatives.
Because there is a certain point where the barrier to entry prevents meaningful competition once winner-take-all power laws start kicking in, and stability hitherto has been predicated on having a plurality of non-interrelated competitors, ensuring no one man's quirks drive too much of society's theoretical output.
AI will make this dynamic worse, and it has the extra danger that the default, banal way of applying the technology in fact encourages its application to that end.
I don't really see it that way because most software companies overestimate the importance of fantastic software vs merely adequate software, and most times good sales development, support, and negotiation skills are what helps actually sell.
I also don't think that the commodification of programming is a substitute for things like understanding your customers, having good taste for design, and designing software in a way that is maximally iterable.
Like with a lot of things in this space, it depends where you invest your effort. If you care about quality design and good code, you can definitely get there - but that doesn't happen by default.
With the right investment, we could certainly have tooling that creates and maintains very good designs out of the box. My bet is that we'll continue chasing quick and hacky code, mostly because that's the majority of the code that it was trained on, and because the majority of people seem to be interested in a quick result vs a long-term maintainable one.
But we aren't cooking with gas. We are cooking with a more controlled burner than ever that can download a clean code claude skill and be committing better code than you or I could write.
What would normally be considered overengineered gold plating is "free" now.
By then, the fix will be easy. Fire up the latest LLM, point it at your codebase and tell it "rewrite this from scratch. do it well. fix the architecture mistakes"
There is definitely going to be some Wirth's law-like [0] effect about the asymmetry of software complexity outpacing LLMs' abilities to untangle said software. Claude 9.2 Optimus Prime might be able to wrangle 1M LoC, but somehow YC 2035 will have some Series A startup with 1B+ LoC in prod — we'll always have software companies teetering on the very edge of unmaintainability.
It's the Peter principle for computers. Codebases expand to the limits of the organization's ability to manage them. If you make one person use ed to write code for a bare-metal environment, you'll get a comparatively small, laser-focused codebase. If you task a hundred modern developers with solving the same problem, you'll get a Linux device running a million lines of JavaScript.
Same thing happens in other fields. A rich country and a poor country might build equivalent roads, but they won't pay the same price for them.
It won't be an LLM that does it; the entire point of an LLM is that it produces generalizable, reasonably "correct" text in response to a context.
The system that makes it have an opinion about good vs bad architecture or engineering sensibilities will be something on top of the transformer and probably something more deterministic than a prompt.
We can do this today too (but definitely hopefully future LLMs make better architectural decisions). With Claude, I've been working on an application for the last 2 months. I didn't have a great vision of what I wanted when I started but I didn't want that to slow me down. The architecture is terrible - Claude separated some functionality into different classes but did a bad job at it and created a big ball of mud. Now that I finally have my vision locked down and implemented (albeit poorly), it'd be a great time to throw it away and start over. It'd be interesting to see the result and see how long it takes.
Just have claude (or gpt maybe) do an architecture review and request a multi-phase refactoring plan. This is probably better to do incrementally as you notice the balls of mud forming but it might not be too late. Either way, if it does something you don't like, `git checkout` and start over
Yes. The models may have started from indiscriminate scraping, but people are undoubtedly working on refining the training data. Combined with the overall model capabilities, I suspect code quality will continue to go up.
What you're suggesting is a negative flywheel where quality spirals down, but I'm hoping it becomes a positive loop and the quality floor goes up. We had plenty of slop before LLMs, and not all LLM output is slop. Time will tell, but I think LLMs will continue to improve their coding abilities and push overall quality higher.
Why are we pretending everyone's code is a paragon of quality? Most software out there is probably a hot mess already. No think behind it, let alone ultrathink.
Exactly, before the rise of LLMs it was not at all uncommon to hear people claiming that their job is to just Google API calls or copy and paste code from Stackoverflow. The context back then was that companies are being picky by hiring people who can demonstrate some modicum of understanding of data structures and algorithms because all any developer does is tweak some CSS or make some calls to a database to glue together a CRUD app... why should anyone be expected to know how to reverse a linked list, or how a basic sorting algorithm works... just download an npm package to do that stuff and glue it all together with a series of nested for loops.
With the rise of LLMs that do all of that... those people shut up, and shut up real fast.
Code will never go away. Code was there before computer hardware and it will always be there. Code is (almost?) all of computation theory so unless we throw computers away, we shall always use code.
They're not suggesting that code will go away, but rather that it will be abstracted beneath an LLM interface, so that writing code in the future will be like writing assembly today: some people do it for fun or niche reasons, but otherwise it's not necessary, and most developers can't do it.
Whether that happens or not is a different question, but I believe that's what they're suggesting.
Code is formal and there are basic axioms that ground its semantics. You can build great constructs on top of those semantics, but you can't strip away their formality without the whole thing becoming meaningless. And if you can formalize a statement well enough to remove all ambiguity, then it will turn into code.
Programming is taking ambiguous specs and turning them into formal programs. It's clerical work: taking each term and each statement of the specs, ensuring they have a single definition, and then writing that definition in a programming language. The hard work here is finding that definition and ensuring that it's singular across the specs.
Software engineering is ensuring that programming is sustainable. Specs rarely stay static and are often full of unknowns. So you research those unknowns and try to keep the cost of changing the code (to match the new version of the specs) low. The former is where I spend the majority of my time. The latter is why I write code that isn't necessary right now, or in a way that doesn't matter to the computer, so that I can be flexible in the future.
While both activities are closely related, they're not the same. Using an LLM to formalize statements is gambling. And if your statement is already formal, what you want is a DSL or a library. Using an LLM for research can help, but mostly as a stepping stone toward the real research (to eliminate hallucinations).
If you like sci-fi takes on software systems, check out Vernor Vinge's "A Fire Upon the Deep" and its sequels. I recall the ship systems software being something like all the code humanity has ever written, plus centuries of LLM churn. One of the protagonists is a spacefaring software developer particularly good with legacy code.
We are used to thinking about software like in the article, a program that runs deterministically in an OS. Where we are headed might be more like where the LLM or AI system is the OS, and accomplishes things we want through a combination of pre-written legacy software, and perhaps able to accomplish new things on the fly.
Interesting, I kinda do this. Sometimes when an LLM solves a problem for me, I have it write code so that I can reuse that exact same approach deterministically(and I line by line check it). Now I have about a dozen CLI commands that the LLM can use and I'm reasonably (although not 100%) sure I'll get an expected outcome. Really helpful with debugging via steam pipe and connecting to read replicas.
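A minimal sketch of that pattern: once the LLM has worked out an approach, freeze it into a small deterministic CLI that the agent (or you) can run again. The specific task here (counting log lines by level from stdin) is illustrative, not the commenter's actual tooling:

```python
# Sketch: a tiny deterministic CLI distilled from an approach an LLM worked
# out once. Assumes log lines of the form "timestamp LEVEL message".
import argparse
import collections
import sys

def main() -> None:
    parser = argparse.ArgumentParser(description="Count log lines by level")
    parser.add_argument("--level", help="only count lines with this level")
    args = parser.parse_args()

    counts = collections.Counter()
    for line in sys.stdin:
        parts = line.split(maxsplit=2)
        level = parts[1] if len(parts) > 1 else "UNKNOWN"
        if args.level is None or level == args.level:
            counts[level] += 1

    for level, n in counts.most_common():
        print(f"{level}\t{n}")

if __name__ == "__main__":
    main()
```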
I can't get used to vibe-coded projects on Github. One that I was using for a little while is about a year old, with 40,000 commits and 15,000 PRs. And it has "lite" in its name; it's supposed to be the simple alternative. There were so many bugs. I fixed one, submitted a PR, but it was off the first page in hours. It will never be merged. I moved to a different project with a bit less... velocity, and it has been way smoother.
Hello from assembly programmers to present-day JavaScript folks. Jokes aside, I sometimes think about how VS Code is written in layers and layers of code - ~200MB of minified code - and Java-based IDEs were worse, with almost 1GB of code (libs/dependencies). And VS Code beat the native editors (Sublime) of its time to dominate now - maybe because of the business model (open & free vs freemium). But it does the job quite well IMO. And it enabled swarms of startups to go to market, including billion-$ wrappers - Cursor, Antigravity and almost all UI coding agents. I remember backend developers (Java/C++ types) looking down on JavaScript developers as if we were from an inferior planet or something.
How many of us remember that VSCode is actually a browser wrapped inside a native frame?
VS Code has two things that worked well for it: web tech and money. Web tech makes it easy to write plugins (you already know the stack vs learning Python for Sublime). And I wonder how much traction it would have gotten if not for Microsoft paying devs to wrangle Electron into a usable shape.
People, as a rule, don't really "go backwards." We didn't really walk back the industrial revolution, and we're probably not going to walk back this day and age's activities. It's only unsettling until the changes are accepted. The old-timers can pine for a time before "all this", when they were children and all their needs were met by their now-deceased parents, and the cycle continues on, yet again.
Have you ever encountered the very common real life situation where there's some software that works, and you have a binary for it but you either don't have the source code or it doesn't compile for whatever reason? This is the pre-LLM world. Now, do you think LLMs make this situation better or worse? You may not know what's wrong with your software or how to fix it, but unlike in the past you can throw compute at trying to figure it out, or replicating a subset of it, or even replicating all of it depending on what it is. I think LLMs are making this situation better not worse.
I think the problem with that sort of thought is that the burgeoning size of the output for even trivial software makes it almost a certainty that:
a) The stuff output by the existing LLMs is too unwieldy even for them to handle, even if the product itself is a glorified chatbot.
b) If all software is throwaway, then the value of all software drops to, effectively, the price of an AI subscription. We'll all be drowning in a market of lemons (https://en.wikipedia.org/wiki/The_Market_for_Lemons), whilst also being producers in said market.
Another aspect: the amount of code LLMs can handle went from a few lines to a small codebase in a few years, so perhaps the future simply makes much bigger codebases possible?
I don't think that's how money works. Enough people have poured enough money into this thing that the actual, measurable results/efficacy/ROI are of secondary importance (to put it mildly). At this point AI adoption is (at least sold as) a fait accompli.
This is wishful thinking. The force of the market is "number go up". Quality increasingly has less and less of a role in the equation. You will eat your slop, and you will like it. It will be the only choice you have.
But the quality of code was already very bad due to market forces. Most code at large companies is notoriously poor despite the talent density, because the incentives are not there to tackle tech debt or improve code quality.
With such a low baseline, there is an optimistic perspective that LLMs could improve the situation. LLMs can produce excellent code when prompted or reviewed well. Unlike human employees, the model does not worry about getting a 'partially meets expectations' rating or avoid the drudgery of cleaning up other people's code.
The model is optimized in a different way to "partially meet expectations". Sycophancy coupled with only really "knowing" what it has been trained on assure a different kind of mediocrity.
The same incentives that discourage good code in pre-AI times are still dominating now. You will be pushed to ship sub-par products in the future, just like you were in the past.
AI certainly has the potential to make the underlying code/design a lot cleaner. We will also be working with dramatically more code, at a much higher rate of change. That alone will be a big challenge to keep sustainable.
The ones making the decision to under-invest in design are either unaware of the real costs, or aware and deliberately choosing that path - that's not new, and I don't expect it to change.
The only thing that has changed is that there used to be a loose correlation between capability to effect change and inherent desire for quality. This correlation barely exists anymore, so the counter-cultural acts that happened to manifest quality inside our perverse systems will occur much more rarely now.
I agree generally but there are periods where creative people show up and a whole slew of existing firms go bust/shrink due to one’s ability to envision a path toward creative destruction.
There is nothing in the post to support the statement. An interesting personal confession, but it does not establish that vibe coding and agentic engineering are converging as a general phenomenon.
As a piece of meat, I look forward to charge-out rates of $10,000 an hour to fix the code coming out of vibe code generation.
> People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of code generated by LLMs that no human has read
--
It's just as likely that people will be surprised that we used to have billions of lines of human generated code, that no LLM ever approved.
By then AI will be good enough to clean them all up... I don't get these doom scenarios; they always assume that we are going to be stuck with LLMs and there won't be anything new coming.
The difference between writing assembly code and Ruby code is much smaller than the difference between programming and vibe coding.
Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech.
> Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech.
Companies have been enforcing technology mandates since time immemorial. In the early 2000s there were definitely a lot of mandates to move away from commercial UNIX to Linux. Lots of companies began enforcing the switch to PHP, Ruby and Python for new projects.
Yes, but the entire industry was not pushing any one single tool at the same time. If you disliked Django, you could go to Rails. If you disliked Rails, you had Phoenix. Etc.
Or, it could be like asbestos and the immediate benefits are just too appealing to listen to arguments of skeptical naysayers about some vaguely defined problems that are decades away, if they even happen.
I use AI tools daily (because they feel like they're helping me), but it's not exactly hard to imagine scenarios where an explosion of slop piling up, plus the harm to learning from outsourcing all thinking, results in systemic damage that actually slows the pace of technological progress given enough time.
The history of new technologies tends to average out into a positive trend over a long enough time scale, but that doesn't mean there aren't individual ups and downs, including WTF moments looking back at what now seems like baffling decision-making with the benefit of hindsight.
Some of us are already experiencing that. For example I handed off an initial version of something some months ago, and the AI-generated stuff they came up with was a huge buggy mess of spaghetti code neither of us understood. Months later we've detangled it, cutting it down to a third the size, making it far simpler to understand, and fixing several bugs in the process (one was even by accident, we'd made note of it, then later when we went to fix it, it was already fixed).
If it is, the fallout will be way worse than if AI ends up living up to (reasonable) expectations.
If it doesn’t, we are going to see over a trillion dollars of capital leave the tech sector, which I think will have worse impacts on the livelihood of tech workers than if AI ends up panning out.
This is something the naysayers need to grapple with. We’ve crossed a line where this tech needs to work simply because of the amount of money depending on that fact.
> If it doesn’t, we are going to see over a trillion dollars of capital leave the tech sector, which I think will have worse impacts on the livelihood of tech workers than if AI ends up panning out.
I don't think it will be worse; if AI pans out the world would be able to continue without a single programmer left. If a trillion dollars leave the tech sector, all those programmers employed outside of the tech sector will still have jobs.
The asbestos hypothetical is a bit different than the "bubble popping" economic crisis scenario though. In this world, AI would just continue being adopted and shoved into every nook and cranny into which it can be made to fit, with valuations only getting bigger and bigger.
The damage would come much later, well beyond the point where it could be simply pulled out and replaced without spending massive amounts of money and would also basically necessitate training an entire new generation of engineers.
Then the AI giants would start appearing vulnerable like cigarette companies in the 90s while an AI Superfund and interstate class action are being planned but Sam Altman would already be a centitrillionaire at that point so it would be someone else's problem.
Have you ever worked on a legacy codebase with actual good code? I struggle to see the difference between your predicted future and today's reality when it comes to working with legacy disasters.
Well, on a legacy codebase, you still needed humans to write those lines of code. There's a maximum number of lines a human can write in a year.
Now, with LLMs, we are talking about millions and millions of lines of code that could be generated in a single day. The scale of the problem might not be the same at all.
> It used to be if you found a GitHub repository with a hundred commits and a good readme and automated tests and stuff, you could be pretty sure that the person writing that had put a lot of care and attention into that project.
I think this highlights a problem that has always existed under the surface, but it's being brought into the light by the proliferation of vibeslop and openclaw and their ilk. Even in the beforetimes you could craft a 100.0% pure, correct-looking GitHub repo that had never stood the test of production. Even if you had a test suite that covers every branch and every instruction, without putting the code in production you aren't going to uncover all the things your test suite didn't catch--performance issues, security issues, unexpected user behavior, etc.
As an observer looking at this repo, I have no way to tell. It's got hundreds of tests, hundreds of commits, dozens of stars... how am I to know nobody has ever actually used it for anything?
I don't know how to solve this problem, but it seems like there's a pretty obvious tooling gap here. A very similar problem is something like "contributor reputation", i.e. the plague of drive-by AI generated PRs from people (or openclaws) you've never seen before. Stars and number of commits aren't good enough, we need more.
> where you fully give in to the vibes, embrace exponentials, and forget that the code even exists [...] It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
So clearly we need a term for what happens when experienced, professional software engineers use LLM tooling as part of a responsible development process, taking full advantage of their existing expertise and with a goal to produce good, reliable software.
"Agentic engineering" is a good candidate for that.
> as part of a responsible development process, taking full advantage of their existing expertise and with a goal to produce good, reliable software
It's shifted so much for me. I used to think that I had a solemn duty to read every line and understand it, or to write all the test cases. Then I started noticing that tools like CodeRabbit or Cursor would find things in my code that I would rarely find myself.
I think right now it's shifted my perception of my role to one where I am responsible for "tilting" the agentic coding loop; ultimately the goal is ensuring the agent learns from its mistakes, self-organizes, and embraces a spirit of Kaizen.
Btw thank you for your work on Django, last 20 years with it were life changing (I did .NET before).
"Code quality" was always a mirage imo. Logic is what matters. I've used the internet from the early days, and probably 99% of software I used always had serious bugs. Ultima online was mentioned in HN recently: it was a real bug-and-exploit-fest. Banks, AAA games, companies like Uber with 1000's of engineers - they all had serious problems (and that's still true). It would be worst if some engineers didn't have that drive to code in high quality, but we gotta admit that was not ever enough. Even now with Claude Code, I see a lot of "specifications" that are far from specified enough - and people blame the LLM.
I agree. I'm actually generating just over 20,000 lines of code each day at my company. Part of that was the mandate and the leaderboards around token usage, but they also started using pull requests as an explicit metric. What I do is usually pull around 5 or so tickets at once, spin up 5 different agents each on their own branch, have them work until completion, and then spin up two more agents to handle the merge request.
I'm not checking the code since the code doesn't really matter anymore anyways - I just have the agent write passing tests for the changes or additions I make, and so even if something breaks I can just point to the tests.
Some days, the tickets are completed much faster than I expect and I don't hit my daily token expenditure goal, so I have my own custom harness that actually hooks an agent up to TikTok: basically it splits up the reel into 1-second increments and then feeds those frames to the LLM for its own consumption. I can easily burn 10M tokens a day on this, and Claude seems to enjoy it.
Personally I want to thank you Simon for putting me onto this "vibe engineering" concept, I really didn't expect an archaeology major like myself to become a real engineer but thanks to AI now I can be! Truly gatekeeping in tech is now dead.
For work I do agentic engineering, as the code that I submit for a code review is hand-reviewed by me. I know every line and file that I submit.
My side project is 80% vibe code. Every now and then I look and see all the bad stuff, then I scold Codex a bit and it refactors it for me. So I do see the author's point.
Instead of "vibe coding" by asking the AI to design and write code, I'm having it refine my own designs, and write code under strict supervision and guidance, that I carefully review and iterate on.
I took a rock carving course in school that really enlightened me about software engineering, and it still applies today, especially to AI. You can't just decide what you want to carve, hold the chisel in just the right spot, and whack it with a hammer just perfectly so all the rock you want falls away leaving a perfect statue behind.
"I saw the angel in the marble and carved until I set him free." -Michelangelo
It's a long drawn out iterative process of making millions of tiny little chips, and letting the statue inside find its way out, in its natural form, instead of trying to impose a pre-determined form onto it.
Vibe coding is hoping your first whack of the hammer is going to make a good statue, then not even looking at the statue before shipping it!
But AI assisted conscientious coding (or agentic engineering as Simon calls it) is the opposite of that, where you chip away quickly and relentlessly, but you still have to carefully control where you chisel and what you carve away, and have an idea in your mind what you want before you start.
> I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it’s just going to do it right. It’s not going to mess that up. You have it add automated tests, you have it add documentation, you know it’s going to be good.
> But I’m not reviewing that code. And now I’ve got that feeling of guilt: if I haven’t reviewed the code, is it really responsible for me to use this in production?
Answer: it wholly depends upon what management has dictated be the goal for GenAI use at the time.
There seems to be a trend of people outside of engineering organizations thinking that the "iron triangle" of software (and really, all) engineering no longer holds. Fast, cheap, good: now we can pick all three, and there's no limit to the first one in particular. They don't see why you can't crank out 10x productivity. They've been financially incentivized to think that way, and really, they can't lose if they look at it from an "engineer headcount" standpoint. The outcomes are:
1) The GenAI-augmented engineer cranks out 10x productivity without any quality consequences down the line, and keeps them from having to pay other people
or
2) The GenAI-augmented engineer cranks out 10x productivity with quality consequences down the line, at which point the engineer has given another exhibit in the case as to why they should no longer be employed at that organization. Let the lawyers and market inertia deal with the big issues that exist beyond the 90-day fiscal reporting period.
Either way, they have a route to the destination of not paying engineers, and that's the end goal.
If you don't like that way of running a software engineering organization, well, you're not alone, but if nothing else, you could use GenAI to make working for yourself less risky.
I grew up on construction sites with my dad. If I've done well in my career, it was from watching him operate - managing huge construction crews, how he figured out who to put on what tasks, handling surprises, setbacks, all that stuff.
My dad (now retired) was always super practical about stuff. He'd tell me pretty nonchalantly things like "yeah we're dealing with xyz constraint, we may have to cut a corner over here, but that's ok", when I asked him about it he gave me a little spiel that you can be thoughtful about how you do things, including when you can cut a corner and more importantly, what corners are ok to cut.
I really took that to heart - especially the "be thoughtful about the corners you cut"
If an LLM has consistently one shotted certain tasks and they are rote/mechanical - not reviewing that code is probably ok.
Are you getting lazy and not reviewing stuff that should be reviewed even if a human wrote it? That's probably not ok
I can live with some basic code that broke because it used outdated syntax somewhere (provided the code isn't part of a mission critical application), but I can't live with it fucking JWT signing etc
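To make that concrete, here is a hedged sketch (using PyJWT purely for illustration, with invented names) of the kind of JWT mistake that works in a demo but is exactly what review has to catch:

```python
# Illustrative only: PyJWT assumed, names invented for the example.
import jwt

SECRET = "server-side-secret"
token = jwt.encode({"sub": "alice", "admin": False}, SECRET, algorithm="HS256")

# Looks plausible and "works" in a quick demo, but the signature is never
# checked, so a forged token would be accepted just as happily:
claims_unsafe = jwt.decode(token, options={"verify_signature": False})

# What review has to insist on: verify against the key and pin the algorithm.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims)
```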
I'd be lying if I said I was not worried about the future. I am not necessarily worried in the sense that there will be some grave, impending doom that awaits the future of humanity.
Rather, I just feel like I have to constantly remind myself of the impermanence of all things. Like snow, from water come to water gone.
Perhaps I put too much of my identity in being a programmer. Sure, LLMs cannot replace most of us in their current state, but what about 5 years, 10 years, ..., 50 years from now? I just cannot help but feel a sense of nihilism and existential dread.
Some might argue that we will always be needed, but I am not certain I want to be needed in such a way. Of course, no one is taking hand-coding away from me. I can hand-code all I want on my own time, but occupationally that may be difficult in the future. I have rambled enough, but all in all, I do not think I want to participate in this society anymore, but I do not know how to escape it either.
If you work in any new technology field, the chances that your job will exist in the same way 50 years from now are very small.
The job, as you have done it at least, was also not here 50 years before you started doing it.
Did you have any of the same feelings knowing that you were doing a job that has not existed in the world very long? That seems like a strange requirement for a meaningful job, that it should remain the same for 50+ years.
In truth, our world and what we do for our careers is entirely shaped by the time that we live in. Even people that ostensibly do the same thing people have done for centuries (farmer, teacher, etc) are very different today than 100 years ago.
> And that feels about right to me. I can plumb my house if I watch enough YouTube videos on plumbing. I would rather hire a plumber.
I don't buy this argument at all. I think if we could pay $20/month to a service that would send over a junior plumber/carpenter/electrician with an encyclopedic knowledge of the craft, did the right thing the majority of the time, and we could observe and direct them, we'd all sign up for that in a heartbeat. Worst case, you have to hire an experienced, expensive person to fix the mess. Yes, I can hear everyone now, "worst case is they burn your house down." Sure, but as we're reminded _constantly_ when we read stories about AI agent catastrophes -- a human could wipe your prod database too. wHy ArE yOu HoLdInG iT tO a DiFfErEnT sTaNdArD???
The business side of the house is getting to live that scenario out right now as far as software goes. Sure you've got years of expertise that an LLM doesn't have _yet_. What makes you think it can't replace that part of your job as well?
You're comparing paying $20 for an AI plumber to paying hundreds/thousands for a traditional plumber.
But that's not what the author is talking about in that passage you quoted. What he's saying is that, if you can pay $20 for an AI plumber, then it stands to reason that eventually you will be able to pay $30 to a company that manages AI plumbers for you, so that you don't even have to go to the trouble of supervising the plumber. Most people will choose the $30.
It's in a section called "Why I’m still not afraid for my career."
The implication here is software engineer jobs are still safe despite basically free labor/material being available to do said jobs because he thinks other people would prefer to pay experienced professionals to do it right at a significantly higher cost. My point is, I think most people will take the low-stakes gamble of having the cheap AI agent do it with self-supervision[0]. He's naive in thinking people are really going to care about artisanal software built by experienced professionals in the future.
0: Even if you subscribe to the "your job will be to supervise the agents" train of thought, you're kinda glossing over the fact that it's probably gonna involve a pretty significant pay cut and the looming problem of "how do new experienced professionals get created if they don't have to/don't need to get their hands dirty"?
> I think if we could pay $20/month to a service that would send over a junior plumber/carpenter/electrician with an encyclopedic knowledge of the craft, did the right thing the majority of the time, and we could observe and direct them, we'd all sign up for that in a heartbeat.
I don’t think this comparison quite works (or maybe I think it works and is wrong) and I think it has something to do with creativity or the initial ideation.
I would do this, but I’m a jack of all trades. I built my own diner booth in my kitchen recently. But my wife, who loves the diner booth, just doesn’t really want to get over the hump of figuring out what she might want. I think most people want to offload the mental load of figuring out where to start.
Most people aren’t just bored by coding, they’re bored or overwhelmed by the idea of thinking about software in the first place. Same with plumbing or construction, most people aren’t hiring someone to direct, they’re hiring a director.
Even I have this about some things, sometimes I choose to outsource the full stack of something to give me more space to do creativity elsewhere.
It is pure arrogance to expect that machines will never be able to code as good as a skilled human.
And AI generated code should be different than human code. AI has infinite memory for details. AI doesn’t need organizational patterns like classes. Potentially AI can write code that is more performant than any human.
Will it look like garbage? Sure. Will the code be more suited to the task? Yes.
What will happen when AI companies increase the price of tokens?
The code produced will only be understandable by AI. You could use locally hosted LLMs, but they won't be as performant as the AI run by the big guys. And there is nothing stopping greedy companies from implementing some ridiculous pattern that only their model can reasonably work with.
So what will you do in a situation where you can't understand "your" codebase and you have to make changes or fix a bug?
The open-weight models are nipping at the heels of frontier models. The frontier labs have to make forward progress and keep tokens cheap in order to maintain marketshare.
Eventually, we'll have a Mythos-level model running on integrated hardware on every PC.
I find it hard to believe that code with unnecessary cruft and repetition is "more suited to the task". I've literally deleted hundreds of unnecessary or unused functions at this point. The only way I can agree is if "more suited" means, "it's wearing multiple suits for no reason".
Your post reeks of pure arrogance. You sound like the bozos at Anthropic who made an AI agent for finance and think this is somehow going to provide a huge productivity boost because all they do is a bunch of tick-boxing and spreadsheet work.
I feel like this is just not true. A JSON API endpoint also needs several decisions made.
- How should the endpoint be named
- What options do I offer
- How are the properties named
- How do I verify the response
- How do I handle errors
- What parts are common in the codebase and should be re-used.
- How will it potentially be changed in the future.
- How is the query running, is the query optimized.
…
If I know the answer to all these questions, wiring it together takes me LESS time than passing it to Claude Code.
If I don’t know the answer the fastest way to find the answer is to start writing the code.
Additionally, whilst writing it I usually realize additional edge cases, optimizations, better logging, observability and what else.
The author clearly stated the context for this quote is production code.
I don’t see any benefits in passing it to Claude Code. It’s not that I need 1000s of JSON API endpoints.
How so?
When I write code every character I type in my computer has less ambiguity than when I write it in human language? I also have the help of LSPs, Linters and Auto-completes.
I've been trying to get into agentic coding and there are non-refactoring instances where I might reach for it (like any time I need to work on something using Tailwind; I'm dyslexic and I'd get actual headaches, not exaggerating, trying to decipher Tailwind gibberish while juggling their docs before AIs came around).
Let's say on that JSON API I want to extract part of the logic into a repository file: I Ctrl+W the function, then I have almost all of my shortcuts as left-Alt + two-character shortcuts. So once it's marked I do LAlt + E + M for Extract Method, then it puts me in an in-between step to rename the function, and then LAlt + M + V for MoVe, and then it puts me in an interface to name the function.
Once you're used to it, it's like a gamer doing APM, and it's deterministic and fast. I also have R+N (rename), G+V (generate vitest), Q+C (query console), Q+H (query history) and many more. Really useful. Probably also doable with other editors.
Every verb implemented, and implemented correctly according to the obscure IETF specs in the most compatible way, even where the IETF never made it clear.
Intuitively named routes, errors, authentication: all easily done and swappable for something else if necessary.
I feel like our timeline split if you’re not seeing this
Use-cases differ, you described a complete REST API, which can be as much of a problem as a too little.
It'll even suggest it
You want a single RPC websocket go for it
Plenty of engineers have loose (or no!) standards and practices over how they write code. Similarly, plenty of engineering teams have weak and loose standards over how code gets pushed to production. This concept isn't new, it's just a lot easier for individuals and teams who have never really adhered to any sort of standards in their SDLC to produce a lot more code and flesh out ideas.
I personally don’t know any colleagues who were good engineers just because they wrote code faster. The best engineers I know were ones who drew on experience and careful consideration and shared critical insights with their team that steered the direction of the system positively.
> Claude, engineer a system for me, but do it good. Thanks!
I don't know if good engineers can necessarily continue to be good. There is a limit to how much careful consideration one can give if everything is on an accelerated timeline. And good or not, there is a limit to how much influence you have over setting those timelines. The whole playing field is changing.
There's a cycle that is needed for good system design. Start with a problem and an approach, and write some code. As you write the code, you reify the design and flesh out the edge cases, learning where you got the details wrong. As you learn the details, you go back to the drawing board and shuffle the puzzle pieces, and try again.
Polished, effective systems don't just fall out of an engineer's head. They're learned as you shape them.
Good engineers won't continue to be good when vibe-coding, because the thing that made them good was the learning loop. They may be able to coast for a while, at best.
Good engineers are also capable of managing expectations. They can effectively communicate with stakeholders what compromises must be made in order to meet accelerated timelines, just as they always have.
We’ve already had conversations with overeager product people about the ramifications of introducing their vibe-coded monstrosities:
Their contributions are quickly shot down by other stakeholders as being too risky compared to the more measured contributions of proper engineers (still accelerated by AI, but not fully vibe-coded). If that’s not the situation where you work, then unfortunately it’s time to start playing politics or find a new place to work that knows how to properly assess risk.
- I've taken a controversial new pill that accelerates my brain.
-- So you're smart now?
- I'm stupid faster!
That being said, being stupid faster can work if validation is cheap (and exists in the first place).
Turns out "eh close enough" for AGI is just stupidity in an "until done" loop. (Technically referred to as Ralphing.)
That has always been the case. That is why weeks or even months of programming and other project busy work could replace a couple of days of time getting properly fleshed out requirements down.
I estimate that I'm now spending about 10 to 30 hours less time a week in the mechanical parts of writing and refactoring code, researching how to plumb components together, and doing "figure out how to do unfamiliar thing" research.
All of those hours are time that can now be spent doing "careful consideration" (or just being with my family or at the gym or reading a book, which is all cognitively valuable as well).
Now, I suppose I agree that if timelines accelerate ahead of that amount of regained time, then I'm net worse off, but that's not the current situation at the moment, in my experience.
What you said: "figure out how to do unfamiliar thing" -- is correct, and will get things done, but overall quality, maintainability or understanding how individual pieces work... that's what you don't get. One can argue who cares about all that, as AI can take care of it, or already can. I don't think that's true today, at least.
10 to 30 hours saved on not learning new things! Hurray!
Unfortunately thoughtful design and engineering doesn't get recognised
Same, if anything, the opposite seems to be true, the ones that I'd call "good engineers" were slower, less panicked when production was down and could reason their way (slowly) through pretty much anything thrown at them.
Opposite experience: I've sat next to developers who are trying their fastest to restore production and then making more mistakes that make it even worse, or developers who rush through the first implementation idea they had for a feature, failing to consider so many things, and so on.
To me, none of this feels like "going faster", it feels like "opening up possibilities to try more things, with a lot less tedious work".
Unfortunately, a lot of workplaces are ignoring this, believing their engineers are assembly line workers, and the ones who complete 10 widgets per minute are simply better than the ones who complete 5 widgets per minute.
Companies want workflows that work with mediocre programmers because they are more like interchangeable parts. This is the real secret to why AI programming will work in a lot of places. If you look at the externalities of employing talented people, shitty code actually looks better than great code.
This is the earworm the leaders of these companies have allowed into their minds. Like Agent Mulder, they Want To Believe in this so badly...
If you assume they are not idiots and analyze the FOMO incentives via a little game-theory, it becomes clear why.
Assuming the competition has adopted AI, leadership can ignore it, or pursue it. If they adopt it, then they are level with the competition whether AI actually succeeds or fails - they get to keep their executive job.
If leadership ignores AI, and it actually delivers the productivity gains to the competition, they will be fired. If they ignore AI and it's a bust, they gain nothing.
However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out. It's precisely that speed that enables a process like "let's try X, hmm, how about Y, no... ok, Z is nice; ok team, here are the tradeoffs...". Then they remember their experience with X, Y, and Z, and use it to shape their thinking going forward.
Meanwhile, other engineers have gotten X to finally mostly work and are invested in shipping it because they just want to be done. In my experience, this is how a lot of coding agents seem to act.
It's not obvious to me how to apply the expert loop to agentic coding. Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier...
> Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier
The ideal solution increasingly seems to be encoding everything that differentiates a good engineer from a bad engineer into your prompt.
But at that point the LLM isn’t really the model as much as the medium. And I have some doubts that LLMs are the ideal medium for encoding expertise.
The way you apply the expert loop is to be the expert. "Can we try this...", "have you checked that...", "but what about...".
To some degree you can try to get agents to work like this themselves, but it's also totally fine (good, actually) to be nudging the work actively.
That's not my experience... mostly it's about first interrogating the actual problem with the customer and conditions under which it occurs. Maybe we even have appropriate logging in our production application? We usually do, because you know, we usually need to debug things that have already happened.
(If it's new/unreleased code, sure fine, let's find a debugger.)
The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem analytically (whiteboard, deep thinking on a sofa), or you get fast at trying out stuff AND keeping the good bits.
The risk isn't that agents write bad code. It's that developers lose the sense that tells them where code is bad. Code review is perception. Writing code is proprioception. They're different senses and one doesn't substitute for the other.
The question for the agent era isn't "is the code good enough to ship" — it's "do I still have enough coupling to the codebase to know when it isn't?"
Loss of discipline can be a result of panic or greed.
Perhaps believing that your own costs or your competitors' costs are suddenly becoming 10x lower could inspire one of those conditions?
(Also for greenfield projects specifically, it can plausibly be an experiment just to verify what happens. Some orgs are big enough that of course they can put a couple people on a couple-month project that'll quite likely fall flat.)
I figure if it can't code when it has all of the necessary context available and when obscure failures are easily detected, then why would I trust it when building features and fixing bugs?
It never did get good enough at refactoring.
I do this too, but then I sit and observe how the agent gets very creative by going around all of these layers just to get to the finish line faster.
Say, for example, if I needlessly pass a mutable reference and the linter screams at me, I know that either the linter is wrong in this case, or I should listen to it and change the signature. If I make the lazy choice, I will be dissatisfied with myself, I might even get scolded, or even fired if I keep making lazy choices.
LLM doesn't get these feelings.
LLM will almost always go for silencing it because it prevents it from reaching the 'reward'. If you put guardrails so that LLM isn't allowed to silence anything, then you get things like 'ok, I'll just do foo.accessed = 1 to satisfy the linter'.
Same story with tests. Who decides when it's the test that should be changed/deleted or the implementation?
I can generate a lot of tests amounting to assert(true). Yeah, LLM generated tests aren't quite that simplistic, but are you checking that all the tests actually make sense and test anything useful? If no, those tests are useless. If yes, I don't actually believe you.
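As an illustration (the function and tests below are hypothetical, not from any codebase discussed here), the gap between padding a diff and actually testing something looks like this:

```python
def add_discount(price: float, pct: float) -> float:
    return price * (1 - pct / 100)

# The kind of generated test that inflates the suite without checking anything:
def test_add_discount_runs():
    assert add_discount(100, 10) is not None  # effectively assert(True)

# A test that pins down behaviour and would catch a real regression:
def test_add_discount_values():
    assert add_discount(100, 10) == 90
    assert add_discount(100, 0) == 100
    assert add_discount(0, 50) == 0
```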
It's the typical 10 line diff getting scrutinized to death, 1000 line diff: Instant LGTM.
Pay attention to YOUR OWN incentives.
Lead engineer says something is not workable? PM overrides, saying that Claude Code could do it. Problems found months later at launch, and now the engineers are on the hook.
New junior onboardee declares that their new vision is the best and gets management onto it cuz it’s trendy -> broken app.
It’s made collaboration nearly unbearable as you are beholden to the person with the lowest standards.
If the code doesn't compile, that's easy to spot. If the code compiles but doesn't work, that's still somewhat easy to spot.
If the code compiles and works, but it does the wrong thing in some edge case, or has a security vulnerability, or introduces tech debt or dubious architectural decisions, that's harder to spot but doesn't reduce the review burden whatsoever.
If anything, "truthy" code is more mentally taxing to review than just obviously bad code.
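A small, hypothetical example of that kind of "truthy" code: it compiles, the happy path works, and the problem is one missing check:

```python
import os

UPLOAD_DIR = "/srv/app/uploads"  # hypothetical layout, purely for illustration

def read_user_file(filename: str) -> bytes:
    # Compiles, works, and passes every happy-path test you are likely to write...
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "rb") as f:
        return f.read()

# ...but os.path.join discards UPLOAD_DIR entirely when filename is absolute,
# and "../" segments are never rejected, so read_user_file("../../etc/passwd")
# walks straight out of the upload directory.
```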
Honest question: what about the counter-argument that humans make subtle mistakes all the time, so why do we treat AI any differently?
A difference to me is that when we manually write code, we reason about the code carefully with a purpose. Yes we do make mistakes, but the mistakes are grounded in a certain range. In contrast, AI generated code creates errors that do not follow common sense. That said, I don't feel this differentiation is strong enough, and I don't have data to back it up.
But another answer is that human autonomy is coupled to responsibility. For most line employees, if they mess up badly enough, it's first and foremost their problem. They're getting a bad performance review, getting fired, end up in court or even in prison. Because you bear responsibility for your actions, your boss doesn't have to watch what you're up to 24x7. Their career is typically not on the line unless they're deeply complicit in your misbehavior.
LLMs have no meaningful responsibility, so whoever is operating them is ultimately on the hook for what they do. It's a different dynamic. It's probably why most software engineers are not gonna get replaced by robots - your director or VP doesn't want to be liable for an agent that goes haywire - but it's also why the "oh, I have an army of 50 YOLO agents do the work while I'm browsing Reddit" is probably not a wise strategy for line employees.
Isn’t this just because you have seen a lot of PRs from inexperienced engineers? People learn LLM behavior over time, too.
Yes, as an engineer I make mistakes, but I could never make as many mistakes per day as an LLM can
Their mental model doesn't map cleanly enough to yours, and so where for a human you'd have some way to follow their thought patterns and identify mistakes, here the alien makes mistakes that don't add up.
Like the alien has encyclopedic knowledge of op codes in some esoteric soviet MCU but sometimes forgets how to look for a function definition, says "It looks like the read tool failed, that's ok, I can just make a mock implementation and comment out the test for now."
Software developers get paid big money because they can speak alien, the only thing that is changing is the dialect.
I'm an engineer's engineer: I get that the job isn't LOC but being able to communicate and translate meatspace into composable and robust systems.
So I mean an alien when I say an alien.
Not human.
Not in the cute "oh that guy just hears what everyone else hears and somehow interprets it entirely differently like he's from a different planet" alien way, but in the, "it is a different definition of intelligence derived from lacking wetware" alien way.
Intelligence is such multidimensional concept that all of humanity as varied as we are, can fit in a part of the space that has no overlap with an LLM.
-
Now none of that is saying it can't be incredibly useful, but 99% of the misuse and misunderstanding of LLMs stems from humans refusing to internalize that a form of intelligence can exist that uses their language but doesn't occupy the same "space" of thinking that we all operate in, no matter how weird or unique we think we are.
People used to like them and they used to be legends (even if not everyone liked them)
Notch, Woz, Linus and Geohot come to mind
The Metasploit creator Dean McNamee worked for me and he was just like that and a total monster at engineering hard tech products
I have no strong idea why people can't accept that intelligence formed separately of a human brain can truly be alien: not in the hyperbolic sense of "that person is so unique it's like they're a different species", but "that thing does not have a brain, so it can have intelligence that is not human-like".
A human without a brain would die. An LLM doesn't have a brain and can do wonderous things.
It just does them in ways that require first accepting that there is no homo sapiens that thinks like an LLM.
We trained it on human language, so oftentimes it borrows our thought traces, so to speak, but effective agentic systems form when you first erase your preconceived notions of how intelligence works and actually study this non-human intelligence and find new ways to apply it.
It's like the early days of agents when everyone thought if you just made an agent for each job role in a company and stuck them in a virtual office handing off work to each other it'd solve everything, but then Claude Code took off and showed that a simple brain dead loop could outperform that.
Now subagents almost always are task specific, not role specific.
I feel like we could leap ahead a decade if people could divorce "we use language, and it uses language so it is like us", but I think there's just something really challenging about that because it's never been true.
Nothing had this level of mastery over human language before that wasn't a human. And funnily enough, the first times we even came close (like Eliza) the same exact thing happened: so this seems like a persistent gap in how humans deal with non-humans using language.
The current fever pitch mandates from above seem to want it applied liberally, and pushing back against that is so discouraging and often career-limiting as to wear the fabric of one's psyche threadbare. With all the obvious problems being pointed out to people, there are just as many workarounds; and these workarounds, as is often revealed shortly thereafter, have their own problems, which beget new solutions, ad infinitum.
At some point it genuinely seems like all this work is for the sake of the machine itself. I suppose that is true: The real goal has become obscured at so many firms today, that all that remains is the LLM. Are the people betting the farm and helping implement the visions of those who have done so guaranteed a soft exit to cushion them from the consequences, or is rationality really being discarded altogether?
Sure, sound engineering principles can help work around these problems, but what efficiency is truly gained, in terms of cognitive load, developer time, money, or finite resources? Or were those ever an earnest concern?
It’s an absolute game changer, and it can now multiply your productivity fivefold if it’s a solo greenfield project.
Maybe half a year ago it was as you said. You had to wait for the agent to finish, you had to review carefully, and often the result was not that great. You did not save a lot of time.
Now I can spin up 3+ parallel conversations in Codex, each in a git worktree. My work is mainly QA testing the features, refining the behavior, and sometimes making architectural decisions.
The results are now undeniable. In the past I could not have developed a product of that scope in my free time.
That is what is possible today. I suspect many engineers have not yet tried things that became feasible over the last months. Like parallel agents, resolving merge conflicts, separating out functionality from a large branch into proper PRs.
I have heard this statement every single day for 2 years and yet we still have no companies compressing 10 years into 1 year thus exploding past all the incumbents who don't "get it".
> if it’s a solo greenfield project
which is a pretty large caveat. Anecdotally, I've found my side projects (which are solo greenfield projects, and don't need to be supported to the same standards as enterprise software) have gained the boost the GP was talking about.
At work, it's different, since design, review, and maintenance is much more onerous.
The first line of code was written on November 25th. It achieved adoption in the "personal agents" space that far exceeded the other companies that had tried the same thing.
(Whether or not you trust the quality of the software you can't deny the impact it had in such a short time. It defined a new category of software.)
Like, look at e.g. YC minus the AI and AI-adjacent companies. Are those startups meaningfully more impressive or feature-rich compared to a couple years ago?
Which is exactly why you can't use it as an example, there is no control. This is basic stuff.
If agents could really compress 10 years of development into 1 year, you'd see people making e.g. HFT platforms and becoming obscenely rich, not making a fun open-source project and getting hired by OpenAI as an employee.
If that were true, all of these anti-AI greybeards who have been in the game for 30 years would all own their own jets.
Cryptocurrencies? Barely any other use than money laundering, buying drugs and betting on the outcome of battles in war. And NFTs? No use at all other than money laundering and setting money ablaze.
That's a big if. I don't have numbers but most professional engineers are not working on such projects
The degenerate side is clueless upper management and fad-driven engineering. We have talked extensively about this.
There is a more rational side to it that I've seen in my org: some engineers absolutely refuse to use AI and as a consequence they are now, clearly and objectively, much less productive than other engineers. The thing is, you still need to learn how to use the tool, so a nontrivial percentage of obstinate engineers need to be driven to use this in the same way that some developers have refused to use Docker or k8s or whatever.
Perhaps these “obstinate” engineers have good reason in their decision. And it should be their decision!
To be so confident in what is “the right way (TM)” and try to force it onto others is... revealing.
After 18 months the hard evidence is in place. And much like replacing bare-metal servers for many use cases, taking on the burden of k8s, or substituting Terraform for shell scripts where the evidence supports it, it's time to move on.
I don't really see a place for no AI usage in line-of-business software apps anymore.
I swear I'm living through mass hysteria.
If I get pwned because my AI agent wrote code that had a security vulnerability, none of my users are going to accept the excuse that I used AI and it's a brave new world. I will get the blame, not Anthropic or OpenAI or Google but me.
The same goes for if my AI generated code leads to data loss, or downtime, or if uses too many resources, or it doesn't scale, or it gives out error messages like candy.
The buck stops with me and therefore I have to read the code, line-by-line, carefully.
It's not even a formality. I constantly find issues with AI generated code. These things are lazy and often just stub out code instead of making a sober determination of whether the functionality can be stubbed out or not.
You could say "just AI harder and get the AI to do the review", and I do this a lot, but reviewing is not a neutral activity. A review itself can be harmful if it flags spurious issues where the fix creates new problems. So I still have to go through the AI generated review issue-by-issue and weed out any harmful criticism.
First of all, building a system that constrains the output of the AI sufficiently, whether that's typing, testing, external validation, or manual human review in extremis. That gets you the best result out of whatever harness or orchestration you're using.
Secondly, there's the level at which you're intervening, something along the hierarchy from "validate only usage from the customer perspective" to "review, edit, and validate every jot and tittle of the codebase and environment". I think for relatively low-importance things reviewing at the feature level (all code, but not interim diffs) is fine, but if you're doing a network protocol you'd better at least validate everything carefully with fuzzing and property testing or something like that.
And then you've got how you structure your feedback to the LLM itself - is it an in-the-loop chat process, an edit-and-retry spec loop, go-nogo on a feature branch, or what? How does the process improve itself, basically?
I agree with you entirely that the responsibility rests on the human, but there are a variety of ways to use these things that can increase or decrease the quality of code to time spent reviewing, and obviously different tasks have different levels of review scrutiny, as well.
My nonexistent backend isn’t going to be pwned if there is a bug in the thumbnail generation.
After the QA testing on my device, a quick scroll through of the code is enough.
Maybe prompt „are errors during thumbnail generation caught to prevent app crashes?“ if we‘re feeling extra cautious today.
And just like that it saved a day of work.
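For what it's worth, the guard that prompt is asking about is only a few lines; a sketch assuming Pillow is the imaging library in use:

```python
import logging
from PIL import Image, UnidentifiedImageError

log = logging.getLogger(__name__)

def make_thumbnail(src_path: str, dst_path: str, size=(256, 256)) -> bool:
    try:
        with Image.open(src_path) as im:
            im.thumbnail(size)  # resizes in place, preserving aspect ratio
            im.save(dst_path)
        return True
    except (UnidentifiedImageError, OSError) as exc:
        # A corrupt or truncated image should be logged and skipped, not crash the app.
        log.warning("thumbnail generation failed for %s: %s", src_path, exc)
        return False
```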
Hmm. Historically image editing was one of the easier to exploit security holes in many systems. How do you feel about having unknown entities having shell inside your datacenter or vpc?
It is so embarrassing that LOC is being used as a metric for engineering output.
Objectives change; timeliness matters. The speed at which you deliver value is incredibly important, which is why it matters to measure your process. Deceptively dense is what I’d call software engineers who can’t accept that the process is actually generalizable to a degree and that lines of code are one of the few tangible things that can be used as a metric. Can you deliver value without lines of code?
This assumes that shorter code is faster to write. To quote Blaise Pascal, "I would have written a shorter letter, but I did not have the time."
> Can you deliver value without lines of code?
No, but you can also depreciate value when you stuff a codebase full of bloated, bug-ridden code that no man or machine can hope to understand.
“All models are wrong, some are useful”. What’s not useful is constantly bitching about how there’s no way to measure your work outside of the binary “is it done” every time process efficiency is brought up.
Very far from the truth in practice, every line of code isn't as difficult/easy to review as the other.
I have worked with code where 1000s of lines are very straightforward and linear.
I’ve worked on code where 100 lines is crucial and very domain specific. It can be exceptionally clean and well-commented and it still takes days to unpack.
The skills and effort required to review and understand those situations are quite different.
One is like distance driving a boring highway in the Midwest: don’t get drowsy, avoid veering into the indistinguishable corn fields, and you’ll get there. The other is like navigating a narrow mountain road in a thunderstorm: you’re 100% engaged and you might still tumble or get hit by lightning.
So I’m pretty skeptical that reviewing 2000 lines of code won’t take any more time than reviewing 200 lines of code.
Furthermore how do you know the AI generated lines are the open highway lines of code and not the mountain road ones? There might be hallucinations that pattern match as perfectly reasonable with a hard to spot flaw.
It depends on the code. If you’re comparing code of the same complexity then, sure, 2000 lines will take longer than 200.
I was comparing straight linear code to far more complex code. The bug/line rate will be different and the time to review per line will be different.
> Furthermore how do you know the AI generated lines are the open highway lines of code and not the mountain road ones?
Again, it depends on the code. Which was my point.
Linear code lacks branches, loops, indirection, and recursion. That kind of code is easy to reason about and easy to review. The assumptions are inherently local. You still have to be alert and aware to avoid driving into the cornfields.
It’s a different beast than something like a doubly-nested state machine with callbacks, though. There you have to be alert and aware, and it’s inherently much harder to review per line of code.
It's still useful, however, because that is the only metric that is instantly intuitively understandable and comparable across a wide variety of contexts, i.e. across companies and teams and languages and applications.
As we know, within the same team working on the same product, a 1000 LoC diff could take less time than a 1 line bug fix that took days to debug. Hence we really cannot compare PRs or product features or story points across contexts. If the industry could come up with a standard measure of developer productivity, you'd bet everyone would use it, but it's unfeasible basically for this very reason.
So, when such comparisons are made (and in this case it was clearly a colloquial usage), it helps to assume the context remains the same. Like, a team A working on product P at company C using tech stack T with specific software quality processes Q produced N1 lines of code yesterday, but today with AI they're producing N2 lines of code. Over time the delta between N1 and N2 approximates the actual impact.
(As an aside, this is also what most of the rigorous studies in AI-assisted developer productivity have done: measure PRs across the same cohorts over time with and without AI, like an A/B test.)
I rewrote the same program using my own brain and just using ChatGPT as Google and autocomplete (my normal workflow), and I produced the same thing in 1500 LOC.
The effort difference was not that significant either, tbh, although my hand-coded approach probably benefited from designing the vibe-coded one, so I had already thought about what I wanted to build.
My experience was the same as you when I started using agents for development about a year ago. Every time I noticed it did something less-than-optimal or just "not up to my standards", I'd hash out exactly what those things meant for me, added it to my reusable AGENTS.md and the code the agent outputs today is fairly close to what I "naturally" write.
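For context, the entries that end up in such a file tend to be short and concrete; a made-up excerpt of the kind of rules meant here:

```markdown
<!-- A made-up excerpt; every rule here is just an example of the genre. -->
- Prefer small, pure functions; no module-level side effects.
- Never add a new dependency without asking first.
- Every new endpoint needs an error-path test, not just the happy path.
- Use the existing structured logger; no print statements.
- Do not silence linter warnings; fix the cause or ask.
```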
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
We should have gone the other way; generated a lot of code and demanded pay raises; look at the LOC I cranked out! Company is now in my debt!
If they weren't going to care enough as managers to learn and line go up is all that matters to them, make all lines go up = winning
You all think there's more to this than performative barter for coin to spend on food/shelter.
Although this requires you to take pride in your profession and what you do.
Got it.
...ok fine; lack of political action to put us all on the hook for your healthcare is your choice to take a gamble on a paycheck. It's a choice to say your own existence is not owed the assurance of healthcare.
So I will honor your choice and not care you exist.
Good way of putting it.
AI helps eng ship more and faster, I think that’s the takeaway.
Do you reject all stats that treat the number of people involved (e.g. 2 million people protested X) as "embarrassing"... because they lump incredibly varied people together and pretend they're equal?
We're also assuming LOC vibe coded by competent engineers who should be able to tell when something is overengineered.
If we shift the paradigm of how we approach a coding problem, the coding agents can close that gap. Ten years ago every 10 or 15 minutes I would stop coding and start refactoring, testing, and analyzing making sure everything is perfect before proceeding because a bug will corrupt any downstream code. The coding agents don't and can't do this. They keep that bug or malformed architecture as they continue.
The instinct is to get the coding agents to stop at these points. However, that is impossible for several reasons. Instead, because it is very cheap, we should find the first place the agent made a mistake and update the prompt. Instead of fixing it, delete all the code (because it is very cheap), and run from the top. Continue this iteration process until the prompt yields the perfect code.
Ah, but you say, that is a lot of work done by a human! That is the whole point. The humans are still needed. The process using the tool like this yields 10x speed at writing code.
You could get to "something that works" rather fast but it took a long time to 1) evaluate other options (maybe before, maybe after), 2) refine it, 3) test it and build confidence around it.
I think your point stands but no one really knows where. The next year or so is going to be everyone trying to figure that out (this is also why we hear a lot of "we need to reinvent github")
I believe the LLM providers went with the wrong approach from the off - the focus should've been on complementing labour, not displacement. And I believe they have learned an expensive lesson along the way.
And it's not just easier because it's cheap, it's easier because you're not emotionally attached to that code. Just let it produce slop, log what worked, what didn't, nuke the project and start over.
It just gets incredibly boring.
Vibe coding: one shot or few shot, smoke test the output, use it until it breaks (or doesn't). Ideal for lightweight PoC and low stakes individual, family or small team apps.
Agentic engineering:
- You care about a larger subset of concerns such as functional correctness, performance, infrastructure, resilience/availability, scalability and maintainability.
- You have a multi-step pipeline for managing the flow of work.
- Stages might be project intake, project selection, project specification, epic decomposition, story decomposition, coding, documentation and deployment.
- Each stage will have some combination of deterministic quality gates (tests must pass, performance must hit a benchmark) and adversarial reviews (business value of proposed project, comprehensiveness of spec, elegance of code, rigor and simplicity of ubiquitous language, etc).
And it's a slider. Sometimes I throw a ticket into my system because I don't want to have to do an interview and burn tokens on three rounds of adversarial reviews, estimating potential value and then detailed specification and adversarial reviews just to ship a feature.
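Purely as a sketch of the shape being described (every stage and gate name below is invented), such a pipeline can be written down as data, pairing deterministic gates with adversarial reviews per stage:

```python
# Illustrative only: every stage and gate name below is invented.
PIPELINE = [
    {"stage": "intake",        "deterministic": [],                     "adversarial": ["business_value_review"]},
    {"stage": "specification", "deterministic": ["spec_lint"],          "adversarial": ["spec_completeness_review"]},
    {"stage": "decomposition", "deterministic": [],                     "adversarial": ["story_sizing_review"]},
    {"stage": "coding",        "deterministic": ["tests", "benchmark"], "adversarial": ["code_elegance_review"]},
    {"stage": "deployment",    "deterministic": ["e2e_suite"],          "adversarial": ["human_signoff"]},
]

def gates_for(stage_name: str) -> dict:
    """Look up what a piece of work must pass before leaving a given stage."""
    return next(s for s in PIPELINE if s["stage"] == stage_name)
```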
I've been using Opus, GPT-5.5, and some lesser models on a daily basis, but not having them handle entire tasks for me. Even when I go to significant effort to define and refine specs, they still do a lot of dumb things that I wouldn't allow through human PR review.
It would be really easy to just let it all slide into the codebase if I trusted their output or had built some big agentic pipeline that gave me a false sense of security.
Maybe 10 years from now the situation will be improved, but at the current point in time I think vibe coding and these agentic engineering pipelines are just variations of a same theme of abdicating entirely to the LLM.
This morning I was working on a single file where I thought I could have Opus on Max handle some changes. It was making mistakes or missing things on almost every turn that I had to correct. The code it was proposing would have mostly worked, but was too complicated and regressed some obvious simplifications that I had already coded by hand. Multiply this across thousands of agentic commits and codebases get really bad.
Let's assume AI is 10x better than humans in accuracy, produces 10x fewer bugs, and increases the speed by 1000x compared to a very capable software engineer.
Now imagine this: a car travels on a road that has 10x more bumps, but it is traveling at a 1000x slower pace, so even though there are 10x the bumps, your ride will feel less bumpy because you're encountering them at a far lower rate.
Now imagine a road that has 10x fewer bumps, but you're traveling at 1000x the speed. Per hour of driving you now hit roughly 100x as many bumps (one tenth the bumps per mile, a thousand times the miles per hour), so your ride will be a lot bumpier.
That's agentic coding for you. Your ride will be a lot more painful. There's a lot of denial around that, but as time progresses it'll be very hard to deny.
Lastly - vibe coding is honest, but agentic coding is snake oil [0], and these arguments about harnesses with dozens of memory, agent and skill files, with pages and pages of rules sprinkled through them, are wrong as well. Such a paradigm assumes that LLMs are perfectly reliable, super-accurate rule followers, and that the only problem we have as an industry is not being able to specify enough rules clearly enough.
Such a belief could only be held by someone who hasn't worked with LLMs long enough, or by a totally non-technical person not knowledgeable enough to know how LLMs work; that such a wrong belief system is held onto by a highly technical community is highly regrettable.
[0]. https://news.ycombinator.com/item?id=48018018
I maxed out the Claude Max $200 subscription, and before that I justified spending $100/day.
And it was worth it, but not because it wrote me such good code; it's because I learnt the lessons of software engineering fast. I had the exact ride you are describing. My software was incredibly broken.
Now I see all the cracks, lies and "barking the wrong tree" issues clearly.
NOW I treat it as an untrustworthy search engine for domains I'm behind on. I also use predict-next-edit and auto-complete, but I don't let AI do any edits on my codebase anymore.
Favorite quote: "There are a whole bunch of reasons I’m not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you’re doing, you can run so much faster with them. [...]
I’m constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we’re trying to achieve here is still really difficult. [...]"
Note: I still review pretty much every line of code that I own, regardless of who generates it, and I see the problems with agents very clearly... but I can also see the trends.
My take: Instead of crafting code, engineering will shift to crafting bespoke, comprehensive validation mechanisms for the results of the agents' work, such that it is technically (maybe even mathematically) provable as far as possible, and any non-provable validations can be reviewed quickly by a human. I would also bet the review mechanisms would be primarily visual, because that is the highest-bandwidth input available to us.
By comprehensive validations I don't mean just tests, but multiple overlapping, interlocking levels of tests and metrics. Like, I don't just have an E2E test for the UI, I have an overlapping test for expected changes in the backend DB. And in some cases I generate so many test cases that I don't check for individual rows, I look at the distribution of data before and after the test. I have very few unit tests, but I do have performance tests! I color-code some validation results so that if something breaks I instantly know what it may be.
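A rough sketch of that last idea, comparing the shape of a table before and after a run instead of asserting on individual rows (table and column names are placeholders):

```python
import sqlite3

def profile(conn: sqlite3.Connection, table: str, column: str) -> dict:
    # Placeholder table/column names; a real version would not interpolate SQL.
    row = conn.execute(
        f"SELECT COUNT({column}), AVG({column}), MIN({column}), MAX({column}) FROM {table}"
    ).fetchone()
    return {"count": row[0], "avg": row[1], "min": row[2], "max": row[3]}

def assert_distribution_stable(before: dict, after: dict, tolerance: float = 0.05) -> None:
    # Rows may be added, but the overall shape of the data should not shift wildly.
    assert after["count"] >= before["count"]
    assert abs(after["avg"] - before["avg"]) <= tolerance * (abs(before["avg"]) or 1)
```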
All of this is overkill to do manually but is a breeze with agents, and over time really enables moving fast without breaking things. I also notice I have to add very few new validations for new code changes these days, so once the upfront cost is paid, the dividends roll in for a long time.
Now, I had to think deeply about the most effective set of technical constraints that give me the most confidence while accounting for the foibles of the LLMs. And all of this is specific to my projects, not much can be generalized other than high-level principles like "multiple interlocking tests." Each project will need its own custom validation (note: not just "test") suites which are very specific to its architecture and technical details.
So this is still engineering, but it will be vibe coding in the sense that we almost never look at the code, we just look at the results.
Other than for your own pet projects, almost all of what you said has no place in "vibe engineering" or "vibe coding" for serious software engineering products that are needed in life-and-death situations.
And not all "production-grade, hundred billion dollar systems" are that critical. Like, Claude Code as we all know is clearly vibe-coded and is already a 10-billion (and rapidly increasing!) dollar system. Google Search and various Meta apps meet those criteria and people are already using LLMs on that code, and will soon be "vibe coding" as I described it.
AWS meets that criteria and has already had an LLM-caused outage! But that's not stopping them from doing even more AI coding. In fact I bet they will invest in more validation suites instead, because those are a good idea anyways. After all, all the cloud providers have been having outages long before the age of LLMs.
The thing most people are missing is that code is cheap, and so automated validations are cheap, and you get more bang for the buck by throwing more code in the form of extensive tests and validations at it than human attention.
Edited to add: I think I can rephrase the last line better thus: you get more bang for the buck by throwing human attention at extensive automated tests and validations of the code rather than at the code itself.
There are people who write software for hedge funds, quant firms, aviation and defense systems, data center providers, major telecom services used by hospitals and emergency services and semiconductor firms and the big oil and energy companies and that is NOT "almost no-one" and these companies see and make hundreds of billions of dollars a year on average.
This is even before me mentioning big tech.
Perhaps the work most people here on this site are doing is not serious enough: it can be totally vibe-coded, it's toy projects, and it brings in close to $0, so the company doesn't care.
What I am talking about is the software that is responsible for being the core revenue driver of the business and it being also mission critical.
This is spot on. I think the tooling is evolving so much particularly on the design side that its not worth the "translation cost" to stay (or even be) on the Figma side anymore.
It's the bad, semi-coherent submissions that eat up your time, because you do want to award some points and tell students where they went wrong. It's the Anna Karenina principle applied to math.
Code review is the same thing. If you're sure Claude wrote your endpoint right, why not review it anyway? It's going to take you two minutes, and you're not going to wonder whether this time it missed a nuance.
Pretty soon there is no code reuse and we're burning money reinventing the wheel over and over.
With LLMs, you can race right for that horizon, go right through, and continue far beyond! But then of course you find yourself in a place without reason (the real hell), with all the horror and madness that that entails.
They really are bad for creating a healthy codebase
Isn't this a bit like old Java or other IDE-heavy languages like C#? If you tried to make Android apps back in the early days, you HAD to use an IDE; writing the ridiculous amount of boilerplate required to display a "Hello World" alert after clicking a button was soul-destroying.
If the barrier is too high, code is refactored.
It's seriously the thing that worries (and bothers) me the most. I almost never let unedited LLM comments pass. At a minimum.
Most of the time, I use my own vibe-coded tool to run multiple GitHub-PR-review-style reviews, and send them off to the agent to make the code look and work fine.
It also struggles with doing things the idiomatic way for huge codebases, or sometimes it's just plain wrong about why something works, even if it gets it right.
And I say this despite the fact that I don't really write much code by hand anymore, only the important ones (if even!) or the interesting ones.
Also, don't even get me started on AI-generated READMEs... I use Claude to refine my Markdown or automatically handle dark/light-mode, but I try to write everything myself, because I can't stand what it generates.
"Ugh, no! Why would you say it like that? That's not even how it works! Now, I need to write a full paragraph instead of a short snippet to make sure that no future agents get confused in the same way."
Opus 4.7 built it about 90% the same way I would, but had way more convenience methods and step-validations included.
It's great, and really frees me up to think about harder problems.
Just having ~13yrs experience heavily weighted in one language with some formal studying of others makes directing llms a lot simpler.
Learning syntax, primitives, package managers, testing, etc isn't that much of a lift compared to how I used to program.
Was helping a non-dev colleague who's using claude cowork/code to automate reporting the other day. They understand the business intelligence side well, but were struggling with basic diction to vibe code a pyautogui wrapper to pull up RDP and fill out a MS Access abstraction on a vendor DB.
Think we'll be fine for another 5-10 years as a profession
Repeat after me: it follows that most of the money the software makes is made during the maintenance phase.
Repeat after me: our industry still does not understand this after almost 100 years of being in existence.
Alan Kay was 100% right when he said that the computer revolution hasn't occurred yet. For all of our current advancements all tools are more or less in the Stone Age.
My great hope is that AI will actually accelerate us to a point where the existing paradigm fully breaks beyond healing and we can finally do something new, different, and better.
So for now - squeee! - put a jetpack on your SDLC with AI and go to town!!! Move fast and break things (like, for real).
I find the LLM as interactive tutor reviewing my work in a proof checker to be a really killer combo.
Because most of the complexity in software comes from interfacing with external components, when you don't need to adapt to this you can write simpler and better code.
Rather than relying on an external library, you just write your own, keep full control, and can do your own quality control.
The Linux kernel is about 30,000,000 LOC. At 100 tokens/s, say 1 LOC per second produced by a single 4090 GPU, one year of continuous running is 3600 * 24 * 365 = 31,536,000 seconds, so everyone could have their own OS.
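Just to make the back-of-the-envelope arithmetic explicit (the 1 LOC per second figure is the assumption above, not a measurement):

```python
# Back-of-the-envelope check of the claim above; all numbers are assumptions.
seconds_per_year = 3600 * 24 * 365          # 31,536,000 seconds
loc_per_second = 1                          # assumed: ~100 tokens/s is roughly 1 LOC/s
loc_per_gpu_year = seconds_per_year * loc_per_second
linux_kernel_loc = 30_000_000
print(loc_per_gpu_year, loc_per_gpu_year >= linux_kernel_loc)  # 31536000 True
```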
It's the "Apps" story all over again : there are millions of apps, but the average user only have 100 max and use 10 daily at most.
Standardize data and services and you don't need that much software.
What will most likely happen is that one company with a few million GPUs will rewrite a complete software ecosystem, and people will just use this and stop doing any software, because anything can be produced on the fly. Then all compute can be spent on consistent quality.
Property-based testing in particular has uncovered a number of invariants in every code base I've introduced it to.
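For anyone who hasn't tried the technique, here is a minimal sketch of a property-based test in Python with the Hypothesis library; the function and the invariants it checks are made up for illustration, not taken from any codebase discussed here.

```python
# Minimal property-based test sketch (pytest + Hypothesis).
# The function and its invariants are illustrative only.
from hypothesis import given, strategies as st

def dedupe_preserving_order(items):
    """Remove duplicates while keeping the first-occurrence order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

@given(st.lists(st.integers()))
def test_dedupe_invariants(xs):
    result = dedupe_preserving_order(xs)
    # Invariant 1: no duplicates survive.
    assert len(result) == len(set(result))
    # Invariant 2: the first-occurrence order of the input is preserved.
    assert result == [x for i, x in enumerate(xs) if x not in xs[:i]]
```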
tbf depending on the agent/model a lot of the tests end up being thrown out so it's possible I _should_ handwrite more tests, but having better prompts and detailed plans seems to mitigate that somewhat
> If another team hands over something and says, “hey, this is the image resize service, here’s how to use it to resize your images”... I’m not going to go and read every line of code that they wrote.
The distance between an output and the person accountable for it is an important metric. Who will be held accountable for which output: keeping that clear is what matters, and it is what keeps the "guilt" away.
So, organizations would need to focus on building better and more granular incentive and punishment mechanisms for large-scale software projects.
I am not a developer and have very basic code knowledge. I recently built a small and lightweight Docker container using Codex 5.5/5.4 that ingests logs with rsyslog and has a nice web UI and an organized log storage structure. I did not write any code manually.
Even without writing code, I still had to use common sense to get it to a place I was happy with. If I truly knew nothing, the AI would have made some very poor decisions. Examples: it would have kept everything in main.go, it would have hardcoded the timezone, the settings were all hardcoded in the Go code, the crash handling was non-existent, and a missing config would have prevented startup. And that is on a ~3000 line app. I cannot imagine unleashing an AI on a large, complex codebase without some decent knowledge and reviewing.
But using an agentic LLM to complete boilerplate is attractive simply because we've created a mountain of accidental and intentional complexity in building software. It's more of a regression to the mean of going back to the cognitive load we had when we simply built desktop applications.
The future is going to dynamically budget and route different parts of the SDLC through different models and subagents running in the cloud. Over time, more and more of that process will be owned by robots, and a level of economic thinking will be incorporated into what is thought of today as "software engineering." At some point vibe coding _is_ coding, and we're maybe closer to that point than popularly believed.
* The first agent's claim that was 3.x-only was wrong
* is nice-to-have but doesn't target our exact case as cleanly as the agent claimed.
* The agent's "direct fix for yyy" is overstated.
* not 57% as the earlier agent claimed
etc etc etc
And I forgot how many times my session with claude starts: did you read my personal CLAUDE.md and use background agents for long running operations?
I use an enterprise subscription with max effort; this was with both 4.6 and 4.7.
And please refrain from comments like "you're using it wrong", as the drop in output quality is very clear and noticeable.
What standard of result are you pursuing and are you willing to discipline yourself enough to achieve it?
AI can't make you un-lazy, no matter how many tokens you pay for.
No one is suggesting that.
Claude Code in particular seems really uninterested in this aspect of the problem, and I've stopped using it entirely because of this.
So the number of bugs to find remains constant but the amount of code to review scales with the capability of the agent.
[1] https://github.com/mohsen1/tsz
e.g., I changed the velocity of the player to '200' and of bullets to '300', and it only updated the bullet velocity. Then it told me the player was already 'at the correct value' even though it was set to 150. Things like that.. :)
If you mean 'passes tests', that can be tackled by AI. Although AI writing its own tests and then implementing its own code is definitely not a foolproof strategy.
How do you manage/orchestrate this? I'm genuinely curious.
The most important part, and why slop isn't the same as code written by someone else. The model doesn't care, it just produces whatever it is asked to produce. It doesn't have pride, it doesn't have ego, it doesn't have artisanal qualities, it doesn't have ownership.
I believe this is a common fault of not being able to zoom out and look at what trade-offs are being made. There are always trade-offs; the question is whether you can define them and then do the analysis to determine whether the result leaves you in a net-benefit state.
Coding agents are also upending how software development works, in a way that we are still very much figuring out.
I don't think anyone has a confident answer for how best to apply them yet, especially on larger production-ready projects.
Can agentic engineers adhere to a similar code of ethics that a professional engineer is sworn to uphold?
https://www.nspe.org/career-growth/nspe-code-ethics-engineer...
Can software engineers?
You can use these tools wisely without letting them run carelessly and unverified.
There are certain codebases and pieces of code we definitely want every line to be reasoned and understood. But like his API endpoint example, no reason to fuss with the boilerplate.
This has definitely been my shift over the past few months, and the advantage is I can spend much more time and energy on getting the code architecture just right, which automatically prevents most of the subtle bugs that have people wringing their hands. The new bar is architecting code to be as well defined as an API endpoint -> service structure, so you can rely on LLMs to paint by numbers for new features/logic (a minimal sketch of that split follows below).
I spend a lot more time on architecting and testing than hand-rolling code in most repos now.
Hats off to people who enjoy the minutia of programming everything by hand, but turns out I enjoy the other aspects of software development more.
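As mentioned above, here is a minimal sketch of the endpoint -> service split. The framework choice (FastAPI), the route, and the names are assumptions for illustration, not a prescription; the point is that the endpoint stays thin enough that an agent can "paint by numbers" against a well-defined service boundary.

```python
# Illustrative endpoint -> service split. Names and framework are assumptions.
from dataclasses import dataclass
from fastapi import FastAPI, HTTPException

app = FastAPI()

@dataclass
class Report:
    id: int
    title: str

class ReportService:
    """All business rules live here, behind a small, testable surface."""
    def __init__(self, store: dict[int, Report]):
        self._store = store

    def get(self, report_id: int) -> Report | None:
        return self._store.get(report_id)

service = ReportService({1: Report(id=1, title="Monthly usage")})

@app.get("/reports/{report_id}")
def get_report(report_id: int):
    # The endpoint only translates HTTP <-> service calls; no logic hides here.
    report = service.get(report_id)
    if report is None:
        raise HTTPException(status_code=404, detail="report not found")
    return {"id": report.id, "title": report.title}
```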
> Claude Code does not have a professional reputation!
how come?
Do this enough times, and I will have forgotten how to think.
Like many people, I have used AI to generate crap I really don't care about. I need an image. Generate something like, whatever. Great, hey, a good-looking image! Now that's done, I can move on to something I find more interesting to do.
But it's slop. The image does not fit the context. It's just off. And you can tell that no one really cared.
This isn't good.
You can't do that for images and text.
Makes me want to just give up programming forever and never use a computer again.
If LLMs stop improving at the pace of the last few years (I believe they already are slowing down), they will still manage to crank out billions of lines of code which they themselves won't be able to grep and reason through, leading to a drop in quality and lost revenue for the companies that choose to go all-in with LLMs.
But let’s be realistic - modern LLMs are still a great and useful tool when used properly so they will stay. Our goal will be to keep them on track and reduce the negative impact of hallucinations.
As a result, the software industry will move away from large, complex, interconnected systems that have millions of features but only a few of them actively used, towards small, high-quality, targeted tools, because their work will be easier to verify and their side effects easier to control.
Depending on how you measure "improvement" they already have or they never will :-/
Measuring model capability as a function of context length, you reach the limits at around 300k-400k tokens of context; after that you get diminishing returns. We've passed this point.
Measuring capability purely by output, smarter harnesses in the future may unlock even more improvements in outputs; basically a twist on the "Sufficiently Smart Compiler" (https://wiki.c2.com/?SufficientlySmartCompiler=)
Those are the two extremes, but there's more on the spectrum in between.
You can also execute larger tasks than this by using subagents to divide the work so each segment doesn't exceed the usable context window. I regularly execute tasks that require hundreds of subagents, for example.
In practice the context window is effectively unlimited, or at least exceptionally high (100m+ tokens). It just requires you to structure the work so it can be done effectively, not so dissimilar to what you would do for a person.
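A minimal sketch of what that structuring might look like in practice; run_agent() is a hypothetical stand-in for whatever agent harness is actually in use (CLI, API, etc.), and the "one module per subagent" split is just one possible boundary.

```python
# Illustrative fan-out of a large task across subagents so no single agent
# needs the whole codebase in its context window.
from pathlib import Path

def run_agent(prompt: str) -> str:
    """Hypothetical stand-in for a real agent invocation (API call, CLI, ...)."""
    raise NotImplementedError("wire this up to your agent harness")

def refactor_in_chunks(repo_root: str, task: str) -> list[str]:
    """Split the work by module and give each subagent only its own slice."""
    modules = sorted(p for p in Path(repo_root, "src").iterdir() if p.is_dir())
    reports = []
    for module in modules:
        # Each subagent only ever sees one module, so its context stays small.
        prompt = (
            f"Task: {task}\n"
            f"Scope: only files under {module}.\n"
            "Report what you changed and anything needing follow-up."
        )
        reports.append(run_agent(prompt))
    # A final pass reconciles the short reports instead of reading the whole diff.
    reports.append(run_agent("Reconcile and summarize these reports:\n" + "\n---\n".join(reports)))
    return reports
```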
How to organize code like you said, and how agents interact with it, to keep the actual context window small is the fundamental challenge.
Doesn't change my point: the amount of code the agent can operate on is very large, if not unlimited, as long as you put even a little bit of thought into structuring things so it can be divided along a boundary.
If you let the codebase degrade into spaghetti, then the LLM is going to have the same problem any engineer would have with that. The rules for good code didn't disappear.
I looked at that response by GP (rgbrenner) and refrained from replying because if someone is both running hundreds of agents at a time AND oblivious to what "context window" means, there is no possible sane discourse that would result from any engagement.
Assistant: “I propose A”
User: “Actually B is better”
Assistant: “you’re absolutely right”
User: “actually let’s go with C”
Assistant: “Good choice, reasons”
User: “wait A is better”
Assistant: “Great decision!”
Eh, what a waste. Can't we just stimulate the optic nerve? Or better yet, whatever region of the brain is responsible for me being able to 'see' anything? And perhaps we can finally get smell-o-vision too.
Second, LLM code can be less of a hot mess than human written code if you put in the time to train/prompt/verify/review.
Generating perfect, well-patterned, SOLID, unit-tested code with no warnings or anti-patterns has never been easier.
Write lots of code now and statistically look great, while the impact won't be felt until much later.
With the job search and whatnot then yeah, caring becomes a lot more important. That’s true.
It's not immediate, it still takes weeks if you want to actually do QA and roll out to prod, but it's definitely better than the pre-LLM alternatives.
AI will make this dynamic worse, with the extra danger that the default, banal way of applying the technology in fact encourages its application to that end.
I also don't think that the commodification of programming is a substitute for things like understanding your customers, having good taste for design, and designing software in a way that is maximally iterable.
With the right investment, we could certainly have tooling that creates and maintains very good designs out of the box. My bet is that we'll continue chasing quick and hacky code, mostly because that's the majority of the code that it was trained on, and because the majority of people seem to be interested in a quick result vs a long-term maintainable one.
That the industry was already routinely dealing with fires of its own creation is not a valid reason to start cooking with gasoline.
What would normally be considered overengineered gold plating is "free" now.
[0] https://en.wikipedia.org/wiki/Wirth%27s_law
Same thing happens in other fields. A rich country and a poor country might build equivalent roads, but they won't pay the same price for them.
The system that makes it have an opinion about good vs bad architecture or engineering sensibilities will be something on top of the transformer and probably something more deterministic than a prompt.
"Shit's in the Game!"
"Chunder Everything"
"Maddening NFL 26"
"FIFiAsco 26"
"UFC 26 (Un Finished Code)"
"The Shits 4"
"Battlefailed"
"Need for Greed"
What you're suggesting is a negative flywheel where quality spirals down, but I'm hoping it becomes a positive loop and the quality floor goes up. We had plenty of slop before LLMs, and not all LLM output is slop. Time will tell, but I think LLMs will continue to improve their coding abilities and push overall quality higher.
With the rise of LLMs that do all of that... those people shut up, and shut up real fast.
Whether that happens or not is a different question, but I believe that's what they're suggesting.
Programming is taking ambiguous specs and turning them into formal programs. It's clerical work: taking each term of the specs and each statement, ensuring that they have a single definition, and then writing that definition in a programming language. The hard work here is finding that definition and ensuring that it's singular across the specs.
Software Engineering is ensuring that programming is sustainable. Specs rarely stay static and are often full of unknowns. So you research those unknowns and try to keep the cost of changing the code (to match the new version of the specs) low. The former is where I spend the majority of my time. The latter is why I write code that's not necessary right now, or in a way that doesn't matter to the computer, so that I can be flexible in the future.
While both activities are closely related, they're not the same. Using an LLM to formalize statements is gambling. And if your statement is already formal, what you want is a DSL or a library. Using an LLM for research can help, but mostly as a stepping stone for the real research (to eliminate hallucinations).
https://en.wikipedia.org/wiki/Dune:_The_Butlerian_Jihad
We are used to thinking about software like in the article, a program that runs deterministically in an OS. Where we are headed might be more like where the LLM or AI system is the OS, and accomplishes things we want through a combination of pre-written legacy software, and perhaps able to accomplish new things on the fly.
That's what the Tech-Priests are for.
How many of us remember that VSCode is actually a browser wrapped inside a native frame?
The new standard: Web Apps. Why update 3 separate binaries for Win/Lin/Mac when you can do 1 for a web framework and call it a day?
a) The stuff output by the existing LLMs is too unwieldy even for them to handle, even if the product itself is a glorified chatbot.
b) If all software is throwaway, then the value of all software drops to, effectively, the price of an AI subscription. We'll all be drowning in a market of lemons (https://en.wikipedia.org/wiki/The_Market_for_Lemons), whilst also being producers in said market.
With such a low baseline, there is an optimistic perspective that LLMs could improve the situation. LLMs can produce excellent code when prompted or reviewed well. Unlike human employees, the model does not worry about getting a 'partially meets expectations' rating or avoid the drudgery of cleaning up other people's code.
AI certainly has the potential to make the underlying code/design a lot cleaner. We will also be working with dramatically more code, at a much higher rate of change. That alone will be a big challenge to keep sustainable.
The ones making the decision to under-invest in design are either unaware of the real costs, or are aware and are deliberately choosing that path - that's not new, and I don't expect it to change.
As a piece of meat, I look forward to charge rates of $10,000 an hour to fix the code that comes out of vibe code generation.
--
It's just as likely that people will be surprised that we used to have billions of lines of human generated code, that no LLM ever approved.
To make my comment more on-topic: why do you think this is going to be the case? What newer LLMs will be trained on?
LLMs aren’t the first thing to come along and change how people develop applications.
You had the rise of frameworks like Django, Rails, etc. Also the rise of SPAs. And also the rise of JS as a frontend+backend language.
In 3-5 years we'll have adapted to the new norm, like we have in the past.
Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech.
Companies have been enforcing technology mandates since time immemorial. In the early 2000s there were definitely a lot of mandates to move away from commercial UNIX to Linux. Lots of companies began enforcing the switch to PHP, Ruby and Python for new projects.
Good luck disliking LLM babysitting these days
I use AI tools daily (because they feel like they're helping me), but it's not exactly hard to imagine scenarios where an explosion of slop piling up, plus harm to learning from outsourcing all thinking, results in systemic damage that actually slows the pace of technological progress given enough time.
The history of new technologies tends to average into a positive trend over a long enough time scale, but that doesn't mean there aren't individual ups and downs, including WTF moments looking back at what now seems like baffling decision-making with the benefit of hindsight.
If it is, the fall out will be way worse than if AI ends up living up to (reasonable) expectations.
If it doesn’t, we are going to see over a trillion dollars of capital leave the tech sector, which I think will have worse impacts on the livelihood of tech workers than if AI ends up panning out.
This is something the naysayers need to grapple with. We’ve crossed a line where this tech needs to work simply because of the amount of money depending on that fact.
I don't think it will be worse; if AI pans out the world would be able to continue without a single programmer left. If a trillion dollars leave the tech sector, all those programmers employed outside of the tech sector will still have jobs.
The damage would come much later, well beyond the point where it could be simply pulled out and replaced without spending massive amounts of money and would also basically necessitate training an entire new generation of engineers.
Then the AI giants would start appearing vulnerable like cigarette companies in the 90s while an AI Superfund and interstate class action are being planned but Sam Altman would already be a centitrillionaire at that point so it would be someone else's problem.
Now with LLMs we are talking about millions and millions of lines of code that could be generated in a single day. The scale of the problem might not be the same at all.
I think this highlights a problem that has always existed under the surface, but it's being brought into the light by proliferation of vibeslop and openclaw and their ilk. Even in the beforetimes you could craft a 100.0% pure, correct looking github repo that had never stood the test of production. Even if you had a test suite that covers every branch and every instruction, without putting the code in production you aren't going to uncover all the things your test suite didn't--performance issues, security issues, unexpected user behavior, etc.
As an observer looking at this repo, I have no way to tell. It's got hundreds of tests, hundreds of commits, dozens of stars... how am I to know nobody has ever actually used it for anything?
I don't know how to solve this problem, but it seems like there's a pretty obvious tooling gap here. A very similar problem is something like "contributor reputation", i.e. the plague of drive-by AI generated PRs from people (or openclaws) you've never seen before. Stars and number of commits aren't good enough, we need more.
> where you fully give in to the vibes, embrace exponentials, and forget that the code even exists [...] It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
So clearly we need a term for what happens when experienced, professional software engineers use LLM tooling as part of a responsible development process, taking full advantage of their existing expertise and with a goal to produce good, reliable software.
"Agentic engineering" is a good candidate for that.
It's shifted so much for me. I used to think that I had a solemn duty to read every line and understand it, or to write all the test cases. Then I started noticing that tools like CodeRabbit or Cursor would find things in my code that I would rarely find myself.
I think right now, it's shifted my perception of my role to one where I am responsible for "tilting" the agentic coding loop; ultimately the goal is a matter of ensuring the agent learns from its mistakes, self-organizes, and embraces a spirit of Kaizen.
Btw thank you for your work on Django, last 20 years with it were life changing (I did .NET before).
I'm not checking the code since the code doesn't really matter anymore anyways - I just have the agent write passing tests for the changes or additions I make, and so even if something breaks I can just point to the tests.
Some days, the tickets are completed much faster than I expect and I don't hit my daily token expenditure goal, so I have my own custom harness that actually hooks up an agent to TikTok; basically it splits up the reel into 1-second increments and then feeds those frames to the LLM for its own consumption. I can easily burn 10m tokens a day on this, and Claude seems to enjoy it.
Personally I want to thank you Simon for putting me onto this "vibe engineering" concept, I really didn't expect an archaeology major like myself to become a real engineer but thanks to AI now I can be! Truly gatekeeping in tech is now dead.
My side project is 80% vibe code. Every now and then I look and see all the bad stuff, then I scold Codex a bit and it refactors it for me. So I do see the author's point.
I took a rock carving course in school that really enlightened me about software engineering, and it still applies today, especially to AI. You can't just decide what you want to carve, hold the chisel in just the right spot, and whack it with a hammer just perfectly so all the rock you want falls away leaving a perfect statue behind.
"I saw the angel in the marble and carved until I set him free." -Michelangelo
It's a long drawn out iterative process of making millions of tiny little chips, and letting the statue inside find its way out, in its natural form, instead of trying to impose a pre-determined form onto it.
Vibe coding is hoping your first whack of the hammer is going to make a good statue, then not even looking at the statue before shipping it!
But AI assisted conscientious coding (or agentic engineering as Simon calls it) is the opposite of that, where you chip away quickly and relentlessly, but you still have to carefully control where you chisel and what you carve away, and have an idea in your mind what you want before you start.
> But I’m not reviewing that code. And now I’ve got that feeling of guilt: if I haven’t reviewed the code, is it really responsible for me to use this in production?
Answer: it wholly depends upon what management has dictated be the goal for GenAI use at the time.
There seems to be a trend of people outside of engineering organizations thinking that the "iron triangle" of software (and really, all) engineering no longer holds. Fast, cheap, good: now we can pick all three, and there's no limit to the first one in particular. They don't see why you can't crank out 10x productivity. They've been financially incentivized to think that way, and really, they can't lose if they look at it from an "engineer headcount" standpoint. The outcomes are:
1) The GenAI-augmented engineer cranks out 10x productivity without any quality consequences down the line, and keeps them from having to pay other people
or
2) The GenAI-augmented engineer cranks out 10x productivity with quality consequences down the line, at which point the engineer has given another exhibit in the case as to why they should no longer be employed at that organization. Let the lawyers and market inertia deal with the big issues that exist beyond the 90-day fiscal reporting period.
Either way, they have a route to the destination of not paying engineers, and that's the end goal.
If you don't like that way of running a software engineering organization, well, you're not alone, but if nothing else, you could use GenAI to make working for yourself less risky.
Just piggybacking on this post since I'm early:
Would love to see your take on how the AI and Django worlds will collide.
My dad (now retired) was always super practical about stuff. He'd tell me pretty nonchalantly things like "yeah we're dealing with xyz constraint, we may have to cut a corner over here, but that's ok", when I asked him about it he gave me a little spiel that you can be thoughtful about how you do things, including when you can cut a corner and more importantly, what corners are ok to cut.
I really took that to heart - especially the "be thoughtful about the corners you cut"
If an LLM has consistently one shotted certain tasks and they are rote/mechanical - not reviewing that code is probably ok.
Are you getting lazy and not reviewing stuff that should be reviewed even if a human wrote it? That's probably not ok
I can live with some basic code that broke because it used outdated syntax somewhere (provided the code isn't part of a mission-critical application), but I can't live with it fucking up JWT signing, etc.
Rather, I just feel like I have to constantly remind myself of the impermanence of all things. Like snow, from water come to water gone.
Perhaps I put too much of my identity into being a programmer. Sure, LLMs cannot replace most of us in their current state, but what about 5 years, 10 years, ..., 50 years from now? I just cannot help but feel a sense of nihilism and existential dread.
Some might argue that we will always be needed, but I am not certain I want to be needed in such a way. Of course, no one is taking hand-coding away from me. I can hand-code all I want on my own time, but occupationally that may be difficult in the future. I have rambled enough, but all in all, I do not think I want to participate in this society anymore, and yet I do not know how to escape it either.
The job, as you have done it at least, was also not here 50 years before you started doing it.
Did you have any of the same feelings knowing that you were doing a job that has not existed in the world very long? That seems like a strange requirement for a meaningful job, that it should remain the same for 50+ years.
In truth, our world and what we do for our careers is entirely shaped by the time that we live in. Even people that ostensibly do the same thing people have done for centuries (farmer, teacher, etc) are very different today than 100 years ago.
I don't buy this argument at all. I think if we could pay $20/month to a service that would send over a junior plumber/carpenter/electrician with an encyclopedic knowledge of the craft, did the right thing the majority of the time, and we could observe and direct them, we'd all sign up for that in a heartbeat. Worst case, you have to hire an experienced, expensive person to fix the mess. Yes, I can hear everyone now, "worst case is they burn your house down." Sure, but as we're reminded _constantly_ when we read stories about AI agent catastrophes -- a human could wipe your prod database too. wHy ArE yOu HoLdInG iT tO a DiFfErEnT sTaNdArD???
The business side of the house is getting to live that scenario out right now as far as software goes. Sure you've got years of expertise that an LLM doesn't have _yet_. What makes you think it can't replace that part of your job as well?
But that's not what the author is talking about in that passage you quoted. What he's saying is that, if you can pay $20 for an AI plumber, then it stands to reason that eventually you will be able to pay $30 to a company that manages AI plumbers for you, so that you don't even have to go to the trouble of supervising the plumber. Most people will choose the $30.
The implication here is software engineer jobs are still safe despite basically free labor/material being available to do said jobs because he thinks other people would prefer to pay experienced professionals to do it right at a significantly higher cost. My point is, I think most people will take the low-stakes gamble of having the cheap AI agent do it with self-supervision[0]. He's naive in thinking people are really going to care about artisanal software built by experienced professionals in the future.
0: Even if you subscribe to the "your job will be to supervise the agents" train of thought, you're kinda glossing over the fact that it's probably gonna involve a pretty significant pay cut and the looming problem of "how do new experienced professionals get created if they don't have to/don't need to get their hands dirty"?
I don’t think this comparison quite works (or maybe I think it works and is wrong) and I think it has something to do with creativity or the initial ideation.
I would do this, but I’m a jack of all trades. I built my own diner booth in my kitchen recently. But my wife, who loves the diner booth, just doesn’t really want to get over the hump of figuring out what she might want. I think most people want to offload the mental load of figuring out where to start.
Most people aren’t just bored by coding, they’re bored or overwhelmed by the idea of thinking about software in the first place. Same with plumbing or construction, most people aren’t hiring someone to direct, they’re hiring a director.
Even I have this about some things, sometimes I choose to outsource the full stack of something to give me more space to do creativity elsewhere.
And AI generated code should be different than human code. AI has infinite memory for details. AI doesn’t need organizational patterns like classes. Potentially AI can write code that is more performant than any human.
Will it look like garbage? Sure. Will the code be more suited to the task? Yes.
The code produced will only be understandable by AI. You could use locally hosted LLMs, but it won't be as performant as AI run by big guys. And there is nothing stopping greedy companies implementing some ridiculous pattern that only their model can reasonably work with.
So what will you do in a situation where you can't understand "your" codebase and you have to make changes or fix a bug?
It will be a black box, and the code will be generated just in time by AI for each API request.
The open-weight models are nipping at the heels of the frontier models. The frontier labs have to make forward progress and keep tokens cheap in order to maintain market share.
Eventually, we'll have a Mythos-level model running on integrated hardware on every PC.
Code that is organized well and operates coherently in the first place, by an LLM or not, will be easier to iterate on, by an LLM or not.
No, just no.