It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than your fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code: the full code.
If it could write the code, I do not see why it wouldn't be deployed more effectively to write new types of operating systems and to experiment with new programming languages and paradigms. The $3B would be better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.
My gut feeling tells me that this might actually become possible at some point, but at a cost so enormous it will be impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.
In fact, I would go as far as saying that the demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to supply. The reason I think this might actually happen is that the code produced by AI will become so vast over time that even if humans need to handle and understand only 1% of it, that will require more than the 50M developers we have today.
If you’re writing simple code, it’s often a one-shot. With medium-complexity code, it gets the first 90% done in a snap. Easily faster than I could ever do it. The problem is that 90% is never the part that sucks up a bunch of time— it’s the final 10%, and in many cases for me, it’s been more hindrance than help. If I’d just taken the driver’s wheel, making heavy use of autocomplete, I’d have done better and with less frustration. Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.
Yeah that’s been my experience. The generators are shockingly good. But they don’t get it all the way, and then you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.
> you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.
SWEs who do not have (or develop) this skill, to fill in the 10% that doesn't work and fully understand the 90% that works, very, very quickly, will be plumbers in a few years if not earlier.
I’m very confused by this, as in my domain I’ve been able to nearly one-shot most coding assignments since this summer (really since Sonnet 3.5) by pointing specific models at well-specified requirements. Things like breaking a long functional or technical spec document down into individual tasks, implementation, testing, deployment and change management. Yes, it’s rather straightforward scripting, like automation on Salesforce. That work is toast, and spec-driven development will surge as people, on average, take their hands off the direct manipulation of symbols representing machine instructions.
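To give a flavour of what that decomposition step produces, here is a hedged sketch in Python. Every field name and task here is invented for illustration; the point is that the reviewed artifact becomes the task list derived from the spec, not the generated code.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One unit of work carved out of a long spec document."""
    id: str
    description: str  # the requirement, traceable back to the spec
    acceptance_tests: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)

# A model turns the spec into a list like this; humans review the
# task list rather than the code that later satisfies each entry.
tasks = [
    Task("T1", "Sync new Salesforce leads into the billing system",
         acceptance_tests=["lead created -> invoice stub exists"]),
    Task("T2", "Notify the record owner when a sync fails",
         acceptance_tests=["failed sync -> notification sent"],
         depends_on=["T1"]),
]
```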
There is a vast difference between writing glue code and engineering systems. Who will come up with the next Spring Boot, Go, Rust, io_uring, or whatever, once the profession has reduced itself entirely to pleasing short-term outcomes?
Maybe some day we'll collectively figure it out. I'm confused how people are getting so much success out of it. That hasn't been my experience. I'm not sure what I'm doing wrong.
Try using the brainstorming and execute plan loops with the superpowers plugin in Claude Code. It encapsulates the spec driven development process fairly well.
LLMs can traverse codebases and do research faster. But I can see this one backfiring badly as structural slop becomes more acceptable, since you can always throw an LLM at it and fix the bug. Eventually you'll reach a stage of stasis where your tech debt is so high, you can't pay the interest even with an LLM.
> (few people like writing unit tests)

The TDD community loves tests and finds writing code without tests more painful than writing tests before code.
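For anyone who hasn't seen the loop: in TDD you write a failing test first, then the minimal code that makes it pass, then refactor with the tests as a safety net. A tiny sketch in Python (slugify is an invented example for illustration, not something from this thread):

```python
import re

# Step 1 (red): write the tests first. They fail, because
# slugify() does not exist yet.
def test_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write the minimal implementation that makes them pass.
def slugify(text: str) -> str:
    # Keep lowercase alphanumeric runs, join them with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3 (refactor): clean up, rerun the tests, repeat.
```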
Is your point that the TDD community is a minority?
> It does a better job at writing unit tests (not perfect) than your fellow human programmer
I see a lot of very confused tests out of Cursor etc. that neither understand nor communicate intent. Far below the minimum for a decent human programmer.
I see tests as more of a test of the programmer's understanding of their project than anything. If you deeply understand the project requirements, API surface, failure modes, etc. you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.
AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
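To make that split concrete, here's a minimal pytest sketch (every name in it is hypothetical): the human supplies the cases that must exist, and the model is only trusted with the mock and parametrization boilerplate.

```python
import pytest
from unittest.mock import Mock

# The fixture/mock scaffolding is the part an LLM writes well.
@pytest.fixture
def payment_gateway():
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}
    return gateway

# The cases come from a human who understands the failure modes.
@pytest.mark.parametrize(
    ("amount", "expected_ok"),
    [
        (10_00, True),   # normal charge
        (0, False),      # zero amount must be rejected
        (-5_00, False),  # negative amounts are a classic regression
    ],
)
def test_charge_validation(payment_gateway, amount, expected_ok):
    assert charge_customer(payment_gateway, amount) is expected_ok

def charge_customer(gateway, amount: int) -> bool:
    """Toy implementation so the sketch is self-contained."""
    if amount <= 0:
        return False
    return gateway.charge(amount)["status"] == "ok"
```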
> It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than your fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code: the full code.
Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?
> It is fascinating that it can bootstrap moderately complex projects from a single shot.
Similar to "git clone [email protected]"? There is nothing fascinating about creating something that has existed around the training set. It is fascinating that the AI can make some variations from the original content though.
> If it could write the code, I do not see why it wouldn't be deployed more effectively to write new types of operating systems and to experiment with new programming languages and paradigms.
This is where all the "vibe-coders" disappear. LLMs can write code fast, but so does copy-paste. Most of the "vibe-coded" stuff I see on the Internet is non-functional slop that is super unoptimized and has open Supabase databases.
To be clear, I am not against LLMs or embracing new technologies. I also don't have this idea that we have some kind of "craft" when we have been replacing other people for the last couple decades.
I've been building a game (fully vibe-coded; the rule is that I don't write or read any lines of code) and it has reached a stage where any LLM is unable to make any change without fully breaking it (for the curious: https://qpingpong.codeinput.com). The end result is quite impressive, but it is far from replacing anyone who has been doing serious programming anytime soon.
It never was about code. When was the last time you said to yourself, "I feel like grabbing some code right now!"?
Or the last time you struggled to accomplish something and then screamed, "if only I had some code!"?
This sounds very, very stupid, doesn't it?
English is not my native language, but I find this wording very annoying.
People and businesses suffer from having needs unattended or requirements unsatisfied.
Latency too high? Have some code!
Database locked with long running query? Here, take some code!
You want to price exotic financial assets to calculate your risks? Have you tried to generate some code??
This is so strange.
I honestly do not think in terms of "code".
If your kid asked you about your job, would you say "I use the computer"?
It doesn’t need to do all of a job to reduce total jobs in an area. Remove the programming part and you can reduce the number of people needed for the same output, and/or bring people who can’t program but can do the other parts into the fold.
> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?
Because believing you can replace some or even most engineers still leaves room for hiring the best. It increases the value of the best. And that is only considering right now: they could believe they'll have tools in two years that replace many more engineers, yet still hire them today.
> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.
These are all things that LLMs are doing with varying degrees of success, though. They’re reviewing code; they can push back on certain approaches (I know because I’ve had this happen with 5.1); they absolutely can decide which parts of a codebase to change.
And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?
> And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?
I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.
I see AI turning many other folks’ thoughts into garbage, because it so easily heads in the wrong direction and they don’t understand how to build self-checking into their thinking.
> It’s like saying calculators replaced accountants. Calculators automated arithmetic, but arithmetic was never the job. The job was understanding financials, advising clients, making judgment calls, etc. The calculator just made accountants faster at the mechanical part.
Mechanical and later electrical calculators replaced human calculators. Accountants switched from having to delegate computation to owning a calculator.
I used AI to write some Python code, plus some Bazel rules to generate more Python code around it, for a new workflow system I wanted to prototype. It just did it. It would make mistakes, but since I had it running tests, it would fix the code after each test run.
The big issue is that I didn’t know the APIs very well, and I’m not much of a Python programmer. I could have done this by hand in around 5 days with a ramp up to get started, but it took less than a day just to tell the AI what I wanted and then to iterate with more features.
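The run-the-tests-and-fix loop being described is simple enough to sketch. A minimal, hypothetical version in Python; ask_model is a stand-in for whatever model API or CLI was actually used, not a real library call:

```python
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: send the prompt to your LLM of choice
    and get back a full replacement for workflow.py."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    # Run the suite and capture the failure output to feed back.
    result = subprocess.run(
        ["pytest", "-x", "tests/"], capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout + result.stderr

def iterate(spec: str, max_rounds: int = 5) -> bool:
    prompt = f"Write workflow.py implementing this spec:\n{spec}"
    for _ in range(max_rounds):
        code = ask_model(prompt)
        with open("workflow.py", "w") as f:
            f.write(code)
        passed, log = run_tests()
        if passed:
            return True
        # Feed the failures back so the model fixes its own mistakes.
        prompt = f"The tests failed:\n{log}\nFix this workflow.py:\n{code}"
    return False
```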
This is my experience as well. It's been great to make an application that's small in scope, doesn't require access to my main project repo and is basically a nice to have value add for the client.
I was already quite adept in the language and frameworks involved, and the risk was very small, so it wasn't a big time sink to review the application PRs. Had I not been, it would have sucked.
For me, the lesson learned with agentic coding is to adjust my expectations relative to the online rhetoric: it can sometimes be useful for small, isolated one-offs.
Also it's once in a blue moon I can think of a program suitable for agentic coding so I wouldn't ever consider purchasing a personal license.
Being able to properly deal with scale and security. Being able to be confident that if I am capturing PII data in my application, it's as secure as it can be, as secure as if a principal developer had put the architecture together, etc.
Mass-market SaaS will generally just use other products to handle this stuff. And if there does happen to be a leak, they just say sorry and move on; there are very few consequences for security failures.
It's decent at explaining my code back to me, so I can make sure my intent is visible within code/comments/tracing messages. Not too bad at writing test cases either. I still write my code.
Software engineering will get automated; all of the issues with the current models will get worked out in time. People can beg and wish that it's not true, but it is. We have a few more good years left, and then this career is over.
I think this is a not-insane prediction, but much like truck driving and radiology, the timeline is likely not that short.
Waymo has been about to replace the need for human drivers for more than a decade and is just starting to get there in some places, but has had basically no impact on demand yet, and that is a task with much less skill expression.
So far there are mainly horrible "AI-coded" websites that look like they were produced by the Bootstrap framework in 2014. They use 100% CPU in Firefox despite having no useful functionality.
It is the McDonald's version of programming, except that McDonald's does not steal the food it serves.
Reasons why the attempted Cursor acquisition might not be about replicating Cursor (with or without human help):
shutting down competition; market share; understanding user behavior; training data
There are many different ways to write code. The more code there is, the more possible versions of the system could have existed to solve that same set of problems; each with different tradeoffs.
The challenge is writing code in such a way that you end up with a system which solves all the problems it needs to solve in an efficient and intuitive way.
The difference between software engineering and programming is that software engineering is more like a discovery process; you are considering a lot of different requirements and constraints and trying to discover an optimal solution for now and for the foreseeable future... Programming is just churning out code without much regard for how everything fits together. There is little to no planning involved.
I remember at university, one of my math lecturers once said "Software engineering? They're not software engineers, they're programmers."
This is so wrong. IMO, software engineering is the essence of engineering. The complexity is insane and the rules of how to approach problems need to be adapted to different situations. A solution which might be optimal in one situation may be completely inappropriate for a slightly different situation due to a large number of reasons.
When I worked on electronics engineering team projects at university, everyone was saying that writing the microcontroller software was the hardest part. It's the part most teams struggled with, more so than PCB design... Yet software engineers are looked down upon as members of an inferior discipline... Often coerced into accepting the lesser title of 'developer'.
I'm certain there will be AIs which can design optimal PCBs, optimal buildings, optimal mechanical parts, long before we have AI which can design optimal software systems.
Whenever someone gets into Important Reasons why Software Engineering is Different from Programming, I hear a bunch of things that should just be considered “competent programming”.
> Programming is just churning out code without much regard for how everything fits together
What you’re describing is a beginner, not a programmer
> There is little to no planning involved
> trying to discover an optimal solution for now and for the foreseeable future
I spend so much time fighting against plans that attempt to take too much into account and are unrealistic about how little is known before implementation. If the most Important Difference is that software engineers like planning, does that mean that being SE makes you less effective?
I think it’s undeniable as an engineering profession when performance and algorithm choice come into play, that or safe control of embedded or industrial devices. There are other contexts where its engineering too, it’s just engineering in the sense that someone designing commodity headphones is doing electronics engineering.
When we talk about code, you think it's about code, but it's communication _about solving problems_ which happens to use code as a language.
If you don't understand that language, code becomes a mystery, and you don't understand what the problem is we're trying to solve.
It becomes this entity, "the code". A fantasy.
Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.
> fill in the 10% that doesn't work and fully understand the 90% that works

This is one of the harder parts of the job, IMHO. What is missing from writing "the code" that is not required for bug fixes?
It's purely for myself, no one else. I think this is what AI can do at the moment; mass-market SaaS vibe-coding will be harder. Happy to be proven wrong.
Or a more meta point that “LLMs are capable of a lot”?
We need regulations to prevent such large-scale abuse of economic goods, especially if the final output is mediocre.
Thus, the root cause of the meetings' existence is mostly BS. That's why you have BS meetings. The fastest way to drive AI adoption, then, is by thinning out org layers.