> There is so much fun, beauty and pleasure in writing code by hand. You can still handcraft code. Just don’t expect this to be your job. This is your passion.
Can people keep a good mental model of the repo without writing code? I always feel like I lose my thoughts if I let an LLM do it for me. I totally get that LLMs can do stuff faster and (given the right context) sometimes keep better track of it than humans can.
Even musicians had to go digital, but that doesn't mean people stopped playing raw instruments. Will company culture shift towards one senior who holds the context plus seven LLMs working for them? Is that where we're heading?
http://literateprogramming.com/

Literate programming affords a PDF w/ a ToC, indices, sidebars, and other navigational aids, all hyperlinked so that moving through the code and its documentation is quick and fluid.

Then, when I arrive at the section of code that needs to be updated, the documentation and reasoning about its current state are right there.

Not sure if this scales up to multiple developers, though...
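For anyone who hasn't tried the workflow, here is a minimal sketch of what a literate source can look like, assuming the classic noweb tools; the file name, section text, chunk name, and embedded Python are all invented for illustration:

```
% report.nw -- an illustrative literate source (noweb + LaTeX syntax)
\section{Computing report totals}
We sum line items before any formatting, because the formatting
layer should never have to handle raw floats.

<<compute totals>>=
def compute_totals(items):
    # Sum of price * quantity across all line items.
    return sum(item.price * item.qty for item in items)
@
```

Weaving (`noweave -index report.nw > report.tex`) produces the hyperlinked, indexed documentation; tangling (`notangle -R"compute totals" report.nw > totals.py`) extracts the runnable code.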
> Can people keep a good mental model of the repo without writing code?
This is the age-old problem of legacy codebases. The intentions and understanding of the original authors (the "theory of the program"[0]) are incredibly valuable, and codebases have always started to decline and gain unnecessary complexity when those intentions are lost.
Now every codebase is a legacy codebase. It remains to be seen if AI will be better at understanding the intentions behind another AI's creation than humans are at understanding the work of other humans.
Anyway, reading and correcting AI code is a huge part of my job now. I hate it, but I accept it. I have to read every line of code to catch crazy things like a function replicated from a library that the project already uses, randomly added to the end of a code file. Errors that get swallowed. Tautological tests. "You're right!" says the AI, over and over again. And in the end I'm responsible as the author, the person who supposedly understands it, even though I don't have the advantage of having written it.

[0] Programming as Theory Building, Peter Naur. https://pages.cs.wisc.edu/~remzi/Naur.pdf
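For readers who haven't run into these patterns, two hedged Python sketches of the failure modes mentioned above (the function and file names are invented for illustration, not taken from any real project):

```python
# 1. A swallowed error: the exception disappears, so callers get a
#    "successful" None instead of a failure they can react to.
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        return None  # hides missing files, bad permissions, everything


# 2. A tautological test: it can never fail, because the "expected"
#    value comes from the very code under test.
def test_load_config_roundtrip():
    assert load_config("app.cfg") == load_config("app.cfg")
```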
If AI eventually gets that good, why would we even need to supply context? That's the direction we're heading.
Digitization of music has a wall: there is an aspect of intelligence that simply digitizing things can't replace. AI is rapidly climbing over that wall.
Traditional engineering involves more than talking to people, managing teams, and weighing trade-offs.
Where I live, the engineering society now licenses software engineers and enforces "Software Engineer" as a protected term. You can't just call yourself a software engineer: you need educational credentials, you have to pass exams and be licensed, you have to keep up with continuing-education requirements, and you have to be insured.

It boils down to the same thing, people and teams, but the difference is liability.
Personally I think we're better off pair programming with actual people than with GenAI chatbots. We get to teach each other and learn together, which improves our skills. We actually need to socialize and be around people to remain healthy. You miss out on all of these benefits with chatbots.
And there's growing evidence that you don't learn as much when using them [0].
Consider when using them is appropriate, and maybe don't rely on them for everything.

[0] https://arxiv.org/abs/2506.08872
> ... "Software Engineer," as a protected term. You can't just call yourself a software engineer.
In my irrelevant opinion, this is good. To me at least, the word engineer represents someone with a big, heavy responsibility on their hands.
I never liked being called an engineer, and I only have it on my resume because that's the keyword recruiters search for on LinkedIn nowadays. One reason is that I don't have a formal education. The other is that, in almost 15 years of experience, I've witnessed very few occasions of software receiving proper care to the extent that I could call it "engineering".
Advocates claim this helps, but they imply that programmers should now be constantly evaluating a dozen or more assistants in various ways and honing prompting strategies. Does that not take time and effort? Wouldn't it make more sense to let the technology mature first?

And then there is the claim that it makes things easier and faster, but what is actually being coded? Has anything remarkable ever come out of vibe coding? It seems like nothing but a lot of unremarkable applications that push no boundaries.

Coming up with genuinely new ideas often means generating variations and then honing the results. When iterating like that, the speed of coding is already a minor input; what really matters is the judgment call of knowing what kinds of things to try and when to abandon approaches that aren't working out.

And what about all the negative implications we are still trying to figure out? Just because vibe-coded software hasn't yet triggered intellectual-property lawsuits doesn't mean it won't; there is already plenty of that going on in the LLM space. And what about the damage data centers are doing to the environment, to the grid, and to the markets for processors and memory? When I code things by hand, the amount of liability and wreckage I generate is minimal.