Ask HN: How are you automating your coding work?

With the rise of vibe coding, I am interested in hearing some creative ways people are automating their coding work.

76 points | by manthangupta109 1 day ago

32 comments

  • cadamsdotcom 23 hours ago
    The biggest principle is codification. Codify everything.

    For instance, this skill of web development: https://raw.githubusercontent.com/vercel-labs/web-interface-...

    That’s too much for a model to carry in its context while it’s trying to do actual work.

    Far better is to give that skill.md to a model and have it produce several hundred lines of code with a shebang at the top. Now you haven’t got a skill, you’ve got a script. And it’s a script the model can run any time to check its work, without knowing what the script does, how, or why - it just sees the errors. Now all your principles of web dev can be checked across your codebase in a few hundred milliseconds while burning zero tokens.
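
    A minimal sketch of what such a script might look like (the rule and the paths here are hypothetical; the point is that the model only ever sees the error lines it prints):

       #!/usr/bin/env python3
       # Hypothetical check: flag <img> tags that are missing alt text.
       import pathlib
       import re
       import sys

       errors = []
       for path in pathlib.Path("src").rglob("*.html"):
           for lineno, line in enumerate(path.read_text().splitlines(), 1):
               for tag in re.findall(r"<img\b[^>]*>", line):
                   if "alt=" not in tag:
                       errors.append(f"{path}:{lineno}: <img> missing alt attribute")

       print("\n".join(errors))
       sys.exit(1 if errors else 0)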

    TDD is codification too: codifying in executable form the precise way you want your logic to work. Enforce a 10ms timeout on every unit test and as a side effect your model won’t be able to introduce I/O or anything else that prevents parallel, randomized execution of your test suite. It’s awesome to be able to run ALL the tests hundreds of times per day.
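
    One hedged way to wire that in, assuming pytest plus the pytest-timeout plugin (the 10ms figure is the one above; a real suite would set it globally in config rather than per test):

       import pytest

       @pytest.mark.timeout(0.01)  # 10 ms budget: any test that sneaks in I/O or sleeps will fail
       def test_formats_price():
           assert f"{1999 / 100:.2f}" == "19.99"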

    Constantly checking your UI matches your design system? Have the model write a script that looks at your frontend codebase and refuses to let the model commit anything that doesn’t match the design system.
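
    One way to make the "refuses to let the model commit" part stick is an ordinary git pre-commit hook that shells out to such a script (a sketch; the script name is made up):

       #!/usr/bin/env python3
       # .git/hooks/pre-commit -- git aborts the commit on a nonzero exit code.
       import subprocess
       import sys

       result = subprocess.run([sys.executable, "scripts/check_design_system.py"])
       if result.returncode != 0:
           print("Design-system check failed; commit refused.")
       sys.exit(result.returncode)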

    Codification is an insanely powerful thing to build into your mindset.

    • wrs 23 hours ago
      As one often finds with “effective LLM usage” advice, all of those things would help humans on the team as well! As would other advice like keeping the architecture docs up to date, writing down important design decisions with rationale, breaking big features down into steps, etc.

      Maybe one should just search for advice from the last 20 years on how to make a human development team more effective, and do that stuff.

      It’s funny how this advice has always been around, but we needed to invent this new kind of idiot savant developer to get the human developers to want to do it…

      • g-b-r 19 hours ago
        So depressing that this got downvoted
        • fclairamb 14 hours ago
          Upvoted it. It's very true, especially regarding docs (short, effective, with examples), proper CLI commands, and... testability.
    • ozim 22 hours ago
      Watch out for wasted context: the more that can be checked by existing linting/testing tooling and returned as an exact message to the model/agent, the better.
    • jason_s 19 hours ago
      > have it produce several hundred lines of code with a shebang at the top.

      Am I the only one who worries about agents creating malicious/unsafe code to execute?

      • cadamsdotcom 19 hours ago
        My friend, let me introduce you to a very simple technique, it's called... reading the code.
        • adrianwaj 18 hours ago
          If all you're doing is checking for blackhatting, shouldn't you run something like a Dr.Web or McAfee for code... if it existed?
  • simonw 1 day ago
    One of my biggest unlocks has been embracing Claude Code for web - the cloud version - and making sure my projects are set up to work with it.

    I mainly work in Python, and I've been ensuring that all of my projects have a test suite which runs cleanly with "uv run pytest" - using a dev dependency group to ensure the right dependencies are installed.

    This means I can run Claude Code against any of my repos and tell it "run 'uv run pytest', then implement ..." - which is a shortcut for having it use TDD and write tests for the code it's building, which is essential for having coding agents produce working code that they've tested before they commit.

    Once this is working well I can drop ideas directly into the Claude app on my iPhone and get 80% of the implementation of the idea done by the time I get back to a laptop to finish it off.

    I wrote a bit about "uv run pytest" and dependency groups here: https://til.simonwillison.net/uv/dependency-groups
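
    For reference, the relevant bit of a pyproject.toml set up that way is tiny (a sketch; a real dev group usually carries more than pytest):

       [project]
       name = "my-project"
       version = "0.1.0"

       [dependency-groups]
       dev = ["pytest"]

    With that in place, "uv run pytest" installs the dev group automatically before running the tests.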

    • athrowaway3z 23 hours ago
      Half my visits to HN are to check out a comment that explains the right uv inline dep syntax

         #!/usr/bin/env -S uv run --script
         # /// script
         # dependencies = [
         #   "requests<3",
         #   "rich",
         # ]
         # ///
         import requests, rich
         # ... script goes here

      so I can tell Claude to write a self-contained script it can later use.
      • wrs 23 hours ago
        Huh. I just tell Claude “write a self contained uv script” and it does that fine by itself.
    • senko 3 hours ago
      > The dev dependency group is a special case for uv run - it will always install those dependencies as well

      Huh, this is...unexpected. TIL, indeed!

    • jarrettcoggin 22 hours ago
      Are you doing this for non-library projects as well? I'm trying to wrap my head around the idea of packaging a microservice using this mechanism (I haven't heard of this before now).
      • simonw 19 hours ago
        I have templates for Datasette plugins, Python libraries, and Python CLI tools.

        I often have Claude Code build little HTML+JavaScript tools which are simple enough that I don't need a template ("Vanilla JavaScript, no dependencies or build script, test it with Playwright").

        I've built a couple of Go apps, which work great because Go has such strong default patterns and a full-featured standard library, so again no need for a template.

    • throwup238 1 day ago
      > and making sure my projects are setup to work with it.

      MESA drivers are a godsend in Claude Code for web if working on non-web GUIs. It can take screenshots and otherwise interact with them.

      • corysama 1 day ago
        What does "MESA drivers" refer to here? I'm guessing it's not GPU drivers from https://mesa3d.org/
        • throwup238 23 hours ago
          No, those don't work in most cloud VMs, but MESA provides llvmpipe/softpipe implementations for Vulkan, OpenGL, etc. They're software renderers, so relatively slow, but they work in headless VMs like Claude Code for web environments.
    • sanderjd 23 hours ago
      I've been trying to use workflows like this, but I quickly run into token limits.

      I'm curious, for those of you who work like this, what level of subscription do you have?

      • simonw 23 hours ago
        I'm on the $200/month Max plan.
  • mjr00 1 day ago
    If I know what I want to code and it's a purely mechanical exercise to code it, I'll just tell Claude what to do and it does it. Pretty neat.

    When I don't know what I want to do, I read existing code, think about it, and figure it out. Sometimes I'll sketch out ideas by writing code, then when I have something I like I'll get Claude to take my sketch as an example and have it go forward.

    The big mistake I see people make is not knowing when to quit. Even with Opus 4.5 it still does weird things, and I've seen people end up arguing with Claude or trying to prompt engineer their way out of things when it would have been maybe 30 seconds of work to fix things manually. It's like people at shopping malls who spend 15 minutes driving in the parking lot to find a spot close to the door when they could have parked in the first spot they saw and walked to the door in less than a minute.

    And as always, every line of code was written by me even if it wasn't written by me. I'm responsible for it, so I review all of it. If I wouldn't have written it on my own without AI assistance I don't commit it.

    • mountain_peak 23 hours ago
      > The big mistake I see people make is not knowing when to quit.

      This is sage advice. I spent the better part of a day trying to steer Gemini into correcting an inconsistency when I likely could have solved it in under an hour. I think persevering with Gemini was due to a number of factors, including novelty, stubbornness, and (unfortunately) not knowing in detail what Gemini had written up to that point.

      I eventually studied the resulting code, which ended up having a number of nested 'hacks' and required refactoring - more time wasted, but still much faster overall.

    • fclairamb 14 hours ago
      And it's probably better in the long term for your cognitive functions and your mental health.
    • sanderjd 22 hours ago
      Yep, I've always tried to live by the two classic XKCDs about automating things: https://xkcd.com/1319/ and https://xkcd.com/1205/.

      I think these calculations are very different now that we have these very good LLMs, but they aren't irrelevant.

    • randomNumber7 22 hours ago
      If you see a good parking lot, look for a better one. - Mikhail Tal
  • jbreckmckye 23 hours ago
    I actually kind of do the opposite to most developers.

    Instead of having it write the code, I try to use it like a pair reviewer, critiquing as I go.

    I ask it questions like "is it safe to pass null here", "can this function panic?", etc.

    Or I'll ask it for opinions when I second guess my design choices. Sometimes I just want an authoritative answer to tell me my instincts are right.

    So it becomes more like an extra smart IDE.

    Actually writing code shouldn't be that mechanical. If it is, that may signify a lack of good abstractions. And some mechanical code is actually quite satisfying to write anyway.

    • wreath 22 hours ago
      I actually started doing this for side projects that I use as a vehicle for learning rather than to solve a problem, and it's been great: 1) I don't feel like I just commissioned someone to do something and then half-assed checking it, and 2) I actually do learn about new stuff as well, even though it sometimes distracts me from the goal (but that's on me).
    • randomNumber7 22 hours ago
      Idk what models you are using. I'm pretty sure I tried the best available, and arguing with them about pointers in non-trivial cases seems like a recipe for disaster.

      > Sometimes I just want an authoritative answer to tell me my instincts are right.

      You realize that LLM answers highly depend on how you frame your question?

      • Exoristos 22 hours ago
        They also depend on several categories of chance circumstance.
      • jbreckmckye 7 hours ago
        > You realize that LLM answers highly depend on how you frame your question?

        Of course I do. I'm not a moron.

        In these cases, I already know the most likely answer to my question. The LLM just helps me reduce my self doubt

  • growthloops 1 day ago
    ML Engineer here. For coding, I mostly use Cursor/Claude Code as a fast pair. I'll detail what I want at a high level, let it draft, then I incrementally make changes.

    Where I've automated more aggressively is everywhere around the code. My main challenge was running experiments repeatedly across different systems and keeping track of the various models I ran and their metrics, etc. I started using Skyportal.ai as an ops-side agent. For me, it's mostly: take the training code I just iterated on, automatically install and configure the system with the right ML stack, run experiments via prompt, and see my model metrics from there.

  • coffee_am 12 hours ago
    On the other side of the equation I've been spending much more time on code-review on an open source project I maintain, because developers are much more productive and I still code-review at the same speed.

    The real issue is that I can't trust the AI generated code, or trust the AI to code-review for me. Some repeated issues I see:

    - In my experience the AI doesn't integrate well with the code that is already there: it often rewrites functionality and tends not to adhere to the project's conventions, instead using what it was trained on.

    - The AI often lacks depth on more complex issues. And because it doesn't see the broader implications of changes, it often doesn't write the tests that would cover them. Developers who wrote the PRs accept the AI tests without much investigation into the code base. Since the changes pass the (also insufficient) tests, they send the PR to code review.

    - With AI I think (?) I'm more often the one carefully deep-diving into the project and redesigning the generated code during code review. In a way it's an indirect re-prompting.

    I'm very happy with the increased PRs: they push the project forward, with great ideas of what to implement, and I'm very happy about the AI-driven productivity increase. Also, with AI, developers are bolder in their contributions.

    But this doesn't scale -- or I'll spend all my time code-reviewing :) I hope the AIs get better quickly.

    • bird0861 1 hour ago
      With respect to the first issue you raise, I would perhaps start including prompts in comments. This is a little sneaky, sure, and maybe explicitly putting them in a markdown file would be better, but there's the risk that the markdown won't be loaded. Perhaps it might be possible to inject the file into context via a comment; I've never tried that, though, and I doubt every assistant will act in a consistent way. The comment method is probably the best bet IMO.

      Forgive me because this is a bit of a tangential rant on the second issue, but Gemini Pro 3 was absolutely heinous about this so I cancelled my sub. I'm completely puzzled what it's supposed to be good for.

      To your third issue, you should maybe consider building a dataset from those interactions... you might be able to train a LoRA on them and use it as a first pass before you lift a finger to scroll through a PR.

      I think a really big issue is that there is a lack of consistency in the use of AI for SWE. There are a lot of models and poorly designed agents/assistants with really unforgivable performance and people just blindly using them without caring about the outputs amounts to something that is kind of Denial-of-Service-y and I keep seeing this issue be raised over and over again.

      At the risk of sounding elitist, the world might be a better place for project maintainers when the free money stops rolling into the frontier labs to offer anyone and everyone free use of the models...never give a baby powertools and so on.

    • dragonwriter 1 hour ago
      That basically matches, in broad outline, what I see from AI use in an enterprise environment; absent a radical change, I think the near-term impact of AI on software development is going to be to increase velocity while shifting the workload to less (but not zero) code writing and more code reviewing, plus knowing when you need to prompt-and-review vs. hand-code.
  • al_borland 1 day ago
    It’s usually just a slightly faster web search. When I try to have it do more, I end up spinning my wheels and then doing a web search.

    I’ll sometimes have it help read really long error messages as well.

    I got it to help me fix a reported security vulnerability, but it was a long road and I had to constantly work to keep it from going off the rails and adding insane amounts of complexity and extra code. It likely would have been faster for me to read up on the specific vulnerability, take a walk, and come back to my desk to write something up.

  • scuff3d 23 hours ago
    I find it most useful for getting up to speed on new libraries quickly, and for bouncing design ideas. I'll lay out what my goals are and the approaches I'm considering, and ask it to poke holes in them or to point out issues or things to keep in mind. Found it shockingly helpful in covering my blind spots
  • onlyrealcuzzo 1 day ago
    For my real work? It has not been helpful so far.

    For side projects? It's been a 10x+ multiplier.

    • onlyrealcuzzo 22 hours ago
      I will add that it seems:

      1. The context window size vs. output accuracy problem seems genuinely hard

      2. The vast majority of resources are being poured into driving down costs, not increasing output accuracy (and definitely not on gigantic context windows)

      I will also add:

      * I'm astounded at how terrible LLMs are at writing assembly in my experience. This seems like something they should excel at. So I'm genuinely confused on that piece.

    • jason_s 19 hours ago
      curious: which field are you in professionally?
  • onel 8 hours ago
    The biggest unlock for me happened because of Claude Code. It has allowed me and my team to ship features much faster. I use it to work on the main product, but also to build a lot of in-house tools. We have observability and testing tools for our AI agent that help us improve the main product. With Claude we can iterate much faster on them and add the features that we specifically need, rather than using off-the-shelf products. These are not user-facing products/code bases, so maybe the quality bar is not that high. But without Claude we would have had to take resources away from working on the main product to build them.

    Testing is not yet the main focus, so we haven't looked into automating that, but we will in the future. We have automated most of our documentation updates, though. After releases or big merges we use askmanu to automatically update/create the docs. These are internal docs, but super useful for us and Claude.

    Disclaimer: I'm the founder of askmanu

  • anditherobot 23 hours ago
    We're overlooking a critical metric in AI-assisted development: Token and Context Window to Utility Ratio.

    AI coding tools are burning massive token budgets on boilerplate: thousands of tokens just to render simple interfaces.

    Consider the token cost of "Hello World":

    - Tkinter: `import tkinter as tk; root = tk.Tk(); tk.Button(root, text="Hello").pack(); root.mainloop()`

    - React: 500MB of node_modules and dependencies

    Right now context window token limits are finite and costly. What do you think?

    My prediction is that tooling that manages token and context efficiency will become essential.

    • tomduncalf 23 hours ago
      But the model doesn't need to read the node_modules to write a React app, it just needs to write the React code (which it is heavily post-trained to be able to use). So the fair counter example is like:

      function Hello() { return <button>Hello</button> }

      • anditherobot 22 hours ago
        Fair challenge to the idea. But what I am saying is that every line of boilerplate, every import statement, every configuration file consumes precious tokens.

        The more code, the more surface area the LLM needs to cover before understanding or implementing correctly.

        Right now the solution to expensive token limits is the most token-efficient technology. Let's reframe it better: was React made to help humans organize code better, or machines?

        Is the high code-to-functionality ratio really necessary, when 3 lines that do real work beat 50 lines of setup?

        • lucid-dev 12 hours ago
          But why are you considering tokens so precious?

          At current prices you can pretty much get away with murder, even for the most expensive models out there. You know, $14/million output tokens: 10k output tokens is 14 cents. Which is ~40k characters, or whatever.

          The way to use LLMs for development is to use the API.

  • margorczynski 23 hours ago
    At work unfortunately (?) we don't use any AI but there is movement to introduce it in some form (it is a heavily regulated area so it won't be YOLO coding using an agent for sure).

    But my side projects, which I kinda abandoned a long time ago, are getting a second life, and it is really fun to just direct the agent instead of slowly re-acquiring all of the knowledge and wasting time typing all the stuff into the computer.

  • rcarmo 12 hours ago
    Well, I built https://github.com/rcarmo/agentbox - which I run in a VM, with a dedicated container per project. Right now most of them are running Copilot CLI, others Mistral Vibe or Toad, and I have just built a small web front end based on https://github.com/rcarmo/textual-webterm that lets me see a dashboard of all running tmux sessions inside the containers, click through, and prompt the agents every hour or so.

    Looks like this: https://mastodon.social/@rcarmo/115937685095982965

    SyncThing syncs their workspaces to my desktop/laptop, I make small adjustments (I don’t do stupid wasteful things like the Ralph approach, I prefer clear SPEC documents and TODO checklists, plus extensive testing and switching models for doing code audits on each other), my changes sync back, etc.

    I’m considering calling it “million monkeys”, really.

  • pbohun 1 day ago
    I'm not. I'm learning a little bit each day, making my brain better and myself more productive as I go.
  • bravura 1 day ago
    We use beads for everything. We label them as "human-spec needed" if they are not ready to implement. We label them as "qa-needed" if they cannot be verified through automatic tests.

    I wrote beads-skills for Claude that I'll release soon to enforce this process.

    2026 will be the year of agent orchestration for those of us who are frustrated having 10 different agents to check on constantly.

    gastown is cool but too opinionated.

    I'm excited about this promising new project: https://github.com/jzila/canopy

    We're writing an internal tool to help with planning, which most people don't think is a problem but I think is a serious problem. Most plans are either too long and/or you end up repeating yourself.

    • Yodel0914 23 hours ago
      Out of interest, what sort of products/systems are you building?
  • theturtletalks 20 hours ago
    Instead of building features from scratch, I basically rebuilt my open-source alternatives project[0] from five years ago to track features right in the code itself. So if you spot an open-source project with a feature you want in your app, you can create a "skill" that points exactly to where it lives: specific files, functions, modules, plus docs and notes. This turns OSS into a modular cookbook you can pull from across stacks.

    AI excels at finding the "seams," those spots where a feature connects to the underlying tech stack, and figuring out how the feature is really implemented. You might think just asking Claude or Cursor to grab a feature from a repo works, but in practice they often miss pieces because key code can be scattered in unexpected places. Our skills fix that by giving structured, complete guides so the AI ports it accurately. For example, if an e-commerce platform has payments built in and you need payments in your software, you can reference the exact implementation and adapt it reliably.

    [0] https://github.com/junaid33/opensource.builders

  • epolanski 1 day ago
    I haven't automated anything to be honest, but LLMs are invaluable in connecting dots in repositories or exploring dependencies' source code.

    The first saves me days of work per month by sparing me endless pages of paper notes trying to figure out why things work in a certain way in legacy work codebases. The second spares me from having to dig too much into partially outdated or lacking documentation, or having to melt my brain understanding the architecture of every different dependency.

    So I just put the major deps of my projects in a `_vendor` directory that contains the source code of the dependencies, and if I have doubts the LLMs dig into it and into their tests to shed light.

    What I haven't seen anybody accomplish yet is producing quality software by having AI write it. I'm not saying they can't help here, but the bottleneck is still reviewing, and as soon as you get sloppy, codebase quality goes south, and the product quality follows soon after.

  • jackfranklyn 1 day ago
    Claude Code has genuinely changed my workflow. Not in a "write the whole thing for me" way - more like having a really fast pair who knows the codebase.

    The pattern that works best for me: I describe what I want at a high level, let it scaffold, then I read through and course-correct. The reading step is crucial. Blindly accepting generates technical debt faster than you can imagine.

    Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions. Things that would take me 45 minutes of context-switching I can knock out in 5.

    The automation piece I've landed on: I let it handle file operations and running commands, but I stay in the loop on architecture decisions. The moment you start rubber-stamping those, you end up with a codebase you don't understand.

    • lordnacho 1 day ago
      I have similar observations. The time saved is things like going to some library I wrote to find the exact order of parameters, or looking up some API on the internet and adjusting my code to it. Inevitably if I did that the old way, I would screw up something trivial and get annoyed.

      I rarely let it run for over 10 minutes unattended, but the benefits are not just pure time.

      Being able to change the code without getting bogged down allows you to try more things. If I have to wait half an hour between iterations, I'm going to run into bedtime quite fast.

      On top of this, I'm finding that the thing that takes the deepest attention is often, amazingly, trivial things. Fiddling with a regex takes attention, but it doesn't often decide the success of the project.

      By contrast, the actual work, which is making technical decisions, is something I can do without concentrating in the same way. It's strange that the higher value thing feels less stressful.

      Put these together and I'm more productive. I can string together a bunch of little things, and not have to be at my sharpest. Work can be scheduled for whenever, which means days are more flexible. More gets done, with less attention.

    • sjm-lbm 1 day ago
      Pretty similar here. Another thing I keep thinking of is a phrase pilots use when flying airplanes with FMSes and autopilot: "never fly your airplane to a place you haven't already been to in your mind" - that is, don't ever just sit back and let the automation work; stay a step ahead of the automation and drop down to less automation when you aren't certain that it is doing the right thing.

      When you send Claude Code something and already have an idea for what an acceptable solution looks like, you're a massive step ahead when it's time to read the diff and decide what you think of it. This does mean that every so often my productivity drops to basically zero as I try to understand what is actually happening before I just put the AI on the job, but so far it seems to be a good rule to keep in mind that allows me to use AI effectively while generating a code base that I still understand.

    • lucabraccani 1 day ago
      Which Claude Code model do you usually use? Any noticeable differences?
      • langarus 1 day ago
        I've begun using Opus and I felt it was a class above all the rest. I used Cursor and tested different models, but Opus somehow was always much, much better. Bought the Max plan for $100, totally worth it.
    • catigula 1 day ago
      Having AI generate tests is technical debt unless what you're doing is extremely trivial and well-trodden in which case you can basically gen all of the code and not care at all.

      Tests are where the moat still exists because prior to creating tests the outcomes are unverifiable.

      • Yodel0914 23 hours ago
        As somewhat of an AI agnostic, I disagree. Writing tests is one of the things I find most useful about Copilot. Of course you need to review them for correctness first, but (especially for unit tests) it's pretty good at getting it right first time.
    • mandeepj 1 day ago
      > The moment you start rubber-stamping those, you end up with a codebase you don't understand.

      Yeah, treat it like an intern or junior engineer who needs constant feedback and reviews.

  • sqircles 22 hours ago
    I recently took a job maintaining and extending the functionality of an Enterprise Asset Management product through its own scripting and XML-soup ecosystem. Since it is such a closed system with a much smaller dataset of documentation and examples, the LLM has been great at using what it does know to help me navigate and understand the product as a whole, and how to think about things in terms of how this product works behind the scenes, in the code I cannot see.

    It doesn't write the code for me, but I talk to it like it is a personal technical consultant on this product and it has been very helpful.

  • lucid-dev 12 hours ago
    By using the *API*

    True, I spent a year making a platform for using the API... but the results are stupendous! Very cheap, unlimited access, custom tooling, etc., to get large amounts done of anything you want to do with an LLM!

  • geiser 22 hours ago
    I prepare custom AGENTS.md with the help of https://lynxprompt.com (Disclaimer: I'm the dev)

    The more time you spend making guidelines and guardrails, the more success the LLM has at acing your prompt. So I created a wizard to get it right from the beginning, simplifying things and "guiding" you into thinking about what you want to achieve.

  • hmokiguess 1 day ago
    Each project gets its own share of supervision depending on how critical human intervention is.

    I have some large, complex, strict-compliance projects where the AI is a pair programmer but I make most of the decisions, and I have smaller projects that, despite great impact on the bottom line, can be done entirely unsupervised due to the low risk factor of "mistakes" and the ease of correcting them after the fact once they are caught (by the AI as well).

  • jmathai 1 day ago
    The most useful automation for me has been a few simple commands. Here are some examples I use for repos in GitHub to resolve issues and PRs.

    /gh-issue [issue number]

    /gh-pr [pr number]

    Edit: replaced links to a private GitHub repo with pastebin links.

    https://pastebin.com/5Spg4jGu

    https://pastebin.com/hubWMqGa

  • maCDzP 22 hours ago
    I use CLAUDE.md to describe the project. I use Claude to help write the spec. Then I let it run. When the context gets too crammed, I have it build SKILLS.md. I’ll probably have it rewrite CLAUDE.md after a while. Then it will write tests, deployment scripts, commit messages. Yeah, everything.
  • toddmorrow 22 hours ago
    I'm using a modified version of open Dev. It's just a chat interface for OpenRouter. I load a combo box with all the free models. It's fast, and it's on a tab where it can't hurt my code.
  • randomNumber7 22 hours ago
    I think of the biggest chunk of the task that I expect the currently available models to do well. I try to describe it precisely and give the model all relevant context by uploading the relevant code files. Then I hit enter.
  • t1234s 22 hours ago
    I find it very useful to make quick CLI scripts to pipe data in and out of.
  • grigio 10 hours ago
    opencode, then save your important dev info to AGENTS.md
  • brador 17 hours ago
    Just imagine when we have quantum computing. Unlimited context windows, instant answers.

    What would you make if you could make anything? Does it all just lose meaning?

    • OutOfHere 14 hours ago
      You don't need quantum computing to have unlimited context. There already exist attention mechanisms for it. Even if you have it, it has nothing much to do with making sophisticated software.
  • deterministic 19 hours ago
    I have used custom code generators at work for 25+ years.

    The generators typically generate about 90% of the code I need to write a biz app, leaving the most important code to me: the biz logic.

    No AI. Just code that takes a (simple) declarative spec file and generates Typescript/C++/Java/... code.

    I am also using AIs daily. However, the code generators are still generating more productivity for me than AIs ever have.
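
    For anyone who hasn't worked this way, a toy version of the idea looks something like the following (a sketch; the spec format is invented and real generators are far richer):

       #!/usr/bin/env python3
       # Toy generator: turn a tiny declarative spec into a TypeScript interface.
       # e.g. customer.json: {"entity": "Customer", "fields": {"id": "number", "name": "string"}}
       import json
       import sys

       with open(sys.argv[1]) as f:
           spec = json.load(f)

       lines = [f"export interface {spec['entity']} {{"]
       lines += [f"    {name}: {ts_type};" for name, ts_type in spec["fields"].items()]
       lines.append("}")
       print("\n".join(lines))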

    • premek 15 hours ago
      How come such a large portion of the code is generated?
      • OutOfHere 14 hours ago
        It sounds like boilerplate Java.