6 comments

  • SOLAR_FIELDS 345 days ago
    A feature comparison to https://github.com/paul-gauthier/aider would be great.

    Is this just a non-interactive version of this kind of agent?

    • rohansood15 345 days ago
      Aider is great, but the use case is different:

      1. You use Aider to complete a novel task you're actively working on. Patchwork completes repetitive tasks passively, without bothering you - e.g. updating a function vs. fixing linting errors.

      2. Aider is agentic, so it figures out how to do a task itself. This trades accuracy in favor of flexibility. With patchwork, you control exactly how the task is done by defining a patchflow. This limits the set of tasks to those that you have pre-defined but gives much higher accuracy for those tasks.

      While the demo shows CLI use, the ideal use case for patchwork is as part of your CI, or even a serverless deployment triggered via event webhooks (rough sketch below). Hope this helps! :)
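      To make the webhook idea concrete, here's a minimal sketch of a serverless-style endpoint that triggers a patchflow run. The `runPatchflow` helper, the "AutoFix" flow name, and the event payload shape are illustrative assumptions, not patchwork's actual API:

        // Sketch: a webhook endpoint that kicks off a patchflow run.
        // `runPatchflow`, "AutoFix", and the event payload are assumptions
        // for illustration, not patchwork's documented interface.
        import { createServer } from "node:http";
        import { execFile } from "node:child_process";

        function runPatchflow(flow: string, repo: string): void {
          // Hypothetical: shell out to the patchwork CLI non-interactively.
          execFile("patchwork", [flow, `repo=${repo}`], (err, stdout) => {
            if (err) console.error(`patchflow ${flow} failed:`, err);
            else console.log(stdout);
          });
        }

        createServer((req, res) => {
          if (req.method === "POST" && req.url === "/webhook") {
            let body = "";
            req.on("data", (chunk) => (body += chunk));
            req.on("end", () => {
              const event = JSON.parse(body);
              // e.g. run a fix flow whenever CI reports lint failures
              if (event.action === "lint_failed") {
                runPatchflow("AutoFix", event.repository);
              }
              res.writeHead(202).end();
            });
          } else {
            res.writeHead(404).end();
          }
        }).listen(8080);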

  • lifeisstillgood 344 days ago
    Ok the video explains this way better - and it looks awesome.

    Do you accept PRs yourself :-)

    • rohansood15 344 days ago
      We do! We haven't done a very good job of listing good first issues, but please feel free to create one and contribute.
  • danielhanchen 344 days ago
    Oh this is really cool and great name! Will definitely try this out!
  • meiraleal 344 days ago
    PR reviews are the one thing you sure don't want an LLM doing.
    • Carrok 344 days ago
      Please elaborate.

      While an LLM might obviously miss functional problems, it feels extremely well suited for catching “stupid mistakes”.

      I don’t think anyone is advocating for LLMs merging and approving PRs on their own; they can certainly still provide value to the human reviewer.

      • cuu508 344 days ago
        They can lull the human reviewer into a false sense of security.

        "Computer already looked at it so I only need to glance at it"

        • throwthrowuknow 344 days ago
          I don’t know what your process is, but if someone else has reviewed a PR before I take my turn, I don’t ignore the code they’ve looked at. In fact, I take the time to review both the original code and their comments or suggestions. That’s the point of review, after all: to verify the thinking behind the code as well as the code itself, and that applies equally to thoughts or code added by a reviewer.
      • spartanatreyu 342 days ago
        > LLM [...] feels extremely well suited for catching “stupid mistakes”.

        No.

        Linters are extremely well suited for catching stupid mistakes.

        LLMs are extremely well suited for the appearance of catching stupid mistakes.

        Linters will catch things like this because they can go through checking and evaluating things logically:

          if (
            isValid(primaryValue, "strict") || isValid(secondaryValue, "strict") ||
            isValid(primaryValue, "loose" || isValid(secondaryValue, "loose"))
            //...........................^^^^ Did we forget a closing ')'?
          ) {
            ...
          }

        LLMs will only highlight exact problems they've seen before, miss other problems that linters would immediately find, and hallucinate new problems altogether.

        • luckilydiscrete 342 days ago
          While true for a subset of problems, linters will also miss stupid mistakes, because not everything is syntactical.

          AI, for example, can catch the fact that `phone.match(/\d{10}/)` might break because of spaces (quick illustration below), while a linter has no concept of a correct "regex" as long as it matches the regex syntax.

          I don't think anyone is arguing that replacing linters with AI is the answer; rather, a combination of both is useful.
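          To illustrate the regex point - the phone numbers here are made up, but the behavior is real:

            // The regex is syntactically valid, so a linter is satisfied,
            // but it silently fails on real-world input with separators.
            const strict = /\d{10}/;

            console.log("5551234567".match(strict));   // matches
            console.log("555 123 4567".match(strict)); // null - spaces break it

            // One possible fix: strip non-digits before matching.
            const digitsOnly = "555 123 4567".replace(/\D/g, "");
            console.log(digitsOnly.match(strict));     // matches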

        • rohansood15 342 days ago
          Linters are great at finding syntactical errors like the one you mentioned, but LLMs do a better job at finding logical flaws or enforcing non-syntactic rules like naming conventions. The idea is not to replace linters but to complement them. In fact, one of the flows we're building next fixes the linting issues that linters can flag but not fix automatically.
    • rohansood15 344 days ago
      I agree and disagree. You definitely need someone competent to take a look before merging in code, but you can do a first pass with an LLM to provide immediate feedback on any obvious issues as defined in your internal engineering standards.

      It's especially helpful if you're a team where there's wide variance in competency/experience levels.

      • aaomidi 344 days ago
        Until that immediate feedback is outright wrong, and now you've sent them on a wild-goose chase.
        • rohansood15 344 days ago
          This is where prompting and context are key - you need to keep the scope of the review limited and well-defined. And ideally, you want to validate the review with another LLM before passing it to the dev (rough sketch of that two-pass idea below).

          Still won't be perfect, but you'll definitely get to a point where it's a net positive overall - especially with frontier models.
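          A minimal sketch of that scoped, two-pass review, using the OpenAI Node SDK; the prompts, the model choice, and the `reviewDiff` name are illustrative assumptions, not patchwork's implementation:

            // Pass 1 drafts comments against a narrow checklist; pass 2
            // filters out findings the diff doesn't clearly support.
            import OpenAI from "openai";

            const client = new OpenAI(); // reads OPENAI_API_KEY from the env

            async function ask(system: string, user: string): Promise<string> {
              const res = await client.chat.completions.create({
                model: "gpt-4o", // assumption: any capable model works here
                messages: [
                  { role: "system", content: system },
                  { role: "user", content: user },
                ],
              });
              return res.choices[0].message.content ?? "";
            }

            export async function reviewDiff(diff: string): Promise<string> {
              // Keep the scope of the review limited and well-defined.
              const draft = await ask(
                "Review this diff ONLY for unused variables, missing error " +
                  "handling, and naming-convention violations. Flag nothing else.",
                diff,
              );
              // Validate with a second LLM before it reaches the dev.
              return ask(
                "For each finding below, keep it only if the quoted diff " +
                  "clearly supports it. Drop anything speculative.",
                `Diff:\n${diff}\n\nFindings:\n${draft}`,
              );
            }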

        • throwthrowuknow 344 days ago
          That happens with human review too and often serves as an opportunity to clarify your reasoning to both the reviewer and yourself. If the code is easily misunderstood then you should take a second look at it and do something to make it easier to understand. Sometimes that process even turns up a problem that isn’t a bug now but could become one later when the code is modified by someone in the future.
    • meiraleal 344 days ago
      I stand corrected: LLMs are great for blocking PRs by raising issues. A lack of issues should not be taken as a sign of a good PR, though.
    • datashaman 344 days ago
      We're trialing ellipsis.dev for exactly this, and it's pretty good most of the time.
  • bsima 344 days ago
    Y'all know there's already a popular OSS project called patchwork, right? https://patchwork.readthedocs.io/en/latest/