Launch HN: Jazzberry (YC X25) – AI agent for finding bugs

Hey HN! We are building Jazzberry (https://jazzberry.ai), an AI bug finder that automatically tests your code when a pull request is opened, flagging real bugs before they are merged.

Here’s a demo video: https://www.youtube.com/watch?v=L6ZTu86qK8U#t=7

We are building Jazzberry to help you find bugs in your code base. Here’s how it works:

When a PR is made, Jazzberry clones the repo into a secure sandbox. The diff from the PR is provided to the AI agent in its context window. To interact with the rest of the code base, the agent can execute bash commands within the sandbox, and the output from those commands is fed back into its context. This means the agent can read and write files, search, install packages, run interpreters, execute code, and so on. It observes the outcomes and iteratively tests to pinpoint bugs, which are then reported back in the PR as a markdown table.
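To make that loop concrete, here is a heavily simplified sketch. The `run_in_sandbox` helper, the `call_model` callable, and the step budget are illustrative placeholders, not our production implementation:

```python
import subprocess

def run_in_sandbox(command: str, timeout: int = 120) -> str:
    """Run one bash command inside the sandboxed clone and capture its output."""
    result = subprocess.run(
        ["bash", "-lc", command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout + result.stderr

def find_bugs(pr_diff: str, call_model) -> str:
    """Let the model iterate: propose a command, observe the output, repeat, then report."""
    transcript = [f"PR diff under test:\n{pr_diff}"]
    for _ in range(20):  # bounded number of tool-use steps
        action = call_model(transcript)  # either a bash command or a final report
        if action.startswith("REPORT:"):
            return action.removeprefix("REPORT:")  # markdown table of confirmed bugs
        transcript.append(f"$ {action}\n{run_in_sandbox(action)}")
    return "No confirmed bugs within the step budget."
```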

Jazzberry is focused on dynamically testing your code in a sandbox to confirm the presence of real bugs. We are not a general code review tool; our only aim is to provide concrete evidence of what's broken and how.

Here are some real examples of bugs that we have found so far.

Authentication Bypass (Critical): When `AUTH_ENABLED` is `False`, the `get_user` dependency in `home/api/deps.py` always returns the first superuser, bypassing authentication and potentially allowing unauthorized access. It also falls back to the superuser when the authenticated Auth0 user is not present in the database.
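To illustrate the pattern (a simplified stand-in, not the affected project's actual code; the names mirror the report but the bodies are ours):

```python
from dataclasses import dataclass

AUTH_ENABLED = False  # settings flag

@dataclass
class User:
    email: str
    is_superuser: bool = False

USERS = {"admin@example.com": User("admin@example.com", is_superuser=True)}

def first_superuser() -> User:
    return next(u for u in USERS.values() if u.is_superuser)

def get_user(auth0_email: str | None) -> User:
    if not AUTH_ENABLED:
        # Bug: with auth disabled, every request acts as the first superuser.
        return first_superuser()
    user = USERS.get(auth0_email)
    if user is None:
        # Bug: an authenticated Auth0 user missing from the DB also falls
        # back to the superuser instead of being rejected.
        return first_superuser()
    return user
```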

Insecure Header Handling (High): The server doesn't validate header names or values, allowing injection of malicious headers and potentially leading to security issues.
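Again a simplified stand-in rather than the real server code, just to show the shape of the flaw:

```python
def build_response_headers(user_supplied: dict[str, str]) -> list[str]:
    lines = []
    for name, value in user_supplied.items():
        # Bug: names/values are not validated, so a value like
        # "bar\r\nSet-Cookie: session=attacker" smuggles in an extra header.
        lines.append(f"{name}: {value}")
    return lines
```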

API Key Leakage (High): Different error messages in browser console logs revealed whether API keys were valid, allowing attackers to brute-force valid credentials by distinguishing format errors from authorization errors.
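The asymmetry looked roughly like this (our illustration with made-up messages, not the affected project's code):

```python
VALID_KEYS = {"sk_live_example"}

def check_api_key(key: str) -> dict:
    if not key.startswith("sk_"):
        # Distinguishable message #1: the key failed format checks.
        return {"status": 400, "error": "Malformed API key"}
    if key not in VALID_KEYS:
        # Distinguishable message #2: well-formed but unauthorized, which tells
        # an attacker their guess has the right shape. A single generic error
        # for both failure modes would not leak this.
        return {"status": 403, "error": "API key not authorized"}
    return {"status": 200}
```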

Working on this, we've realized just how much the rise of LLM-generated code is amplifying the need for better automated testing. Traditional code coverage metrics and manual code review are already becoming less effective when dealing with thousands of lines of LLM-generated code, and we expect the gap to widen over time: the complexity of AI-authored systems will ultimately require even more sophisticated AI tooling for effective validation.

Our backgrounds: Mateo has a PhD in reinforcement learning and formal methods with over 20 publications and 350 citations. Marco holds an MSc in software testing, specializing in LLMs for automated test generation.

We are actively building and would love your honest feedback!

24 points | by MarcoDewey 4 hours ago

6 comments

  • jdefr89 3 hours ago
    Ton of work already being done on this. I am a Vulnerability Researcher @ MIT and I know of a few efforts, just at my lab alone, being worked on. So far nearly everything I have seen seems to do nothing but report false positives. They are missing bugs a fuzzer could have found in minutes. I will be impressed when it finds high severity/exploitable bugs. I think we are a bit too far from that if it's achievable though. On the flip side LLMs have been very useful reverse engineering binaries. Binary Ninja w/ Sidekick (their LLM plugin) can recover and name data structures quite well. It saves a ton of time. Also does a decent job providing high level overviews of code...
    • MarcoDewey 1 hour ago
      I definitely agree that there's a lot of research happening in this space, and the false positive issue is a significant hurdle. From my own research and experimentation, I have also seen how challenging it is to get LLM-powered tools to consistently find real bugs.

      Our approach with Jazzberry is specifically focused on the dynamic execution aspect within the PR context. I am seeing that by actually running the code with the specific changes, we can get a clearer signal about functional errors. We're very aware of the need to demonstrate our ability to find those high-severity/exploitable bugs you mentioned, and that's a key metric for us as we continue to develop it.

      Given your background, I'd be really interested to hear if you have any thoughts on what approaches you think might be most promising for moving beyond the false positive problem in AI-driven bug finding. Any insights from your work at MIT would be incredibly valuable.

    • mp0000 57 minutes ago
      We largely agree; we don't think pure LLM-based approaches are sufficient. Having an LLM automatically orchestrate tools, like a software fuzzer, is something we've been thinking about for a while, and we view incorporating code execution as the first step.

      We think that LLMs are able to capture semantic bugs that traditional software testing cannot find in a hands-off way, and ideas from both worlds will be needed for a truly holistic bug finder.

    • hanlonsrazor 3 hours ago
      Agree with you on that. There is nothing about LLMs that makes them uniquely suited for bug finding. However, they could excel re:bugs by recovering traces as you say, and taking it one step further, even recommending fixes.
      • MarcoDewey 59 minutes ago
        Correct, what is unique about LLMs is their ability to match an existing tool or practice to a unique problem.
      • winwang 2 hours ago
        One possibility is crafting (somewhat-)minimal reproductions. There's some work in the FP community to do this via traditional techniques, but they seem quite limited.
  • sublinear 4 hours ago
    I'm kinda curious how this compares to GitLab's similar offering: https://docs.gitlab.com/user/project/merge_requests/duo_in_m...
    • mp0000 4 hours ago
      We are laser-focused on bug finding and aren't targeting general code review, like comments on code style and variable names. We also run your code as part of bug finding instead of only having an LLM inspect it.
  • decodingchris 4 hours ago
    Cool demo! You mentioned using a microVM, which I think is Firecracker? And if it is, any issues with it?
    • mp0000 4 hours ago
      Thanks! We are indeed using Firecracker. No issues so far
  • bigyabai 4 hours ago
    > Jazzberry is focused on dynamically testing your code in a sandbox to confirm the presence of real bugs.

    That seems like a waste of resources to perform a job that a static linter could do in nanoseconds. Paying to spin up a new VM for every test is going to incur a cost penalty that other competitors can skip entirely.

    • MarcoDewey 4 hours ago
      You are right that static linters are incredibly fast and efficient for catching certain classes of issues.

      Our focus with the dynamic sandbox execution is on finding bugs that are much harder for static analysis to detect. These are bugs like logical flaws in specific execution paths and unexpected interactions between code changes.
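      As a toy illustration (ours, not from any customer repo), something like this passes a linter and a type checker but fails the moment the changed path is actually executed:

        def average_latency(samples_ms: list[float]) -> float:
            # Looks fine in review and to a linter.
            return sum(samples_ms) / len(samples_ms)

        average_latency([12.0, 18.0])  # 15.0
        average_latency([])            # ZeroDivisionError only shows up at runtime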

      • winwang 2 hours ago
        Do you guide the LLM to do this specifically? So it doesn't "waste" time on what can be taken care of by static analysis? Would be interesting if you could also integrate traditional analysis tools pre-LLM.
  • bananapub 4 hours ago
    how did and do you validate that this is of any value at all?

    how many test cases do you have? how do you score the responses? how do you ensure random changes by the people who did almost all of the work (training models) don't wreck your product?

    • winwang 2 hours ago
      Not the OP but -- I would immediately believe that finding bugs would be a valuable problem to solve. Your questions should probably be answered on an FAQ though, since they are pretty good.