Vibe Coding Killed Cursor

(ischemist.com)

39 points | by hiddenseal 3 hours ago

12 comments

  • throw310822 1 hour ago
    Not sure; after reading so many times that Cursor was cooked, I got a license from my company and I'm loving it. I had tried Claude Code before, though only briefly and for small things; I don't really see much difference between one and the other. Cursor (Opus 4.5) has been able to perform complex changes across multiple files, implement whole new features, fix issues in code and project setup... I mean, it just feels like pair programming, and I never got the feeling of running into hard limits. Am I missing much, or has Cursor simply improved recently (or does it depend on the license)?
    • esafak 37 minutes ago
      People are realizing that you don't need Cursor to review the diffs CC generates. You can use any tool!
  • tcdent 1 hour ago
    This is a fairly well-written article which captures the current state of the art correctly.

    And then it goes on to recommend AI Studio as a primary dev tool?! Baffling.

    • esafak 1 hour ago
      There is a rationale:

      > Second, and no less important, AI Studio is genuinely the best chat interface on the market. It was the first platform where you could edit any message in the conversation, not just the last one, and I think it's still the only platform where you can edit AI responses as well! So if the model goes on an unnecessary tangent, you can just remove it from the context. It's still the only platform where if you have a long conversation like R(equest)1, O(utput)1, R2, O2, R3, O3, R4, O4, R5, O5, you can click regenerate on R3 and it will only regenerate O3, keeping R4 and all subsequent messages intact.

      • badsectoracula 1 hour ago
        Isn't discussion editing a standard feature in chat interfaces? I've been using koboldcpp since I first tried LLMs (mainly because it is written in C++ and largely self-contained) and you can edit the entire discussion as a single text buffer, but even the example HTTP server for llama.cpp allows editing the discussion.

        And yeah, it can be useful for coding, since you can edit the LLM's response to fix mistakes (and add minor features/tweaks to the code) and pretend it was correct from the get-go, instead of trying to roleplay with someone who makes mistakes you then have to correct :-P

      • NitpickLawyer 1 hour ago
        > you can click regenerate on R3 and it will only regenerate O3, keeping R4 and all subsequent messages intact.

        What's a use case for this? I'm trying to imagine why you'd want that, but I can't see it. Is it for the horny people? If you're trying to do anything useful, having messages edited should re-generate the following conversation as well (tool calls, etc).

        • hiddenseal 23 minutes ago
          In R3 you ask it to implement feature 1, then you move on to building stuff on top and request feature 2 in R4. After looking at O4, you see that there was an unintended consequence of a particular design choice in O3, so you can go back, update prompt R3, regenerate O3, and have your detailed prompt R4 remain in place.
        • badsectoracula 1 hour ago
          Imagine in R2 you ask it to write a pong game in C using SDL, in R3 you ask it to write a CMake build file, and in R4 you ask it to make the paddles red and green. Then around R6 you want to modify the structure and realize what a catastrophic mistake for your sanity CMake was, so you edit R3 to ask for premake instead, and R6 will only show how to update the premake file, wiping the existence of CMake clean (from the discussion and your project).
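
          A minimal sketch of the kind of history surgery being described (hypothetical structure, not how AI Studio actually implements it):

              # Chat history as editable [request, output] turns.
              history = [
                  ["R1: write pong in C with SDL", "O1: ...code..."],
                  ["R2: add a CMake build file",   "O2: ...cmake..."],
                  ["R3: make the paddles red",     "O3: ...diff..."],
              ]

              def regenerate(history, i, model_fn):
                  """Recompute only output i; later requests and outputs stay untouched."""
                  context = [m for turn in history[:i] for m in turn] + [history[i][0]]
                  history[i][1] = model_fn(context)  # only O_i changes
                  return history

              # Toy "model" so the sketch runs end to end.
              regenerate(history, 1, lambda ctx: "O2: premake5.lua instead of cmake")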
    • hoppp 1 hour ago
      I'm sceptical of these Google-made AI builders. I just had a bad experience with Firebase Studio that was stuck on a vulnerable version of Next.js, and Gemini couldn't update it to a non-vulnerable version properly. It tries to force vendor lock-in from the start. Guh... avoid.
    • margalabargala 1 hour ago
      It's advertising for AI studio, masquerading as an insightful article.
  • noo_u 1 hour ago
    "You should remain in charge, and best way to do that is to either not use agentic workflows at all (just talk to Gemini 2.5/3 Pro in AI Studio) or use OpenCode, which is like Claude Code, but it shows you all the code changes in git diff format, and I honestly can't understand how anyone would settle for anything else."

    I 100% agree with the author here. Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with - they end up tangled into a net of barely vetted code that was generated for them.

    • rootusrootus 1 hour ago
      > Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with

      This seems to be something both sides of the debate agree on: Their opponents are wrong because they are subpar developers.

      It seems uncharitable to me in both cases, and of course it is a textbook example of an ad hominem fallacy.

      • noo_u 1 hour ago
        Well both sides could be right, no? I don't think it is necessarily uncharitable to note that lack of experience could cause developers to have strongly held, yet potentially invalid opinions about the practical applications of a technology.
    • phpnode 1 hour ago
      I think it's actually a combination of people who have seen bad results from AI code generation (and have not looked deeper or figured out how to wield it properly yet) and another segment of the developer population who now feel threatened because it's doing stuff they can't do. Different groups.
    • trinix912 1 hour ago
      > Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with - they end up tangled into a net of barely vetted code that was generated for them.

      This might be your anecdotal experience, but in mine, reviewing large diffs of (unvetted, agent-written) code is usually not much faster than writing it yourself (especially when you have some mileage in the codebase), nor does it offset the mental burden of thinking about how things interconnect and what the side effects might be.

      What IMO moves the needle towards slower is that you have to steer the robot (often back and forth to keep it from undoing its own previous changes). You can say it's bad prompting, but there's no guarantee that a certain prompt will yield the desired results.

      • forty 1 hour ago
        That's my feeling as well. I work on a fairly large and old codebase I know pretty well, and generally Claude doesn't build things much faster than I would, and then I spend more time reviewing. I end up using it for the most boring tasks (like dumb refactoring/code reorganization) where review is dumb as well, and I try to have it work when I'm not coding myself (like during meetings, etc.); this way I never lose time.
      • noo_u 1 hour ago
        It's definitely anecdotal - and I agree about steering the robot. I find that analysis is usually harder than creation.
        • XenophileJKO 57 minutes ago
          I think that is the skill that separates agentic power users from others.

          You have to be really good at skimming code quickly and looking at abstractions/logic.

          Throughout my career, I have almost never asked another team a question about their services, etc. I built the habit of just looking at their code first. Nine times out of ten, I could answer my question or find a workaround for the bug in their code.

          I think this built the skill of holding a lot of structure in my head and quickly accumulating it.

          This is the exact thing you need when letting an agent run wild and then making sure everything looks OK.

    • abhgh 1 hour ago
      I use Claude Code within PyCharm, and I see the git diff format for changes there.

      EDIT: It shows the side-by-side view by default, but it is easy to toggle to a unified view. There's probably a way to permanently set this somewhere.

    • eulers_secret 1 hour ago
      This is a part of why I (sometimes, depending) still use Aider. It’s a more manual AI coding process.

      I also like how it uses git, and it’s good at using less context (tool calling eats context like crazy!)

      • hrimfaxi 1 hour ago
        I too have observed that Aider seems to use significantly less context than Claude Code, though I have found myself drifting from it more and more in favor of Claude Code as skills and such have been added. I may have to revisit it soon. What are you using now instead (since you said "sometimes, depending")?
      • noo_u 1 hour ago
        Absolutely - one of my favorite uses of Aider is telling it to edit config files/small utility scripts for me. It has prompted me to write more comments and more descriptive variable names to make the process smoother, which is a win by itself. I just wish it could analyze project structure as well as Claude Code... if I end up with less work at work I might try to poke at that part of the code.
    • petesergeant 1 hour ago
      > which is like Claude Code, but it shows you all the code changes in git diff format

      Claude Code does this; you just have to not click "Yes and accept all changes".

      • hiddenseal 21 minutes ago
        Why do I have to choose? In OpenCode I can have both: let it run autonomously, but also look at the diff, and whenever it goes off the rails, I can stop it.
  • leerob 1 hour ago
    Hi. I'm an engineer at Cursor.

    > By prioritizing the vibe coding use case, Cursor made itself unusable for full-time SWEs.

    This is actually the opposite of the direction we're building in. If you are just vibing, building prototypes or throwaway code or whatever, then you don't even need to use an IDE or look at the code. That doesn't really make sense for most people, which is why Cursor offers different levels of autonomy: write the code manually, use just autocomplete assistance, use the agent with guardrails, or use the agent in yolo mode.

    > One way to achieve that would be to limit the number of lines seen by an LLM in a single read: read first 100 lines

    Cursor uses shell commands like `grep` and `ripgrep`, similar to other coding agents, as well as semantic search (by indexing the codebase). The agent has only been around for a year (pretty wild how fast things have moved), and 8 months or so ago, when models weren't as good, you had to be more careful about how much context you let the agent read - for example, not immediately putting a massive file into the context window and blowing it up. This is more or less a solved problem today, as models and agents are much better at reliably calling tools and only pulling in relevant bits, in Cursor and elsewhere.
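
    For reference, the kind of bounded read the article proposes is easy to sketch (illustrative only, not Cursor's actual tooling):

        def read_slice(path, start=1, max_lines=100):
            """Return at most max_lines lines so one huge file can't blow up the context window."""
            with open(path, encoding="utf-8", errors="replace") as f:
                lines = f.readlines()
            return "".join(lines[start - 1 : start - 1 + max_lines])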

    > Try to write a prompt in build mode, and then separately first run it in plan mode before switching to build mode. The difference will be night and day.

    Agree. Cursor has plan mode, and I generally recommend everyone start with a plan before building anything of significance. Much higher quality context and results.

    > Very careful with asking the models to write tests or fix code when some of those tests are failing. If the problem is not trivial, and the model reaches the innate context limit, it might just comment out certain assertions to ensure the test passes.

    Agree you have to be careful, but with the latest models (Codex Max / Opus 4.5) this is becoming less of a problem. They're much better now. Starting with TDD actually helps quite a bit.

    • hiddenseal 34 minutes ago
      Hello Lee, incredibly honored - huge fan of your work at Vercel. The Next.js tutorial is remarkable, S-tier educational content; it helped me kickstart my journey into full-stack dev to ship my research tools (you might appreciate the app router love in my latest project: https://github.com/ischemist/syntharena).

      On substance: my critique is less about the quality of the retrieval tools (ripgrep/semantic search are great) and more about the epistemic limits of search. An agent only sees what its query retrieves. For complex architectural changes, the most critical file might be one that shares no keywords with the task but contains a structural pattern that must be mirrored. In those cases, tunnel vision isn't a bug in the search tool but in the concept of search vs. full-context reasoning.

      One other friction point I hit before churning was what felt like prompt-level regression to the mean. For trivial changes, the agent would sometimes spin up a full planning phase, creating todo lists and implementation strategies for what should have been a one-shot diff. It felt like a guardrail designed for users who don't know how to decompose tasks, ergo the conclusion about emphasis on vibe coders.

      That said, Cursor moves fast, and I'll be curious to see what solution you'll come up with to the unknown unknown dependency problem!

  • samuelknight 1 hour ago
    These complaints are about technical limitations that will go away for codebase-sized problems as inference cost continues its collapse and context windows grow.

    There are literally hundreds of engineering improvements that we will see along the way, like an intelligent replacement for compaction to deal with diff explosion, more raw memory availability and dedicated inference hardware, models that can actually handle >1M context windows without attention loss, and so on.

  • Havoc 1 hour ago
    > ask it explicitly to separate the implementation plan in phases

    This has made a big difference on my side: a prompt.md that is mostly very natural-language markdown, then ask the LLM to turn that into a plan.md that contains phases, emphasising that each should be fairly self-contained. That usually needs some editing but is mostly fine. Then just have it implement each phase, one by one.
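
    The resulting plan.md tends to look something like this (the phase names here are just an invented example):

        # plan.md
        ## Phase 1: data layer
        - add the new table + migration; no UI changes; tests pass on their own
        ## Phase 2: API
        - expose the new data via one endpoint; depends only on Phase 1
        ## Phase 3: UI
        - wire the endpoint into the existing views; touch nothing else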

  • readthenotes1 11 minutes ago
    "All those moments..."

    Appropriate.

    https://youtu.be/HU7Ga7qTLDU?si=8BL4vOTJ9DLacu_V

  • chollida1 1 hour ago
    It seems like I just read an advertisement - or a submarine article, as PG would say.

    AI Studio is just another IDE like Cursor, so it's a very odd choice to say one is bad and the other is the holy grail :)

    But I guess this is what guerrilla advertising looks like these days.

      Just another random account with 8 karma points that just happens to post an article about how one IDE is bad and its almost identical cousin is the best
    • walthamstow 1 hour ago
      AI Studio isn't an IDE, it's just a web page with a chat interface. It's not even a product, really.

      OP is actually advocating against Google's latest products here. Surely a submarine would hype Antigravity and Gemini 3 Pro instead?

    • hiddenseal 29 minutes ago
      Lmao, per Occam's razor there's a much simpler explanation - I'm a grad student, so of course I'll spend more time exploring free tools, and it just happened that AI Studio with Gemini is really great.

      If Google wants to send a check, my email is open, lmao, but for now I'm optimizing for tokens per dollar.

    • Havoc 1 hour ago
      > AI Studio is just another IDE like Cursor, so it's a very odd choice to say one is bad and the other is the holy grail :)

      Google does tend to have large contexts and sometimes reasonable prices for them. So if one of the main takeaways is to load everything into context, then I can certainly understand why the author is a fan.

  • manishsharan 1 hour ago
    Gemini's large context window is incredible. I concatenate my entire repo and the repos of supporting libraries and then ask it questions.
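
    The concatenation step is roughly this (the paths and extensions here are just an example, not my actual script):

        import pathlib

        def bundle(*repos, exts=(".clj", ".cljs", ".js", ".html")):
            """Glue whole repos into one big prompt, with a header per file."""
            parts = []
            for repo in repos:
                for p in sorted(pathlib.Path(repo).rglob("*")):
                    if p.is_file() and p.suffix in exts:
                        parts.append(f"\n===== {p} =====\n" + p.read_text(errors="replace"))
            return "".join(parts)

        # e.g. bundle("my-app", "vendor/htmx"), pasted straight into the chat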

    My last use case was like this: I had an old codebase that used Backbone.js for the UI, with jQuery and a bunch of old JS and little documentation, to generate the UI for a Clojure web application.

    Gemini was able to unravel this hairball of code and guide me step by step to htmx. I am not using AI Studio; I am using a Gemini subscription.

    Since I manually patch the code, it's like pair programming with an incredibly patient and smart programmer.

    For the record, I am too old for vibe coding... I like to maintain total control over my code and all the abstractions and logic.

  • boredtofears 1 hour ago
    This article makes a lot of definitive claims about the capabilities of different models that don't align with my experience with them. It's hard to take any claim seriously without completely understanding the state of the context when the behavior was observed. I don't think it's useful to extrapolate a single observation into generalized knowledge about a particular model.

    Can't wait until we have useful heuristics for comparing LLMs. This is a problem that comes up constantly (especially in HN comments...)

  • submeta 1 hour ago
    > The context is king

    Agree

    > and AI Studio is the only serious product for human-in-the-loop SWE

    Disagree. I use Claude Code and Codex daily, and I couldn't be happier. I had started with Cursor, switched to CLI-based agents, and never looked back. I use WezTerm, tmux, neovim, and Zoxide, create several tabs and panes, and run Claude Code not only for vibe coding but also for scripting, analysing files, and letting it write concepts, texts, and documentation. It's a totally different kind of computing experience, as if I have several assistants 24/7 at my fingertips.

    • moralestapia 1 hour ago
      +1 to Codex.

      I was always hesitant to jump into the vibe coding buzz.

      A month ago I tried Codex w/ CLI agents and they now take care of all the menial tasks I used to hate that come w/ coding.