Claude Code Best Practices

(anthropic.com)

45 points | by sqs 5 hours ago

8 comments

  • zomglings 52 minutes ago
    If anyone from Anthropic is reading this, your billing for Claude Code is hostile to your users.

    Why doesn’t Claude Code usage count against the same plan that Claude.ai and Claude Desktop usage is billed against?

    I upgraded to the $200/month plan because I really like Claude Code but then was so annoyed to find that this upgrade didn’t even apply to my usage of Claude Code.

    So now I’m not using Claude Code so much.

    • twalkz 4 minutes ago
      I’ve been using codemcp (https://github.com/ezyang/codemcp) to get “most” of the functionality of Claude code (I believe it uses prompts extracted from Claude Code), but using my existing pro plan.

      It’s less autonomous, since it’s based on the Claude chat interface, and you need to write “continue” every so often, but it’s nice to save the $$

    • fcoury 49 minutes ago
      I totally agree with this. I would rather have some kind of predictability than playing Claude Code roulette. I would definitely upgrade my plan if it included Claude Code usage.
    • replwoacause 26 minutes ago
      Their API billing in general is hostile to users. I switched completely to Gemini for this reason and haven’t looked back.
    • ghuntley 34 minutes ago
      $200/month isn't that much. Folks I'm hanging around with are spending $100 USD to $500 USD daily as the new norm, as a cost of doing business and remaining competitive. That might seem expensive, but it's cheap... https://ghuntley.com/redlining
      • oytis 17 minutes ago
        When should we expect to see the amazing products these super-competitive businesses are developing?
      • sbszllr 22 minutes ago
        All can be true at the same time:

        1. My company cannot justify this cost at all.

        2. My company can justify this cost but I don't find it useful.

        3. My company can justify this cost, and I find it useful.

        4. I find it useful, and I can justify the cost for personal use.

        5. I find it useful, and I cannot justify the cost for personal use.

        That aside -- 200/day/dev for a "nice to have service that sometimes makes my work slightly faster" is too much in the majority of the world.

      • m00dy 30 minutes ago
        Seriously? That’s wild. What kind of CS field could even handle that kind of daily spend for a bunch of people?
        • ghuntley 26 minutes ago
          Consider an L5 at Google: outgoings of $377,797 USD per year just on salary/stock, before fixed overheads such as insurance, leave, and issues like ramp-up time and the cost of their manager. In the hands of a Staff+ engineer, these tools effectively replicate Staff+ engineers, and they don't sleep. My 2c: the funding for the new norm will come from compressing the manager layer, the engineering layer, or both.
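          The back-of-envelope comparison above can be made explicit. A minimal sketch using the comment's salary figure; the 240 working days per year is my assumption, not from the comment:

          ```python
          # Back-of-envelope: engineer cost per day vs. daily tool spend.
          # annual_comp is the figure from the comment; working_days is an assumption.
          annual_comp = 377_797          # L5 salary + stock, per the comment
          working_days = 240             # assumed working days per year
          per_day = annual_comp / working_days

          tool_low, tool_high = 100, 500  # daily tool spend range from the comment
          print(f"engineer: ${per_day:.0f}/day; tools: ${tool_low}-{tool_high}/day "
                f"({tool_high / per_day:.0%} of comp at the high end)")
          ```

          Even at the top of the quoted range, the tool spend is roughly a third of what the engineer's compensation alone costs per working day.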
        • mmikeff 24 minutes ago
          The kind of field where AI builds more in a day than a team or even contract dev does.
          • ghuntley 21 minutes ago
            Correct; utilised correctly, these tools ship a team's worth of output in a single day.
    • cypherpunks01 36 minutes ago
      Claude Pro and other website/desktop subscription plans are subject to usage limits that would make it very difficult to use for Claude Code.

      Claude Code uses the API interface and API pricing, and it writes and edits code directly on your machine; this is a level past simply interacting with a separate chat bot. It seems a little disingenuous to call it "hostile" to users when the reality is that yes, you pay a bit more for a more reliable usage tier, for a task that requires it. It also shows you exactly how much it's spent at any point.

      • fcoury 34 minutes ago
        > ... usage limits that would make it very difficult to use for Claude Code.

        Genuinely interested: how so?

        • cypherpunks01 31 minutes ago
          Well, I think it'd be pretty irritating to see the message "3 messages remaining until 6PM" while you are in the middle of a complex coding task.
          • fcoury 26 minutes ago
            No, that's the whole point: predictability. It's definitely a trade-off, but if we could save the work as is, we could have the option to continue the iteration elsewhere, or even better, from that point on be offered the option to fall back to the current API model.

            A nice addition would be something like /cost, but for checking where you are relative to the limits.

    • dist-epoch 24 minutes ago
      Claude.ai/Desktop is priced based on average user usage. If you have 1 power user sending 1000 requests per day, and 99 sending 5, many even none, you can afford to have a single $10/month plan for everyone to keep things simple.

      But every Claude Code user is a 1000 requests per day user, so the economics don't work anymore.

      • fcoury 19 minutes ago
        Well, take that into consideration then. Just make it an option. Instead of getting 1000 requests per day with code, you get 100 on the $10/month plan, and then let users decide whether they want to migrate to a higher tier or continue using the API model.

        I am not saying Claude should stop making money; I'm just advocating for giving users some Claude Code usage as part of migrating from the basic plan to Pro or Max.

        Does that make sense?

  • 0x696C6961 4 minutes ago
    I mostly work in neovim, but I'll open cursor to write boilerplate code. I'd love to use something cli based like Claude Code or Codex, but neither of them implement semantic indexing (vector embeddings) the way Cursor does. It should be possible to implement an MCP server which does this, but I haven't found a good one.
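    The semantic indexing the comment describes boils down to: embed each code snippet as a vector, embed the query the same way, and rank by similarity. A toy sketch of what such an MCP server could expose; the bag-of-words `embed` here is a self-contained stand-in for a real code-aware embedding model, and all names are hypothetical:

    ```python
    # Toy sketch of embedding-based code search (the core of "semantic indexing").
    # A real system would call an embedding model; Counter-based bag-of-words
    # vectors stand in here so the example runs with no dependencies.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Hypothetical stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def build_index(snippets):
        # Precompute one vector per snippet.
        return [(s, embed(s)) for s in snippets]

    def search(index, query: str, k: int = 1):
        # Rank snippets by similarity to the query vector.
        q = embed(query)
        ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
        return [s for s, _ in ranked[:k]]

    snippets = [
        "def parse_config(path): open yaml config file",
        "def connect_db(url): create database connection pool",
    ]
    index = build_index(snippets)
    print(search(index, "read configuration file"))
    # → ['def parse_config(path): open yaml config file']
    ```

    An MCP server wrapping this idea would expose `search` as a tool, letting a CLI agent retrieve relevant snippets instead of grepping whole files into context.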
  • zoogeny 23 minutes ago
    So I have been using Cursor a lot more in a vibe code way lately and I have been coming across what a lot of people report: sometimes the model will rewrite perfectly working code that I didn't ask it to touch and break it.

    In most cases, it is because I am asking the model to do too much at once. Which is fine, I am learning the right level of abstraction/instruction where the model is effective consistently.

    But when I read these best practices, I can't help but think of the cost. The multiple CLAUDE.md files, the files of context, the urls to documentation, the planning steps, the tests. And then the iteration on the code until it passes the test, then fixing up linter errors, then running an adversarial model as a code review, then generating the PR.

    It makes me want to find a way to work at Anthropic so I can learn to do all of that without spending $100 per PR. Each of the steps in that last paragraph is an expensive API call for us ISVs, and each requires experimentation to get the right level of abstraction/instruction.

    I want to advocate to Anthropic for a scholarship program for devs (I'd volunteer, lol) where they give credits to Claude in exchange for public usage. This would be structured similar to creator programs for image/audio/video gen-ai companies (e.g. runway, kling, midjourney) where they bring on heavy users that also post to social media (e.g. X, TikTok, Twitch) and they get heavily discounted (or even free) usage in exchange for promoting the product.

  • jasonjmcghee 21 minutes ago
    Surprised that "controlling cost" isn't a section in this post. Here's my attempt.

    ---

    If you get the hang of controlling costs, it's much cheaper. If you're exhausting the context window, I would not be surprised if you're seeing high costs.

    Be aware of the "cache".

    Tell it to read specific files (and only those!); if you don't, it'll read unnecessary files, repeatedly re-read sections of files, or even search through files.

    Avoid letting it search - even halt it when it starts. find/rg can produce thousands of tokens of output depending on the search.

    Never edit files manually during a session (that'll bust cache). THIS INCLUDES LINT.

    The cache also goes away after 5-15 minutes or so (not sure) - so avoid leaving sessions open and coming back later.

    Never use /compact (that'll bust cache; if you need it, you're going back and forth too much or using too many files at once).

    Don't let files get too big (it's good hygiene anyway); it keeps the context window smaller.

    Have a clear goal in mind and keep sessions to as few messages as possible.

    Write/generate markdown files with the needed documentation using claude.ai, save those as files in the repo, and tell it to read that file as part of a question. I'm at about $0.50-0.75 for most "tasks" I give it. I'm not a super heavy user, but it definitely helps me (it's like having a super focused smart intern that makes dumb mistakes).

    If I need to feed it a ton of docs for some task, it'll be more like a few dollars rather than < $1. But I really only do this to prototype with a library Claude doesn't know about (or has outdated knowledge of). For hobby stuff, it adds up - totally.

    For a company, massively worth it. Insanely cheap productivity boost (if developers are responsible / don't get lazy / don't misuse it).
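    The reason cache-busting dominates the tips above can be sketched with some rough arithmetic. The per-token rates below are illustrative placeholders, not real Anthropic pricing; the point is only the ratio between cached and uncached input reads:

    ```python
    # Rough sketch of why busting the prompt cache is expensive.
    # Rates are ILLUSTRATIVE PLACEHOLDERS, not actual pricing; cached reads
    # are assumed to cost a tenth of uncached input for this example.
    INPUT_PER_MTOK = 3.00        # $ per million uncached input tokens (assumed)
    CACHED_READ_PER_MTOK = 0.30  # $ per million cached input tokens (assumed)

    def turn_cost(context_tokens: int, cached: bool) -> float:
        # Cost to resend the accumulated context on one conversation turn.
        rate = CACHED_READ_PER_MTOK if cached else INPUT_PER_MTOK
        return context_tokens / 1_000_000 * rate

    # 10 turns over a 50k-token context:
    context = 50_000
    warm = sum(turn_cost(context, cached=True) for _ in range(10))
    cold = sum(turn_cost(context, cached=False) for _ in range(10))  # cache busted every turn
    print(f"warm cache: ${warm:.2f}, busted cache: ${cold:.2f}")
    # → warm cache: $0.15, busted cache: $1.50
    ```

    Under these assumed rates, a manual edit or /compact that invalidates the cache makes every subsequent turn pay full input price on the whole context, which is where the 10x swing comes from.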

    • bugglebeetle 6 minutes ago
      If I have to spend this much time thinking about any of this, congratulations, you’ve designed a product with a terrible UI.
  • sbszllr 31 minutes ago
    The issue with many of these tips is that they require you to use Claude Code (or Codex CLI, doesn't matter) for way more time, feed it more info, and generate more outputs --> pay more money to the LLM provider.

    I find LLM-based tools helpful, and use them quite regularly, but not at $20+, let alone the $100+ per month that Claude Code would require to be used effectively.

    • dist-epoch 26 minutes ago
      > let alone 100+ per month that claude code would require

      I find this argument very bizarre. $100 pays for 1-2 hours of developer time. Doesn't it save at least that much time over a whole month?

      • owebmaster 19 minutes ago
        No, it doesn't. If you are still looking for product-market fit, it is just cost.

        Two years after GPT-4's release, we can safely say that LLMs don't make finding PMF that much easier, nor do they improve the general quality/UX of products, as we still see a general enshittification trend.

        If this spending was really game-changing, ChatGPT frontend/apps wouldn't be so bad after so long.

  • remoquete 1 hour ago
    What's the Gemini equivalent of Claude Code and OpenAI's Codex? I've found projects like reugn/gemini-cli, but Gemini Code Assist seems limited to VS Code?
  • bugglebeetle 24 minutes ago
    Claude Code works fairly well, but Anthropic has lost the plot on the state of market competition. OpenAI tried to buy Cursor and now Windsurf because they know they need to win market share; Gemini 2.5 Pro is better at coding than their Sonnet models, has a huge context window, and runs on their TPU stack, but somehow Anthropic is expecting people to pay $200 in API costs per functional PR to vibe code. Ok.
  • m00dy 40 minutes ago
    well, the best practice is to use gemini 2.5 pro instead :)
    • replwoacause 25 minutes ago
      Yep, I learned this the hard way after racking up big bills just using Sonnet 3.7 in my IDE. Gemini is just as good (and not nearly as willing to agree with every dumb thing I say) and it’s way cheaper.