OpenClaw Creator Spent $1.3M on OpenAI Tokens in 30 Days

(twitter.com)

64 points | by eamag 2 hours ago

27 comments

  • Tiberium 1 hour ago
    This is quite a misleading title, because this is the raw API cost, but he (obviously) has unlimited usage as an OpenAI employee. Moreover, if you use e.g. the $200 Codex sub, you get roughly $5k-$6k of API-equivalent usage per month if you exhaust your limits every week, if not more. That shows the raw API cost is not what this (likely) costs OpenAI, unless they're subsidizing all of it.

    He did clarify that it was with fast mode. Without fast mode it'd "only" be $300k in raw API cost, or ~60 $200 Codex subscriptions.
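
The subscription-equivalence arithmetic above can be sanity-checked with a quick script (all figures are the thread's own estimates, not official OpenAI pricing):

```python
# Sanity check of the numbers quoted in the thread (all inputs are
# commenters' estimates, not official OpenAI pricing).
raw_cost_no_fast = 300_000   # estimated raw API cost without fast mode, USD
sub_price = 200              # monthly Codex subscription price, USD
sub_api_equiv = 5_000        # low-end estimate of API-equivalent usage per sub, USD

# How many $200 subscriptions would cover the same usage?
subs_needed = raw_cost_no_fast / sub_api_equiv
print(subs_needed)                 # 60.0, matching the "~60 subscriptions" above

# Implied ratio between API-equivalent value and subscription price
print(sub_api_equiv / sub_price)   # 25.0
```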

    • MattDaEskimo 39 minutes ago
      How is it misleading if this would be the consumer's cost?

      Eventually Codex's subscription subsidization will diminish to near-zero, like the rest of the providers.

      It's extremely important that people understand how expensive these models currently are. Even $300k in raw API costs is alarming for the output.

      • pama 14 minutes ago
        Peter shows the near-term future. Raw API consumer pricing is arbitrary. (The frontier labs can put a 100x markup on it to cover other operational expenses.) The true cost of inference with same-capability models keeps dropping at dizzying rates, especially at data-center batch sizes (due to both Nvidia hardware and algorithmic changes). So the developments that Peter can achieve today with internal support from OpenAI will be doable by anyone in a few years without breaking the bank.
      • namenotrequired 26 minutes ago
        > How is it misleading if this would be the consumer's cost?

        Because it does not say "equivalent of"; it literally says he spent money that he did not spend.

    • Terretta 46 minutes ago
      Even at unlimited budget, there is a crossover where outsourcing thinking to the machine costs more than the humans it would replace.

      What I mean by this:

      1. Intern, analyst, junior, or offshore level coding is cheaper when done by the machine.

      // Side note: There is a good reason the industry invests in suboptimal output from this group, and that benefit moves to the "cost" column when using an LLM, but nobody's accounting for it.

      2. For the interns, analysts, juniors, or offshore teams to do the right thing costs a multiple of the coding effort: the PdM/PjM work of course, but also the Stakeholder, Product Owner, Architect, Principal Engineer, QA, and SRE work.

      3. If you are not a principal- or staff-level engineer, you are likely unqualified to catch and fix the errors LLMs make across engineering, much less across the rest of the PDLC (product development lifecycle, which includes SDLC and SRE) loop.

      4. For LLM output to be useful, your 'harness' has to incorporate all of that as well, which, because it's so much harder than transliterating spec to code, balloons token usage dramatically.

      5. Today it is faster, more efficient, and costs less, to work with LLMs "XP" (eXtreme Programming) style, pairing with the LLM actively co-creating and co-reviewing, steering for more effective turns.

      So, your options are:

      - ship garbage while costing less than a median first world SWE

      - pair with the LLM actively for the benefits of XP

      - add enough harness and steering that the LLM costs more than SWEs, and it still needs a human in the loop, "move fast and break things to find out what's broken" style

      I would expect that within a couple of years, these other disciplines can be baked in well enough that the machine costs less for everything but surprises.

    • otabdeveloper4 12 minutes ago
      > unless they're subsidizing all this

      They literally are. (If by "all this" you mean the subscription future bait-and-switch plans.)

    • rvz 49 minutes ago
      But even the $5k-$6k of monthly usage on a $200 Codex subscription, going over its limits, is unrealistic in the long term, and that is just ONE person.

      Let's say I was at a casino and spending a lot on casino chips, but I also happened to work at the casino. I'm not really losing money whether I win or lose, since I'm using the house's money and there's little risk involved in every dice roll or press of the button. The risk is far higher if I don't have that level of access and continue to spend the same amount of money on lots of tokens (or casino chips, spins, or button presses).

      The same is true here with these agents. Some companies will realize that they can no longer afford to spend millions a month on tokens, and even startups will balk at spending $5k-$6k per person per month.

      I can only see efficient local models making sense as a way to recover from this unnecessary spending (or light gambling) on tokens.

  • ExTv 2 minutes ago
    thank god im broke lol

    i built my personal app mostly with ollama and it’s been smooth sailing so far. basically openclaw + hermes-style agents running on android phones, and the stuff it can do is kinda insane

  • zxornand 1 hour ago
    And was he 5x more productive in those 30 days than a year's worth of a dev making $200k/yr?

    Doubtful lol, dude's killing the environment just for fun at this point.

    • wiseowise 52 minutes ago
      > And was he 5x more productive in those 30 days than a year's worth of a dev making $200k/yr?

      He was, when it comes to marketing. This is what most people don't understand. Peter is a great marketing guy who got hired because of a hype vision, not because he is an outstanding engineer. Think of it like OpenAI hiring the MrBeast of the coding world.

      • loandbehold 12 minutes ago
        What makes an "outstanding engineer" is subjective. He has a long history of creating popular products. Is the Homebrew creator an "outstanding engineer"? Google employees didn't think so because he couldn't solve a leetcode problem.
      • Iolaum 9 minutes ago
        I'd say he is an outstanding engineer as well. He may favor output over security more than outstanding engineers did in 2025, but in the 2026 world what he does is impressive. And with OpenAI's resources he has turned OpenClaw's security woes around; the latest versions are much more secure than two months ago.
        • sdevonoes 5 minutes ago
          It’s not impressive. He’s a celebrity. Celebrities are not impressive
    • vessenes 1 hour ago
      If you review the openclaw release schedule and code output you will see that yes, he was. I’m not saying you’ll like what you see, but the openclaw release schedule is well faster than human ability to assess it.
      • Philip-J-Fry 1 hour ago
        With a lot of these AI tools, yeah, they release very often. But half the features they add aren't even that useful. They just add shit because they can, and they introduce bugs and change behaviour all the time.

        Opencode has the same problems. They often do multiple releases a day, yet within the span of a week or two I have had to update my config because some random change altered the behaviour and broke my permissions. Or I've noticed the way the app renders is suddenly different.

        Yet, my day to day usage has barely changed since the version I installed last year. It's like everything changes but nothing changes.

        • freedomben 12 minutes ago
          Even Claude Code has this happen, though perhaps to a lesser extent. I'm getting really tired of new bugs popping up on me, or subtle behavior changes near daily that require me to change things. The most annoying thing, just introduced, is a giant spew of context-mode crap that Claude aggressively adds to every CLAUDE.md file, and I can't find a way to turn it off. I just have to `git checkout CLAUDE.md` repeatedly right now. If I have to add a bash alias to work around your annoying bug, that's pretty bad.
      • risyachka 54 minutes ago
        That's the single reason it is faster: just pushing whatever to prod.

        All projects can become fast if they drop guardrails.

        This does not correlate with a productivity increase.

      • rowanG077 1 hour ago
        It's fast for sure. But not 5 years of dev time compressed into 30 days fast.
      • SecretDreams 1 hour ago
        That's a metric for management to pump AI if I've ever seen one.
      • minraws 52 minutes ago
        I am not joking when I say this: if you pay me 1.3 million dollars today, I will get so much more done in 30 days with a single $200 Codex sub than he has in 30 days, I can promise you that.

        I just checked the code and feature output, and I could build all of that in 15 days for 1.3M USD. Fuck, I would do it for 1M...

        Scratch that: if it's 300K, then sure, I could do the same, if you paid me that for 30 days of work. Lmao, the quality and the feature volume are just not worth anywhere near that much money.

        I am not saying this because I dislike LLMs or think AI coding can't work, but folks, whatever OpenClaw has built for that much money is not worth nearly that much.

        • stephbook 27 minutes ago
          I don't understand. Are you saying you're capable of building a rival to Openclaw in a few days, but you're just choosing not to? That's amazing.
          • hubertdinsk 17 minutes ago
            Everyone can build toys. Most people just have enough shame not to publish them.

            The hard part is not building such toys, it's convincing people with money to buy said toy. That is where he earned his applause.

          • therouwboat 14 minutes ago
            I assume there is already a bunch of OpenClaw rivals, so why bother? It's not like they'll all become super popular and get bought by OpenAI.
      • realusername 58 minutes ago
        > the openclaw release schedule is well faster than human ability to assess it.

        That doesn't sound very positive to me...

  • Robdel12 57 minutes ago
    Once you see how much crap they’re running to police the agents on the repo, you’ll ‘get’ the spend https://x.com/steipete/status/2055405041843052792

    I won’t lie, if I had the access to this, I’d do the same exact thing.

    • danpalmer 41 minutes ago
      "All that automation allows us to run extremely lean"

      He has a different opinion of what it means to be lean than almost everyone else. That's fine, he's allowed to, but it's something you have to understand to make sense of any of his comments on things. He has a radically different set of values to most people.

    • Philip-J-Fry 39 minutes ago
      But it's a self-fulfilling prophecy. They need all this stuff because it's a vibe-coded app where bugs are randomly introduced, the architecture is overcomplicated and sucks, and stuff is just added for the fun of it.

      Do existing companies run entire end-to-end product integration tests on every single change they make to a repo to make sure something hasn't broken? No, they just architect things in a way such that a minor change to something can be tested in isolation. And that can be automated, deterministically and efficiently.

      Where I work we can release changes to our production site in minutes almost completely autonomously with high confidence with absolutely zero AI agents in the loop. How did we do it? With lessons learned from the past 5 decades of professional software development experience.

      Let's not forget what OpenClaw is at its core: a glorified cron scheduler. Why on earth does any of this effort need to exist? It's not that deep, it's not that complex; it's all AI for AI's sake.

    • tedggh 50 minutes ago
      Same mindset as Marc Andreessen when working on Mosaic: Design for infinite (Internet) bandwidth.
  • vslira 1 hour ago
    Regardless of one’s opinion about AI, from a product perspective this seems somewhat similar to a dev using his 48 GB RAM machine and latest iPhone to test an app that will be used by consumers on entry-level devices.
  • thomasahle 1 hour ago
    He used 600B tokens in 30 days.

    I use more than 150B/month with just 15 codex accounts.

    60 accounts is "just" $12,000/month. So Peter could "save" roughly 100x by using monthly subscriptions.

    Of course, he doesn't have to, as he works at OpenAI now.
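
As a rough check of the arithmetic in this comment (all inputs are the commenter's own estimates plus the headline figure):

```python
# Figures from the comment: 600B tokens in 30 days; 15 accounts cover
# ~150B tokens/month, i.e. ~10B tokens per $200 account per month.
tokens_used = 600e9
tokens_per_account = 150e9 / 15     # ~10B tokens/month per account

accounts_needed = tokens_used / tokens_per_account   # 60 accounts
monthly_cost = accounts_needed * 200                 # $12,000/month

# Versus the $1.3M raw API figure from the headline
savings_factor = 1_300_000 / monthly_cost
print(accounts_needed, monthly_cost, round(savings_factor))  # 60.0 12000.0 108
```

The ~108x result is consistent with the comment's "save 100x" claim.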

    • MadxX79 1 hour ago
      Sounds like a healthy industry, selling tokens at 1000x below cost.
      • impulser_ 13 minutes ago
        I would bet money Anthropic and OpenAI are actually profitable on inference. The problem is they have to spend large sums of money to train models that are essentially worthless after a few months.
      • wolttam 36 minutes ago
        API pricing isn’t cost, we don’t know what cost is.
      • SecretDreams 1 hour ago
        It's to build a moat, of course!

        Narrator: there was no moat

    • peteforde 40 minutes ago
      What I truly don't understand, as a daily heavy Opus 4.7 user, is how you can coherently prompt 15 different parallel conversations at the same time.

      For me it's not even a "what the hell are you working on" so much as a complete inability to understand how you can keep so many different processes working on distinct tasks. It simply doesn't map onto how I use these tools.

      I spend most of my day writing extremely detailed prompts and that's how I'm able to get the sort of excellent results that confound skeptics. But I have to be honest with you: I don't think I can write (or think) fast enough to do two of these at a time, much less 15.

      I definitely could not review what they are generating with any degree of confidence.

      I'm really hoping you can explain what the heck your usage pattern actually looks like, because reading this makes me feel like I'm missing something.

      • thomasahle 30 minutes ago
        I'm trying to recreate the whole commercial EDA stack in open source (RTL simulators, synthesis, formal proof tools, etc.).

        Building compilers has a _lot_ of parallel tasks agents can work on.

        Wish me luck..

    • ianm218 1 hour ago
      What do you do with all those accounts?
  • tom1337890 1 hour ago
    After trying OpenClaw a bit myself, no wonder. Without the best models, capabilities drop significantly. And I guess he has a lot of automations and such, which explains the $19,000 daily spend. I hit my personal spend limit when it cost around $40 to get Google auth tokens working, which is very complicated when you run OpenClaw on a VPS. And it even broke about a week later. Maybe one could justify the $40 if it saved me time, but I was babysitting OpenClaw through it anyhow. So I actually spent double: money plus time.

    Btw, same frustration for me setting up Signal, WhatsApp, or Slack...

    • vessenes 1 hour ago
      It’s a moving target for sure. I’m excited for the LTS release series - keeping up with twice or three times weekly releases is not for humans :)
  • mtct88 1 hour ago
    It's a very peculiar way to flex.
    • Avicebron 1 hour ago
      It's like the nerd equivalent of rolling coal?
    • Ekaros 36 minutes ago
      A lot of online presence seems to be tied to consumerism: consuming anything, and the more ostentatious, the better. This is just a specific digital version of that.
    • discordance 1 hour ago
      I work at a bigtech and we’re being measured on how many tokens we consume.

      We know it’s totally stupid, but unfortunately tokenmaxxing is real. I know our management line isn’t that dumb, but this is what you get when the business is selling it.

  • yodakohl 52 minutes ago
    You can look at the output here: https://github.com/steipete
    Sample commit from 5 minutes ago: https://github.com/openclaw/crabbox/pull/113
    May 2026: 8,826 commits in 94 repositories
  • Terretta 1 hour ago
    The mentioned menu bar app is a MITM (man in the middle) and rightly discloses that it gets all your session creds and uses them, along with keychain and full disk access:

    Privacy: Reuses existing provider sessions — OAuth, device flow, API keys, browser cookies, local files — so no passwords are stored.

    macOS permissions: Full Disk Access for Safari cookies, Keychain access for cookie decryption and OAuth flows...

    It's excellent this is disclosed as a reminder of how things work and the tradeoffs you're making to use it.

  • athrow 1 hour ago
    What does he have to show for it?
  • wolttam 54 minutes ago
    Nobody here is talking about what this represents for demand on these models, if these numbers aren’t made up.

    One person using 600B tokens in a month. The most I’ve hit is around 500M tokens and I thought that was a huge amount.

    We’re going to have some major compute shortages for a while

    • onion2k 44 minutes ago
      Jensen Huang was saying humanity is going to need 1000x the current energy production in the future. He might not be wrong.
    • voidfunc 48 minutes ago
      500M tokens is easy... I'm burning about 2B a week.
      • danpalmer 39 minutes ago
        Anyone can burn tokens. Using them for something useful is the hard part.
        • voidfunc 37 minutes ago
          I'm pretty confident it's useful :p
  • hansmayer 1 hour ago
    What product or feature did he build with it and how much ARR did it generate for OpenAI?
  • faangguyindia 1 hour ago
    how many of those tokens were spent to buy fake stars using fake email signups?
  • 0gs 59 minutes ago
    you have to admit: he is not as difficult to project paratechnical admiration onto as sama is. maybe the board wants him to be the next ceo
  • lofaszvanitt 26 minutes ago
    The OpenClown.
  • Nzen 1 hour ago
    tl;dr Peter Steinberger shared a product demo for CodexBar [0] with a graph of OpenAI token usage. The graph shows one million dollars spent, a preference for gpt-5.5, and twenty thousand spent today.

    [0] https://github.com/steipete/CodexBar

    However, I do not see a strong reason to believe that this is his actual, personal usage. It could be all OpenClaw usage or some subset of OpenAI usage, given that he works there. I suspect it is far more likely to be fake data [1] that exercises the graph library in a visually satisfying way. Notice that it shows no usage for a 'week' after April 15 (a Wednesday), but picks up a bunch later. As marketing copy it needn't have any basis in reality [2]. I should hope OpenAI would put a procedure in front of their entrepreneur acquisitions that prevents accidentally exposing trade secrets [3].

    [1] https://github.com/faker-js/faker

    [2] https://www.reddit.com/r/proceduralgeneration/comments/lf2n4...

    [3] https://tvtropes.org/pmwiki/pmwiki.php/Main/PostingWhatYouSh...

    • christoph 39 minutes ago
      I view this type of post (his, not yours) as meta-deception. I only became aware of this type of deception and its power from a bit of reading into magicians and stagecraft over the last few months. There's a video on YouTube as well that does a great job of breaking down a Derren Brown stunt that uses it to great effect, manipulating the TV viewing audience.

      I'd actually seen the original DB episode years before, when it first aired, and it definitely had an effect on me through this form of manipulation: it altered my internal understanding of marketing/advertising, which was the actual underlying purpose of the episode.

      It's altered how I internally accept and process information from any second- or third-hand source. BTW, people aren't necessarily always aware they're doing it. We all suffer from our own internal biases and deceptions, and sometimes we spread them unknowingly!

  • boesboes 1 hour ago
    He should be brought to the hague XD
  • comboy 1 hour ago
    worth mentioning that openai hired him some time ago
  • Philip-J-Fry 1 hour ago
    So he's spent $20k in one day. There's not a chance in hell he's actually doing productive work with all these tokens.

    Grifters gonna grift. What a state of affairs.

    • Ekaros 34 minutes ago
      At this point token spend is in itself the product.

      Hopefully eventually we will go back to evaluating the output. Not that I am very hopeful that we learn to do it in sensible way.

    • malshe 1 hour ago
      Come on, he is very productive on twitter /s
  • malshe 1 hour ago
    AI bros love hyping about their insanely inefficient token usage. It's become some sort of a dick-measuring contest. And if you work for OpenAI, of course you can claim insane measurements.

    Just last week I saw a dude boasting about how they used their $20/month ChatGPT subscription to earn $15 (or similar trivial amount) in a bug bounty by running the model the whole day. Sam Altman replied to that tweet but not entirely positively.

    OpenAI has been removing limits on token usage to take on Anthropic, but I'm sure most of the users they are acquiring are these AI bros who are burning tokens for the sake of it. Massive price hikes are coming after the OpenAI and Anthropic IPOs, probably an order of magnitude larger than what happened with ride sharing.

  • wiseowise 1 hour ago
    What a clown. And Twitter bozos will cheer and clap. As far as money spent, this is still much better than rounding up and/or bombing brown people, but shows insanity of the current market. The saddest part is that bootlickers/temporarily embarrassed AI millionaires will defend this.

    And of course I'm just yet another envious hater from "the orange website". Your conscience is clear, AI bros. /s

    • vessenes 1 hour ago
      OpenClaw is the fastest-growing open source project ever. This isn’t clowning.
      • orphea 1 hour ago
        Yep, and surely it has nothing to do with buying GitHub stars. Very organic growth.
      • wiseowise 1 hour ago
        > OpenClaw is the fastest growth open source project ever.

        By which metrics?

        > This isn’t clowning.

        Why?

      • backscratches 51 minutes ago
        Lol if your only metric is "I say so"
      • boxed 1 hour ago
        Both things can be true. The Chinese Communist Party was one of the biggest social movements ever. Millions died.
        • phpnode 1 hour ago
          Goodness me that’s quite a comparison
          • vrganj 14 minutes ago
            Agreed. The Chinese Communist Party lifted a billion people out of poverty. What good has OpenClaw done?