I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.
Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Especially given the different contexts of the two products: Claude Code is Anthropic's gem, experiencing extreme growth, and any change to it can result in billing issues.
Bun is a JS runtime and, regardless of its growth, can focus on being the best runtime possible: it doesn't impact billing or Anthropic's bottom line, so they don't have to rush out patches due to abuse, unlike CC.
It's unclear how this will pan out over the next few years, and it's still very early in the acquisition to see if anything will change, but I'm not concerned just yet.
It's interesting how quickly people buy the "abuse" line of thinking. We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is the pay per use token pricing.
These labs play the game of trying to kill competition in the harness space (because third-party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.
At some point they have to price their product fairly, and the only hope they have is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and it has hit a threshold at which we will see continued development of third-party harnesses even without the userbase of subscription users.
Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.
> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.
only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.
and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.
One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?
A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).
A month with a superintelligence at your hands could be quite impactful, especially if you're willing to break the law / normal operating decorum in pursuit of protecting what you have. A superintelligence, if wielded so, could destroy your competitors in a great many ways, ranging from the relatively benign solution of outcompeting them to exploiting them and tearing them apart from the inside.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
I don't think this race-to-superintelligence idea should be taken too seriously. It is great for headlines and gets people's imaginations going. It is mostly a marketing gag.
I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one can have. And in this field, more and more people are giving up large parts of their job and becoming approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?
> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.
I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code's also have rolling usage limits, so they could be profitable. Inference is very cheap, and unless you're using OpenClaw, no one is actually maxing out the usage window at all times. I'm sure in aggregate the subs are not money furnaces.
> Before the acquisition, Bun had to figure out how to monetize at some point.
I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.
Are there any situations you would compare this to historically?
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
It's a shame that VCs have corrupted a $200MM/year business into being perceived as a failure. Who cares if the VCs didn't get a large return, or if the outsized impact of the software didn't quite fully capture the value created? $200MM/yr without aggressive R&D or operational costs could be an incredibly healthy business.
Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.
I partially agree with you, but I also think that it's good that people can make something they want, that seems to have no monetization path, and have some hope of being bailed out.
It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.
> I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point".
Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.
I know people say it is unprofitable, but I wonder if there is a way to verify whether it truly is.
I won't give details, but I worked for a giant company that was barely making money YoY, yet somehow the bonuses for its heads kept getting bigger, based on a proxy metric related to profit.
There are way too many ways for companies to arrange to pay themselves and never show a profit, to avoid taxes.
"Profitable" is the wrong metric, really, it's whether it is sustainable - can development continue indefinitely given the current financial situation?
I'm thinking about your comment...
It set many wheels spinning...
Tl;dr: I think they don't care about what will happen to the company in the medium or long term.
---
Are any of those companies looking for stability or sustainability?
I have the impression they are completely aware of the diminishing-returns effect, and they will exploit the moment to the fullest of their capabilities, promising ever more absurd things even as the results get smaller.
I do agree there is a considerable improvement compared to a year ago, but definitely not as ground-shaking as the jump from the year before to the last.
Many of the promises turn out to be empty, or at least come with a huge number of asterisks.
I think there are flags everywhere.
From minor things, such as everyone using different benchmarks or plotting performance differences with weird choices of axes and ordering.
To other mild things, such as promoting that the "system" created a compiler from scratch when that compiler can't even build and run a hello world, and claiming its output binaries ran 300x faster than the counterparts.
(I am aware there was a misuse of the agentic benchmark to build a compiler, but there was an active choice in how to tell the story. Given other moves, I am not quite sure I believe it was an accident.)
There are other red flags such as people rolling back to previous versions of models because they can't get the new one to work properly.
Other situations, such as the claims that they have such a "dangerous" model, which seem to be more of a benchmark trick than a real result, with <100B models able to replicate the benchmark numbers just by changing the methodology.
I don't think we are yet at the turning point where everything will collapse, but my feeling is that we are heading in that direction unless something comes along that makes these models much more intelligent AND efficient.
It makes sense not to hire a person when you can have a machine do the same job for the same price. But AI prices are rising faster than the returns, so the margins for it to be a sensible choice are getting smaller.
That all said, I say again that I think they are completely aware of this effect. Not because they understand the technology, but because this happens more often than not.
Because of this, I don't think they care about being sustainable. All of them smell that they will take the money and leave the ship to sink.
You might be underestimating the effect that corporate policies and culture have on the product.
Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.
No human testing of software or QA; unit + integration tests + giving AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.
If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.
"Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer. Because AI in a good codebase actually does really, really well."
Once bad code starts to compound on itself, it's going to be really hard to break out of it.
I don't disagree with the notion, but what is up with the dev community championing influencers who work no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?
I'm not the biggest fan of the influencer community, but I think that it mostly boils down to many learners preferring video content over written material. I've gotten used to reading documentation now, but I remember it being extremely intimidating when I was first learning. It was nice to have someone break stuff down into simple terms for me.
To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.
No, look into his actual work history (sorry, being a paid marketer isn't working as a dev). He was only a dev consultant for like two years before pivoting into full-time influencer. Trust me, I know more about these types than any normal human should.
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. Sounds obvious, but I guess it has to be pointed out. Outcomes will follow incentives in the long run.
Bun is not a "product" at Anthropic though, it's a tool for its developers to build products. IMO as long as it remains that way, the incentives for its developers will remain fairly aligned with the incentives of people who use it outside the company.
A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.
Sure, since Facebook's server is written in Hack, it means we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, TanStack Start, etc). But Facebook building React also means it will always be someone's job to make sure the framework works well in codebases with millions of modules.
This is all independent of any shitty practices with their other software. And this has been true for decades at this point.
One way to phrase it favorably for Anthropic: they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as a purely OSS project.
It remains to be seen how things will actually unfold.
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?
Nope. The need to monetize, and the fact that an acqui-hire costs money, is exactly why relying on a specific runtime should give people concern.
I agree with OP, and understand why to some it feels premature.
We live in a vastly different world than before, where people are more conscious of ethical concerns and willing to stand their ground to avoid repeating past mistakes.
It might be premature from a tech standpoint, but it makes sense from an ethical one. I don't think misconduct is as easily walked back as it used to be, and preemptive measures are needed to avoid the large impact those decisions make.
> where people are more conscious of ethical concerns and willing to stand on their ground to avoid repeating past mistakes
Would be interested to hear what makes you say that. I don't see anyone being conscious of ethical concerns more than they were before. I can see slightly more BDS people, for example, but outside of that not much.
Given the complaints about Firefox and Safari not adopting Chrome OS Platform APIs, and shipping Chrome all over the place, I am not sure about people standing on the ground and ethical considerations.
The author closes by enumerating some of the things they like about Bun which are not included in pnpm. The list is basically: native TS support, a vite-style bundler and a vitest/jest style test runner.
Other than a bundler, Node already has all of these. Different test runner syntax maybe but otherwise TS "just works" out of the box and their built in test runner is totally capable. Not sure I see the need for such a lament over Bun.
To be fair, Node didn't have any of these things until Deno & Bun challenged it. Deno didn't seem to move the needle by itself very much for whatever reason, but Bun's existence has had a tangible effect on the Node Technical Steering Committee. I would even argue that much of the current impetus has been driven by Jarred Sumner's savvy social media marketing. It got people talking, and Node is better because of it.
Additionally, Bun's push for covering as much of the Node API as possible has pushed Deno towards the same level of compatibility, and now most code is basically runtime agnostic. I'm not sure if I'll ever actually use Bun in production, but I'm glad it exists because the JavaScript ecosystem has been much improved simply due to its existence.
TypeScript is a wide umbrella. For instance, Experimental Decorators are shunned by many (including me), but they are still used by millions. If I don't use any syntax that requires transpilation, am I not still using TypeScript?
Now that we have `satisfies` and `as const`, there's really no reason to ever use an enum. In my opinion, TypeScript is best when it is simply used as Language Server, and it should never have had runtime implications in the first place.
Enums and decorators, mainly. There are also subtleties, such as needing the .ts file extension in imports. Also, imports aren't transpiled in CJS, so you need to use ES modules.
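As a sketch of the erasable alternative to enums mentioned above (assuming TypeScript 4.9+ for `satisfies`), a plain object with `as const` gives similar ergonomics with zero runtime emit:

```typescript
// A plain object replacing an enum: erased to ordinary JS, no transpiler output.
const LogLevel = {
  Debug: 0,
  Info: 1,
  Error: 2,
} as const satisfies Record<string, number>;

// Union of the values: 0 | 1 | 2
type LogLevel = (typeof LogLevel)[keyof typeof LogLevel];

function log(level: LogLevel, msg: string): string | undefined {
  // Only log at Info or above, mirroring typical enum-comparison code.
  return level >= LogLevel.Info ? msg : undefined;
}
```

Unlike `enum`, every line here is erasable type syntax, so it survives plain type stripping unchanged.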
This is cool! But AFAIK bun promises to be a one-stop-shop for all your JS/TS dev needs, while Perry is "just" a compiler from Typescript to native executables.
I would too ... but not as the winning competitor.
For the first year or two of its existence, Bun tried to do npm, but better. For the first year or two of its existence, Deno tried to reinvent npm.
The key result is that after that first year or two, Deno had to walk back its decisions to create a Node-ecosystem-compatible tool... and as a result, they're now significantly behind Bun (at least by all metrics I've seen).
I know, early deno was rough and frustrating. But it is now _the_ main competitor to Bun. What makes you say it is behind? Are you talking about features or usage?
This isn't anything new and I feel the same way about Deno. We can argue about exactly how much trouble any runtime is in today vs yesterday vs tomorrow but VC funding of a javascript runtime feels inherently unstable to me.
The key question is how much unique tooling you're relying on. If you can switch to Node tomorrow, great. If you can't, make sure you have a contingency plan.
I'm on pay-per-use plans, so it's not the limits that are the issue directly, although the product development process could lead to them trying to fix a limit issue and breaking the product as a whole.
The main issue seems to be side effects of effort/thinking. It hallucinates at a much higher rate and skips research in a ton of edge cases, even with effort set to MAX and adaptive thinking disabled, even on 4.6. I've said it before, but using Opus today feels like using Sonnet from the ~October timeframe. It's not anywhere near what Opus 4.5 in January felt like, or even Opus 4.6 on release (notably, 4.6 on release _really_ over-researched even simple tasks, and that behavior is almost entirely gone now even with max effort, so they are definitely re-tuning these things on the fly and degrading the experience as a result).
EDIT: I also have a very strong suspicion that the way they hydrate thinking is buggy and/or lossy (or maybe unintentionally lossy, which leads to bugs). So many behaviors just make no sense at the level I have my setup tuned to (I have everything set to "just charge me the most money to hopefully get the best results"), and I haven't changed anything while using it daily for months and months on end, yet I've been getting worse and worse results.
Yeah I have found worse results if I don't leave it on the highest setting. I have gotten by with Pro and a little overage buffer so far. I have found it working pretty well for what I'm using it for but I have really only been using it a couple months now.
Claude has definitely gotten stupider (even on the latest Opus).
I used to be able to give it certain commands, and reliably count on it to do the right thing. Lately I give it identical commands and it just starts doing something idiotic, instead of the correct thing (that it did 50 times prior).
To an earlier poster's point, it's probably the model, not the harness, and I understand Anthropic has to make money someday (and they're not now) ... but I'd rather see a visible doubling of price than a secret halving of the capabilities (which seems to be their current plan).
But are you saying the harness is driving you insane? Or the model? Because Bun is only the harness, and that part has been improving over time if you stay on the stable channel.
It's the product overall, and it's impossible to say where the issues are, but I tend to think it's not the model, since the changes seem to occur overnight. So likely a combo of harness and service layer.
> But from the outside, Claude Code looks like a tool moving in the wrong direction. More restrictions, billing weirdness, surprise behavior based on text in commits. That is textbook enshittification.
I've never used Claude Code, but this person doesn't understand what "textbook enshittification" means. "Enshittification" is a feature of certain kinds of business models, progressing through the following stages:
1. Giving away a product free to users, subsidized by venture capital, to gain a monopoly
2. Switching to advertising, then abusing users on behalf of the real customers, advertisers
3. Using monopoly power to abuse real customers (advertisers) to extract as much money as possible
Anthropic's business model doesn't have a "user / customer" dichotomy; their paid users are their customers. And they don't have a monopoly they can use to extract money yet.
ETA: In other words, "Enshittification" isn't just random; you're making the user experience worse in order to make advertiser experience better; and then making advertiser experience worse in order to extract maximum profit. The only complaint that could vaguely be related to profit is the OpenClaw stuff, and that's entirely due to trying to keep the "all-you-can-eat" model for non-OpenClaw users, rather than having to switch everything to metered.
Does Bun have a formal roadmap? I occasionally see some of the changes that Jarred posts on X, and I wonder if they're really meaningful or not (perf improvements are always good). It also seems like a lot of the recent contributions are AI-authored.
I tried using Bun for a project earlier this year and learned that you can't use testcontainers (it works fine w/ Deno).
I don't think so, but a recent release includes a built-in terminal markdown renderer, which suggests that, even if handy, most of the focus is on making Claude Code great. I am not worried though, at least not yet.
OpenAI and Anthropic are both destined for doom, for sure. There's no way around it; it's all in the math. Bun would be a casualty. It is only a matter of time.
The only company that will survive the AI race is the one where the current wave was actually invented, along with the research paper, the libraries, and even specialised hardware: Google.
Google has a serious problem with its product management culture (a long list of discontinued products and projects; people are even skeptical of Flutter), otherwise they would have surpassed Anthropic long ago.
Bun is basically a wrapper over JSCore. I don't think it's that big of a feat. Furthermore, they are heavily invested in vendor-specific APIs, which I think is not good.
I wonder why Anthropic chose to spend money on Bun when they could have easily spent those resources on Go, which is fairly easy to use and fast. I'm sure their SWEs could easily have written everything in Go. Anyone have insight on why?
If I had to guess, it comes down to speed of iteration. Claude Code is built on JavaScript, so Bun aligns well with their current stack.
Switching to Go or Rust would only make sense if performance were the main priority, which doesn’t seem to be the case. Their current setup lets them ship quickly. A rewrite in Go would likely slow that down.
Codex moved to Rust, and you can see the trade-off. Performance improved, but release velocity dropped. They’re also still catching up to Claude Code, so they don’t face the same pressure to ship as fast.
My guess: JavaScript runs in the browser as well as on the OS. That way you can train a model to interact with both fairly simply. You can also see that their harness, claude-code, is written in JS. So I guess they are quite invested in that language anyway.
Yeah, it's the same pattern you saw in the early react days where open source devs would try to "woo" the react core team into getting recognition to sell consulting services or courses.
The Bun people likely have some fucked-up incestuous business relationship with some dev manager at Anthropic, and the same pattern is repeating. Only this cycle it's going straight to acquisitions, which honestly seems like a worse strategy, and Anthropic will def can the Bun engineers in less than 3 years, or whenever they face an actual budget crunch that they can't stave off with more Gulf money.
I’m wondering why Anthropic, who has “the most powerful, hold me bro, AI in the world,” didn’t just vibe code their own, better version of Bun? Didn’t Dario say, like 12 months ago, that coding would be cooked in 6 months?
I made this exact same decision (Bun -> pnpm) for similar reasons, mostly because I didn’t like how haphazardly a core part of the stack was being vibe coded. Too many changes too quickly for something that’s supposed to be stable.
Why did you have to stop using Cursor? I ask this as someone who uses Cursor, but recently at a conference I heard it referred to negatively several times, though in a very vague sense. I don't really have a dog in the fight; I'm using it because that's what the other dev I work with is using.
There is the SpaceX acquisition rumor, but that's not why.
I only use Cursor through the CLI, and while the UX of the CLI is pretty bad, I've found their harness (the prompts they use and orchestration of LLMs) to be nothing short of incredible. I can't comment on their agent development environment given I haven't spent a lot of time with it.
The reason I'm moving away from Cursor is cost. Unfortunately, if you want to use the SOTA models from both OpenAI and Anthropic you basically have to go direct through their subsidized plans.
I agree with your assessment that the harness is incredible and so I get a ton of mileage out of Auto + Composer 2. This is my workhorse.
Admittedly, I just haven't used Opus 4.6+ or GPT 5.5 much, and as I gain more experience I can see what the hype is all about. But to me, the answer isn't the $200 Max plan, it's bifurcating the work. Call me a spendthrift!
I personally switched back to vscode as I started using Claude and Opencode more for the AI flow, and I didn't see much added value any longer. Also, I was incredibly frustrated that they decided to hide the close button and finally, there were weird issues with editor groups spawning at unwanted times. They might be able to fix it, but I felt that they were starting to reach the limits of what you can do with a "live fork".
I still see no monetization path for Bun and Deno to keep them going.
You see this all over the place with other programming languages.
The ones that have bleeding-edge features do so because there are companies, or universities (for their PhD and MSc theses), that invest in those ecosystems.
In the end, Node.js will keep improving, with Microsoft's and Google's backing, and that will be it.
> Will we see issues start popping up in Bun that make it seem like the team doesn't even dogfood their own product? I don't know, but I'm not sure I want to continue using it just in case.
I sympathize with the general premise. The reaction to move away seems premature though.
It sounds like `bun` is still performing just as well as before, and this sentiment isn't based on concrete changes. I also wouldn't expect infrastructure like `bun` to evolve in the way a consumer-facing product, especially one scaling as quickly as Claude Code, can.
pnpm is even worse. There is no way to bootstrap it without binary blobs, making it an easy target: a supply chain attack waiting to happen that could hide in plain sight indefinitely.
I see the word “enshittify” being thrown around casually about Claude Code. We’re far from that part of the Enshittification cycle still. This is just a mismanaged product and the result of an extremely competitive market that moves too fast.
Never attribute to malice that which can be adequately explained by incompetence, etc.
Yeah, I'm none too happy with anthropic right now, but what's happening to Claude code is just your typical garden variety mismanagement of a project that grew way too fast for its owners to reasonably handle.
One thing is sure: Claude has become terrible. Criticize any code Opus 4.7 created and it starts a blame game. Also, it denies that a version 4.7 even exists. I will look into moving back to ChatGPT, which I quit because of the mandatory spyware BS they added, which I believe they've since nuked.
I still don't think Bun is production ready.
We just ripped Bun out of a bunch of our production services. CPU runaway and memory leaks. All solved by switching back to Node.js.
I’m confident that any unhappiness with Claude Code is at least 95% downstream of Anthropic seeing demand scale their revenue by ~3X in 6 months from a multi-billion-dollar annual base.
Their product focus, roadmap, or execution is likely a rounding error in the face of that tsunami.
Frankly, it’s shocking they’re doing so well relative to, say, GitHub.
What is there to worry about? If we believe the AI crowd, Bun and the entire JS ecosystem are done for. Dead. Nothing to worry about, since nothing's left.
If, as claimed, everyone and his malnourished cellar rat can whip up a SaaS on a whim, then why should that SaaS be built upon chromium+js+http instead of tcp+native ui?
Remember, the choice of UI is no longer a constraint. Nothing is a constraint, or so they say.
So it follows that all this javascript stuff can at last die.
Let them cook. Anything that they can do to get rid of the absolute hell that is dependencies in the JS ecosystem is worthwhile. I really don't care what they add as long as it's maintained
I used to be a fan of Bun, but the way it keeps adding bloat makes me seriously doubt its future. Also, it seems like they are doing a lot of vibe coding without taking enough time, which raises other questions.
Node.js is also more stable, and it has started supporting TypeScript out of the box. I don’t think Bun will have many advantages after Node 26.
> and it has started supporting TypeScript out of the box
Node only does type stripping though. If you want proper TS support you still need a compiler.
> I don’t think Bun will have many advantages after Node 26
There are tons of advantages. For instance, Bun includes a lot of features that would need a third party dependency in Node: db driver, S3 client, watch mode, bundler, JSX support, etc.
Why would you want DB drivers and S3 clients in your runtime? That’s exactly what 3rd parties are for, you don’t want to have to update your runtime for a new version of your drivers
Mostly in my day-to-day routine, where I use Claude Code maybe 90% of the time, I don't see that it's become that bad. Yes, they've made some questionable decisions on API usage and OpenClaw, but I feel like this post is making it out to be worse than it is.
That being said I’ve been worried about the future of Bun anyway. Especially if the AI bubble pops. Then again, it’s open source.
The issues with Claude Code lately look to me like symptoms of being part of a service that is experiencing insane growth (fastest growth in history, by far [1]), while being severely constrained on adding capacity (GPUs are hard to get quickly right now, even if you have the money). I assume they're constantly fighting fires trying to keep the core use cases of Claude Code working, even if that means limiting OpenClaw usage in somewhat draconian ways.
It's annoying, but I don't see this as a bad thing at all for Bun.
No, all the issues are symptoms of trying to slop-code a functional product. Anthropic has admitted they dogfood heavily, and issues like [1] from the article could only be caused by a text generator. I refuse to believe Anthropic employees are that stupid.
Aube[0] seems interesting to me; I submitted it as a Show HN after hearing about your post. It's created by the same person who made mise, and I actually discovered it while browsing the mise.en.dev website.
I still use Bun, but I think there are some other pathways, so I'm not that worried about myself personally. But that's also because, more often than not, I code in Golang rather than TypeScript/JavaScript.
Personally, I suspect that Bun is a Silicon Valley attempt to lock some companies into its stack (similar to what cloud providers, Next.js + Vercel do). Especially now that Anthropic has become an owner, I'll be keeping Bun at a considerable distance.
The funniest part to me is that 10–15 years ago, companies were stuck in the development process due to binary (closed) dependencies. Now they're jumping into the same trap under a different name.
Maybe I’ve missed some scandals, but so far OpenJS Foundation is the best thing that has happened for the JavaScript ecosystem.
I thought the prime bet was that the winning lab, the one that reaches takeoff through recursive self-improvement, will make a galactic superintelligence. Not saying I believe this, but the people running the labs do. Under this scenario, if you are a few months behind at the pivotal time, you might as well not exist at all.
and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.
But that's all just sci-fi worldbuilding.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one can have. And in this field, more and more people give up large parts of their job and become approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?
I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits; it could be profitable. Inference is very cheap, and unless you're using OpenClaw, no one is actually maxing out the usage window at all times. I'm sure in aggregate the subs are not money furnaces.
I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.
The ones that were first to market went all bankrupt, or were acquired by others that came later into the scene.
It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.
Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.
There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.
TL;DR: I think they don't care about what will happen to the company in the medium or long term.
---
Are any of those companies looking for stability or sustainability?
I have the impression they are completely aware of the diminishing-returns effect, and they will exploit the moment to the fullest of their capabilities, promising even more absurd things even as the results get smaller.
I do agree there is considerable improvement compared to a year ago, but definitely not as ground-shaking as the jump from the year before to the last.
Many of the promises turn out to be empty, or at least come with a huge number of asterisks attached.
I think there are flags everywhere, from minor things such as everyone using different benchmarks, or plotting performance differences with weird choices of axes and ordering.
Other mild things, such as promoting that the "system" created a compiler from scratch when said compiler can't even compile and run a hello world, and claiming its output binaries run 300x faster than the counterparts.
(I am aware there was a misuse of the agentic benchmark to build a compiler, but there was an active choice in how to tell the story. Given other movements, I am not quite sure I believe it was an accident.)
There are other red flags such as people rolling back to previous versions of models because they can't get the new one to work properly.
Other situations, such as the claims that they have such a "dangerous" model, which apparently seems to be more of a benchmark trick than a real result, with <100B models being able to replicate the benchmark results just by changing the methodology.
I don't think we are yet at the turning point where everything will collapse, but my feeling is that we are heading in that direction unless something comes along that makes these models much more intelligent AND efficient.
It makes sense not to hire a person when you can have a machine do the same job for the same price. But AI prices are rising faster than the returns, so the margins that make it a sensible choice are getting smaller.
That all said, I say again that I think they are completely aware of this effect. Not because they understand the technology, but because this happens more often than not. Because of this, I don't think they care about being sustainable. All of them smell that they will take the money and leave the ship to sink.
Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.
No human testing of software or QA; unit + integration + give AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.
If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.
Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg
Once bad code starts to compound on itself, it's going to be really hard to break out of it.
To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.
99% of the times, that's not learning, but productivity porn.
I consider this a hard rule, like ad-blocking (this is exactly that: blocking ads, as each talk is an ad, or an ad in disguise).
Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. Sounds obvious, but I guess that has to be pointed out. Outcomes will follow incentives in the long run.
A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.
Sure, since Facebook's server is written in Hack it means we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, Tanstack Start, etc). But Facebook building react also means it will always be someone's job to make sure this Framework works well in codebases with millions of modules.
This is all independent of any shitty practices with their other software. And this has been true for decades at this point.
One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as purely OSS.
It remains to be seen how things will actually unfold.
Regardless of what else is going on, kernel is a separate team, and has very strong incentives to remain competent and sane.
For me it's far from a stretch, in fact it matches closely a pattern that I've seen repeated many times over at this point.
Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?
If you start seeing the people that created bun leaving Anthropic, then I'd probably start to worry. And I haven't seen any sign of that yet.
We live in a vastly different world than before, where people are more conscious of ethical concerns and willing to stand on their ground to avoid repeating past mistakes.
It might be premature from a tech standpoint, but it makes sense from an ethical one. I don't think misconduct is as easily walked back as it was before, and preemptive measures are needed to avoid the large impact those decisions make.
Would be interested to hear what makes you say that. I don't see anyone being conscious of ethical concerns more than they were before. I can see slightly more BDS people, for example, but outside of that not much.
Other than a bundler, Node already has all of these. Different test runner syntax maybe but otherwise TS "just works" out of the box and their built in test runner is totally capable. Not sure I see the need for such a lament over Bun.
Additionally, Bun's push for covering as much of the Node API as possible has pushed Deno towards the same level of compatibility, and now most code is basically runtime agnostic. I'm not sure if I'll ever actually use Bun in production, but I'm glad it exists because the JavaScript ecosystem has been much improved simply due to its existence.
https://nodejs.org/en/blog/release/v23.6.0
`node --experimental-transform-types example.ts`
As for whether this matches your definition of "native support" or not...
Source: https://nodejs.org/en/blog/release/v22.7.0
Now that we have `satisfies` and `as const`, there's really no reason to ever use an enum. In my opinion, TypeScript is best when it is simply used as Language Server, and it should never have had runtime implications in the first place.
Is there anything else that doesn't run as valid JS if you strip the types (and maybe some other extra keywords) out?
Genuine question; in my head there's not much, but TS has a few weird corners I maybe haven't used.
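For what it's worth, the usual list of non-erasable constructs is short: `enum`, `namespace`s that contain values, class parameter properties, and the legacy `import x = require(...)` form. Everything else strips cleanly. A sketch of the contrast (names invented for illustration):

```typescript
// NOT erasable: an enum compiles to a real runtime object,
// so deleting the type-level syntax leaves nothing behind.
enum Direction {
  Up,
  Down,
}

// Erasable replacement: a plain object plus a derived type.
// Remove `as const` and the type alias, and valid JS remains.
const DirectionValues = {
  Up: 0,
  Down: 1,
} as const;
type Direction2 = (typeof DirectionValues)[keyof typeof DirectionValues];

const heading: Direction2 = DirectionValues.Down;
console.log(Direction.Up, heading); // 0 1
```

This distinction is why Node's type-stripping mode rejects enums unless you opt into the transform variant, which does real compilation for these few constructs.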
I'm using it in my projects with no issues.
[1]: https://github.com/PerryTS/perry
For the first year or two of its existence, Bun tried to do npm, but better. For the first year or two of their existence, Deno tried to reinvent npm.
The key result is that after that first year or two, Deno had to walk back their decisions to create a Node-ecosystem-compatible tool... and as a result, they're now significantly behind Bun (at least by all metrics I've seen).
https://github.com/PerryTS/perry/issues/139
:vomit:
The key question is how much unique tooling you're relying on. If you can switch to Node tomorrow, great. If you can't, make sure you have a contingency plan.
The main issue is side effects of effort/thinking, it seems. It hallucinates at a much higher rate and skips research in a ton of edge cases, even with effort set to MAX and adaptive thinking disabled, even on 4.6. I've said before, but using Opus today feels like using Sonnet from the ~October timeframe. It's not anywhere near what Opus 4.5 in January felt like, or even Opus 4.6 on release (notably, 4.6 on release _really_ over-researched even simple tasks, and that behavior is almost entirely gone now even with max effort, so they are definitely re-tuning these things on the fly and degrading the experience as a result).
EDIT: I also have a very high suspicion that the way they hydrate thinking is buggy and/or lossy (or maybe unintentionally lossy, which leads to bugs). So many behaviors just make no sense at the level I have my setup tuned (I have everything set to "just charge me the most money to hopefully get the best results"), and the fact is that I haven't changed anything while using it daily for months and months on end, yet have been getting worse and worse results.
I used to be able to give it certain commands, and reliably count on it to do the right thing. Lately I give it identical commands and it just starts doing something idiotic, instead of the correct thing (that it did 50 times prior).
To an earlier poster's point, it's probably the model, not the harness, and I understand Anthropic has to make money someday (and they're not now) ... but I'd rather see a visible doubling of price than a secret halving of the capabilities (which seems to be their current plan).
That approach is enshittification.
I've never used Claude Code, but this person doesn't understand what "textbook enshittification" means. "Enshittification" is a feature of certain kinds of business models, progressing through the following stages:
1. Giving away a product free to users, subsidized by venture capital, to gain a monopoly
2. Switching to advertising, then abusing users on behalf of the real customers, advertisers
3. Using monopoly power to abuse real customers (advertisers) to extract as much money as possible
Anthropic's business model doesn't have a "user / customer" dichotomy; their paid users are their customers. And they don't have a monopoly they can use to extract money yet.
ETA: In other words, "Enshittification" isn't just random; you're making the user experience worse in order to make advertiser experience better; and then making advertiser experience worse in order to extract maximum profit. The only complaint that could vaguely be related to profit is the OpenClaw stuff, and that's entirely due to trying to keep the "all-you-can-eat" model for non-OpenClaw users, rather than having to switch everything to metered.
I tried using Bun for a project earlier this year and learned that you can't use testcontainers (works fine w/ Deno).
They are not a runtime, but they do seem to be interested in wrapping a lot of tools with simple top-level commands
Always appreciated nuance.
The only company that would survive the AI race is the one where the current wave was actually invented, along with the research paper, the libraries, and even the specialised hardware: Google.
Google has a serious problem with its product management culture (a long list of abandoned products and projects; people are even skeptical of Flutter), otherwise they would have surpassed Anthropic long ago.
bun file.ts
And it’s been this way for years.
Don’t care about what’s in a package.json file or if there is one. Can do this without tsconfig file as well.
Switching to Go or Rust would only make sense if performance were the main priority, which doesn’t seem to be the case. Their current setup lets them ship quickly. A rewrite in Go would likely slow that down.
Codex moved to Rust, and you can see the trade-off. Performance improved, but release velocity dropped. They’re also still catching up to Claude Code, so they don’t face the same pressure to ship as fast.
The Bun people likely have some fucked-up incestuous business relationship with some >dev manager at Anthropic, and the same pattern is repeating. Only this cycle it's going straight to acquisitions, which honestly seems like a worse strategy, and Anthropic will def can the Bun engineers in <3 years, or whenever they face an actual budget crunch that they can't stave off with more Gulf money.
I only use Cursor through the CLI, and while the UX of the CLI is pretty bad, I've found their harness (the prompts they use and orchestration of LLMs) to be nothing short of incredible. I can't comment on their agent development environment given I haven't spent a lot of time with it.
The reason I'm moving away from Cursor is cost. Unfortunately, if you want to use the SOTA models from both OpenAI and Anthropic you basically have to go direct through their subsidized plans.
Admittedly, with Opus 4.6+ and GPT 5.5, I just haven't used them much, and as I gain more experience I can see what the hype is all about. But to me, the answer isn't the $200 Max plan; it's bifurcating the work. Call me a spendthrift!
Otherwise, if you are looking for an IDE-first approach with great AI integration, it's the best product out there. I prefer it over CC/Codex.
You see this all over the place with other programming languages.
The ones that have bleeding edge features do so, because there are companies, or universities (for their PhD and Msc thesis), that invest into those ecosystems.
In the end, Node.js will keep improving, with Microsoft and Google's backing, and that will be it.
I sympathize with the general premise. The reaction to move away seems premature though.
It sounds like `bun` is still performing just as well as before, and this sentiment isn't based on concrete changes. I also wouldn't expect infrastructure like `bun` to evolve in the way a consumer-facing product, especially one scaling as quickly as Claude Code, can.
Plus it’s not a huge lift right now
If Bun stays great, you saved yourself some time for switching, and got to keep using Bun.
If Bun worsens, you spend the same time for switching, just moved a bit later, and got to use Bun for a little longer.
:(
Never attribute to malice that which can be adequately explained by incompetence, etc.
For my projects I don’t even need any additional dependencies. I use vanilla dom and sqlite
Might as well just open our pants and wave our wangers, hoping for a better world
Personally my experience with Bun has been 100% positive so far.
I'm aware full Node support is not there yet and may never happen but with dependencies that support Bun it's been a smooth ride for me.
Otherwise it's just FUD.
[1] https://www.axios.com/2026/04/13/anthropic-revenue-growth-ai
[1] https://youtu.be/J8O9LLpJNrg?t=1201
[0]: https://aube.en.dev/
> Claude Code appears to be enshittifying. So now I have to worry that Bun could enshittify too