Something is hilariously off here: why should I pay $10 and be forced to use it by the end of the month, when I could pay $10 elsewhere and have it last as long as I want?
How would that be? They are already charging as much as the underlying providers. They can hardly expect to have any customers if they are charging more.
Enterprise sales will be the answer. Microsoft will have some story that convinces an exec eight levels up the org chart from the normal users that this is an essential product they need to overpay for. Given their existing relationships and immense sales team, they'll probably have success.
I'm wondering if they're basically saying they're going to give $10/month free API credits to students and open source maintainers and so on... while otherwise getting out of the consumer portion of this space.
They're downsizing free GitHub Copilot Pro for open source maintainers. At the very least, it looks like small open source projects got their free Copilot Pro cut off.
You can pool credits through OpenRouter (afaik; I'm only using a single-user account), but if you top up $10 per user per month, any unused credits will roll over.
Tbh I think it still works, but only because the new allowance will likely get used very quickly within a billing cycle - I'm expecting this change to increase our org's bill significantly, based on how many API credits with OpenRouter I consume in a weekend using a single agent in a pairing style.
The pooling will only be useful if you have a bunch of infrequent/low usage users that you still want to have licenses.
Lots of us have noticed that usage limits for Claude have been nerfed in recent weeks/months.
If anything, these new multipliers are more transparent than anything OpenAI or Anthropic have communicated regarding actual costs and give us a more realistic understanding of what it's costing these providers.
The fact that we were able to get such a substantial amount of usage for $20/$100/$200 a month was never meant to last and to think otherwise was perhaps a bit naive.
This feels like a strategy from the ZIRP era of tech growth where companies burned investor capital and gave away their products and services for free (or subsidized them heavily) in order to prioritize user acquisition initially. Then once they'd gained enough traction and stickiness they'd then implement a monetization strategy to capitalize on said user base.
However, inference costs for good-enough models are likely to keep declining. We're probably hitting diminishing returns on model size and training. The new generations aren't quantum leaps anymore, and newer generations of open source models like DeepSeek are likely to start getting good enough.
There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI--and that's a business model right there. Right now I'm sure there's a lot of people who will protest that they couldn't do their jobs with lesser models, but as time goes on that will get less and less. Already right now the consumers who are using AI for writing presentations, cooking recipe generation and ELI5 answers for common things, aren't going to be missing much from a lesser model. That'll actually only start to get cheaper over time.
Also for business needs, as AI inference costs escalate there comes a point where businesses rediscover human intelligence again, and start hiring/training people to do more work to use lesser models--if that is more productive in the end than shelling out large amounts of cash for inference on the latest models. [Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...]
> because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI
Not sure how it all works out. Currently, trillion-dollar companies won't even make a native app per platform. Everything is just JS/Electron because the economics don't work for them.
And yet here, companies are supposed to build GW data centers running very expensive GPUs and sell inference at 1/10th of current prices. Sounds a little fanciful to me.
The price you pay for Anthropic must include the price of training new and better models, which is incredibly costly. If you use the models someone else already spent money to develop, you don't need to pay this price.
And at some point even frontier model costs will hopefully come down (if there is still a meaningful difference between closed and open source models at that point) as all of the compute that's being built out right now comes online.
Yups... Mythos is the smallest possible leap. Not a standard model generation advance, not even a version point advance. Just the smallest possible quanta of a change. We are absolutely hitting a plateau any day now. Any day. Any time. Any second now. Yup. Right now! Surely!
Yeah. AI progress is insanely fast if you compare it to anything else. Where else is a one year old technology already hopelessly outdated? 10 years ago is basically stone age.
I am continually tripped out by the fact that when I was 16, I didn't have a 'smartphone' beyond a Windows Mobile 6 phone that had no internet on it.
Now, I have this high-resolution shiny object that can near instantaneously get any information I want along with _streaming HD video to it_ *anywhere*.
15 years even feels like a stone age. I can't fathom what it has to feel like for people in their 60s and 70s.
If/when it gets to the point where it can replace a skilled worker, the service can be sold for close to the same price as that skilled labour. But the AI can run 24/7, reliably, and scale up/down at a moment's notice.
There's not going to be much competition to drive prices down; the barriers to entry are already huge. There's likely to be one clear winner, becoming a near-monopoly, or maybe we'll get a duopoly at best.
"There's not going to be much competition to drive prices down, the barriers to entry are already huge. There'll likely to be one clear winner, becoming a near-monopoly, or maybe we'll get a duopoly at best."
Based on what exactly? So far every time OpenAI, Anthropic or whatever has released a new top performing model, competitors have caught up quickly. Open source models have greatly improved as well.
I expect AI to be just like cloud computing in general - AWS, Azure, GCP being the main providers, with dozens of smaller competitors offering similar services as well.
I do. "Commoditize your complement". Want to sell lots of silicon? Give away good local models to run on that silicon.
Even if SOTA models in the cloud are a few percentage points better, most work can be routed to local models most of the time. That leaves the cloud providers fighting over the most computationally intensive tasks. In the long term, I think models are going to be local-first.
(Unless providers can figure out a network effect that local models can't replicate).
Just be aware OpenRouter charges a 5.5% fee; I didn’t know until recently. I like the product, and I think the fee is fair, but if you want the absolute best pricing then go direct.
But with OpenRouter you can always just use the latest model. If you're committed to e.g. Claude Opus then you're better off going directly to Anthropic for sure, but if not, various other models may be fine too, depending on use case, and be massively cheaper. E.g. the new DeepSeek model with the same 1M context window, or Kimi k2.6 with a 270k context window for subagents.
>but if not, varying other models may be fine too, depending on use case and be massively cheaper
Do inference providers have standardized endpoints, or at least endpoints compatible with claude code? Otherwise, why pay 5.5% on all your tokens just so it's slightly easier to swap providers (i.e. changing a few URLs)?
"This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business and experience for all users."
I see statements like this as strong indicators that the sales people are wrapping up their work and the accountants are taking over. The land rush is switching to an operational efficiency play.
It's interesting that the cost multiplier for Claude Sonnet 4/4.5/4.6 varies so much (1/6/9), while the API cost is exactly the same for all three models.
Also, the multiplier of 27 for Claude Opus 4.6/4. is way higher than the increase in API price would suggest.
27x for Opus is genuinely shocking. At that point you're not paying for convenience anymore, you're just paying a GitHub tax. OpenRouter or direct API makes way more sense unless you're really glued to the IDE integration.
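To put those multipliers in dollar terms, here's a back-of-the-envelope sketch using the old per-request price quoted elsewhere in this thread ($0.04 per pay-as-you-go premium request); treat the base rate as illustrative rather than GitHub's actual credit math:

```python
# Back-of-envelope effective per-request cost under the new multipliers.
# $0.04 is the old pay-as-you-go premium request price quoted in this
# thread; the multipliers are the announced 1/6/9/27 values.
BASE_RATE = 0.04  # USD per premium request, pay-as-you-go

multipliers = {
    "Claude Sonnet 4": 1,
    "Claude Sonnet 4.5": 6,
    "Claude Sonnet 4.6": 9,
    "Claude Opus": 27,
}

for model, mult in multipliers.items():
    print(f"{model}: {mult} x ${BASE_RATE:.2f} = ${mult * BASE_RATE:.2f} per request")
```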
Does it effectively bypass regional restrictions for you, so you can use something like the Claude API from unsupported regions such as Hong Kong, or does it still enforce the official providers' geo-restrictions?
OpenRouter is great for budget control, but as they are indirect APIs, your experience with cached tokens may vary, eventually costing much more than going direct, depending on the provider.
You can pay with crypto though, which seems to be convenient for people under sanctions or with limited access, or if you are in low-tax jurisdiction (e.g. HK)
That said I think few people using openrouter are actually being selective about providers.
It took half a day to get my opencode setup, was not friendly. A lot of manually cross referencing model and providers. I was actually mainly optimizing for relatively fast providers. It all is super fragile and I'm sure half out of date; I have no idea if these picks are still fast, no promises they are still the same price (pretty terrifying honestly).
I'm mostly on coding plans so it doesn't super affect me. But man is it a bother to maintain.
One theory of the play SpaceX might make if everyone migrates to query-based billing:
Provide cheap and unlimited access to Grok for programmers (hence the Cursor partnership/purchase for distribution).
-> This would pull in massive revenue right before the IPO announcement, making it look like the company is growing fast
-> At a loss, but don't worry, we need these funds to build the biggest datacenter of the universe.
This announcement would create enough momentum to increase the valuation, and because of the merger of his companies, would save his X/Twitter investors from a tragedy.
-> Would also be a great service to Cursor investors and the like, who are stuck with their VSCode fork
Which in turn owns Twitter. SpaceX is now a social media company in addition to a rocket company.
One theory I think Matt Levine posited, is that SpaceX will go public with dual-class stock that gives Elon control even with a minority ownership stake, and will subsequently buy Tesla, which doesn't have dual class stock, making SpaceX the singular "Elon Musk company", with him having operational control despite being public.
OpenRouter doesn't even have hardware. What are they possibly subsidizing? The platform costs?
OpenRouter is guaranteed to be about the highest-margin operator in the business right now. Everyone wishes they were them, skimming 5% off as the middleman without any OpEx.
Streaming, caching, and tool calling can get pretty expensive at scale, even when you don't touch inference. Maybe they're doing something clever and are quite profitable... or maybe they've already taken $40mm from VCs and are currently trying to raise $120mm at a $1.3B valuation.
They also show headline prices for the cheapest provider of whatever model, but then need to hit different backends some of which may be more expensive. For now they absorb those costs, but the VCs always come knocking.
Just my opinion though. Totally agreed that they have one of the best positions amongst all AI providers from a financial standpoint.
What's annoying is that it's obvious. In the case of GPT 5.5, if Copilot is going to charge 7.5x what GPT 5.4 costs while OpenAI themselves via the API/Codex only charges 2x of what GPT 5.4 costs, that will immediately raise an eyebrow.
To anybody who's been watching the tech sector with a critical eye for pretty much any period from the late 90s onward, this is just the enshittification process. For most of OpenAI's existence it's been obvious, to me, that investors were burning insane levels of capital to build the market, and now that folks are locked in, you're seeing higher fees, ads, etc. Yet again, the user is the product; the investors want to siphon your data and attention and, once you're hooked, your money. And for companies like Microsoft and Apple, those hooks can dig deep.
If you paid attention to the power requirements and amount of hardware being put into data centers, you should have realized that it cost them an order of magnitude more than you were being charged. To rework your analogy: they hooked you, now they're gonna see if they can reel you in.
They can only reel you in if it's worth it. I can still code.
And while I do not spend $200 privately, in my startup we discussed this, and our current mental model is that instead of hiring someone new, we prefer to have more money for tokens.
This is easier for us and has a bigger benefit. The cost of a new/first employee is very high; a $200 subscription is not. Upgrading that to, let's say, $400 or $800 is still a lot easier, and if I can run multiple and better agents with that money, let's goooo.
I'm looking at education -- teachers and students, not terribly tech savvy, are being mandated to use these tools. And then comes the rug-pull. It was worth it, but now it's outside of their budget. Poorer schools / students can't stay at the cutting edge; richer schools / students can.
“Enshittification” is just when unsustainable subsidies end?
Another reason to hate that word.
From a different perspective, you were granted an incredible gift from the companies who let you use their product on their dime. Hopefully you made the most of it when you had the opportunity.
No, it's much more than that. It starts with unsustainable subsidies, as Uber undermined the taxi industry with a ludicrous burn rate. And then, once everybody's hooked to the point that they can't imagine life without the product, you raise costs. And you iterate: raising costs, lowering quality, selling data, increasing addictiveness. Until everybody wants to get rid of it, hates every aspect of it, but is still hooked to the core product. I'm personally not using these tools, not using uber or Meta products. But I'm still using some Google products and it's hard to extricate them from my life now that I'm using them.
That's so unfair to us hard-working developers. A month ago I could buy a turn with Sonnet for $0.40. Now I have to pay at least $0.90 for this turn. Weeks ago I could buy an Opus turn for $0.12, after they had already raised prices, and now they want $0.27 from me for the same product! They are stealing from us!
They aren't stealing from us, for several reasons. First of all, it's a voluntary transaction. If you don't like the prices, use something else. Or don't use AI at all.
Second, you have no idea what their costs are. It is most likely that they are simply passing on their costs to you. If that was not the setup, users would just go to another service provider who was providing tokens at a cheaper rate. It's not like there is a dearth of competitors in this business.
Seems like a strong signal the money burning party is coming to a close. Nearly all AI companies have tightened their belts in the past month. Anthropic removed Claude Code from the Pro plan, Z.AI increased their prices, GitHub removed some Claude models from Copilot, now this.
Also, Opus 4.7 seems like a model more intended to save Anthropic money than push the bar.
> Seems like a strong signal the money burning party is coming to a close.
One provider who was undercutting the market with non-standard billing model moving to a more standard billing and prices doesn't seem like that strong of a signal, other than that Copilot was underpriced.
The point is that they tipped their hand about where they want to go in the future. They are just A/B testing to see how much it pisses off their customers.
Yeah, honestly it feels like this came faster than I was expecting. I thought we'd see another few years of reeling in with too-good-to-be-true prices to really lock in dependency, but it feels like most companies still have a lot of wiggle room to back out of this.
Not really sure why I would stick with Copilot after this, and increasing Sonnet from 1x to 9x for annual subscribers is highway fucking robbery. Very glad I didn't commit myself to an annual plan.
> Alternatively, they may convert to a monthly paid plan before their annual plan expires, and we will provide prorated credits for the remaining value of their annual plan.
I don’t understand if this means they’re providing actual refunds or not. For them to straight up go back on their word this had to have been a major cost they didn’t exactly expect.
Save us Deepseek!
I don’t need the world’s greatest programmer for the types of vibe coding projects I actually build.
However, if compute keeps going up in cost, hiring skilled people who know how to utilize it becomes more important. This might save the tech economy.
Contrary to the other reply, I'm going to say yes, that's exactly what it means. For Github Copilot users with annual plans that are grandfathered in to per-prompt rather than per-token pricing, Github is increasing the cost of Sonnet from 1 "premium request" per Sonnet prompt to 9, thus meaning that those users will be able to submit 1/9th the number of prompts per month before incurring additional usage charges. For all practical purposes, this is a straightforward 9x increase in price.
Not quite. Premium models have different types of multipliers applied. The multiplier decides how many PRUs (premium request units, or tokens) are used. These PRUs are replaced with different units in this announcement, but the methodology remains the same: https://docs.github.com/en/copilot/concepts/billing/copilot-...
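A worked example of that accounting (the 300-unit monthly allowance is illustrative, not a number from the docs):

```python
# How multiplier-based accounting shrinks the prompt budget.
# The 300-unit monthly allowance is illustrative; the multipliers
# are the ones discussed above.
ALLOWANCE = 300  # premium request units per month (illustrative)

def prompts_available(multiplier: int) -> int:
    # Each prompt consumes `multiplier` units from the allowance.
    return ALLOWANCE // multiplier

for label, mult in [("Sonnet before", 1), ("Sonnet after", 9), ("Opus after", 27)]:
    print(f"{label}: {prompts_available(mult)} prompts/month")
```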
I'm starting to see comments like this in a new light after using some primarily AI-coded apps the past few weeks. They are a lot like apps that were built by hundreds of developers/product people over years and years, in the worst ways.
Inconsistent design patterns from page to page, half baked features, inconsistent documentation (but BOY is there ever a lot of it!), NIH ui component libraries that don't act like you'd expect. All that fun stuff.
It's like they speedran the worst parts of enterprise apps.
I made so much progress on my personal projects, I actually regret not subscribing sooner. I've been coding alone for over a decade. It's been great having a coding buddy for a change. I'm actually going to miss it.
(This was originally posted to Microsoft and OpenAI end their exclusive and revenue-sharing deal - https://news.ycombinator.com/item?id=47921248, but in a perhaps-futile effort to keep the discussions partitioned, Maxwell's demon will move it to the Copilot pricing thread.)
"Your plan pricing is unchanged: Copilot Pro remains $10/month and Pro+ remains $39/month, and each includes $10 and $39 in monthly AI Credits, respectively."
If there's no discount on credits (in terms of tokens per dollar) over other providers, I'm going to switch to a PAYG provider. If there's a month with little to no coding, I can pocket the $10. What incentive do they give to stay with this plan?
Or if you're a business with multiple seats, these plans may be more inefficient than raw API usage billing, since if anyone at your organization fails to utilize their full $10/$39 allotment each month, that's wasted money, whereas API credits are 100% utilized.
I don't think they've thought through the implications of this. Everyone should cancel and go usage-based billing with caps.
They do address this in the doc: orgs can now (although it was vague as to whether it's an option or just the new standard - probably an option, due to business contracts) 'pool' the usage billing across all users.
I'm guessing they did that (and the 'temporary bonus credits') to make the pill easier to swallow for that side of customers.
It forces you to pay at least $20 in tokens per user even for people who use less (they probably have stats on how many people use just autocomplete, which doesn’t count against the quota. or have a seat and don’t use the service at all).
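A quick sketch of the utilization argument above, with invented usage numbers:

```python
# Per-seat monthly credits vs. what users actually consume.
# All figures are invented for illustration.
SEAT_CREDIT = 10.00  # USD of credits bundled with each seat per month
actual_usage = [10.00, 10.00, 6.50, 0.00, 2.25]  # five users' real consumption

billed = SEAT_CREDIT * len(actual_usage)
consumed = sum(actual_usage)
print(f"billed ${billed:.2f}, consumed ${consumed:.2f}, "
      f"wasted ${billed - consumed:.2f} ({1 - consumed / billed:.0%})")
```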
This was my first thought too. "Oh cool, I should be seeing lower prices" as I don't use Co-pilot that often anymore. But no, that's not the case. It rather served to remind me that I should probably just cancel.
Everybody who says it's a 5-9-27x increase seems to not be aware of the obvious loophole. It's more like a 50x increase. You were able to use over $500 worth of Opus on a $10/mo GitHub plan easily, no hacks. You could just prompt "plan this out for me, don't stop until fully planned, don't ask any questions", and you would get ~$5 worth of planning in one 3x request. At 100 requests/mo, each easily reaching $5, that's an easy $500 worth of tokens.
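Spelling out that arithmetic (the $5-per-request figure is the commenter's own estimate):

```python
# The subsidy math from the comment above: 100 requests/month, each
# triggering roughly $5 of underlying planning tokens, on a $10 plan.
PLAN_PRICE = 10.00               # USD per month
REQUESTS_PER_MONTH = 100
TOKEN_VALUE_PER_REQUEST = 5.00   # commenter's estimate of API-equivalent cost

consumed = REQUESTS_PER_MONTH * TOKEN_VALUE_PER_REQUEST
print(f"~${consumed:.0f} of tokens on a ${PLAN_PRICE:.0f} plan "
      f"(~{consumed / PLAN_PRICE:.0f}x the sticker price)")
```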
Yeah it was crazy. Nowadays I use pi with OpenAI GPT 5.4/5.5, which to me seems both better and more generous than Claude. I supplement it with OpenCode Zen to get access to a bunch of models at token cost, and OpenCode Go ($10/mo) to get subscription-style access to Kimi, GLM and friends.
I was curious why a company would still use the VS Code + Copilot sidebar method for coding, rather than something like Claude Code. Turns out there’s a GitHub Copilot CLI!
I thought I was pretty familiar with available options, but no one in my circles ever mentions this product. It doesn’t seem to have much mindshare.
I'm curious about the opposite: Why would anyone use the CLI when, at least with Copilot, the VSCode plugin is super tightly integrated with VSCode, meaning the agent can see everything I can see. There's no mismatch in linter calls where I can see a lint in the ide that the agent can't find for example. I've had this problem even using CC in their VSCode extension, so I can't imagine it's not an issue in the CLI as well.
The vs code integration is pretty slick. I can copy and paste function names into the prompt and it automatically turns them into these `#sym:` reference objects that I presume populate the context window with metadata about the function and where it lives. It knows what file I'm currently looking at as I jump around in the code, and that automatically gets loaded into the context. I can also drag and drop folders or specific files for context into the sidebar.
It's a lot of stuff that makes me have to type less into the prompt, since it's already getting so much info from my editor
I’m actually trying to move back from the Claude Code style, I feel like it’s easy to become distant from your own code, and I am feeling uncomfortable with that.
I’ve “vibe-coded” some projects and when I start to find issues or go to refactor them I don’t have that memory of why decisions were made, because many decisions were never made.
> I was curious why a company would still use the VS Code + Copilot sidebar method for coding, rather than something like Claude Code.
I use Claude Code, but I kept my Copilot subscription around mostly for really cheap usage of other models when I need to try a different one (which appears to be ending, in a sense) and also the autocomplete in Visual Studio Code which was really great across a bunch of files, I could make changes in one file and then just tab through some others.
I wonder what other good autocomplete is out there.
> also the autocomplete in Visual Studio Code which was really great across a bunch of files <...> I wonder what other good autocomplete is out there.
I am in the same boat. I tried looking for tab/auto-complete implementations ~ a year ago and it was pretty disappointing. If that has changed, would love to know!
I've used it quite a bit. There are a lot of AI terminal coding products and this is another one. It works well, handles sub-agents without issue, and does a reasonable job operating in the Copilot ecosystem. It handles mid-task questions and such well.
But it's a really good UI for agentic coding. Not sure why more people don't use it. I've tried the others and keep coming back to Copilot chat. It's a really good tool. Which is why the rugpull on pricing is so concerning.
The other cool thing is Copilot SDK, so you can build agentic capabilities into apps, or build tools, that leverage the agent harness of the Copilot CLI:
That's espoused as the big reason for the price increase: most Copilot subscribed developers it seems have moved to "agentic usage" with the CLI and Cloud-based agents.
Which feels a bit like a kick in the pants for me as a developer that was primarily using Copilot for VS Code ghost text and very rarely used the Chat sidebar much less "agentic" tools.
Copilot Pro sort of made sense for my personal account when amortized across a year, but I don't want to "waste" $10/month on credits I won't use most months.
I'm just so confused why people aren't just using ghostty/kitty/terminal.app and claude code. Compared to the other approaches I've tried, it's by far the most effective way to get performance from opus 4.6/4.7
I don't know about others, but I use Copilot more often than other apps because of its tight integration with VS Code itself, where I still spend most of my time working on other things while letting AI do some task that I decided to delegate to it.
I tried the VS Code + Copilot sidebar approach a few months ago. It was definitely rough around the edges compared to Cursor/Claude. In our corporate environment, we weren't even able to use frontier models.
Search has become so bad that I also struggled to find Claude Code alternative and made my own tight (not editors, not plugins, not agents, strictly similar to Claude Code CLI) list: https://github.com/omarabid/cli-llm-coding
The list is not long but there are quite a few options. Even Grok has its own CLI!
The reality is, even though a CLI prompt looks very simple, it's a very complex piece of software. I personally use Claude Code (with GLM) and anything else I have tried was significantly inferior (with the exception of opencode).
> In March 2026, Windsurf replaced the credit-based system with a quota-based usage system. Instead of buying and spending credits, your plan now includes a daily and weekly usage allowance that refreshes automatically.
With hindsight, per-request pricing makes no sense at all if an agent can burn a widely varying amount of tokens satisfying that request. These pricing plans were designed before coding agents changed the dynamics of token usage.
I wouldn't call it hindsight - I don't think anyone, at any stage, thought running a 10 minute+ sonnet session for 1 premium credit was ever profitable. We all knew it was a loss leader to get people using it.
It would have been profitable if that premium credit cost more than a negotiated discounted rate with Anthropic. We have no way of knowing if there were negotiated rates though!
There is no way to make that cost model profitable consistently. If 1 prompt can mean 100's/1000's of requests over hours, and you only pay for that 1 premium prompt, that can never be profitable.
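A sketch of why the flat model breaks once requests become agentic; the token price and session shape are made up for illustration, not GitHub's or Anthropic's actual rates:

```python
# Flat per-request revenue vs. token-metered cost for agentic sessions.
# All numbers are illustrative, not any provider's actual rates.
FLAT_PRICE = 0.04        # USD charged per premium request (old PAYG figure)
PRICE_PER_MTOK = 15.00   # USD per million tokens (illustrative)
TOKENS_PER_CALL = 2_000  # tokens consumed by a typical tool call (illustrative)

def session_cost(tool_calls: int) -> float:
    # Cost of one "request" that fans out into many tool calls.
    return tool_calls * TOKENS_PER_CALL * PRICE_PER_MTOK / 1_000_000

for calls in (1, 100, 1000):
    print(f"{calls:>4} tool calls: cost ~${session_cost(calls):.2f}, revenue ${FLAT_PRICE:.2f}")
```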
Guys, you're discussing a house of cards to begin with: no matter how you're paying for the $CURRENTSOTA, you're not guaranteed that next month what you pay for will be the same.
So, lets do some honest evaluations:
1. The model itself is a non-deterministic engine of work with an unknown value; its real value is just magic.
2. The business model itself is a deterministic engine of profit with a known value; whatever the VCs have put into it _must_ be pulled out. If Ed Zitron's numbers are correct, circa 2030, it's several trillion dollars.
So do some matrix multiplication of non-determinism vs determinism, and realize that the value proposition for _you_ is only going to decrease because #1 can never outpace #2, ensuring enshittification captures a smaller and smaller whale.
We know this. This has been the last 2 decades of money extraction from software. It was ok when it was some 12 year old's parents' CC. But now it's you, or your business, that's going to either be squeezed for value or squeezed out of the market.
And everyone's squabbling about the color of the cost.
ok
The problem with assuming that tokens can only get more expensive is that the Chinese open weight LLM firms have dropped models which have a known, fixed price that can never get more expensive (since we can run them on hardware we own).
This may be a more accurate analogy... "The Porsche you rented at $200/mo now only allows you a maximum of 100km of travel. You will be automatically charged extra when you go over that."
On top of being worth less, the subscriber discounts are gone.
The old plans were $0.033/request for Pro, $0.026/request for Pro+ and $0.04/request for pay-as-you-go. That discount is now gone. They even still advertise "5x the number of requests" for Pro+ over Pro.
Yeah, if I go to a petrol station with 50€, but only get a tenth of the amount of petrol I got last week, I may think that the price has in fact changed.
I wonder if GitHub (Microsoft) is implicitly betting that enterprise demand is sticky enough to absorb these rates, especially given that Opus 4.6 “fast” was being listed at a 27x multiplier. Maybe they saw enough usage at that price point to conclude the demand is real. Or maybe the strategy is to keep the enterprise customers who can justify it while shedding heavier individual and power-user usage.
The interesting question is how long it takes enterprises to notice the capability/pricing tradeoff, and whether they respond by limiting access to the strongest models internally.
The part that worries me is that this market is still very early. Most developers and organizations are still learning how to use these tools effectively. Raising the experimentation cost this much may slow down the discovery process that makes the tools valuable in the first place.
Subsidies stop when LLM improvement plateaus (though they still benchmark higher somehow). At some point, you have to make money or at least break even, and I think they concluded that we reached that point.
As someone on the enterprise side in a non-tech F500 company, what I'm seeing is some FOMO and a need to be part of the hype cycle. We're about to plonk a bunch of money on more Copilot licenses. Something got in the water where all the C-levels the past two months are pushing everyone to use AI, but when they bring up examples of their uses it's like "I use it to rewrite my emails", or prompt 'engineering' ideas that point more to patching over poor processes, data management, and decision-making within the organization.
What we're seeing across the board is every software company tossing AI onto their name or sales pitch and no one understanding what that actually means. But we will spend money on it because of FOMO.
I really question if we're reaching the end of the hype cycle at this point. I wish I were brave enough to put money on it. It feels like there was a command from up top to 'do something with AI', and leadership is scrambling for resume-building projects vs doing the hard work they should've done the past two years at a people and process level.
I don't use Copilot or any paid AI but all of this usage-based billing reminds me of cellphones back when you paid per individual text message.
Paying per use for AI is 1000x crazier because you're not even getting a guarantee of the thing you pay for in the end. You have to keep feeding it prompts and hope it gives you the solution you want. You may end up without the expected result, yet you are still paying. At least with texting, you got what you paid for.
I wonder how long it'll be before all AI costs are flat unlimited monthly fees or even free across the board, without compromise.
I expect in the future we'll find out that someone in the industry was juicing the numbers with fake thinking tokens or something. The whole pricing model of charging you for the tokens it generates while not knowing how much it is going to generate going in has always been pretty crazy.
Yeah, this was my frustration with Suno and Sora. You can burn a lot of credits (not to mention time) generating things that aren't what you wanted.
I don't mind a PAYG model for a simple chat interface. But when it comes to actually producing things, you burn through TONS of tokens creating the wrong output.
> I wonder how long it'll be before all AI costs are flat unlimited monthly fees or even free across the board, without compromise.
That's already the case if you can self-host an LLM; you don't even need a mythical H200: gamer-grade GeForce cards can get you a long way there (if this page is to be believed: https://www.runpod.io/gpu-compare/rtx-5090-vs-h200 )
...after RAM prices return to normalcy, of course - and then wait another 2 or 3 generations of GPU development for a 96GB HBM card to hit the streets - and also assuming SotA or cloud-only LLMs don't experience lifestyle-inflation, but I assume they must, because OpenAI/Anthropic/Etc's business-model depends on people paying them to access them, so it's in their interests to make it as difficult as possible to run them locally.
Github had, by far, the most easily game-able agent usage policy. People would force the agent to run a script before the end of turns that consisted entirely of `input("prompt: ")` so that you could essentially talk endlessly to an agent for the price of a turn. I see this less about the future of this industry and more about fighting the costs incurred by bad actors.
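For the curious, the trick was roughly this (a sketch, not anyone's exact script):

```python
# prompt.py - sketch of the exploit described above. The agent is told to
# run this script before ending every turn; it blocks until the human types
# the next instruction, which flows back to the agent as tool output, so
# the whole conversation stays inside one billed premium request.
reply = input("prompt: ")
print(reply)
```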
I never played any games like that, but simply giving the agent a clear exit criteria and instructions to check the exit criteria every time it thinks it's done on a complex task was often enough to keep it chugging away for most of a day on a single prompt in my experience. Per-prompt pricing just isn't sustainable period, even if everyone is acting in good faith.
I once asked it to do a comprehensive security review of our code. It churned for nearly an hour (and then produced 90% false positives). Insane that that usage was charged the same amount as me just saying "Hello".
So given that I primarily interact with LLMs through VSCode, and I prefer the Copilot interface to the Claude Code plugin, does anyone have any suggestions on other plugins I should try? In my experience, Copilot is much more "plugged in" than any of the other plugins, in the sense that it can see things like linter outputs in VSCode. Basically, Copilot "sees what I see" in a way that no other plugin or command line tool can, which makes it much more ergonomic to use.
With this pricing change, I see no reason at all to stick with Copilot in principle, but I really need to solve this issue of IDE integration to move on.
You can use Copilot Chat* with basically any API provider, and if you switch to the VS Code Insiders build you can configure it to use literally any OpenAI API-compatible endpoint.
Other than that Zed has a similar experience which is pretty decent.
* By which I mean the good one, whatever it's called now - the part of Copilot that used to be a plugin and is now part of VS Code, not the thing that has always been part of VS Code.
One reason I used it was that I wasn't locked into a single provider and switching them was as easy as changing a drop-down. Small feature? Sonnet or GPT5.4/mini? Large changes? Opus. And why not see how good Raptor Mini does this one refactor?
It also helped build an intuition of what each model could do and which parts it was weaker at, because you could try them almost side by side, especially if one model's output wasn't great.
That said, these were all side projects, so nothing truly consequential. Otoh, you might leave some extra perf on the table, but I found the models worked quite well with the Copilot harness.
Yeah, this is a very useful abstraction layer. The entire concept of separating the model creator from the model runner is good for competition and is customer friendly. Which means they likely hate the concept and want to kill it.
Gosh, imagine getting to do that with your TV/Streaming subscription. Getting to pay one fee to access some set number of hours per month from any of the providers.
The problem is I can't afford the tokens! Even on my $10/mo plan, running either 100 Opus or 300 Sonnet agent runs would cost hundreds of dollars - well above my budget!
The cheapest Copilot plan felt totally unsustainable to me. For around £8 a month I was getting 100 Opus 4.6 prompts (albeit with a reduced context window size, around 128k iirc, vs 200k to 1M for first-party hosted Opus). GPT 5.4 was hosted with 400k context iirc.
On top of that, you've got 2000 minutes of container runtime, so running cloud agents was included. As was Anthropic agent SDK mode via Copilot, which is very comparable with Claude Code - not identical; the Anthropic "modular prompt" is much leaner in the SDK version.
I can't say I'm mad; I got more value than I paid for. That said, going forward I'll probably go back to OpenRouter PAYG rather than a subscription.
I got a free 3 months of the Gemini £19 plan and I've been playing quite a bit. 3.1 Pro is a good model, I just find it slow. Flash I think I underappreciated until now.
On my personal account, Copilot Pro+ still only gave me back Opus 4.7, whereas my work's Pro account still lets me use Opus 4.6.
So, my gut says, it's entirely possible that Pro+ will continue to have more segregation on model availability...
FTA
> Last week, we also rolled out temporary changes to Copilot Individual plans, including Free, Pro, Pro+, and Student, and paused self-serve Copilot Business plan purchases. These were reliability and performance measures as we prepare for the broader transition to usage-based billing. We will loosen usage limits once usage-based billing is in effect.
There's enough weasel wording here that I would expect only certain models get re-enabled on Pro.
E.g. lots of people seem to get good enough results from Opus 4.6 - personally I prefer it over 4.7 in GH Copilot... locking that down to Pro+ would be, given this salvo of enshittification, a 'logical' move on their part.
I pay for Copilot annually, and mostly for its code auto completion features. I use CC if I want to do anything agentic. Not sure if I want to pay more for occasionally-good-intellisense at this point.
But you can no longer amortize annually, which makes it even more a question of "is this worth it this month?" each month. Especially for personal accounts.
I'm similarly thinking about sticking with the auto-downgrade back to Copilot Free when the annual sub ends and then just yelling about it any months I hit the 2000 completion cap.
I wouldn't mind a plan between Free and Pro that is just "all I care about is code completion and next edit suggestions".
I think that only applies to held-over users on the annual plan:
> Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires, however, model multipliers will increase on June 1 (see table).
It isn't just the big multiplier increase, they also say "...and no new models or features will be added to annual plans going forward."
Can you imagine ten months from now and you're still rolling Sonnet 4.6?
Cancel/refund is looking pretty good. They're doing refunds until May 20.
"To request a refund, go to Settings → Billing and licensing → Licensing, select Manage subscription, then choose Cancel and refund "subscription". (The phrasing varies slightly depending on your subscription ). This option will be available until May 20."
It is an apples for apples comparison since those new multipliers only count if you are on an annual plan in which case the premium request system stays in place until you either cancel and get a refund or until your renewal comes up. https://docs.github.com/en/copilot/concepts/billing/usage-ba...
Those multipliers will only apply if you are currently on an annual subscription (and only until your renewal comes up or you cancel). So I assume they simply want to make it as unattractive as possible to get most people to cancel it and move to the token based system.
That's not an answer. It's specifically a discrepancy between 5.4 and 5.4-mini. If you look at all other models/generations you see that the cheaper model indeed has a lower multiplier. It's very strange that only 5.4 doesn't have this.
In places with reasonable consumer protections (Australia, Germany) it almost certainly is illegal unless they give a full (whole-year) refund. I think the short time limit for applying for a refund won't be looked at favorably either. Regardless of their ToS, which I'm sure covers this.
But companies do lots of illegal things, and in general nobody takes them to court over it.
I doubt you can force them to provide the service with the original terms, but you might be able to ask for a (partial) refund. If not today, after a week of verbal abuse they will receive for this online.
It depends where you’re located. In the EU they have to honor the contract you entered, but presumably there is a clause that they can prematurely terminate the contract without cause and give you all of your money back (from the start of the contract).
That kind of clause would be void in many places around the world.
For example, the German Civil Code states:
Section 308 - Prohibited clauses with the possibility of valuation
In standard business terms, the following in particular are ineffective:
[...]
4. (Reservation of the right to modify) the agreement of a right of the user [TL note: this means beneficiary of the terms, eg. party or other subject of the contract] to modify the performance promised or deviate from it, unless the agreement of the modification or deviation reasonably can be expected of the other party to the contract when the interests of the user are taken into account;
My thought exactly! First the usage limits + model limitations and now fundamental change to the billing. Hope some consumer watchdogs are looking into this!
"Paid for annual? Tough luck, from now on your usage limits are reduced by 89%. You can do 11% of what you paid for. Good luck if you paid annual a month ago!"
And then they have the gall to say
> "The bottom line: Plan prices aren’t changing"
If anyone lives in a place like Germany or Australia and has an annual sub, please take them to court, you're guaranteed to win because you have reasonable consumer protections and their ToS doesn't stand a chance. 9x reduction is unreasonable and the consumer cannot be expected to see this coming.
In case some diehard enshittifier believes that consumers should know better and businesses should be allowed to get away with it, where is the line? 99% reduction? Is that still okay?
If this situation is to be acceptable then it should be regulated as a financial product like stocks, which come with knowledge tests of "do you know you can lose all of your money?". And come with regulatory compliance and all that.
What's the current situation for coding with local LLMs on decent hardware? I have an M3 Max with 64 GB of RAM and am thinking I should start looking at Ollama and Opencode? Is this a useful stack for smaller personal projects?
One nice development recently was ollama's support for MLX optimization on Mac hardware. It's not obvious how to know you're using a model that works with it, yet, so it's rough around the edges.
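If you want to smoke-test a local model before wiring up an editor, Ollama exposes an OpenAI-compatible endpoint locally, so an ordinary client works; a minimal sketch, with an illustrative model name:

```python
# Minimal local-inference smoke test against Ollama's OpenAI-compatible
# endpoint (http://localhost:11434/v1). Run `ollama pull qwen2.5-coder:32b`
# (or any model that fits in memory) first; the API key is ignored locally.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
resp = client.chat.completions.create(
    model="qwen2.5-coder:32b",  # illustrative; use whatever you pulled
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
)
print(resp.choices[0].message.content)
```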
That was the first thing I turned off in VSCode. Autocomplete for my TypeScript projects was great. And the "AI" suggestions/completions were really getting in the way of me still being the "driver."
I liked Copilot because I didn't have to think about tokens. I get hung up when having to think about the price of things, and it's hard to think about the project while also thinking about token usage like a gas bill. The usage system had its own issues, but having a set amount of requests was a very comfortable way to use a paid AI service.
Not paying per token? Not sending my code to someone else's servers for inference? That's the stuff of sweet dreams for a stingy, paranoid solopreneur like me.
If I could run a local model comparable to even Sonnet 4.6 without shelling out $50K in hardware, I'd do it in a heartbeat. But all I have is 32 GB of RAM and an old RTX 4080.
Or am I not up to speed? Are there decent coding models that can run on dev laptops? Not that that's what you were suggesting by recommending a local model, necessarily; just curious.
I do love using local models when I can, but qwen-35B is the best model I can run, and while it's an insanely good local model, it does not compare to the big ones.
"It" being the end of subsidization of tokens and plans (expected) but while lock-in to foundational models and cloud services is still lacking. Guess investors want their ROI sooner than later, given how big of a wrench the AI boom has thrown into global economics.
According to `bunx ccusage` I'm easily doing $250-400/day in "real" API costs on my $200/month plan. There's no way everybody else isn't going to do the same thing and completely change the industry again. Both beginner and advanced developers are already hooked on all this stuff and they all know it.
The problem is that people expect to get the output of 100 people with a $20 subscription by spawning multiple agents. This is unrealistic. I'm using 2 Codex Plus accounts and able to manage a repo with 265-300k lines of code.
The point is - if it's the same as or more expensive per month than a real human employee, why pay for AI?
Humans retain knowledge, product knowledge, and can often pick up more work for the same money. And having many of them means your business won't go down if a provider suddenly bumps API pricing.
How much is the average pay for a junior developer in the US? It's definitely much costlier considering PF and other benefits. Just do the math: if you use it efficiently, it's much cheaper than hiring permanent staff. You can maintain a lean team and do all the mundane boilerplate coding with AI.
Built credit pricing into my SaaS for AI features and the hardest part wasn't the math, it was that customers can't easily predict their own usage. They underuse and feel cheated, or overuse and churn. Subscriptions hide that volatility from the customer. Usage based pricing makes it their problem, which is honest but harder to sell.
We all knew this from the very beginning but couldn't compete with OpenAI or Anthropic on their subscription-based pricing strategy. It was nuts, except for those few corporations burning investor money to keep competition out as long as possible. Now they don't have to hide anymore that subscription pricing won't do it for AI. The pyramid scheme is falling.
This is sad because with request based pricing I'd let Copilot resolve really difficult tasks in my TypeScript checker project tsz[1] which would take hours sometimes and would only cost one request. Seems like those days are gone now
Some of GitHub's open source maintainers have lost their free GitHub Copilot Pro; guess this is really the next step for them to save costs in their infrastructure.
As a Github Copilot user, who mostly just uses chat in the VS Code editor but still burns through my Pro limit every month -- what's the best alternative price to performance? Claude Code?
Whose idea was this "premium request" model anyway? If you're going to invent a new metric to bill on, why not align it with what, even at the time, was a clear underlying cost structure that GitHub actively chose to ignore for a more confusing system?
It made more sense in ye olde days, where a request was basically just a chat message in a sidebar and it could also edit code. Then saying someone can use 300 chat messages a month kinda makes sense.
Turns out when a request can spawn tens of subagents and use millions of tokens over many turns of toolcalls then suddenly github copilot has a massive financial problem on their hands.
This approach started with the “Ask a question about your code” feature, which is more comparable to single chat message with relatively predictable token usage. Now it’s an agent who might work for 30 minutes, read the whole codebase, and write 1000 lines
I'm not usually a Conspiracy Guy, and the answer is probably `incompetence * tech_debt`. But I think that having sufficient layers of abstraction to any billing model is a useful way to hide the real cost of things. It's why it's done everywhere.
I'm not sure I understand this. All I know is now, I pay $39/month (actually less because I paid a year up front), use the agent, mostly on auto - only choosing a model if it got stuck or in a loop - every day, and haven't hit any limits yet. It seemed too good to be true, after hearing others talk of $300/month bills. I guess it was.
cursor, windsurf, and CC are all already on usage-based models so I guess what really matters is whether Copilot's GitHub integration depth justifies the price per token vs the alternatives
I'm happy I invested in local solutions and cutting context to the bone for API providers. Claims about AI being able to fully replace programmers never took into account the long-run equilibrium price of inference.
That's my point. They made those decisions without any consideration for the long run, which would have required them to project the cost of AI services years into the future. Obviously management didn't do that and had no way to do that. It made current earnings look good, though, which was enough for them when they made the decision.
Glad this was announced because I didn't even realize that our (small) team has been paying $20/month for GitHub Copilot when none of us are using it, so I used the opportunity to cancel Copilot altogether. I think it was free when I first activated it (this was also before Claude, Codex, Gemini), and while not that great, it was a "why not". I didn't realize it had switched to paid and bundled with our GitHub bill, and is now per usage.
This subsidized inference is just a marketing ploy to increase prices and profit.
If common people can have a DIY setup with an open source model cheaper than those behemoths with a scale advantage, it's clear that we have been played.
Time to either self host a Chinese open source model or to just pay the cheap Chinese providers.
Yeah, local is clearly the future. Even beyond the cheap Chinese models you can install the apfel[1] stuff if you're on a mac and want a quick available onboard cli option. And I'm sure people will adapt the Flash-MoE[2] integration to be even better soon as well.
Re 1: Current models don't solve coding. They are a useful tool for it, though.
Re 2: Open weight models seem to be less than a year behind proprietary ones, so sure, if you're willing to spend tens or hundreds of thousands of dollars on a super computer that you probably don't fully utilize instead of renting time on someone else's super computer for a lot less.
People need to wake up and stop being surprised by these billing increases. I see it on every update of every model. This was all subsidized by VC and company money. Now they need a return and the prices will keep going up. Be glad that you took advantage of that up until now, but can we stop the pearl clutching when we all know the amount of money being dumped into AI and the lackluster returns?
It's less surprise and more confusion, given the game theory: their competitors are not doing the same thing, and the multiplier changes alone will likely churn current users.
tldr: people were running multi-hour agentic coding sessions for the same flat fee as a one-liner autocomplete, GitHub was eating the bill, and that party's over on June 1st
Their "API pricing" is exactly the same as that of providers: https://docs.github.com/en/copilot/reference/copilot-billing...
Seems a massive loss for Microsoft. Presumably there's a further rugpull to come.
Seems like folks would be better off with OpenRouter instead.
It has been years of cash injections now; investors can't keep feeding the beast forever.
I've been wanting to get off MS more generally, and this is good motivation. Will be playing around with OR this week.
Do inference providers have standardized endpoints, or at least endpoints compatible with Claude Code? Otherwise, why pay 5.5% on all your tokens just to make it slightly easier to swap providers (i.e., changing a few URLs)?
Yep, you can plug deepseek/kimi/minimax into claude code just fine. Or run everything through another harness like opencode instead.
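Roughly, because most providers expose an OpenAI-compatible REST API, switching is mostly a base-URL and model-name change. A minimal sketch in Python, assuming an OpenRouter account (the key and model id are placeholders; OpenRouter is shown only as one example endpoint):

    # Minimal sketch: swapping providers is mostly changing base_url and model.
    # Assumes the `openai` Python package; key and model id are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # or a provider's own endpoint
        api_key="sk-or-...",                      # your OpenRouter key
    )

    resp = client.chat.completions.create(
        model="deepseek/deepseek-chat",  # provider-prefixed model id on OpenRouter
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(resp.choices[0].message.content)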
Apple still charges 30%. 5.5% seems pretty reasonable. /shrug I dunno.
I see statements like this as strong indicators that the sales people are wrapping up their work and the accountants are taking over. The land rush is switching to an operational efficiency play.
Also, the 27x multiplier for Claude Opus 4.6 is way higher than the increase in API price would suggest.
I wonder why that is.
The only model I even used on Copilot was Sonnet, and now it's got a ridiculous multiplier.
At this point they might as well just charge per million tokens like every other provider, instead of having a subscription.
Pretty sure that's what they will eventually do
Does it effectively bypass regional restrictions for you, so you can use something like the Claude API from unsupported regions such as Hong Kong, or does it still enforce the official providers' geo-restrictions?
You can pay with crypto though, which seems to be convenient for people under sanctions or with limited access, or if you are in a low-tax jurisdiction (e.g. HK).
That said, I think few people using OpenRouter are actually being selective about providers.
It took half a day to get my opencode setup working; it was not friendly. A lot of manual cross-referencing of models and providers. I was mainly optimizing for relatively fast providers. It's all super fragile and I'm sure half out of date; I have no idea if these picks are still fast, and no promises they are still the same price (pretty terrifying, honestly).
I'm mostly on coding plans so it doesn't super affect me. But man is it a bother to maintain.
Provide cheap and unlimited access to Grok for programmers (hence the Cursor partnership/purchase for distribution).
-> This would drive massive revenue right before the IPO announcement, making it look like the company is growing fast
-> At a loss, but don't worry, we need these funds to build the biggest datacenter in the universe.
This announcement would create enough momentum to increase the valuation and, because of the merger of his companies, would save his X/Twitter investors from a tragedy.
-> Would also be a great service to Cursor investors and others who are stuck with their VSCode fork
But they can't buy Cursor before their IPO, so that's that?
Perhaps they have too much compute because Musk overpromised, and Grok on Twitter/X doesn't need that much compute after he nerfed the porn stuff?
One theory I think Matt Levine posited, is that SpaceX will go public with dual-class stock that gives Elon control even with a minority ownership stake, and will subsequently buy Tesla, which doesn't have dual class stock, making SpaceX the singular "Elon Musk company", with him having operational control despite being public.
OpenRouter is guaranteed to be about the highest-margin operator in the business right now. Everyone wishes they could be them: skimming 5% off as the middleman with hardly any OpEx.
They also show headline prices for the cheapest provider of whatever model, but then need to hit different backends, some of which may be more expensive. For now they absorb those costs, but the VCs always come knocking.
Just my opinion though. Totally agreed that they have one of the best positions amongst all AI providers from a financial standpoint.
And while I don't spend $200 privately, in my startup we discussed this, and our current mental model is that instead of hiring someone new, we prefer to have more money for tokens.
This is easier for us and has a bigger benefit. The cost of a new/first employee is very high; a $200 subscription is not. Upgrading that to, let's say, $400 or $800 is still a lot easier, and if I can run multiple, better agents with that money, let's goooo.
Another reason to hate that word.
From a different perspective, you were granted an incredible gift from the companies who let you use their product on their dime. Hopefully you made the most of it when you had the opportunity.
Second, you have no idea what their costs are. It is most likely that they are simply passing on their costs to you. If that was not the setup, users would just go to another service provider who was providing tokens at a cheaper rate. It's not like there is a dearth of competitors in this business.
Now they just increase the price to buy it back
Just got an email from GitHub saying they'll be raising prices for Copilot.
"To keep up with the way you use Copilot, we're transitioning to usage-based billing, and we want to give you enough time to prepare."
Man, it was fun having my tokens subsidized by Microsoft. If the prices go up too much, I guess I'll try DeepSeek again.
Also, Opus 4.7 seems like a model more intended to save Anthropic money than push the bar.
One provider who was undercutting the market with non-standard billing model moving to a more standard billing and prices doesn't seem like that strong of a signal, other than that Copilot was underpriced.
I don't disagree with your other points though.
How so? By all accounts I've read so far it uses more tokens overall for roughly the same results.
Not really sure why I would stick with Copilot after this, and increasing Sonnet from 1x to 9x for annual subscribers is highway fucking robbery. Very glad I didn't commit myself to an annual plan.
I don’t understand if this means they’re providing actual refunds or not. For them to straight up go back on their word this had to have been a major cost they didn’t exactly expect.
Save us Deepseek!
I don’t need the world’s greatest programmer for the types of vibe coding projects I actually build.
However, if compute keeps going up in cost, hiring skilled people who know how to utilize it becomes more important. This might save the tech economy.
Sometimes the multiplier increase is significant, like Claude Opus 4.6 going from 3x to 27x (https://docs.github.com/en/copilot/reference/copilot-billing...), meaning that using that model will burn through a lot more "tokens" (or whatever the new word for them is).
Will always be grateful for the greed of trillion dollar corporations that subsidized me.
Inconsistent design patterns from page to page, half baked features, inconsistent documentation (but BOY is there ever a lot of it!), NIH ui component libraries that don't act like you'd expect. All that fun stuff.
It's like they speedran the worst parts of enterprise apps.
If there's no discount on credits (in terms of tokens per dollar) over other providers, I'm going to switch to a PAYG provider. If there's a month with little to no coding, I can pocket the $10. What incentive do they give me to stay on this plan?
Or if you're a business with multiple seats, these plans may be less efficient than raw API usage billing: if anyone at your organization fails to use their full $19/$39 allotment each month, that's wasted money, whereas a shared pool of API credits is 100% utilized.
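A toy example of that utilization argument, with made-up numbers (the per-seat credit figure is assumed from the announced plans):

    # Hypothetical org: 10 heavy users and 40 light users on $19/seat credits.
    seats, per_seat = 50, 19.0
    usage = [19.0] * 10 + [5.0] * 40   # dollars of credits actually consumed

    paid = seats * per_seat
    used = sum(min(u, per_seat) for u in usage)
    print(f"paid ${paid:.0f}, used ${used:.0f}, stranded ${paid - used:.0f}")
    # paid $950, used $390, stranded $560 -- a shared API pool strands $0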
I don't think they've thought through the implications of this. Everyone should cancel and go usage-based billing with caps.
I'm guessing they did that (and the 'temporary bonus credits') to make the pill easier to swallow for that side of customers.
It still does make one wonder, why have seats at all though? If everyone is just in one big API credit pool - what do the seats/users accomplish?
I was using 100M+ tokens per day, worth $250 per day or so at API rates, while only paying $160 per month to GitHub.
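Taking those numbers at face value, the implied subsidy was enormous; a quick back-of-envelope, assuming ~30 days of that usage:

    # Back-of-envelope on the numbers above: $250/day of API-rate usage
    # against a $160/month subscription.
    api_value = 250 * 30       # ~$7,500/month at list prices
    subscription = 160
    print(f"~{api_value / subscription:.0f}x subsidy")   # ~47x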
I cancelled my GHCP sub and switched to Codex last week, so far so good but I miss Gemini 3.1 Pro for UI work.
I would say it's a 1000x increase in price for agentic workflows.
I thought I was pretty familiar with available options, but no one in my circles ever mentions this product. It doesn’t seem to have much mindshare.
Has anyone used it? What’s your experience?
https://github.com/features/copilot/cli
What's actually better in the CLI?
It's a lot of stuff that makes me have to type less into the prompt, since it's already getting so much info from my editor
I’ve “vibe-coded” some projects and when I start to find issues or go to refactor them I don’t have that memory of why decisions were made, because many decisions were never made.
I use Claude Code, but I kept my Copilot subscription around mostly for really cheap usage of other models when I need to try a different one (which appears to be ending, in a sense) and also the autocomplete in Visual Studio Code which was really great across a bunch of files, I could make changes in one file and then just tab through some others.
I wonder what other good autocomplete is out there.
I am in the same boat. I tried looking for tab/auto-complete implementations ~ a year ago and it was pretty disappointing. If that has changed, would love to know!
Personally I got CLI fatigue and am happy with Conductor for now, but things are moving fast in this space.
But it's a really good UI for agentic coding. Not sure why more people don't use it. I've tried the others and keep coming back to Copilot chat. It's a really good tool. Which is why the rug pull on pricing is so concerning.
https://github.com/github/copilot-sdk/
Which feels a bit like a kick in the pants for me, as a developer who was primarily using Copilot for VS Code ghost text and very rarely used the Chat sidebar, much less "agentic" tools.
Copilot Pro sort of made sense for my personal account when amortized across a year, but I don't want to "waste" $10/month on credits I won't use most months.
The list is not long but there are quite a few options. Even Grok has its own CLI!
The reality is, even though a CLI prompt looks very simple, it's a very complex piece of software. I personally use Claude Code (with GLM) and anything else I have tried was significantly inferior (with the exception of opencode).
> In March 2026, Windsurf replaced the credit-based system with a quota-based usage system. Instead of buying and spending credits, your plan now includes a daily and weekly usage allowance that refreshes automatically.
With hindsight, per-request pricing makes no sense at all if an agent can burn a widely varying amount of tokens satisfying that request. These pricing plans were designed before coding agents changed the dynamics of token usage.
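To make the mismatch concrete, here's a toy sketch with illustrative token prices (roughly Sonnet-class list prices; the exact numbers are assumptions):

    # Two "requests" with wildly different real costs under token pricing.
    IN_PRICE, OUT_PRICE = 3.0, 15.0   # assumed $ per million tokens

    def api_cost(in_mtok, out_mtok):
        return in_mtok * IN_PRICE + out_mtok * OUT_PRICE

    print(api_cost(0.002, 0.0005))  # one chat turn: about a cent
    print(api_cost(5.0, 0.5))       # agent looping over tools: $22.50
    # Under the old flat model, both counted as one premium request.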
So, let's do some honest evaluation:
1. The model itself is a non-deterministic engine of work with an unknown value; its real value is just magic.
2. The business model itself is a deterministic engine of profit with a known value; whatever the VCs have put into it _must_ be pulled out. If Ed Zitron's numbers are correct, that's several trillion dollars circa 2030.
So do some matrix multiplication of non-determinism vs determinism, and realize that the value proposition for _you_ is only going to decrease, because #1 can never outpace #2, ensuring enshittification captures a smaller and smaller whale.
We know this. This has been the last 2 decades of money extraction from software. It was OK when it was some 12-year-old's parents' credit card. But now it's you, or your business, that's going to either be squeezed for value or squeezed out of the market.
And everyone's squabbling about the color of the cost. OK.
Isn't this like saying "The Porsche you rented at $200/mo is now a Honda. But the price hasn't changed!"
* with a quota of 138 meters per hour, overage charges may apply
The old plans were $0.033/request for Pro, $0.026/request for Pro+ and $0.04/request for pay-as-you-go. That discount is now gone. They even still advertise "5x the number of requests" for Pro+ over Pro.
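Those figures follow from plan price divided by included requests, assuming the commonly cited allotments of 300 premium requests on Pro and 1,500 on Pro+ (treat the allotments as assumptions):

    # Old effective per-request prices, derived as plan price / allotment.
    pro      = 10 / 300    # ~$0.033 per premium request
    pro_plus = 39 / 1500   # $0.026 per premium request
    payg     = 0.04        # pay-as-you-go list price
    print(f"Pro ${pro:.3f}  Pro+ ${pro_plus:.3f}  PAYG ${payg:.3f}")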
The interesting question is how long it takes enterprises to notice the capability/pricing tradeoff, and whether they respond by limiting access to the strongest models internally.
The part that worries me is that this market is still very early. Most developers and organizations are still learning how to use these tools effectively. Raising the experimentation cost this much may slow down the discovery process that makes the tools valuable in the first place.
What we're seeing across the board is every software company tossing AI onto their name or sales pitch and no one understanding what that actually means. But we will spend money on it because of FOMO.
I really question if we're reaching the end of the hype cycle at this point. I wish I were brave enough to put money on it. It feels like there was a command from up top to 'do something with AI' and leadership is scrambling for resume-building projects instead of doing the hard work they should've done over the past two years at a people and process level.
Usage-based payment for AI is 1000x crazier because you're not even getting a guarantee of the thing you pay for in the end. You have to keep feeding it prompts and hope it gives you the solution you want. You may end up without the expected result, yet you are still paying for it. At least with texting, you got what you paid for.
I wonder how long it'll be before all AI costs are flat unlimited monthly fees or even free across the board, without compromise.
I don't mind a PAYG model for a simple chat interface. But when it comes to actually producing things, you burn through TONS of tokens creating the wrong output.
That's already the case if you can self-host an LLM; you don't even need a mythical H200: gamer-grade GeForce cards can get you a long way there (if this page is to be believed: https://www.runpod.io/gpu-compare/rtx-5090-vs-h200)
...after RAM prices return to normal, of course - and then wait another 2 or 3 generations of GPU development for a 96GB HBM card to hit the streets - and also assuming SotA, cloud-only LLMs don't experience lifestyle inflation. But I assume they must, because OpenAI/Anthropic/etc.'s business model depends on people paying for access, so it's in their interest to make running them locally as difficult as possible.
Give it 5 years from now and reassess.
I once asked it to do a comprehensive security review of our code. It churned for nearly an hour (and then produced 90% false positives). Insane that that usage was charged the same amount as me just saying "Hello".
With this pricing change, I see no reason at all to stick with Copilot in principle, but I really need to solve this issue of IDE integration to move on.
Other than that Zed has a similar experience which is pretty decent.
* By which I mean the good one, whatever it's called now - the part of Copilot that used to be a plugin and is now part of VS Code, not the thing that has always been part of VS Code.
Also heard of more and more people moving to Kilo Code or OpenChamber instead.
With this kind of pricing (Sonnet 4.6 has a 9x multiplier, previously 1x), it raises the question of why use Copilot to begin with.
You could easily just buy the tokens directly and have a lot more choice as well.
It also helped build an intuition of what each model could do and which parts it was weaker at, because you could try them almost side by side, especially if one model's output wasn't great.
That said, these were all side projects, so nothing truly consequential. OTOH, you might leave some extra perf on the table, but I found the models worked quite well with the Copilot harness.
Gosh, imagine getting to do that with your TV/Streaming subscription. Getting to pay one fee to access some set number of hours per month from any of the providers.
GitHub has the full power of Azure with their hosted models but it's not being passed to consumers.
(I'm a copilot subscriber since 2022)
On top of that, you've got 2,000 minutes of container runtime, so running cloud agents was included. As was Anthropic agent SDK mode via Copilot, which is very comparable with Claude Code - not identical, as the Anthropic "modular prompt" is much leaner in the SDK version.
I can't say I'm mad; I got more value than I paid for. That said, going forward I'll probably go back to OpenRouter PAYG rather than a subscription.
I got a free 3 months of the £19 Gemini plan and I've been playing with it quite a bit. 3.1 Pro is a good model, I just find it slow. Flash, I think, I underappreciated until now.
> What is the benefit of using the Copilot Pro+ at 39$/month instead of using the Copilot Pro at 10$/month and paying for extra usage?
On my personal account, Copilot Pro+ still only gave me back Opus 4.7, whereas my work's Pro account still lets me use Opus 4.6.
So, my gut says, it's entirely possible that Pro+ will continue to have more segregation on model availability...
FTA
> Last week, we also rolled out temporary changes to Copilot Individual plans, including Free, Pro, Pro+, and Student, and paused self-serve Copilot Business plan purchases. These were reliability and performance measures as we prepare for the broader transition to usage-based billing. We will loosen usage limits once usage-based billing is in effect.
There's enough weasel wording here that I would expect only certain models get re-enabled on Pro.
E.g. lots of people seem to get good enough results from Opus 4.6 (personally I prefer it over 4.7 in GH Copilot)... locking that down to Pro+ would be, given this salvo of enshittification, a 'logical' move on their part.
I wouldn't mind a plan between Free and Pro that is just "all I care about is code completion and next edit suggestions".
> Plan prices aren’t changing
and surprisingly it did not continue with an em-dash followed by something profound that is changing.
Plan prices aren't changing -- the value you get out of it is.
> Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires, however, model multipliers will increase on June 1 (see table).
Can you imagine ten months from now and you're still rolling Sonnet 4.6?
Cancel/refund is looking pretty good. They're doing refunds until May 20.
"To request a refund, go to Settings → Billing and licensing → Licensing, select Manage subscription, then choose Cancel and refund "subscription". (The phrasing varies slightly depending on your subscription ). This option will be available until May 20."
Before:
- Opus 4.6: each premium request counts as 3 premium requests.
After:
- Opus 4.6: each dollar spent counts as 27 dollars of Copilot AI Credits.
Given that you'll receive $19 of AI Credits on the Business plan, that means you can probably say one "hi" to Opus per month.
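Dividing through shows how little that buys (a sketch; GitHub's exact metering formula may differ):

    # $19/month of AI Credits at a 27x Opus multiplier.
    credits, multiplier = 19.0, 27
    print(f"${credits / multiplier:.2f} of underlying Opus usage per month")
    # ~$0.70 -- a couple of modest requests, or one "hi"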
If you are not on an annual plan, multipliers will be gone completely. You can see the rates that apply instead here: https://docs.github.com/en/copilot/reference/copilot-billing...
I haven't been able to use my subscription much over the busy spring months, but I'm being charged every month.
I'd be tempted to keep the subscription if usage-based billing meant that I'd save money when I had less time.
But today, after hearing this, I cancelled my subscription.
But companies do lots of illegal things, and in general nobody takes them to court over it.
1. Github could choose to grandfather in those plans and make no changes until those plans expire.
2. Github could offer, or the user could request, a pro-rated refund along with cancellation of the account.
3. Tough luck, those users agreed that Github could unilaterally change the ToS at any time.
They explicitly stated that they won't be doing that: the multipliers go into effect in June for everyone, annual plan or not.
For example, the German Civil Code states:
- BYOK runs are $0 to Mouse, period.
- Hosted runs are billed at provider cost + a published markup.
- We will never invent a unit of billing that isn't denominated in tokens, seconds, or tool calls.
- Credits in the paid category never expire.
And then they have the gall to say
> "The bottom line: Plan prices aren’t changing"
If anyone lives in a place like Germany or Australia and has an annual sub, please take them to court; you're guaranteed to win, because you have reasonable consumer protections and their ToS doesn't stand a chance. A 9x reduction is unreasonable, and the consumer cannot be expected to see this coming.
In case some diehard enshittifier believes that consumers should know better and businesses should be allowed to get away with it, where is the line? 99% reduction? Is that still okay?
If this situation is to be acceptable then it should be regulated as a financial product like stocks, which come with knowledge tests of "do you know you can lose all of your money?". And come with regulatory compliance and all that.
https://ollama.com/blog/mlx
If I could run a local model comparable to even Sonnet 4.6 without shelling out $50K in hardware, I'd do it in a heartbeat. But all I have is 32 GB of RAM and an old RTX 4080.
Or am I not up to speed? Are there decent coding models that can run on dev laptops? Not that that's what you were suggesting by recommending a local model, necessarily; just curious.
"It" being the end of subsidization of tokens and plans (expected) but while lock-in to foundational models and cloud services is still lacking. Guess investors want their ROI sooner than later, given how big of a wrench the AI boom has thrown into global economics.
Humans retain knowledge, including product knowledge, and can often pick up more work for the same money. And having many of them means your business won't go down if a provider suddenly bumps API pricing.
Turns out that when a request can spawn tens of subagents and use millions of tokens over many turns of tool calls, suddenly GitHub Copilot has a massive financial problem on its hands.
I have Copilot Pro that I use occasionally, but not enough to tell how the switch to per use would affect my usage.
Based on the description, Pro plan users will get $10 in monthly AI Credits, but that seems rather low compared to what you could use on the same plan until now.
That's exactly where the subsidy is being removed.
Google won
Z/Mimo have already raised their prices multiple times since the promotional pricing at the start of the year.
If common people can have a DIY setup with an open source model cheaper than those behemoths with a scale advantage, it's clear that we have been played.
Time to either self host a Chinese open source model or to just pay the cheap Chinese providers.
[0] - Last week's changes limited my personal Copilot Pro account but not my work one.
1. Current models in fact do not solve coding.
2. You can simply wait for a ~year for open-source to catch up and run it locally.
Re 2: Open-weight models seem to be less than a year behind proprietary ones. So sure, if you're willing to spend tens or hundreds of thousands of dollars on a supercomputer that you probably won't fully utilize, instead of renting time on someone else's supercomputer for a lot less.