> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. Will hear it from us, not a screenshot on X or Reddit.
If you don't want things like this spreading through screenshots of X and Reddit, don't run "tests" like this in the first place!
(Also "if it affects existing subscribers" is a cop-out, I need to know the pricing of Claude Code for NEW subscribers if I'm going to adopt it at a company with a growing team, or recommend it to other people, write tutorials etc.)
To play devil's advocate, without A/B testing a lot of decisions would be made with insufficient relevant data, leading to subpar results that negatively affect many down the road.
A lot of decisions made with A/B testing are also made with insufficient relevant data, but it's less obvious since it's easy to think the A/B results cover everything.
> Depends entirely on the stakes and whether personal data is involved
Sure. Let me just A/B test whether or not you'll respond positively or negatively to having your news delivered via push notification or delayed by 10 minutes.
I'm sure you would appreciate being tested on without your consent, just so that I can make an extra quick buck at your expense. Nothing amoral or unethical about it.
pretty much none of these big providers are offering the guarantees needed to be taken seriously in workplaces right now. the technology itself isn't offering the deterministic guarantees that should warrant it in the workplace right now. problem is everyone's foot is just on the gas. even if your workplace isnt paying for it, people are just straight up rolling their own personal claude accounts to do work at orgs.
ive been trying to make the case all year that if we're going to let employees do shit with ai, lets try claude. in the past like.. 2-3 weeks all that goodwill has basically evaporated.
local inference needs to take off asap because all of these entities actually suck and i wouldn't trust a single sla with anthropic. they are not acting like a serious company right now, this is a joke.
I just cancelled before seeing this news. i was already pissed about constantly hitting limits on the 20 a month plan and looking for alternatives and this seals the deal. Bye bye!
I just paid for Pro for the first time 24 hours ago. It's been great, but the limits are crazy. It's nice not dealing with ChatGPT's sycophantic gaslighting, and not having random bugs.
That said, I seem to be caught in that 2% test if I open in a private tab. What nonsense. I wouldn't be paying for Claude if it wasn't for its quality abilities, which necessarily includes Claude Code.
Maybe a silly bet where the head of sales had 1-2 glasses of wine too much... "I bet they will still pay us 20 bucks/mo without CC! Don't believe me? I'm going to prove it!"
>"his title should be changed to Head of Corporate Bullshitting"
They're hitting the physical limits of energy production and chip supply for inference capacity. There's literally nothing that can be done but reduce usage to spread it around for now.
Hopefully the negative responses in that thread + the conversation here on HN might help them realize that totally removing Code access for Pro users isn't a good look.
And with no free trial period on top of that, nobody is going to want to pay $100+ just to check it out. I can't imagine the conversion rate of that test being positive.
Random data point: Guest passes apparently still include Claude Code in their Pro trial. If they are running a test this is a really sloppy way to do it.
It is honestly truly fucking incredible how corps still find new, innovative ways to enshittify. Regular enshittification won't cut it, they have to exercise their artistic creativity. Who the fuck comes up with the idea that what services you get with your subscription are random? It's mind-boggling that some percentage of people visiting the website will be presented with an inferior version of the same subscription for the same price. I'm not even mad (despite my colorful wording), I don't use Claude, just impressed with the bold new territory being explored here.
I think of enshittification as "we're making plenty of money but let's make more." In other words greed.
Based on how much money Zitron has reported that these companies are losing on every subscription, this feels more like they're just trying to survive. In other words "ohshittification."
> It is honestly truly fucking incredible how corps still find new, innovative ways to enshittify. Regular enshittification won't cut it, they have to exercise their artistic creativity.
I had a bit of an epiphany the other day thinking about these VC companies offering products to the public at unsustainable prices. It's classic anticompetitive behavior.
You imagine anticompetitive behavior to come from a monopoly because they can afford to burn money to drive competition out before they bring prices back to profitable but the whole VC burn is the same thing. People talk about it a lot without really saying it explicitly when they talk about moats. The only moat Anthropic and OpenAI have is money and they utilize it by offering products below cost.
The two companies are just trying to outlast the other one until they are the only one left.
So it's not really enshittification as much as you were previously getting the deal of a lifetime.
In physical markets we call this kinda thing dumping and it's often regulated. Maybe offering SaaS or compute at below profitable rates should be investigatable too, to avoid killing competitors too easily?
It could be an A/B test to see whether people without an existing subscription care about Claude Code (CC) at all. If they sign up, then CC is disabled (or not, since it's not really an issue to offer more). Capturing that info would definitely be useful to a growth team.
No I think the test is that some new sign ups won't get Claude code in that tier if they pick it and they're seeing if users will still pay for it without it?
Although the ones that never touch Claude Code are a free $20 a month, the ones that do potentially cost seventy to eighty dollars in compute for their twenty dollars a month. It's not instantly obvious which customers you'd prefer (revenue vs. cash-negative growth; on second thought, obviously they prefer the second).
They've preferred the second so far, but they might have a fair reason to see if they can keep growing with the first one instead or cut down on some loss leading, right?
That's how I read it too - they want to test whether people will still pay for the Pro plan if it doesn't include Claude Code. At the same time they are also saying that even if you subscribe having been told it does include Claude Code, they may still change their mind later and take it away!
What a way to ruin goodwill with the very community they are trying to court. I am on a Pro subscription to use with Claude Code, but it sounds like the days of using it are numbered. I guess I will be trying the latest offering from OpenAI and Google tomorrow and if they are satisfactory I might just switch. Moreover, I have been recommending Anthropic's API solutions up to now to friends and clients. Based on this dumb move I will be now starting with this anecdote and then giving a very hedged recommendation.
Realistically the future of all this is that open models become good enough that LLM as a service becomes a commodity with a race to the bottom in terms of cost. Given where we are today I can easily see open weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work (I justify this like so: if my coding agent is 10x smarter than I am, how would I understand if it did all the right things? I want someone of roughly my intelligence for coding. I can see use cases for like independent pharma work or some such where supergenius level intelligence is justified, but for coding ability for mere mortals to reason about the code is probably more important).
It would signal quite a fundamental pivot if their "Pro" plan excludes coding but supports personal productivity (Cowork). Quite surprising given most people attribute Anthropic's success to their elevation of coding above everything else. To have casual users locked out of that would be a major hit you would think.
Makes me curious about the internal thinking. One theory being they are in a capacity crisis and knocking Pro users off Claude Code is an emergency brake getting pulled. But an opposite theory is it's a revenue move and they think they have the lock in to pull it off. Especially if they are building up to IPO.
Interestingly the Team subscription which is still $20/month/seat still includes Claude Code. But you need minimum 5 seats. So it could be a way to force people off individual plans and into enterprise plans where possibly things scale better for them, especially IPO/wise. When one user wants it in a company, probably they go buy 5 seats.
I have to assume they're compute constrained and thus need to either raise prices or cut their lowest-margin products (which amounts to more or less the same thing, but with different optics), or turn away new users.
My assumption is that people are able to very easily saturate Pro with Claude Code and therefore even though the quotas are lower (more than proportionally) the utilization of those quotas is higher enough that Pro is less profitable.
I just switched from the $10 Copilot subscription to a $20 Claude subscription to get general AI and coding in one bill. I guess I'll try out GPT Codex.
gpt allows you to wire their models into other CLI tools, I'm advising everyone I know to lean that direction. Not trying to become hostage to something like claude's ecosystem for the rest of my development career.
I have a Claude Pro tier subscription; Claude Code, as of right now, is still functional for me. If Anthropic does boot Pro-tier users off Claude Code, I will be cancelling my subscription.
They would probably grandfather existing users in for at least a year or something, you have to imagine, even if this "test" goes very well and points to removal.
This test makes perfect sense with their actions over the last few weeks: they think they've done enough to transition to the general public and away from devs, and that our goodwill is no longer something they need to be concerned with.
Its funny that openai, who in my eyes went for the general public rather than devs initially, seems to be semi pivoting and catching all the fallout from anthropic's recent behavior.
It is a massive bummer, up until those few weeks ago, i was hard pulling for anthropic for quite some time, now i just dont care and hope something dope emerges quickly that signals i wont ever have to consider either of them.
Why would you even want a Claude subscription if not for Claude Code? Anthropic is obviously the best for programming, but probably nowhere else. Seems like a good way to onboard people to the Claude Code experience...everyone who's working seriously with it needs Opus, anyway. But, maybe that's the rub, if the Pro plan includes no Opus usage (which I think has always been the case), you might have a worse impression of Claude Code. Codex 5.4 is better than Sonnet, but not better than Opus.
I dunno, I'm no business genius, but I think we're starting to see these companies try to find ways to make money instead of losing it.
The pro plan does include Opus usage. I've noticed the limits on the web client are a bit higher than through CC, but probably more because of the increased token usage of agentic coding in general.
Claude web is actually pretty good for dealing with random projects outside of code. I have a Home Assistant MCP server [1] behind a Cloudflare tunnel exposed to it that makes maintaining automations a lot easier.
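For anyone wanting to replicate a setup like this: a remote MCP server behind a tunnel is usually registered by URL. A rough sketch of what the equivalent project-level `.mcp.json` entry looks like for Claude Code (the server name and URL are placeholders; claude.ai itself takes the URL through its connector settings instead):

```json
{
  "mcpServers": {
    "home-assistant": {
      "type": "http",
      "url": "https://hass-mcp.example.com/mcp"
    }
  }
}
```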
I have been using https://claude.ai and, initially, it was good, but, unfortunately, it keeps getting worse. I had it search for contact information for a certain public entity, and in Claude's response, all emails were being replaced with [email protected] or something like that. They also added an absolutely horrendous automatic markdown in the text input, so now you can't even properly enter your prompt. It actively gets in my way and prevents me from typing what I want. Fuck you Anthropic.
I would love someone to play devil's advocate against this perspective:
While these tools stand to enable the democratization of productive capability in software engineering and other tasks (creating a renaissance for solopreneurs, let's say), what seems more likely to actually happen is that entrenched capital will become the only player with real access to this "knowledge as a utility" (was it Altman who called it that?).
We already see this playing out on two fronts: 1) the gradual reduction of services and 2) the DRAM market, where local-first tools (i.e., potential disruptors of the emerging "knowledge monopoly" created by the big AI firms) are being stifled by supply shortages. How many promising small-to-medium-sized competitors are being snuffed out of existence (or never starting) due to the insanity of the DRAM/storage/CPU (soon) markets?
The currently-subsidized access that we have to the big Opus-like models will, in parallel, gradually be taken away until only the big players can afford it. And in the end what we will have is hyper-productive skeleton crews at a few consolidated firms performing (or selling expensive access to) basically all of the knowledge labor for society, with very little potential for disruption due to the hardware and "knowledge" scarcity engineered (in part, maybe) by this monopoly.
Not necessarily a closely held belief – just a hunch – which is why I want to see what parts of the picture I might be missing.
Not only because of cost. Mythos has only been released to some of the big tech players because it's "too dangerous" [0] for us little people.
It's easy to see this becoming a permanent position; the latest models and smarts are reserved for establishment members only, the riff-raff get the cast-offs. So the establishment is preserved and the status quo protected.
[0] I'm putting scare/irony quotes around this, but if the reporting is accurate, there is something to this; we built the internet on string and duct tape, it's not hard to see how a very smart AI could cut it to ribbons.
Devil's advocate here - pro and max tier customers for all the major inference providers are loss leaders, from the data we have been able to figure out and reverse-engineer. They are effectively a marketing exercise.
The real profitability is selling tokens to enterprise, and enterprise demand is growing so fast that they are short on the total amount of tokens they can generate per minute, and are prioritising rationally - enterprise gets a better experience - instead of optimizing for their lowest paying (and most loss leading) customers.
We are in a hardware crunch right now but that won't be forever, and eventually (likely 2028) we will get experiences like we got in January from pro-sumer accounts again.
They already effectively halved it with the introduction of Opus 4.7 and the new tokenizer, which basically gives you about half as much usage for the same price. Convenient to price based on tokens and leave what a token is a moving target...
Claude has become practically unusable for Pro users in the past few days. The Opus 4.7 blew through an entire 5 hour limit in one question and didn’t even finish answering it. Zero value delivered.
Opus 4.6 gives 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, as we try to survive between credit expiries it reuses old code, so with the limited credits it re-introduced issues that we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and back to OpenAI. OpenAI also seems to introduce new bugs without being carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of leading to bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
I'm starting to think I've been A/B tested, because this was my experience for almost a year with Claude ever since I tried it for coding. Meanwhile, my coworkers seemed to be able to use it for long periods of time without getting rate limited.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only opus model available now on copilot for some reason is 4.7 and it costs 7.5x tokens, while everything else is 1x, 0.33x or free.
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
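For a rough sense of what those multipliers mean in practice, here is a back-of-envelope sketch. The monthly premium-request allowance (300) is an assumption on my part, and only the multiplier figures come from the comment above; check your own plan's numbers:

```python
# Back-of-envelope: how far a monthly premium-request allowance stretches
# under per-model request multipliers (allowance of 300 is assumed).
allowance = 300
multipliers = {
    "included model (0x)": 0.0,
    "standard premium (1x)": 1.0,
    "cheap premium (0.33x)": 0.33,
    "Opus-class (7.5x)": 7.5,
}

for name, mult in multipliers.items():
    if mult == 0:
        print(f"{name}: doesn't draw down the allowance")
    else:
        # each request consumes `mult` premium credits
        print(f"{name}: ~{allowance / mult:.0f} requests/month")
```

At a 7.5x multiplier, the same allowance covers well under a tenth of the requests a 0.33x model would, which is why the multiplier matters more than the sticker price.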
I wouldn't be surprised if folks start complaining to California government agencies like the Department of Consumer Affairs, and they take it seriously.
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
I'm locked in for a year of Claude Pro. I encountered the same issues as you a couple weeks ago: I'd get like one solid plan done and really, really hope it was a one-shot, because that was legit all I was gonna get out of it for those 5 hours, and it would be ~10% of weekly usage, enough to really make me feel scared to hit send.
I got the 20$ gpt tier, and now i just use claude to craft MD plan docs instead, and then i hand them off to gpt 5.4 and it has been working great. can do about 4x as much work or so based on my feelings(not accurate). if i have just small simple stuff to do i might still fire those off with sonnet and that seems plenty viable, but as soon as its an opus tier task i swap to this workflow.
Little annoying as now im kinda trying to manage a .claude/ and an .opencode/ folder but i kinda just have the .opencode/ stuff reference the .claude/ stuff so its a little less bleh.
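The cross-referencing approach described above can be reduced to a single symlink, so only one copy of the shared docs needs maintaining. A minimal sketch (directory and file names are made up for illustration):

```python
import tempfile
from pathlib import Path

# Stand-in for a project root; in a real project this would be ".".
root = Path(tempfile.mkdtemp())

# The shared Claude docs that both tools should read.
claude = root / ".claude"
claude.mkdir()
(claude / "PLAN.md").write_text("# shared plan\n")

# Point OpenCode's folder at the same files via a symlink.
opencode = root / ".opencode"
opencode.mkdir()
link = opencode / "claude"
link.symlink_to(claude, target_is_directory=True)

# Both paths now resolve to the same file.
print((link / "PLAN.md").read_text())
```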
I've been keeping within my usage because ive been in a funk a bit, but when i was slightly more worried id sorta just juggle whether claude or gpt would handle writing some initial tests as it did seem to kinda be imbalanced otherwise. seems like gpt just spam resets weekly usage throughout the week anyway so its prolly nbd.
> Claude has become practically unusable for Pro users in the past few days. The Opus 4.7 blew through an entire 5 hour limit in one question and didn’t even finish answering it
Glad I’m not the only one!
I’ve been limited so often this week I’ve setup half a dozen token compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
My personal LLM coding stack is now OpenCode, Claude Sonnet for ideation on spec with OpenWhispr for voice-to-text, GLM-5.1 for the orchestrating loop, GLM-4.7 for coding, and DeepSeek R1 for review and validation. It works much, much better than the Claude Code setup I have at work for substantially less money to boot.
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
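As a sanity check on the 128GB figure, the weight memory for a quantized model is easy to estimate. This counts weights only and ignores KV cache and activation overhead, so real headroom needs to be larger; the parameter counts are arbitrary examples, not specific models:

```python
# Estimate the memory needed just to hold quantized model weights.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Gigabytes of weight storage, ignoring KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (70, 120, 235):
    for bits in (4, 8):
        print(f"{params}B params @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB")
```

By this arithmetic a ~200B-parameter model at 4-bit just about fits in 128GB, but only before accounting for context, which is why long agentic sessions are the hard part locally.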
I have to guess that they're compute limited somewhere or the new models are incredibly overusing tokens, so I guess you need to wait for new data centers to come online?
All I want is a reasonably priced subscription combining both coding AI and general AI in a single bill for non professional use that allows me to opt out of my data being used for training.
Google limits history to 72 hours if you opt out of training even if you pay them $20 a month which rules them out for me. I guess I'm going to try the $20 chat gpt plan.
At this point I am wondering if I need to accept that we're moving to a token based model and get comfortable with opencode and manually switching models.
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
Anyone want to wager that this is 100% foreshadowing that Claude Code will be removed from the $100-200/month Max plans soon and go to something like API-only? Or only available on a new $500-1,000/month plan? Restrict the $100-200/month ones to Claude.ai (website) or the Claude desktop app only?
Either way, doesn't seem good to say it's a small test and then start justifying it in this direction.
Do they have a substantial userbase for this outside of claude code? The only two use cases for LLMs that seem to have significant traction are programming, and erotic roleplay lol. If they stop catering to devs, who is their market?
The last couple of weeks using Claude has been…interesting to say the least.
Additionally I run a constant hacking contest between GPT and Claude. It’s a toy project and it simulates an attack/defense of a small corporate network.
Claude used to win pretty handily. Suddenly it’s started to lose 90% of the time. I thought GPT had gotten better but no, looking at the logs it seems that Claude is slower and more prone to running in circles. This is still the case when switching to Opus 4.7.
I don’t know what that means but it’s undoubtedly worse.
Hmm, we just bought my wife an annual subscription at the Pro tier, largely to use Claude Code. Wonder if she'd be grandfathered in or if we'll need to get a refund.
I see lots of speculation that Anthropic needs to cut usage because they are compute constrained. If that's the case, will they be focusing on reducing compute costs for their models?
From what I can tell Opus 4.7 is more resource-intensive than Opus 4.6 is more resource-intensive than Opus 4.5.
Note that some companies, like Amazon, purchased and ran Claude on their own hardware. They didn't change the model parameters during the Claude Opus 4.6 drama.
If Anthropic keeps getting worse, try Amazon Kiro or other companies that run Claude on their own hardware.
It might be expensive and have a worse experience compared to Claude Code, but at least the model itself is the "original flavor."
It seems like there are a lot of fishy smells coming from the timing of the Mythos announcement and the reports of issues with casual users. Combine that with the mass rejection of 4.7 and it kinda seems like they are burning their ‘non-research’ users in order to keep the Mythos users warm.
I could be connecting unrelated dots here, but it sure as hell seems quite coincidental to me.
The only thing they'd need to do to enjoy the positive PR from the DoD spat is shut up and improve (or at least not worsen) product.
Even the downtime would've been fine (as GitHub shows). Instead they're pissing it all away by letting employees make random announcements on random platforms.
Until you work for a company or government agency that is subject to any sort of technology audit. The moment offshore processing in China comes up you'll have a never-ending hole of questions to answer.
Anthropic never wanted my money anyway... they don't allow work + personal accounts to have the same phone number. I had to close my personal account, otherwise I could not complete onboarding at work.
You should be blaming your employer for forcing you to use a personal device to access company resources. You should have been given a company phone or stipend.
Anthropic NEEDS to get better at communicating with their customers. The most meaningful updates we get on changes come from employees on X. It's unprofessional and unsustainable.
I use it on Pro and was just thinking today, there is no way $20 covers the cost of it. But I'm long term unemployed and can't afford any higher tier, so if they drop it guess I'll have to find a non-anthropic solution somehow.
Yes, it's been a way better deal to go for a subscription than pay as you go for me in the past. I had a month where I burnt through ~3.8b tokens which was somewhere in the ballpark of $8k worth of savings.
Now though I don't dare spend tokens on basic note taking with Sonnet because I'm hitting the limit after a couple million tokens on the 20x plan, so they've really tightened the purse strings since November.
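The arithmetic behind a figure like that is easy to reproduce. The blended per-million-token rate and the plan price below are my own assumptions for illustration, not Anthropic's published pricing; swap in real numbers for your own comparison:

```python
# Subscription vs. pay-as-you-go, back-of-envelope.
tokens_used = 3.8e9           # tokens burned in the month (from the comment)
blended_rate_per_m = 2.10     # assumed blended $/1M tokens across input/output/cache
subscription_cost = 200       # assumed monthly price of a 20x plan

api_equivalent = tokens_used / 1e6 * blended_rate_per_m
savings = api_equivalent - subscription_cost
print(f"API-equivalent cost: ${api_equivalent:,.0f}; savings vs. subscription: ${savings:,.0f}")
```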
Unrelated to the Claude Code change, I'm fascinated by people on Twitter and Bluesky posting screenshots of the answers they get from AI like it's an original source of information. It's as if some users see the AI as an authority, and derive some kind of social capital from that authority. For example, in the OP's linked Bluesky thread, one person replies with "Fin says it’s included with Pro" and attached a screenshot from "Fin AI Agent" (which I haven't heard of) that claims Claude Code is still available on the Pro tier. Is that valuable? Personally I don't trust what any AI has to say, especially when the subject is currently in flux.
Another example, I recently saw two people over on Twitter posting LLM responses at each other in a bitter argument about Vercel's security breach. They made no attempt to pretend they'd formulated the ripostes themselves, it was just screenshotting one-sided conversations... What's the point? They could've saved themselves the trouble by spawning two LLMs, naming them "John Doe" and "Fred Doe", then telling them to argue and post the name of the winner.
Disclaimer: I don't use Twitter, Bluesky, Mastodon, etc., so maybe it's not that deep.
Local AI is almost impossible right now with the prices of RAM and GPUs and the sizes of decent models. No way spending even an optimistic 10k, but more likely 20k, on a setup that is good for 5-6 months makes any financial sense.
Impossible to find Mac minis in some areas, and if this goes through expect it to get worse.
I settled for the AMD rough equivalent. It’s not perfect but it can still handle most of the work. Now if only extra ram would come down in price… I find I need about 5 GB more than I have
I remember when they first added Claude Code to Pro — it was limited to Max initially — and my first thought was that it seemed kind of stupid, because at one fifth of my current limit, I would be hitting walls all the time...
I’ve found that I hit the limit just around the end of the 5-hour window, so it’s definitely been usable for me.
But I’ve mostly been using it for gitops infrastructure in my homelab. I wonder if the token usage is lighter than if I were developing an application.
It was for about the first 6 months after I subscribed, then the rate limits were tightened to the point of uselessness and pushed me to cancel and go for the Codex plan instead.
Oh FFS claude code is the only reason I have a pro claude subscription. I don't even use my personal subscription all that much after spending all day with claude/bedrock at work. I will absolutely cancel my pro subscription and continue to use local / Codex if claude code stops working.
I realize this duplicates a lot of sentiment already in this thread but anyone here with pull at Anthropic please understand it will undo a lot of the goodwill that made Claude so successful in the first place.
Would it really be that hard for them to just make all of the changes and then do a redeploy rather than doing them incrementally? It's not like they're just editing the raw HTML sitting on the server manually, right? Actually, don't answer that, I'm not sure I even want to know the answer.
you could try customer support, that chat bot will happily loop you with some more non answers, but try to make you feel good about those non answers :)
But the current plans are unsustainable and prices will have to be effectively raised sooner or later:
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
How long until the $10 Github Copilot subscription goes away? That was a great deal for my limited personal programming. The only reason I switched from it to Claude was to get coding and general ai in a single bill.
I think Github Copilot is in the process of slowly winding down right now. They've been putting very, very long (multiple day) rate limits on users for various esoteric reasons for weeks now and just yesterday or so paused signups.
I assume this has to do with the $20 tier now running out of provisioned tokens so quickly as to be not particularly useful, giving users a bad experience.
The million token context + reduced caching period + new models using more tokens made this a probably unpopular but perhaps unavoidable development.
There's a hard problem here balancing costs and experience. I'm afraid despite the bad experience for people that this is necessary and $20/month was just too big a loss to sustain.
Is there any marginal cost associated with a new subscriber?
I have always heard inference is cheap and the cost was in training, so I assumed any subscriber was making them money, just not enough to cover their insane fixed costs.
The "5x" and "20x" no longer make sense for Max. It's supposed to be 5 times the Pro limits. But if only Max has access, then they'd need to be renamed to "Max 1x" and "Max 4x".
This does not explain the changes to documentation.
A/B testing people without their informed consent is immoral, unethical, and should be illegal.
I can't trust Anthropic to manage their products in a way that supports my workflow.
his title should be changed to Head of Corporate Bullshitting
I, and everyone else I have asked, see this new updated sales UI; sounds like more than 2%.
This is concerning, though. If I lose my current usage allotment at this price point I will likely switch to Codex.
Based on how much money Zitron has reported that these companies are losing on every subscription, this feels more like they're just trying to survive. In other words "ohshittification."
I had a bit of an epiphany the other day thinking about these VC companies offering products to the public at unsustainable prices. It's classic anticompetitive behavior.
You imagine anticompetitive behavior coming from a monopoly, because they can afford to burn money to drive competition out before bringing prices back up to profitable levels, but the whole VC burn is the same thing. People talk about it a lot without really saying it explicitly when they talk about moats. The only moat Anthropic and OpenAI have is money, and they utilize it by offering products below cost.
The two companies are just trying to outlast the other one until they are the only one left.
So it's not really enshittification so much as you were previously getting the deal of a lifetime.
These companies probably need to be forced to at least try to price their products at a level that would be sustainable long term.
Plenty of Pro subscribers never touch claude-code.
Realistically the future of all this is that open models become good enough that LLM as a service becomes a commodity with a race to the bottom in terms of cost. Given where we are today I can easily see open weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work (I justify this like so: if my coding agent is 10x smarter than I am, how would I understand if it did all the right things? I want someone of roughly my intelligence for coding. I can see use cases for like independent pharma work or some such where supergenius level intelligence is justified, but for coding ability for mere mortals to reason about the code is probably more important).
After all, we may be just a data source and not their intended demographic all along.
Makes me curious about the internal thinking. One theory being they are in a capacity crisis and knocking Pro users off Claude Code is an emergency brake getting pulled. But an opposite theory is it's a revenue move and they think they have the lock in to pull it off. Especially if they are building up to IPO.
Interestingly, the Team subscription, which is still $20/month/seat, still includes Claude Code. But you need a minimum of 5 seats. So it could be a way to force people off individual plans and into enterprise plans where possibly things scale better for them, especially IPO-wise. When one user wants it in a company, probably they go buy 5 seats.
My assumption is that people are able to very easily saturate Pro with Claude Code and therefore even though the quotas are lower (more than proportionally) the utilization of those quotas is higher enough that Pro is less profitable.
It's funny that OpenAI, who in my eyes went for the general public rather than devs initially, seems to be semi-pivoting and catching all the fallout from Anthropic's recent behavior.
It is a massive bummer; up until those few weeks ago I was hard pulling for Anthropic for quite some time. Now I just don't care and hope something dope emerges quickly that signals I won't ever have to consider either of them.
I dunno, I'm no business genius, but I think we're starting to see these companies try to find ways to make money instead of losing it.
Claude web is actually pretty good for dealing with random projects outside of code. I have a Home Assistant MCP server [1] behind a Cloudflare tunnel exposed to it that makes maintaining automations a lot easier.
[1] https://github.com/homeassistant-ai/ha-mcp
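For anyone curious what wiring up a remote MCP server like that looks like, a Claude Desktop config along these lines is one common pattern. This is a sketch, not the linked project's documented setup: the tunnel URL is a placeholder, and `mcp-remote` is just one stdio-to-HTTP bridge; check the server's own README for the exact endpoint and auth:

```json
{
  "mcpServers": {
    "home-assistant": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://ha.example-tunnel.com/mcp"]
    }
  }
}
```

On the Claude web app the rough equivalent is adding the tunnel URL as a custom connector instead of editing a config file.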
While these tools stand to enable the democratization of productive capability in software engineering and other tasks (creating a renaissance for solopreneurs, let's say), what seems more likely to actually happen is that entrenched capital will become the only player with real access to this "knowledge as a utility" (was it Altman who called it that?).
We already see this playing out in two fronts: 1) the gradual reduction of services and 2) the DRAM market, where local-first tools (i.e., potential disruptors of the emerging "knowledge monopoly" created by the big AI firms) are being stifled by supply shortages. How many promising small-to-medium-sized competitors are being snuffed out of existence (or never starting) due to the insanity of the DRAM/storage/CPU (soon) markets?
The currently-subsidized access that we have to the big Opus-like models will, in parallel, be gradually be taken away until only the big players can afford it. And in the end what we will have is hyper-productive skeleton crews at a few consolidated firms performing (or selling expensive access to) basically all of the knowledge labor for society, with very little potential for disruption due to the hardware and "knowledge" scarcity engineered (in part, maybe) by this monopoly.
Not necessarily a closely held belief – just a hunch – which is why I want to see what parts of the picture I might be missing.
It's easy to see this becoming a permanent position; the latest models and smarts are reserved for establishment members only, the riff-raff get the cast-offs. So the establishment is preserved and the status quo protected.
[0] I'm putting scare/irony quotes around this, but if the reporting is accurate, there is something to this; we built the internet on string and duct tape, it's not hard to see how a very smart AI could cut it to ribbons.
The real profitability is selling tokens to enterprise, and enterprise demand is growing so fast that they are short on the total amount of tokens they can generate per minute, and are prioritising rationally - enterprise gets a better experience - instead of optimizing for their lowest paying (and most loss leading) customers.
We are in a hardware crunch right now but that won't be forever, and eventually (likely 2028) we will get experiences like we got in January from pro-sumer accounts again.
“You asked, and we listened: Introducing Max Plus, our biggest plan yet, designed for those…” blah blah
Opus 4.6 is giving 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, as we try to survive between credit expiries it reuses old code, re-introducing issues into the code that we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and back to OpenAI. OpenAI also seems to introduce new bugs without being carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of leading to bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only Opus model available now on Copilot, for some reason, is 4.7, and it costs 7.5x tokens, while everything else is 1x, 0.33x, or free.
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
I got the $20 GPT tier, and now I just use Claude to craft MD plan docs instead, then hand them off to GPT 5.4, and it has been working great. It can do about 4x as much work or so, based on my feelings (not accurate). If I have just small simple stuff to do I might still fire those off with Sonnet, and that seems plenty viable, but as soon as it's an Opus-tier task I swap to this workflow.
It's a little annoying, as now I'm kinda trying to manage both a .claude/ and an .opencode/ folder, but I kinda just have the .opencode/ stuff reference the .claude/ stuff, so it's a little less bleh.
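One zero-duplication way to make the .opencode/ stuff "reference" the .claude/ stuff is a symlink, so both tools read the same instructions file. A minimal sketch, assuming a POSIX filesystem; the paths and file names here are illustrative, not what either tool mandates:

```python
import tempfile
from pathlib import Path

# Demo in a temp dir; in a real repo these would be your project paths.
root = Path(tempfile.mkdtemp())
src = root / ".claude" / "CLAUDE.md"      # the one file you actually edit
dst = root / ".opencode" / "AGENTS.md"    # the other tool's expected name

src.parent.mkdir(parents=True)
src.write_text("# shared agent instructions\n")

dst.parent.mkdir(parents=True)
dst.symlink_to(src)                       # .opencode/ just points at .claude/

assert dst.read_text() == src.read_text()
print("linked:", dst.is_symlink())        # → linked: True
```

Edits then only ever happen under .claude/, and the other folder stays a thin shim.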
I've been keeping within my usage because I've been in a funk a bit, but when I was slightly more worried I'd sorta just juggle whether Claude or GPT would handle writing some initial tests, as it did seem to be kinda imbalanced otherwise. Seems like GPT just spam-resets weekly usage throughout the week anyway, so it's prolly NBD.
Glad I’m not the only one!
I’ve been limited so often this week I’ve setup half a dozen token compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
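Most of that "token compression" boils down to trimming context before it ever reaches the model. A minimal sketch of the idea, using the rough heuristic that one token is about four characters of English text; the function names and budget are illustrative, not any specific tool:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_to_budget(chunks: list[str], budget_tokens: int) -> list[str]:
    """Keep the most recent context chunks that fit within the token budget."""
    kept, used = [], 0
    for chunk in reversed(chunks):       # walk newest-first
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break                        # budget exhausted; drop older chunks
        kept.append(chunk)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["old design notes " * 50, "recent stack trace", "current question"]
print(trim_to_budget(history, budget_tokens=20))
```

As the commenter says, this only delays the inevitable: the budget fills back up as soon as the agent keeps working.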
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
(Head of Growth @AnthropicAI)
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
Is there a wager that this is 100% foreshadowing Claude Code will be removed from the $100-200/month Max plans soon and go to something like API-only? Or only available on like a new $500-1,000/month plan? Restrict the $100-200/month ones to Claude.ai (website) or Claude desktop app only?
Either way, doesn't seem good to say it's a small test and then start justifying it in this direction.
Additionally I run a constant hacking contest between GPT and Claude. It’s a toy project and it simulates an attack/defense of a small corporate network.
Claude used to win pretty handily. Suddenly it’s started to lose 90% of the time. I thought GPT had gotten better but no, looking at the logs it seems that Claude is slower and more prone to running in circles. This is still the case when switching to Opus 4.7.
I don’t know what that means but it’s undoubtedly worse.
From what I can tell, Opus 4.7 is more resource-intensive than Opus 4.6, which is in turn more resource-intensive than Opus 4.5.
If Anthropic continues getting worse, try Amazon Kiro and other companies that run Claude on their own hardware.
It might be expensive and have a worse experience compared to Claude Code, but at least the model itself is the "original flavor."
These days, it's hard to ask for much.
I could be connecting unrelated dots here, but it sure as hell seems quite coincidental to me.
Even the downtime would've been fine (as GitHub shows). Instead they're pissing it all away by letting employees make random announcements on random platforms.
So I pay for Codex instead.
Why not with email?
That is the only way to avoid being held captive by Anthropic / Meta / Google.
Now, though, I don't dare spend tokens on basic note taking with Sonnet, because I'm hitting the limit after a couple million tokens on the 20x plan, so they've really tightened the purse strings since November.
https://bsky.app/profile/mattgreenrocks.bsky.social/post/3mk...
Another example: I recently saw two people over on Twitter posting LLM responses at each other in a bitter argument about Vercel's security breach. They made no attempt to pretend they'd formulated the ripostes themselves; it was just screenshots of one-sided conversations... What's the point? They could've saved themselves the trouble by spawning two LLMs, naming them "John Doe" and "Fred Doe", then telling them to argue and post the name of the winner.
Disclaimer: I don't use Twitter, Bluesky, Mastodon, etc., so maybe it's not that deep.
I settled for the AMD rough equivalent. It's not perfect but it can still handle most of the work. Now if only extra RAM would come down in price… I find I need about 5 GB more than I have.
I remember when they first added Claude Code to Pro — it was limited to Max initially — and my first thought was that it seemed kind of stupid, because at one fifth of my current limit, I would be hitting walls all the time...
But I’ve mostly been using it for gitops infrastructure in my homelab. I wonder if the token usage is lighter than if I were developing an application.
I realize this duplicates a lot of sentiment already in this thread but anyone here with pull at Anthropic please understand it will undo a lot of the goodwill that made Claude so successful in the first place.
Would it really be that hard for them to just make all of the changes and then do a redeploy rather than doing them incrementally? It's not like they're just editing the raw HTML sitting on the server manually, right? Actually, don't answer that, I'm not sure I even want to know the answer.
3 hours later…
> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
https://x.com/TheAmolAvasare/status/2046724659039932830
April: "The fact that we're doing X isn't news because we're only starting to do X"
August: "The fact that we've fully rolled out X isn't news because we started X in April"
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
https://xcancel.com/TheAmolAvasare/status/204672528250217304...
The Anthropic website has become inconsistent. Some places say Claude Code is included in the Pro plan, other pages don't.
The million token context + reduced caching period + new models using more tokens made this a probably unpopular but perhaps unavoidable development.
There's a hard problem here balancing costs and experience. I'm afraid despite the bad experience for people that this is necessary and $20/month was just too big a loss to sustain.
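A back-of-envelope illustration of why a flat $20/month can be a big loss, with purely made-up numbers (real per-token prices and per-user volumes vary widely and aren't public):

```python
# Hypothetical figures for illustration only -- not Anthropic's actual
# prices or usage data. A heavy agentic-coding user can push tens of
# millions of tokens a month through the model.
api_price_per_mtok_in = 3.00    # assumed $/1M input tokens
api_price_per_mtok_out = 15.00  # assumed $/1M output tokens
monthly_in_mtok = 40            # assumed monthly input volume (millions)
monthly_out_mtok = 2            # assumed monthly output volume (millions)

api_equivalent = (monthly_in_mtok * api_price_per_mtok_in
                  + monthly_out_mtok * api_price_per_mtok_out)
print(f"API-equivalent cost: ${api_equivalent:.2f} vs a $20 subscription")
```

Under those assumptions one such user consumes the API-rate equivalent of $150/month against a $20 plan, which is the shape of the imbalance being described.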
Is there any marginal cost associated with a new subscriber?
I have always heard inference is cheap and the cost was in training, so I assumed any subscriber was making them money, just not enough to cover their insane fixed costs.
But I am just guessing.
Maybe this is coming next
"We've determined that claude code is too dangerous to your code base to release, so we are withdrawing it"