Anthropic made a big mistake (archaeologist.dev)
69 points by codesparkle 5 hours ago | 28 comments
ojosilva 2 hours ago
They did not. Anthropic is protecting its huge asset: the Claude Code value chain, which has proven itself a winner among devs (me included, after trying everything under the sun in 2025). If anything, Anthropic's mistake is that they are incapable of monetizing their great models in the chat market, where ChatGPT reigns: i.e. Anthropic did not invest in image generation; Google did, and Gemini has a shot at that market now.

Apparently nobody gets the Anthropic move: they are only good at coding, and that's a very thin layer. Opencode and other tools are game for collecting inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could; Cursor did it. Also, Opencode makes it all easily swappable: just eval something by popping in another API key and see if Codex or GLM can replicate the CC solution. Oh, it does! So let's cancel Claude and save big bucks!

Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc.). The move totally makes sense, like it or not.
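For anyone who hasn't seen it, that escape hatch looks roughly like this - the two variable names are the real ones the CC agent reads, but the endpoint URL and token below are placeholders, not a working configuration:

```shell
# Point the Claude Code agent at an alternative Anthropic-compatible
# backend. URL and token are illustrative placeholders.
export ANTHROPIC_BASE_URL="https://compatible-endpoint.example.com"
export ANTHROPIC_AUTH_TOKEN="sk-placeholder"
# claude -p "explain this repo"   # would now talk to the endpoint above
```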

Palmik 1 hour ago
I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.
F7F7F7 1 hour ago
OP is responding to an article that largely frames Anthropic as clueless.
Majromax 1 hour ago
> Anthropic is protecting its huge asset: the Claude Code value chain

Why is that their “huge asset?” The genesis of this complaint is that Opencode et al. replace everything but the LLM, so it seems like the latter is the true “huge asset.”

If Claude Code is being offered at or near operational breakeven, I don’t see the advantage of lock-in. If it’s being offered at a subsidy, then it’s a hint that Claude Code itself is medium-term unsustainable.

“Training data” is a partial but not full explanation of the gap, since it’s not obvious to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.

dchftcs 27 minutes ago
Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates to a huge advantage, and continuing to be the one that's slightly but consistently better than the others is the only way they can justify the investments in them at all. It's natural to then consider that an agent trained to use a specific tool will be better at using that tool. If Claude continues to be slightly better than other models at coding, and Claude Code continues to be slightly better than OpenCode, combined it can be difficult to beat them even at a cheaper price. Right now, even though Kimi K2 and the like are cheaper with OpenCode and perform decently, I spend more than 10x the amount on Claude Code.
jrsj 2 hours ago
It might make sense from Anthropic's perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.

Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres

Terretta 1 hour ago
As a user of Claude Code via API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance, pay-as-you-go, some $500 - $1500 at a time by credit card) at just $5,000 a month.

It's a supposedly professional tool with a value proposition that requires being in your workflow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?

An error message says contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden to give them more than a Max-like $200/dev/month for the usage-based API that's "so expensive".

They are literally saying "please don't give us any more money this month, thanks".

ojosilva 40 minutes ago
Their target is the Enterprise anyway. So they are apparently willing to enrage their non-CC user base over vendor lock-in.

But this is not the equivalent of Oracle over Postgres, as those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users in the webs). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.

Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million dollar Sonnet model is telling OC to do something it can't because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max" you get an enraged customer anyway, so you might as well cut the whole thing off at the root.

solumunus 2 hours ago
I’ve done that and unless I’m missing something it seems like it would be trivial for me to switch to an alternative.
jrsj 2 hours ago
If you’ve only got a CLAUDE.md and subagent definitions in markdown, it is pretty easy to do at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.

The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.

Philpax 2 hours ago
I'll be honest; I'm pretty sure this "mistake" will be completely forgotten by next month. Their enforcing that their subscription only works with their product should not really come as a surprise to anyone, and the alt-agent users are a small enough minority that they'll get over it.
jrsj 1 hour ago
I’m starting to think you’re right but only because software engineers don’t seem to actually value or care about open source anymore. Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.

Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.

Philpax 1 hour ago
There's nothing stopping you from using OpenCode with any other provider, including Anthropic: you just can't get the subsidised pricing while doing so. This is irritating, yes - it certainly disincentivises me from trying out OpenCode - but it's also, like, not unexpected?

In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.

jrsj 1 hour ago
Yes, but why are they subsidizing the pricing and requiring you to use their closed-source client to benefit from it? It’s the same reason the witch in the story of Hansel and Gretel was giving out free candy.
bpt3 1 hour ago
> Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.

We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.

As with everything else (finance comes to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.

nerdjon 2 hours ago
I am sure the company is going to get very upset at people no longer paying who were using their product in a way that they did not intend. Just going to be heartbroken. I will never understand the people who make a big deal about "I will never support this business again because of X" when X is not something the company ever officially said they cared about.

In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how, and on what terms. It is up to the user to determine if that is acceptable or not.

Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.

Especially when they are not blocking the ability to use the normal API with these tools.

I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.

Palmik 1 hour ago
To me it's very easy to understand why people would be upset and post about it online.

1. The company did something the customers did not like.

2. The company's reputation has value.

3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside with "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.

nerdjon 1 hour ago
Sure, it is perfectly valid to complain all you want. But it is also important to remember the context here.

I could write an article complaining about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.

Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.

Considering you need to tell the agent that the tool you are using is something it isn't, it is clear that this working at all was never the intention.

lemontheme 1 hour ago
Before this drama started, OpenCode was just another item on a long list of tools I've been meaning to test. I was 100% content with CC (still am, mostly). But it was nice to know that there were alternatives, and that I could try them, maybe even switch to them, without having to base my decision on token pricing. The idea of there being an escape hatch made me less concerned about vendor lock-in and encouraged me to a) get my entire team onto CC and b) invest time into building CC's flavor of agents, skills, commands, hooks, etc., as well as setting up a marketplace to distribute them internally.

While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.

Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.

So while this is all very anecdotal, here's what Anthropic accomplished:

1) I no longer feel like evangelizing for their tool

2) I installed a competitor and validated it's as good as others are claiming.

Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.

TylerJewell 16 minutes ago
Note - we primarily make use of Gemini CLI, which is very promising, but we have made pretty extensive trials of Claude Code.

Anthropic hasn't changed their licensing; they're just enforcing what the licensing always required by closing a loophole.

Business models aside - what is interesting is whether the agent::model relationship requires a proprietary context and language such that, without that mutual interaction, coding accuracy and safety are somehow degraded. Or will it be possible for agentic frameworks to plug and play with models and generate similar outcomes?

So far, we tend to see that the former is needed --- that there are improvements to be had when the agentic framework and the model's language understanding are optimized for each other's unique properties. Not sure how long this distinction will matter, though.

nwienert 2 hours ago
A good example of an extremely small but extremely vocal minority doing their best to punish a company for not catering to their explicitly disallowed use case for no reason other than they want it. I'd bet this has 0 negative impact on their business.
joelthelion 20 minutes ago
650,000 monthly active users is not "extremely small". I wonder how many total users Claude Code has?
jsumrall 2 hours ago
illegal?
nwienert 2 hours ago
my 3am writing tends to be less precise, updated
msxT 2 hours ago
Anthropic doesn’t want you to use a tool that makes it easy to switch to a competitor’s model when you reach a cap. They want to nudge you toward upgrading - Pro -> Max -> Max 20× -> extra usage - rather than switching to Codex. They can afford to make moves like this as long as they stay on top. OpenAI isn’t the good guy here - it’s just an opportunity for them to bite off a bit more of the cake.
F7F7F7 1 hour ago
I’d say the vast majority of people on OpenCode aren’t using CC in combination with Codex.

It’s CC with Qwen and KLM and other OSS and/or local models.

tolerance 2 hours ago
This reads like an overreaction. I think both OpenAI and Anthropic are soon to settle upon their target markets; that each of them are attracting separate crowds/types of coders and that the people already sold on Claude Code don’t care about this decision.
jsumrall 2 hours ago
Honestly very confused by the people happy or agreeing with Anthropic here. You can use their API on a pay-per-use basis, or (as I interpreted the agreement) you can prepay as a subscription and use their service with hourly & weekly session limits.

What's changed is that I thought I was subscribing to use their API services - Claude Code as a service. They are now pushing it more as using only their specific CLI tool.

As a user, I am surprised, because why should it matter to them whether I open my terminal and start up `claude code`, `opencode`, `pi`, or any other local client I want to use to send bits to their server.

Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage among all users doing X amount of coding, which, when used with their own CLI tool, works especially well with client-side and service caching and tool-call log filtering - something 3rd party clients also do with varying effectiveness.

So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.

F7F7F7 1 hour ago
This is like asking why you can't use ChatGPT in the Claude desktop app. “They are both Electron apps. What’s the problem?”
pella 2 hours ago
> "For me personally, I have decided I will never be an Anthropic customer, because I refuse to do business with a company that takes its customers for granted."

The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.

nicce 2 hours ago
The context here is that Anthropic tried to suppress alternatives. Boycott works here because there are alternatives, as the writer addressed.
pella 2 hours ago
If "never" means never, you are not leverage, you are just gone.
nicce 1 hour ago
"Just gone" is the biggest leverage against a business? Note that a boycott is usually conditional. If they change things, the customer might come back.
mohsen1 2 hours ago
I was paying for Max but after trying GLM 4.7 I am a convert. I hardly hit the limit, but even if I do, it is cheaper to get two accounts from Z.ai than one Max from Anthropic.
visarga 2 hours ago
> they really, really want to own the entire value chain rather than being relegated to becoming just another "model provider"

I remember when the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, and nobody trusted that they could make it.

Maybe being "just a model provider" or "just an LLM wrapper" matters less than the context of work. What I mean is that the benefits collect neither at the model provider nor at the wrapper provider, but where the usage takes place: whoever sets the prompts and uses the code gets the lion's share of the benefits from AI.

estearum 2 hours ago
Those are two sides of the same coin.

Being "just a wrapper" wouldn't be a risky position if the LLM providers were content to be "just a model." But they clearly weren't, and so it wasn't.

alvsilvao 2 hours ago
Just checked https://opencode.ai/.

It looks like they need to update their FAQ:

Q: Do I need extra AI subscriptions to use OpenCode?

A: Not necessarily, OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models.

Philpax 2 hours ago
That's not inaccurate. You can still use all of those providers: you just need to pay API costs, instead of reusing your subscription.
kzahel 2 hours ago
Can't Opencode just modify their implementation to use the Anthropic Claude Code SDK directly? The issue is they were spoofing OAuth. I tried OpenCode before this whole drama, immediately noticed the OAuth spoofing, and never authorized it. Doesn't Opencode speak ACP? https://agentclientprotocol.com/overview/agents
dd8601fn 1 hour ago
It already does.

You can use the Anthropic API in any tool, but these users wanted to use the claude code subscription.

macinjosh 2 hours ago
The SDK bundles Claude Code and uses it for its agentic work. The SDK really only lets you control the UI layer. It also doesn’t yet fully support plan mode.
kentonv 2 hours ago
I mean... I don't like it either but this is pretty standard stuff and it's obvious why they're doing it.

Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.

There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
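To make that concrete, here's a hedged sketch (the endpoints and model names are placeholders): with provider-agnostic tooling, the conversation history lives client-side, so "switching mid-conversation" is just repointing two variables and replaying the same history:

```shell
# The same client-side history can be sent to any backend; "switching
# models" is just swapping two variables. All values are placeholders.
HISTORY_FILE="./conversation.json"     # messages so far, stored locally
BASE_URL="https://api.anthropic.com"   # current backend
MODEL="claude-example"
# ...mid-conversation, swap to a local open-weight server:
BASE_URL="http://localhost:8000/v1"
MODEL="qwen-example"
echo "next request: $MODEL via $BASE_URL (history: $HISTORY_FILE)"
```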

When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.

If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.

So they need to make the product sticky somehow. So they:

1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.

2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.

These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.

Everyone is mad about #2, but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly offered usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how desperate they really are to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit of Spotify-like subscription models, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's purely a discount offered for locking yourself in.

But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.

vrosas 2 hours ago
> And that means: none of them can make a profit

Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control of how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the # of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.

cmrdporcupine 2 hours ago
I'm not "mad", I'm "sad" -- because I was very much on "Team Anthropic" a few months ago ... but the tool has failed to keep up in terms of quality.

If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.

They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle - something we demand of junior developers and interns on our teams, but somehow not of our AIs?

They're producing mountains of sometimes-good but often unreviewable code, and it isn't the "AI"'s fault; it's the heuristics in the tools.

So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.

kentonv 2 hours ago
There is so much low-hanging fruit in the tooling side right now. There's no way Anthropic alone can stay ahead of it all -- we need lots of different teams trying different things.

I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.

Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.

cmrdporcupine 1 hour ago
(Also, Kenton, I'd add that I'm an admirer more broadly of your work, and so if by chance you end up creating some public project commercial or open source in the general vein we're talking about here, I'd love to contribute)
cmrdporcupine 2 hours ago
Yep. And in a way this has always been the story. It's why there are just so few companies making $$ in the pure devtooling space.

I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools, and unfortunately their attempts to keep abreast of the AI wave have been middling.

Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.

IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into professional day jobs where I have to be responsible in a code review for the output they produced.

And, again, it's not the LLM that's at fault. It's the steering wheel driving it that's missing a basic non-yeet process flow.

hakanderyal 1 hour ago
Try plan mode if you haven't already. Stay in plan mode until it is to your satisfaction. With Opus 4.5, when you approve the plan it'll implement the exact spec without getting off track 95% of the time.
cmrdporcupine 1 hour ago
It's fine, but it's still "make big giant plan then yeet the impl" at the end. It's still not appropriate for the kind of incremental, chunked, piecework that's needed in a shop that has a decent review cycle.

It's irresponsible to your teammates to dump very large giant finished pieces of work on them for review. I try to impress that on my coworkers, and I don't appreciate getting code reviews like that for submission, and feel bad if I did the same.

Even worse if the code review contains blocks of code which the author doesn't even fully understand themselves, because it came as one big block from an LLM.

I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.

But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.

Which is something that the tool should have done for me, and involved me in.

Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But planning, implement, review in small chunks should be the default way of working, not something I have to force externally on it.

What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.

fathermarz 1 hour ago
After reading this opinion ten times today, can someone explain to me why OpenCode is a “better harness”? Or is it just because it’s open source that people support it?
cadamsdotcom 1 hour ago
All these harnesses are free and grateful for any use they get. It might be worthwhile to try it and see.
fathermarz 1 hour ago
Good call. Will test it out today
hakanderyal 1 hour ago
It's mostly based on feelings/"vibes", and hugely dependent on the workflow you use. I'm so happy with Claude Code, Opus and plan mode that I don't feel any need to check the others.
vorpalhex 1 hour ago
OpenCode has some more advanced features and plays nicely in more advanced setups. ClaudeCode isn't bad at all, but OpenCode has some tricks up its sleeve.
AznHisoka 3 hours ago
Isn't Claude Code more popular than Codex?
orwin 1 hour ago
> they really, really want to own the entire value chain rather than being relegated to becoming just another "model provider"

This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in the prefill inference and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.

m0llusk 1 hour ago
I'm supposed to adopt these wonderful new tools, but no one can figure out exactly what they are, how they should work, how much they cost, or other basics. This is worse than the early days of the cloud. Hopefully most of this goes the way of SOAP.
cmrdporcupine 3 hours ago
Yeah I think Anthropic has the "right" to do this. That's fine.

But they also have shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc instead).

People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.

They do it because Claude Code the tool itself is full of bugs and has performance issues, and OpenCode is of higher quality, has more open (surprise) development, is more responsive to bug fixes, and gives them far more knobs and dials to control how it works.

I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind. Notorious terminal rendering issues, slow memory leaks, or compaction related bugs that took them 3 months to fix...

Failure to deal with quality issues and listen to customers is hardly a good sign of company culture leading up to an IPO... If they're trying to build a moat... this isn't a strong way to do it.

If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.

F7F7F7 1 hour ago
Meh. I’ve never used my x20 Max account in OpenCode because the OAuth solution was clearly “hacky”.

But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while KLM is planning and doing code reviews and o4 mini breaking down screenshots into specs while local QWEN is doing the work.

My experience with bugs has also been the exact opposite of what you described.

netdur 2 hours ago
Anthropic thinks highly of its "moat", yet it is spreading FUD to kill open weights.
zzzeek 3 hours ago
"renowned vibe-coder Peter Steinberger"

what? that's a thing? why would a vibe coder be "renowned"? I use Claude every day but this is just too much.

eddyg 21 minutes ago
He vibe-coded Clawdbot and lots of people are spinning up their own.

https://clawd.bot/ https://github.com/clawdbot/clawdbot

He's also the guy behind https://github.com/steipete/oracle/

hakanderyal 2 hours ago
He is pretty popular in the AI/vibe coding niche on X and amassed a good following with his posts. Clearly the user is in the same bubble as him.
Mystery-Machine 2 hours ago
> For me personally, I have decided I will never be an Anthropic customer, because I refuse to do business with a company that takes its customers for granted.

Archaeologist.dev Made a Big Mistake

If guided by this morality column, Archaeologist should immediately stop using pretty much anything they are using in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off any sight of "bad".

dmezzetti 2 hours ago
It's too bad that Anthropic is so hostile to open source. It's a big missed opportunity for them.
jrsj 2 hours ago
The people defending Anthropic because “muh terms of service” are completely missing the point. These are bad terms. You should not accept these terms and bet the future of your business on proprietary tooling like this. It might be a good deal right now, but they only want to lock you in so that they can screw you later.
solumunus 2 hours ago
How exactly are they going to lock me in?
jrsj 2 hours ago
By only supporting their own cloud service for remote execution & slowly adding more and more proprietary integration points that are incompatible with other tools.
einsteinx2 1 hour ago
But switching cost to a different CLI coding tool is close to zero… I truly don’t understand the argument that using Claude Code means betting your business on that particular tool. I use Claude Code daily, but if tomorrow they massively raised prices, made the tool worse, or whatever I’d just switch to a competitor and keep working like nothing happened.

To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.

jrsj 1 hour ago
They wouldn’t require you to use their closed source client if they weren’t planning on using it to extract value from you later. It’s still early & a lot more capabilities are going to be coming to these tools in the coming months. Claude Code or an equivalent will be a full IDE replacement and a lot of the integration and automation mechanisms are going to be proprietary. Want to offload some of that to the cloud? Claude Code Web is your only option. Someone else drops a better model or a model that’s situationally better at certain types of tasks? You can’t use it unless you move everything off of that stack.
jrsj 1 hour ago
As an example, this is the exact type of thing Anthropic doesn’t want you to be able to build with Claude & it’s why they want you on their proprietary tooling:

https://builders.ramp.com/post/why-we-built-our-background-a...

reilly3000 2 hours ago
I just cancelled, citing this as the reason. I’m actually not all that torn up about it. I mostly want to see how Anthropic responds to the community about this issue.