Enterprises Need To Be Careful Before They Go All-In On Anthropic
I start my day with Claude. I end my day with Claude. But I have also started moving apps off of Claude. My firm, Moor Insights & Strategy, runs core workflows primarily across Claude and Perplexity Computer, alongside a few ChatGPT and Gemini Workplace functions for specific tasks. So I’m writing this as a paying Anthropic direct customer, not as someone watching from the sidelines. I’m also writing it as an analyst who regularly meets directly with enterprises.
Here’s what I've been tracking. There's a pattern at Anthropic of saying one thing in public and doing something different in practice. The Opus 4.6 “nerfing” episode, which I’ll describe in detail below, is the cleanest example. CEO Dario Amodei has been hammering on AI-driven job destruction in the press — while his sales organization is selling the very tools that create the destruction. The rollout of its Mythos model reads like a scaled-up rerun of the Claude 4 “extreme blackmail” disclosures from last spring. Three self-inflicted security incidents hit in a span of thirty days at the company that markets itself as a safety lab. And while all of this was happening, Anthropic was quietly walking up the SaaS stack, going after Microsoft 365, Salesforce, Workday and ServiceNow and establishing direct relationships with enterprises.
Enterprises hold their vendors to a high trust bar. If you’re an enterprise CIO, you don’t standardize directly on a vendor whose words and actions don’t line up, and you don't bet your production stack on a partner whose product roadmap is starting to look like your software vendor list. I'd tell any CIO or chief AI officer the same thing right now: Use Claude, but be careful before going all-in directly with Anthropic. Buy through a hyperscaler or SaaS provider. Reserve direct Anthropic access for R&D and the newest features. We do this at MI&S, and it's the right scope of reliance for most enterprises in 2026.
What Anthropic Says, And What Actually Happens
I’m not the only one tracking this pattern. David Sacks, the All-In Podcast cohost and former White House AI and crypto czar, has been making the same observation publicly for more than a year. On his podcast in April, talking about Mythos, Sacks said: “Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people.” His specific critique was structural. “We've seen a pattern in their previous releases where, at the same time they roll out a new model or model card, they also roll out some study showing really the worst possible implication of where the technology could lead.” Last October, Sacks went further on X, calling Anthropic's broader posture “a sophisticated regulatory capture strategy based on fear-mongering.” While I'd phrase some of this differently, the underlying observation about how Anthropic handles its products is hard to argue with once you look at the specifics.
Exhibit A is the Opus 4.6 nerfing episode. In early April, Stella Laurenzo, a senior director in AMD’s AI group, published on GitHub a forensic audit of 6,852 Claude Code session files and 234,000 tool calls, documenting a measurable drop in reasoning depth. Anthropic’s first response implied that users were the problem. The second said the changes were made for users’ benefit. The third, weeks later, was a postmortem admitting that three engineering missteps had degraded the product. Sure, AMD was vindicated — but it shouldn’t have had to do that work.
Exhibit B is Dario Amodei’s job-destruction tour. In May 2025, days before launching Claude 4, Amodei told Axios that AI could wipe out half of all entry-level white-collar jobs in five years and push unemployment into the 10 to 20% range. The framing was that he had a duty to warn the public. Read that timing again. The CEO of an AI company went on record about a coming bloodbath in the same week his company was shipping the model that does the work. The warning isn’t separate from the sales pitch. It is the sales pitch. It boils down to, “You should be terrified, and the way to get ahead of the terror is to deploy our product before your competitor does.” That’s a fundraising narrative dressed up as ethics.
Exhibit C is the Mythos rollout, where the pattern is probably most visible. Anthropic has framed Mythos as posing “unprecedented cybersecurity risks” and won’t release it generally. Instead it has launched Project Glasswing, an exclusive consortium of about 40 partners including Apple, Microsoft, Google, Nvidia, AWS and JPMorgan, with $100 million in usage credits and Mythos Preview pricing at five times that of Opus 4.6. Compare this to the Claude 4 launch in May 2025, when Anthropic disclosed that internal testing showed the model engaging in “extreme blackmail behavior” against engineers who threatened to replace it. Sacks pointed out the obvious: “We’re now a year later, [and] there's a bunch of open-source models out there that have the same level of capability. Have you seen any examples of blackmail in the wild? I don't think so.”
It’s the same playbook for Mythos: Emphasize the danger, restrict access, generate exclusive demand, raise the next round. To Sacks’ credit, he stopped short of calling Mythos a hoax and acknowledged that the cybersecurity findings look real. But as he put it on All-In, “Anytime Anthropic is scaring people, you have to ask: Is this part of their ‘Chicken Little’ routine, or is it real?” Once a vendor establishes a pattern of using fear as marketing, every future safety disclosure costs them credibility, even when it’s the truth.
Exhibit D is the security incident pattern. On March 26, Fortune reported that an Anthropic CMS misconfiguration left close to 3,000 unpublished assets publicly accessible, including draft Mythos documentation. On March 31, Anthropic shipped a Claude Code npm package with a JavaScript source map that exposed 513,000 lines of TypeScript across 1,906 files. Days later, Adversa AI used the leaked source to identify a prompt-injection bypass of Claude Code’s safety cap. On April 22, Bloomberg reported that a group accessed the Mythos preview through a third-party vendor environment. That’s three incidents in less than thirty days — none of them from external attacks, all of them originating in operational processes. The total customer data exposure was, by Anthropic’s account, zero. That’s true (and thank goodness). But the relevant measurement is the rate of self-inflicted exposure at the company that calls itself the safety-focused alternative.
Operational Outages For Anthropic’s Direct Customers
Layered on top of the trust pattern is the operational reality. On April 6, Claude.ai, the Anthropic API and the company’s mobile applications went down for roughly ten hours. Major outages followed on April 7, April 15, April 16, April 20, April 22 and again on April 28, when authentication failures across Claude.ai, Claude Code and the API generated more than 12,000 user reports on Downdetector during a 78-minute window. Seven outage events in a single month, and as of this writing there is still no public postmortem from the company for the April 20 or April 28 outages.
Now compare contracts. For its Bedrock offering, AWS publishes a 99.9% uptime SLA with financial penalties for breach — and it does this by default, for every customer. Azure and Google Cloud do the same. Anthropic announced a 99.99% SLA for its enterprise tier in March 2026, but it’s negotiated case-by-case rather than published as a standard contract, and service credits are typically capped at 5 to 10% of monthly fees. That cap could be financially insignificant relative to the cost of a ten-hour production outage that affects a customer’s operations.
Independent monitoring has also documented that Claude Max’s claimed 99% uptime is actually 84% in practice, which is another instance of the say-one-thing-do-another pattern showing up at the operational layer.
According to the company, annualized revenue went from $9 billion at the end of 2025 to $30 billion in April 2026, so one would think that Anthropic could afford to fix its issues. Anthropic might argue that the structural fix for outages is in the works. It has recently announced significant infrastructure deals, first with AWS at $100 billion over ten years to deliver up to 5 gigawatts of Trainium 3 capacity, then with the combo of Google and Broadcom to deliver 3.5 gigawatts of next-generation TPU capacity coming online in 2027. Anthropic is also exploring creating its own silicon. This is the right strategy, but it’s also a story that will unfold in the real world in 2027 and 2028. Until all that new capacity lands, every direct customer absorbs the risk of more outages.
Hyperscaler Contracts Can Absorb Risks For Enterprises Using Anthropic
Before I get into the specifics of hyperscaler contracts, let me give you a prime example of why this is important. On February 27, the Trump administration ordered federal agencies to halt use of Claude. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk to national security, a designation typically reserved for adversarial foreign firms. The dispute came down to two clauses that Anthropic refused to remove from its acceptable use policy: mass domestic surveillance and fully autonomous weapons. As things stand, defense contractors have until June 30 to certify they don’t use Anthropic in their work for the Pentagon.
Anthropic’s principled stance is exactly why a lot of developers respect them, and I get that. But here’s the enterprise problem: Anthropic’s AUP travels with the model, and it overrides your use case if your use case lands on the wrong side of Anthropic’s policy. That has already happened, and it’s sure to happen again. So if you have any defense-adjacent revenue, this is a procurement issue. By contrast, hyperscaler contracts absorb the contractual and procurement consequences when any Anthropic-specific federal action lands. Your AWS, Azure or Google relationship is not directly subject to a six-month phase-out. Which brings me to the best path forward for enterprises using Anthropic.
Claude is the only frontier AI model available on all three major hyperscalers: AWS Bedrock, Microsoft Azure Foundry, and Google’s Gemini Enterprise Agent Platform (formerly Vertex AI). AWS also announced Claude Platform on AWS on April 20, currently in preview, which gives enterprises direct Claude features inside the AWS account framework. With these moves, Anthropic itself is acknowledging that the hyperscaler-mediated path is the enterprise path.
To stick with the AWS example, Bedrock pricing for Claude on-demand matches Anthropic’s direct API pricing. That said, total cost via AWS can run 20 to 35% higher once you factor in Provisioned Throughput, cross-region surcharges, and feature add-ons. The premium is stiff, but the premium alone isn’t what you should optimize for. What the hyperscaler buys you on top of the model is operational maturity: enforceable SLAs, IAM authentication, VPC isolation with zero public-internet egress, managed RAG, batch inference at 50% off and the option to fall back to a different model in the same API if Anthropic stumbles. Bedrock also doesn’t store your prompts or share them with Anthropic. So, there are a slew of good reasons to use Anthropic’s (very good) models — while also taking advantage of the operational benefits of using them via your preferred CSP.
Direct enterprise adoption of Anthropic is significant, to say the least. Anthropic reports more than 1,000 customers spending over $1 million annually, a figure that doubled in the two months after the company’s Series G funding round was announced in February 2026. Eight of the Fortune 10 are Claude customers. The point isn’t that nobody is buying direct. The point is that procurement velocity is accelerating fastest through hyperscaler channels, and Anthropic’s own gross-basis revenue reporting captures hyperscaler-mediated consumption inside its top-line figures. The path enterprises are choosing increasingly isn’t pure direct or pure hyperscaler. It’s hyperscaler-mediated even when the model is Claude. That’s the smart path for operational deployments.
Having said that, an honest analysis has to acknowledge the limits of this approach. As noted earlier, Anthropic’s AUP travels with the model, so hosting Claude on Bedrock (or any comparable service) doesn’t change what Claude itself will or won’t do for you. Beyond that, switching costs are real if your prompts and agents are already tuned to Claude’s reasoning style. Latency-sensitive applications still pay a 50 to 150 millisecond proxy tax on Bedrock. And the newest features still hit the direct API first. That’s why my initial recommendation is bifurcated, not absolute: Production enterprise workloads should run through hyperscalers, while R&D, prototyping and the latest feature work still warrant direct Anthropic access.
The final benefit of routing through hyperscalers and SaaS providers is that they offer fairly convenient ways to shift models if one isn’t working well. Enterprises never want to get locked into any vendor, and that includes AI models. Enterprises will want to jockey among Anthropic, OpenAI, Gemini, open source models, their own home-grown models and maybe even xAI.
What Anthropic Needs To Prove For This Verdict To Change
Anthropic isn’t a lost cause, and a lot of what I’ve described is fixable. If I were sitting at lunch with Dario Amodei and his board, I’d tell them five things:
- Communicate operational issues like an enterprise utility, with detailed status pages and root-cause postmortems within 72 hours of every incident.
- Stop the scare-then-sell rollout cadence. When even the former White House AI czar is calling it a “Chicken Little routine” on a podcast that reaches the entire industry, the credibility tax is already compounding.
- Publish enterprise-grade SLAs as standard contracts with meaningful penalties, not negotiated case-by-case with symbolic credit caps.
- Tighten operational security so that three self-inflicted exposures in thirty days can’t happen again.
- Be transparent about which SaaS markets Anthropic is going after and which it isn’t, so enterprises can build a trust model around it.
My Best Advice On Foundation Models For Enterprise Buyers
Use Claude alongside the other leading frontier models from OpenAI (and maybe xAI in the future), plus open source models. Claude is currently the best frontier model for many enterprise use cases. But be careful before going all-in directly with Anthropic. Its pattern of saying one thing and doing another, the structural conflict with your existing software stack, the SLA gap between negotiated symbolic credits and hyperscaler-published penalties, the seven April outages, the three security incidents in thirty days and the AUP that overrides your use case on principle all add up to a vendor relationship that doesn’t yet meet the enterprise trust bar.
Much better to buy through AWS Bedrock, Claude Platform on AWS, Microsoft Azure Foundry, IBM’s agentic platforms or Google Gemini Enterprise Agent Platform. Use the hyperscaler abstraction layer to get an enforceable SLA and to maintain your optionality for model-switching. Reserve direct use of Anthropic for R&D, prototyping and the newest feature work. That's the approach we use at MI&S. Use the model. Just don’t marry the vendor.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with AWS, Google, IBM, Microsoft, Nvidia, Salesforce and ServiceNow.