Anthropic Weekly Intelligence Report
Week of March 1–7, 2026 | Published: March 7, 2026
Executive Summary
Anthropic's week of March 1–7 was dominated by a single, high-stakes confrontation with the US federal government that threatens its defense revenue, has damaged its public standing, and has deepened an already bitter rivalry with OpenAI. On March 5, the Pentagon formally designated Anthropic a "supply chain risk, effective immediately" – an unprecedented label previously reserved for foreign adversaries – after months of negotiations broke down over Claude's use in autonomous weapons and domestic mass surveillance. President Trump simultaneously ordered all federal agencies to cease use of Anthropic technology. Within hours of the Pentagon's move, OpenAI announced a deal to replace Anthropic's Claude in classified military environments with ChatGPT. A leaked internal memo from CEO Dario Amodei, calling OpenAI staff "gullible" and attacking Sam Altman, drew backlash and forced a public apology. Separately, the Claude API and Claude.ai suffered a global outage on March 2, lasting nearly three hours at a moment of already elevated scrutiny.
The week was not without commercial momentum: run-rate revenue surged to ~$19 billion – more than double the figure from late 2025 – driven almost entirely by Claude Code. And Anthropic's Claude Sonnet 4.6 (launched February 17) continued its rollout as the new default model for all free and pro users, representing a significant capability leap at unchanged pricing. Taken together, this week frames Anthropic as a company of extreme contradictions: rapidly growing commercial velocity against a political and reputational crisis of the first order.
In-Depth Analysis
1. Pentagon Designates Anthropic a Supply Chain Risk
What happened: On March 5, the Department of Defense (referred to internally as the "Department of War" under Secretary Pete Hegseth) formally notified Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately. President Trump simultaneously directed every federal agency to cease all use of Anthropic's technology within six months. The Pentagon gave the military itself a six-month phase-out window, given Claude's deep embedding in classified operational systems, including those used during active US strikes on Iran.
Strategic Context
The designation is legally and historically extraordinary. The supply chain risk statute (10 USC 3252) was designed to block foreign adversaries – principally Chinese companies – from embedding their technology in US defense systems. Its application to an American-founded, American-headquartered AI company is without precedent. The dispute centers on two narrow restrictions Anthropic placed in its $200 million DoD contract signed in July 2025: prohibitions on Claude being used for mass surveillance of Americans or fully autonomous weapons. Anthropic refused to remove these restrictions; the Pentagon refused to accept them.
Pentagon undersecretary Emil Michael revealed that the negotiations broke down specifically over Claude's potential role in the "Golden Dome" missile defense program – a Trump initiative to place weapons in space – and the implications for autonomous weapons authorization. Anthropic has consistently maintained that the restrictions were narrow, targeted, and not based on any existing operational use of Claude. The company has vowed to challenge the designation in court, describing it as a "legally unsound" application of the statute.
CEO Dario Amodei stated that the formal notification Anthropic received shows the designation applies only to direct use of Claude within specific DoW contracts, not to all commercial use by any company that holds a government contract. Microsoft's lawyers reviewed the statute and concluded that non-defense projects with Anthropic can continue. Lockheed Martin, however, immediately said it would follow the President's direction and pivot to alternative LLM vendors.
Market Impact
The designation creates both a direct revenue headwind and a reputational contagion risk. On the direct side, Anthropic's $200 million DoD contract and associated Palantir partnership revenue are immediately at risk. Palantir's stock moved lower on the news (before closing roughly flat), with analysts at Piper Sandler estimating that an Anthropic transition could cause "short-term disruptions" for approximately 60% of Palantir's US government revenue. On the reputational side, any enterprise customer in the defense supply chain now faces procurement compliance pressure to certify non-use of Claude – even if the statute's actual scope is narrow.
The competitive windfall for OpenAI is tangible but may be short-lived. OpenAI announced a deal to replace Anthropic in classified environments within hours of the Pentagon's move, but CEO Sam Altman subsequently acknowledged the deal looked "sloppy and opportunistic" and is renegotiating terms to add further intelligence-agency restrictions – a move that signals even OpenAI recognizes the optics of offering unrestricted government access.
For the broader AI industry, the dispute creates an important precedent: AI companies that embed safety restrictions in government contracts are now exposed to contract termination via national security law. This will shape how every major AI lab structures government deals going forward.
Tech Angle
The technical irony of this dispute is significant. Despite the federal ban, Claude was reportedly used by US military personnel during active operations against Iran in the same week Trump ordered agencies to stop using it – an indication of how deeply embedded Anthropic's models are in operational military workflows and how difficult a clean phase-out will be.
2. Revenue Acceleration – $19 Billion Run Rate Despite Crisis
What happened: Reporting published March 3 revealed that Anthropic's annualized revenue run rate surpassed $19 billion – up from $9 billion at the end of 2025 and $14 billion just weeks earlier. The growth is driven almost entirely by Claude Code, whose run-rate revenue has grown to over $2.5 billion and whose weekly active users have doubled since January 1.
Strategic Context
The velocity of revenue growth – effectively more than doubling in roughly two months – is driven by a single product dynamic: the AI coding market is inflecting sharply, and Claude Code holds a structural lead among enterprise developers. Four percent of all public GitHub commits worldwide are now authored by Claude Code, a figure that doubled in just one month. Business subscriptions to Claude Code have quadrupled since the start of 2026, with enterprise use representing over half of all Claude Code revenue. The number of $1M+ annual customers has grown from roughly 12 two years ago to over 500 today. Eight of the Fortune 10 are now Claude customers.
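A back-of-the-envelope check on that velocity, using only the run-rate figures reported above (the two-month window is the report's own approximation):

```python
# Implied growth from a $9B annualized run rate (end of 2025) to
# $19B roughly two months later (early March 2026), per the figures above.
start_run_rate = 9.0   # $B, end of 2025
end_run_rate = 19.0    # $B, March 2026
months = 2             # approximate elapsed time

overall = end_run_rate / start_run_rate
monthly_factor = overall ** (1 / months)

print(f"overall growth: {overall:.2f}x in {months} months")
print(f"implied month-over-month growth: {monthly_factor - 1:.0%}")
```

The arithmetic implies roughly 45% month-over-month growth, which underlines why a single product line (Claude Code) dominating that expansion is so unusual.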
This revenue momentum stands in direct tension with the Pentagon crisis. It demonstrates that Anthropic's commercial position is robust enough to weather a government ban – but it does not eliminate the reputational risk to enterprise clients in regulated industries that now face supply chain compliance questions.
Market Impact
The revenue trajectory also reinforces Anthropic's $380 billion Series G valuation (a $30 billion round closed February 12, led by GIC and Coatue), which was already the second-largest private funding round in tech history at the time of closing. With run rate now approaching $20 billion, the valuation – while still a substantial multiple – becomes more defensible on fundamentals. An IPO remains the logical next question. Anthropic has not set a timeline, but at this growth rate and valuation, a 2026 or early 2027 public offering would not be surprising.
| Sources: Bloomberg – Anthropic Nears $20 Billion Revenue Run Rate | CNBC – Anthropic closes $30B Series G |
3. Global Outage – Claude.ai and API Down for ~3 Hours (March 2)
What happened: On Monday, March 2, Anthropic's Claude API and Claude.ai suffered a global outage lasting approximately 2 hours and 45 minutes, with a brief recurrence before full resolution by early afternoon EST. The disruption was concentrated in login/logout paths and the Claude.ai interface; the API itself was later confirmed to be operating as intended.
Strategic Context
Timing made this outage particularly damaging. It came in the same week as the Pentagon designation and Trump's directive to federal agencies, placing Anthropic under intense public and institutional scrutiny. Any reliability question – however temporary – reinforces the risk calculus for enterprise customers evaluating their Claude dependency, particularly those in regulated sectors where uptime guarantees matter for procurement decisions.
Anthropic provided transparency in real time through its status page and confirmed a fix had been implemented. No root cause was publicly disclosed.
Market Impact
A single outage of this duration is unlikely to cause meaningful enterprise churn on its own. However, in combination with the Pentagon crisis, it contributed to a negative news cycle at a particularly vulnerable moment. For enterprise risk officers reviewing their AI vendor concentration, it adds one more data point to a week that already raised questions about stability and government relations. Competitors – including OpenAI and Google – will not hesitate to reference reliability in their enterprise sales motions.
Source: PYMNTS – Anthropic Outage
4. Product – Claude Sonnet 4.6 Rollout Continues (Background Context)
What happened: While formally launched February 17 (just outside this week's 7-day window), Claude Sonnet 4.6 was actively rolling out to users throughout this week as the new default model for Free and Pro plans on claude.ai and Claude Cowork.
Strategic Context
Sonnet 4.6 is a material capability upgrade: it achieves near-Opus-level performance on enterprise coding and agentic tasks at significantly lower cost and latency than Opus models. At $3/$15 per million input/output tokens, pricing is unchanged from Sonnet 4.5. Key upgrades include a 1 million token context window (beta), improved computer use and prompt injection resistance, and benchmark results – including 79.6% on SWE-Bench Verified and 72.5% on OSWorld-Verified – that place it among the top models available anywhere. Anthropic described the model's safety profile as having "a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment."
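The per-call economics implied by that pricing can be sketched with simple arithmetic; the token counts below are illustrative assumptions, not figures from the report:

```python
# Cost of a single API call at the stated Sonnet 4.6 list pricing:
# $3 per million input tokens, $15 per million output tokens.
INPUT_USD_PER_M = 3.00
OUTPUT_USD_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call at list pricing."""
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# Illustrative agentic coding turn: large context in, short diff out.
print(f"${request_cost(50_000, 2_000):.2f}")  # prints "$0.18"
```

Note that at this pricing a fully used 1 million token context window (beta) costs $3.00 in input tokens alone, which is why holding the price point steady matters for long-context agentic workloads.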
Tech Angle
The rollout of Sonnet 4.6 to the free tier is a deliberate democratization move. By delivering Opus-adjacent performance as the default free experience, Anthropic is reducing the motivation for users to upgrade to paid tiers for capability alone – while potentially growing its user base and developer ecosystem more rapidly. For enterprise buyers, it raises the ceiling on what is available at the mid-tier price point, creating a meaningful competitive response to OpenAI's GPT-5.4 launch this same week.
| Sources: Anthropic – Introducing Claude Sonnet 4.6 | TechCrunch – Anthropic releases Sonnet 4.6 |
Claude Code Platform Update (This Week)
Separately, Anthropic shipped several notable Claude Code updates this week (via Releasebot, first seen March 5–6):
- /claude-api skill added – enabling developers to build applications using the Claude API and Anthropic SDK directly from within Claude Code sessions.
- Multi-language voice STT added to the agent/worksphere interface.
- MCP management improvements and VS Code session visualizations.
- Bug fixes for API 400 errors with third-party gateways and Bedrock inference profiles.
- Improved stability for tool search with proxy endpoints.
Source: Releasebot – Anthropic Claude Code Updates
Forward Outlook
Anthropic enters the next week managing a three-front challenge: a legal fight with the Pentagon, a reputational repair effort in the enterprise market, and a competitive sprint against OpenAI's GPT-5.4 launch. The legal challenge is the most consequential. Anthropic has strong grounds to contest the unprecedented application of 10 USC 3252 to an American company, and Amodei's narrower reading of the statute's scope – confirmed by Microsoft's legal review – suggests the operational impact on commercial enterprise may be more limited than the headlines imply.
The commercial fundamentals remain intact: near-$20 billion run rate, accelerating Claude Code adoption, and a frontier model portfolio that is genuinely competitive. But Anthropic's signature differentiator – its safety-first brand – is simultaneously the cause of its current crisis and the reason its enterprise customer base remains loyal. How it navigates that tension in court and in public will define its trajectory for the rest of 2026.