The AI You Choose Is a Values Decision. Anthropic Just Proved It.
By Jessica Denkelaar · March 1, 2026

This week, something happened in the AI industry that deserves far more attention than it's getting in most business circles. Anthropic, the company behind Claude, publicly refused a demand from the U.S. Department of Defense. Not quietly, not diplomatically buried in a press release. Openly. At the cost of a $200 million government contract, a historic blacklisting by the Trump administration, and an escalating legal battle that is still unfolding.
I'm writing this a few days after the story broke, and honestly, that's turned out to be an advantage. What started as a tense standoff has now become one of the most consequential moments in AI history. And it confirms everything I've believed about why the tool you choose to build your business on is a values decision, not just a technical one.
What Actually Happened
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon and demanded "unfettered" access to Claude for all military purposes. The two specific use cases Anthropic refused to enable were fully autonomous weapons systems (AI that selects and engages targets without human oversight) and mass domestic surveillance of American citizens.
Hegseth's ultimatum: comply by 5:01 PM Friday, or face consequences.
Amodei's answer, published Thursday: "We cannot in good conscience accede to their request."
He acknowledged the Pentagon's right to make military decisions. He acknowledged that Anthropic stood to lose significantly. And he held the line anyway, citing two clear principles: that today's AI is simply not reliable enough to make life-or-death decisions autonomously, and that mass domestic surveillance is incompatible with democratic values, even if it's technically legal under current law.
Then on Friday, the consequences arrived.
The Fallout: Historic and Unprecedented
Trump ordered all federal agencies to immediately cease use of Anthropic's technology. Hegseth formally designated Anthropic a "supply chain risk to national security," a label historically reserved for companies from adversarial nations like China's Huawei. It's the first time in U.S. history that designation has been publicly applied to an American company.
The practical impact is severe. Not only does Anthropic lose its $200 million Pentagon contract, but any company that does business with the U.S. military must now certify they don't use Claude in their workflows, potentially forcing a wide swath of enterprise clients to choose between their government contracts and their AI stack.
Anthropic's response? They announced they will challenge the designation in court, calling it "legally unsound" and a "dangerous precedent." They also pushed back on Hegseth's claim that military contractors are barred from doing business with Anthropic entirely, arguing the designation cannot extend beyond DoD contracts themselves.
Their final public statement was unambiguous: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
What the Other AI Companies Did
Here's where it gets uncomfortable, and important.
OpenAI, Google, and Elon Musk's xAI all agreed to the Pentagon's demands before this standoff even reached its peak. They signed contracts allowing the U.S. military to use their models for all lawful purposes without the ethical guardrails Anthropic insisted on maintaining.
Then, within hours of Anthropic being blacklisted on Friday, OpenAI CEO Sam Altman announced a new Pentagon deal to deploy OpenAI models in classified systems. Altman claimed his agreement does include the same two red lines Anthropic had been fighting for, just structured differently: folded into the contract language rather than stated explicitly. Whether that distinction holds up in practice remains to be seen. To many observers, it reads as having it both ways.
Elon Musk's contribution to the debate? He posted that "Anthropic hates Western Civilization."
Make of that what you will.
Meanwhile, in a remarkable show of solidarity, hundreds of employees at both Google and OpenAI signed an open letter urging their own companies to stand with Anthropic. The people actually building these tools understand what's at stake better than some of the executives selling them.
Why I Moved Away from ChatGPT and Why This Confirms It
I didn't switch from ChatGPT to Claude overnight. It was a gradual shift, driven by how the tools actually behave in practice.
ChatGPT often felt like the eager intern. Quick to agree, quick to produce, sometimes at the expense of nuance or accuracy. Claude felt different. More considered. More willing to push back when I was wrong, more honest about uncertainty, more aligned with how I actually want to think through complex problems. I started calling Claude the "junior employee I actually want to keep" and, more often than not, the adult in the room.
But beyond the product experience, questions about OpenAI's data practices, its increasing political entanglements, and the direction of its leadership have been building for a while. The speed with which Sam Altman swooped in to claim Anthropic's Pentagon contract, just hours after a competitor was blacklisted for standing on principle, tells you something about priorities.
As a CTO, I think about these things structurally. Whose infrastructure am I building on? Whose values am I embedding into my work and my clients' work? What happens to the data that flows through these systems, and what will the company behind them agree to do with it when power comes knocking?
This week, Anthropic answered that question at enormous cost.
A Note for European Businesses and GDPR-Conscious Teams
If you're operating in the EU or serving EU clients, this story has a specific dimension worth paying attention to.
One of the things the Pentagon was explicitly pushing for was the ability to collect and analyze data on Americans at scale using AI, including geolocation, web browsing history, and personal financial information purchased from data brokers. Anthropic refused to enable this. The other major AI providers did not push back with the same force.
The GDPR exists precisely because we recognized, as a region, that data isn't neutral. How it's collected, used, and protected reflects power dynamics that matter deeply. The idea of an AI provider agreeing to enable mass surveillance by a government, any government, should give any European business pause about where their data sits and what infrastructure it touches.
Anthropic's refusal wasn't just principled in the abstract. It was a concrete demonstration that they take their usage policies seriously enough to enforce them even when it costs them hundreds of millions of dollars and a government blacklisting. That's the kind of vendor relationship that matters when you're handling client data, building products, or advising organizations on their AI strategy.
The Bigger Picture: AI Is Not Neutral
We are at a genuine inflection point. AI is moving from "interesting experiment" to critical infrastructure in businesses, in governments, in daily life. The companies building these systems are making choices right now about what they will and won't do, who they'll work with, and on what terms.
Those choices shape the AI landscape we'll all be operating in for the next decade.
Anthropic just demonstrated, at extraordinary cost and under extraordinary pressure, that they will hold their ethical lines. They were blacklisted by their own government, called arrogant and sanctimonious, had their CEO branded a liar with a "God complex," lost a $200 million contract, and now face an ongoing legal battle. And they did not move.
That is not a small thing. In a landscape where most companies optimize for growth and access above all else, it is genuinely rare. And as someone who has built her career on making principled technical decisions for clients, not just efficient ones, I find it matters enormously.
If You're Still on the Fence
If you've been considering switching from ChatGPT to Claude, or evaluating which AI tools to recommend to your clients or your team, let this week's events inform that conversation.
Ask yourself: what do I know about how this company handles pressure? What have they refused to do, even when it cost them? What are they building toward, and who are they willing to say no to?
The answers tell you more about the long-term safety and integrity of your AI investment than any feature comparison chart.
And here's the practical good news: switching is easier than you think. One of the biggest reasons people stay with ChatGPT is inertia: years of context, preferences, and working style that their AI has built up over time. The idea of starting over feels like a loss.
Anthropic has removed that barrier. Go to claude.com/import-memory and you'll find a dedicated import tool that walks you through the entire process in two steps. First, you paste a prompt into ChatGPT (or Gemini, Grok, or whatever you're currently using) and it generates a summary of everything it knows about you. Second, you paste that output into Claude. That's it. Your preferences, working style, project context, and communication habits all come with you. No data exports, no JSON files, no technical setup.
Claude processes the import, and within 24 hours it knows you: your role, how you like to communicate, what you're working on, what you care about. You pick up right where you left off, just with an AI backed by a company that has demonstrated it will protect your data and hold its ethical lines under pressure. The import feature is available on all paid plans (Pro, Max, Team, and Enterprise).
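And if you manage Claude access for a team through the API rather than the web app, the same idea carries over: treat the exported summary as standing context. Here's a minimal sketch using Anthropic's Python SDK. To be clear, this is my own illustration of the pattern, not part of the official import tool, and the summary text and model name are placeholders you'd swap for your own.

```python
import anthropic

# Placeholder: the summary your previous assistant generated about you.
imported_context = """
Role: fractional CTO advising small EU consultancies.
Style: direct and concise; wants pushback on weak reasoning.
Current focus: GDPR-conscious AI tooling audits.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute whatever current model you use
    max_tokens=500,
    # The imported summary becomes standing context for every request.
    system="Context about the user, imported from a previous assistant:\n"
    + imported_context,
    messages=[
        {"role": "user", "content": "Pick up where we left off on the tooling audit."}
    ],
)
print(response.content[0].text)
```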
Notably, OpenAI has not built anything equivalent. For the market leader, making it easy to leave isn't in their interest. Anthropic, as the challenger, has deliberately made switching frictionless. That asymmetry tells you something.
For me, the answer has been clear for a while. This week just made it undeniable.