What the Anthropic-DOD Beef Means for Power and Subjugation
- Thomas Yin
- 18 hours ago

Earlier on Friday, the Trump administration used a rare “supply-chain security risk” designation to effectively ban U.S. military contractors from doing business with Anthropic, the second-largest AI startup in the world. The move follows a terse statement from Anthropic CEO Dario Amodei, in which he reported that the Administration had asked Anthropic to strike certain clauses from its contracts with the U.S. government—namely, clauses prohibiting domestic surveillance and use in fully autonomous weapons. In the aftermath of the Administration making good on its promise to cut off Anthropic, several issues have emerged. Yet beyond being a legal or practical matter, this “beef” marks a pivotal point in the role of AI not just in national defense, but in the governance of mankind.
First of all, it’s no surprise that a tool as ubiquitous as the LLM has become ingrained in how the government works. Anthropic’s statement itself, along with the Trump Admin’s so-called “six-month phase-out” plan, suggests just how important Claude was. In his State of the Union address, Trump boasted that his budget cuts have saved over a trillion dollars while the government “still works.” In fact, Trump’s ambitious budget cuts during the first year of his term may well have been enabled by Anthropic’s flagship product, satisfying the dual mandate of shrinking the national budget while keeping the insanely complicated array of government bureaus in order. It’s apparent, then, that the U.S. government needs an LLM. And a consistent one at that, since mass pivots like the one set in motion on February 27th cost a lot of money. So why is it so important that the government can’t have Anthropic?
The convoluted explanation I’m about to give illustrates a simple concept: whoever controls the AI has all the power. At least, maybe, in the near future. The U.S. government is only as powerful as its bureaucracy. If the Office of Management and Budget failed, federal workers would simply walk out—just as they did during the record-breaking government shutdown a few months ago. If DEA or FBI agents stopped enforcing the law against severe crime, the nation would surely descend into chaos. This, combined with a deepening reliance on AI models to keep that bureaucracy running, means that the developer of whichever AI model the Executive branch favors effectively holds the helm on what the U.S. government can do at its fullest capacity. If the government wanted to impose a comprehensive review of state police reports, using Claude to pre-scan for suspicious arrests, it could do so in a few weeks. If Anthropic pulled its support, it could take at least a year.
But wait, isn’t the US government threatening Anthropic? If Anthropic is really more powerful than the government, shouldn’t it be the other way around?
Right now, the industry is in a state of competition. In a few more years, it likely won’t be. Either one company will outpace the others thanks to the superior self-improving capabilities of its own AI models, or the leading companies will merge, desperate to stop the endless “money-burning” that characterizes modern-day AI development. For now, the few billion AI users—including the staffers employed by the U.S. government—have options in terms of both performance and pricing. What’s more, the vast majority of LLM users have not passed the “point of no return” in how much they rely on AI. Although the developer of the Administration’s preferred AI can set back operations simply by pulling its support, it is not yet powerful enough to dictate exactly what its users can do.
This distinction may be precisely why the government is choosing to double down on its claimed right to use Claude for surveillance and autonomous weapons. From a conflict-of-interest standpoint, it might soon be too late. I speculate that the most important part of the controversy isn’t what the DOD demands—whether it’s Claude for drones and missiles or Claude for internal surveillance—but the fact that they’re demanding it at all. The Trump Administration has large influence over the AI industry beyond its choice of which model to employ in the bureaucracy, and its recent negotiations have, most likely, contained thinly veiled threats against major AI players. Why? Precedent. It’s entirely possible, and to me even likely, that what the DOD is really after is a contract or settlement that creates an obligation for a frontier AI company to provide its services with special exceptions for the government. This goes beyond a one-time phenomenon: perhaps the government is trying to use its dwindling leverage over the AI industry to pressure any one of the three major AI developers into granting it internal access, before AI becomes superintelligent and untouchable to anyone but its developers.
That is the key problem to consider here. A small company holding DOD contracts might easily have been cowed by the influence the Administration exercised today, but for a company like Anthropic, the decision is far more complicated. On one hand, DOD contracts are a big deal, both in direct monetary value and in the strategic political advantages they can bring. On the other hand, giving the U.S. government carte blanche with the world’s most potent tool is a breeding ground for alignment issues, particularly because optimizing a model to NEVER hack anyone is much, much safer than optimizing a model to ALWAYS hack China, Russia, and Iran, and NEVER hack America.
At the same time, the move raises key questions about whether AI development will remain chiefly, as Anthropic claims, “for the benefit of humanity.” It’s certainly true that AI can enable many humanitarian projects, and even reduce wealth gaps and educational inequality, but what happens when one segment of humanity seeks to weaponize it against another? Will these dealings remain largely antagonistic, with AI companies exerting their massive influence to withstand the economic and political pressure of the very government elected by the society that built the LLMs? Or, worse, will there ever come a scenario where a frontier AI company colludes with its government and, together, they rally the forces of authoritarianism against those of democracy?
The Trump Administration’s new plan allows six months to offboard Claude in favor of another model. If you ask me, the negotiations that follow directly from today’s events will answer our questions about how AI contributes to—or is subjugated by—our existing societal constructs. Here’s to hoping that whatever compromise surfaces will safeguard the essence of human liberty, one way or another.