
(Originally published on the Harvard Kennedy School website)
On February 26, 2026, the CEO of one of the world’s most powerful AI companies published an open letter refusing a government ultimatum. The Department of War had threatened to designate Anthropic a “supply chain risk” (a label reserved for U.S. adversaries, never before applied to an American company) and to invoke the Defense Production Act to compel the removal of two safeguards: prohibitions on mass domestic surveillance and on fully autonomous weapons. Dario Amodei refused. Within 24 hours, the Pentagon followed through, blacklisting Anthropic as a national security risk and ordering all military contractors to cease commercial activity with the company immediately. Anthropic said it would fight the designation in court.
What the standoff reveals is not a story about corporate courage. It is a story about a vacancy at the center of global AI governance. One of the most consequential human rights questions in current AI policy, whether artificial intelligence may be used for mass surveillance, was not resolved at the UN Human Rights Council, not in a standards body, not through any multilateral process. It was resolved, for now, by the CEO of a private company making a unilateral ethical judgment. A system in which human rights protections depend on the personal convictions of a technology executive is not a governance system. It is a vacancy.
And the costs of that vacancy do not fall equally on everyone.
The most consequential human rights questions in AI are being decided in bilateral negotiations between governments and technology companies. Most of the world is not in the room.
For decades, doomsday scenarios predicted that new communication technologies would make traditional diplomacy irrelevant. The telegraph, the telephone, email, social media: each was supposed to render the diplomat obsolete. Each time, diplomacy proved more durable than its critics expected. It adapted, absorbed the new tools, and continued performing its core functions: negotiation, representation, and the management of international relations.
Artificial intelligence is introducing a different kind of disruption: not the replacement of diplomats, but the relocation of the arena. The decisions that matter most in AI governance (what these systems may be used for, whose data they may collect, how they may be deployed in conflict and for domestic surveillance) are increasingly being made not by democratic institutions or multilateral forums but in contracts between governments and technology companies. This is governance by procurement, conducted without transparency, without public accountability, and without remedy mechanisms for those affected by its outcomes.
The Anthropic case makes this architecture visible. Claude is “the single most widely deployed AI system in the U.S. military”, according to Jack Shanahan, who served as the first director of the Pentagon’s Joint Artificial Intelligence Center. Claude is used for intelligence analysis, operational planning, and cyber operations. Each such contract is a governance decision: about acceptable use, about safeguards, about the boundary between legitimate defense and rights violations. Yet there is no formal process through which affected populations, their governments, or civil society can meaningfully contest the terms.
Across much of the Global South and Europe, states are building their digital futures as tenants on someone else’s digital land. The Anthropic case shows how this dependency extends into the governance layer itself. Countries that do not control their digital infrastructure do not control the rules that govern it. Increasingly, those rules are written before such countries ever arrive at the table.
The temptation is to read Amodei’s letter as reassurance. But the structural lesson runs the other way. The Department of War has stated it will contract only with companies that agree to “any lawful use” and remove safeguards. Other major AI companies have already complied. The protection in place rests entirely on one company’s willingness to absorb financial pressure. That is not a governance architecture. It is a single point of failure. And if Anthropic were to exit the field entirely, the outcome would not be a safer AI landscape: the race would continue, led by actors with demonstrably weaker safety commitments. Voluntary governance doesn’t just fail; it fails unpredictably, protective when relations are good, exposed when they are not.
And the protection is framed in explicitly national terms: defending U.S. democracy, defeating adversaries. That framing is honest. But it leaves unanswered the question that matters most at an institution like the UN Human Rights Council: what protections apply to everyone else? To populations in partner countries where U.S.-contracted AI systems are deployed? To citizens of nations with no seat at the table? To U.S. citizens?
Within hours of blacklisting Anthropic, the Pentagon signed a deal with OpenAI that includes apparently similar safeguards. Altman himself acknowledged on X that the deal was “definitely rushed”. Whatever the fine print, the structural point holds: these protections exist because two CEOs negotiated separate bilateral agreements, with no common framework and no independent verification. That is not a governance system. It is improvisation.
The human rights framework adopted in 1948, the Universal Declaration of Human Rights, was universal by design. It was not a framework for protecting U.S. citizens from their government, or Europeans from theirs. It was a framework for protecting people, all people, against arbitrary power, regardless of where that power originates. The current architecture of AI governance, conducted through bilateral contracts between dominant states and dominant technology companies, is structurally incompatible with that universality.
Rights that depend on the goodwill of a technology executive, expressed in a letter that could be withdrawn tomorrow, are not rights. They are discretions, and discretions can be revoked.
Even the architects of Anthropic’s Responsible Scaling Policy acknowledged the limits of voluntary governance explicitly, designing the policy from the start not as a permanent solution but as a testbed for practices they hoped would eventually be required by binding regulation. The governance vacancy was never a surprise to those closest to the problem. That binding regulation is only beginning to be written.
There is an existing mechanism capable of operating across jurisdictions and embedding human rights as structural constraints rather than voluntary commitments: technical standards. The Global Digital Compact, adopted in 2024 as an annex to the UN Pact for the Future, explicitly grounds digital and AI governance in international human rights law. The WSIS+20 resolution of December 2025 reaffirmed this direction and confirmed the role of the International Telecommunication Union (ITU), a UN specialized agency, as the authoritative global source for digital standards. UN Human Rights Council Resolution 47/23 on new and emerging digital technologies made human rights in technical standard-setting a formal priority. The Seoul Statement, issued by ITU, the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC) at the 2025 International AI Standards Summit, committed all three bodies to deepening this work, with progress reports due within one year. ITU has catalogued more than 880 AI-related standards. Looking further ahead, the UN Independent International Scientific Panel on AI has just been established, and proposals for an international AI oversight body modeled on the International Atomic Energy Agency are gaining ground.
This architecture exists. The gap is not ambition: it is engagement. Ninety-four countries now have national AI strategies; most were designed without meaningful input from standardization bodies. ISO/IEC 42001, the AI management system standard, provides a concrete framework for operationalizing transparency, safety, and accountability across the AI lifecycle, but it constrains only those actors who are required to use it. For nations that neither build AI systems nor negotiate the contracts governing their deployment, formal engagement in standards processes is not a technical preference. It is the most available lever for shaping a governance system that currently excludes them.
In a recent study at King’s College London, three leading AI models (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) were placed in simulated nuclear crisis scenarios; nuclear signaling occurred in 95% of games, and not one model ever chose accommodation or de-escalation. Findings like these make the accountability question unavoidable. When AI systems deployed by governments cause harm, who is liable: the provider that built the model, the institution that deployed it, or the government that contracted for it? Without a common standard of care, the question has no agreed answer, and without an answer, there is no basis for remedy.
The window is closing. AI governance frameworks are being established now, in procurement contracts, in bilateral negotiations, in the quiet decisions of technology executives facing government pressure. Member states absent from those processes will not simply miss meetings. They will inherit architectures shaped entirely by others.
The Anthropic standoff will not be the last time a private company is asked to make a governance decision that the international community has failed to make for itself. Each time that happens, the vacancy becomes harder to fill.
The room where the rules are written has moved. The work of member states, through multilateral institutions, is to move it back, before harm is done.