
Entrance Sign for the Expert Workshop on AI, Rule of Law, and Human Rights — Geneva, 8 May 2025
I was invited last week to participate in an Expert Meeting on the Rule of Law and Human Rights Aspects of Using Artificial Intelligence for Counter-Terrorism Purposes, organized by the Geneva Centre for Security Policy in partnership with the UN Counter-Terrorism Committee Executive Directorate and the Swiss government. The discussions, though devoted to legal safeguards, surveillance, and freedom of expression, brought into sharp focus a broader question: who actually governs AI today?
And perhaps more urgently: can public institutions still govern it alone?
This raises a fundamental challenge for the future of democratic governance: are we witnessing the erosion of public control over technologies that increasingly define our societies?
In 2024, seven companies—Amazon, Apple, Alphabet, Microsoft, Meta, NVIDIA, and OpenAI—generated a combined $2 trillion in revenue. That is more than six times the entire annual budget of the European Union, and significantly more than the national budgets of France, Germany, or even the United Kingdom.
From Corporate Scale to Political Influence
These aren’t just tech firms. They are de facto global infrastructure providers:
- They manage the data that trains tomorrow’s algorithms.
- They host the platforms where political discourse, economic transactions, and cultural life unfold.
- They are setting the pace—and the terms—of the AI arms race.
This concentration of power is neither neutral nor invisible. It affects everything from how privacy is defined to who has access to digital public goods. And the faster their technologies evolve, the harder it becomes for traditional institutions to keep up—let alone provide oversight.
Europe’s Predicament
The EU has earned global respect for its normative leadership on human rights. But when it comes to regulating companies that are both foreign and financially stronger than most member states, the limits of legal instruments become clear.
Europe cannot outspend these firms. Nor can it regulate them effectively if it remains trapped in fragmentation and bureaucratic inertia. Yet Europe's normative power remains a formidable lever if strategically deployed, and deploying it strategically means no longer treating the private sector as either a threat or a black box.
A Call for Strategic Public–Private Partnerships
We need a new approach: one that does not abandon regulation but integrates it into a more pragmatic architecture of public–private partnerships. These partnerships should be:
- Transparent in their objectives and scope,
- Grounded in international human rights law,
- Inclusive of civil society and affected communities,
- And focused on co-governance of AI systems.
Initiatives such as the EU AI Act and the G7's Hiroshima AI Process are early steps in this direction. But they must evolve into frameworks that embed the public interest directly within corporate governance models.
Think of it less as “outsourcing regulation” and more as embedding public values into private power.
This is especially critical in domains like counter-terrorism, where AI intersects with national security, rights to liberty and expression, and the risk of mass surveillance. As the Geneva workshop rightly stressed, the legitimacy of AI applications in such fields depends not only on technical reliability, but on legal accountability and human rights safeguards.
Time for a New Compact
If the 20th century gave us public–private partnerships for infrastructure and industrial policy, the 21st must do the same for national security.
That means governments must build strong in-house capacity to lead these partnerships and to ensure that AI serves the public good, not just shareholder interests.
In the end, the real threat is not that these companies are too powerful. It's that our institutions are not adapting fast enough to match them.