
Session at the United Nations Human Rights Council in Geneva
In Geneva and beyond, much of the debate on AI governance still revolves around abstraction: principles, ethical frameworks, good intentions. Yet as AI becomes embedded in the machinery of both states and corporations, an urgent question arises: how do we ensure that the public procurement and deployment of AI respect human dignity?
This is the focus of a new report from the UN Working Group on Business and Human Rights. Titled Artificial Intelligence Procurement and Deployment: Ensuring Alignment with the Guiding Principles on Business and Human Rights, it addresses a critical blind spot: what happens when public institutions and private actors use—but do not develop—AI systems?
I had the opportunity to be consulted during the preparation of this report. I believe it marks a necessary step forward. But it is only a step.
From Ethical Intention to Concrete Application
Rather than adding another voice to the ethical chorus, the report goes straight to the point: states and businesses must align their use of AI systems with the UN Guiding Principles on Business and Human Rights (UNGPs). This means ensuring that AI tools used in public administration, education, health, mobility, and justice meet expectations of transparency, equity, accountability, and remedy.
This is a welcome development. However, the report remains high-level in many areas. Its recommendations are ambitious, but often too abstract to be readily implemented by procurement officers working under legal, budgetary, or institutional constraints. The challenge is not only to establish norms; it is to translate them into technical specifications that guide procurement decisions.
Why This Matters Now
I have written numerous terms of reference for public contracts. Technical specifications must be precise: tools need to be interoperable, secure, upgradeable, and, above all, responsive to user needs. This is difficult to guarantee when public bodies lack internal technical expertise or access to up-to-date methodologies.
Meanwhile, AI legislation is advancing—consider the Council of Europe’s Framework Convention on AI, the European Union’s AI Act, or the African Union’s Continental AI Strategy. Yet most of these efforts remain focused on developers, not on procurement. In practice, however, it is government agencies, hospitals, schools, and financial institutions that are integrating AI into the daily lives of citizens.
The report is a timely call to shift the spotlight. Procurement processes must be reimagined to anticipate and mitigate risks before AI systems are deployed. This requires more than global guidance—it demands practical instruments: checklists, model clauses, procurement toolkits, and training programs for public buyers.
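To make the idea of "practical instruments" a little more concrete, here is a deliberately minimal sketch, in Python, of what a machine-readable due diligence checklist attached to an AI tender might look like. The criteria, identifiers, and vendor name are illustrative assumptions on my part, not content taken from the report; the point is simply that procurement requirements can be expressed in a form that public buyers can track, audit, and reuse.

```python
# Hypothetical illustration only: a minimal, machine-readable due diligence
# checklist a public buyer might attach to an AI tender. Criteria names and
# identifiers are assumptions for illustration, not drawn from the UN report.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    identifier: str
    question: str
    evidence_required: str
    met: bool = False

@dataclass
class ProcurementChecklist:
    vendor: str
    criteria: list[Criterion] = field(default_factory=list)

    def unmet(self) -> list[Criterion]:
        """Return the criteria the vendor has not yet satisfied."""
        return [c for c in self.criteria if not c.met]

checklist = ProcurementChecklist(
    vendor="ExampleVendor",  # placeholder name
    criteria=[
        Criterion("HR-01", "Has a human rights impact assessment been provided?",
                  "Assessment report covering affected groups"),
        Criterion("TR-02", "Are model purpose, data sources, and known limitations documented?",
                  "System documentation / model card"),
        Criterion("RM-03", "Is there a grievance mechanism accessible to affected individuals?",
                  "Description of the complaint and remedy process"),
    ],
)

# List the open items before any award decision is made.
for criterion in checklist.unmet():
    print(f"[open] {criterion.identifier}: {criterion.question}")
```

Even a sketch this small shows the shift the report calls for: requirements stop being aspirational language in a preamble and become conditions that can be checked, documented, and contested.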
Toward Procurement That Builds Trust
The UN Working Group calls on states, businesses, investors, civil society, and international organizations to align AI procurement and deployment with international standards—encompassing human rights principles, AI policy frameworks, and technical governance norms. This includes adopting legal and policy frameworks, enforcing robust due diligence, and ensuring transparency, accountability, and access to remedy.
Public procurement must be equipped with enforceable safeguards, targeted guidance for high-risk sectors, public disclosure requirements, and meaningful stakeholder engagement. Businesses are expected to embed human rights into governance, procurement, and data practices, while supporting SMEs and fostering cross-sector learning. Investors are encouraged to align AI-related activities with rights-based benchmarks, and civil society and academia are called upon to monitor compliance and support affected communities. International organizations, in turn, are urged to provide technical guidance and capacity-building, and to help coordinate global policy coherence, including the implementation of digital due diligence frameworks.
The report outlines practical tools to support these goals: procurement guidelines, human rights impact assessments, due diligence methodologies, and grievance mechanisms tailored to AI. It also stresses the importance of inclusive design, participatory processes—including civil society engagement—and context sensitivity, especially in the Global South.
Crucially, it reaffirms that accountability must span the entire lifecycle of AI systems—from design and development to procurement, deployment, and ongoing use.
This is the moment for standardization experts, procurement professionals, and public interest advocates to come together. Aligning procurement with the UNGPs is not about ticking boxes; it is about building systems that earn public trust and deliver benefits without compromising rights or equity.
A Path Forward
If we want AI to serve the public interest and uphold fundamental values, this report offers a meaningful starting point. But the work is far from complete. Its recommendations now need to be translated into practical tools for public procurement professionals—resources they can rely on to ask the right questions and make informed, context-sensitive decisions.
Public organizations cannot procure information systems—including AI—as they would office furniture. These technologies are central to institutional functioning. They must be adaptable to users’ needs and supported by technical teams committed to the public good.
Turning this report into tangible outcomes will require sustained collaboration—across sectors, countries, and disciplines. The opportunity is there. What’s needed now is the collective will to act on it.
You can read the full report here: UN A/HRC/59/53 – Artificial Intelligence Procurement and Deployment