
Panel discussion
On March 17, 2025, I had the opportunity to speak at the AI Standards Hub Global Summit in London, joining a panel on “The Role of Civil Society and Human Rights Expertise in Shaping AI Standards.” This discussion, featuring experts from civil society, technical standardization, and policymaking, underscored a crucial reality: AI standards are not just technical tools—they are governance mechanisms that shape societal outcomes.
From Technical Accuracy to Human Rights Integration
Historically, technical standards have prioritized efficiency, interoperability, and market adoption. This approach made sense in sectors like telecommunications, where the ITU has played a central role for over 160 years. However, AI challenges this traditional paradigm. Unlike telecom protocols, AI systems make decisions—decisions that affect people’s lives, from hiring practices to healthcare diagnoses, from surveillance to content moderation.
This means that AI standards embed value-laden choices, shaping the way AI systems behave, how they interact with users, and what safeguards are in place. If these standards are developed without a strong human rights lens, they risk reinforcing biases, enabling discrimination, or facilitating surveillance.
At the ITU, this awareness has grown significantly. Following the 2021 UN Human Rights Council call for cooperation between ITU and OHCHR, and the 2024 Global Digital Compact’s push for human rights-aligned AI standards, we have been integrating human rights considerations:
- Study Group 20 on Smart Sustainable Cities incorporates indicators on human rights.
- Study Group 21 on Telehealth ensures AI-driven healthcare services remain accessible.
- The AI for Good initiative fosters interdisciplinary collaboration, aligning AI development with ethical principles.
Yet, these efforts remain insufficient unless we bridge the participation gap by ensuring that civil society, human rights defenders, and marginalized communities can meaningfully engage in AI standardization.
The Participation Gap: The Role of Civil Society in AI Standardization
A key discussion point in our panel was the systemic barriers to civil society participation in AI standards development. Unlike governments or industry players, most civil society organizations lack the resources to engage in highly technical, time-consuming standardization processes. Travel costs, membership fees, and the complexity of technical documents make participation difficult.
Yet, without civil society, the risk is clear: standards will reflect the priorities of dominant industry actors rather than broader societal needs. This can lead to AI systems that maximize commercial interests at the expense of fairness, transparency, and accountability.
At ITU, we are working to lower the barriers to participation:
- Waiving membership fees for non-profits engaging in AI standardization.
- Expanding remote participation to allow experts from diverse regions to join discussions.
- Developing a human rights impact assessment framework with partners for ICT standards.
A New Approach: The AI Standards Exchange and Database
Beyond participation, another pressing challenge is fragmentation in AI governance. Multiple organizations—ITU, ISO, IEC, the Alan Turing Institute, and national standards bodies—are developing AI standards, yet there is no centralized way to track and compare them.
At the Summit, I discussed ITU’s work on the AI Standards Exchange and Database, a new initiative responding to the Global Digital Compact’s call for better AI governance. This platform, to be developed in collaboration with ISO, IEC, and the Alan Turing Institute, will:
- Aggregate global AI standards into a single, accessible repository.
- Provide an evolving reference point for evaluating AI systems against international standards.
- Ensure alignment with human rights, transparency, and accountability.
This initiative aims to prevent regulatory arbitrage—where companies exploit inconsistencies between different jurisdictions—and ensure that AI standards serve the public good, not just private interests.
What’s Next? Towards a More Inclusive AI Standardization Ecosystem
The AI Standards Hub Global Summit reaffirmed a growing consensus: technical standards are now a critical governance tool, shaping AI’s societal impact as much as laws and policies do. As such, their development must be:
- Inclusive—bringing civil society and marginalized voices into the conversation.
- Human rights-driven—ensuring AI respects fundamental freedoms and dignity.
- Coordinated—avoiding duplication and aligning efforts across global standardization bodies.
More information on this summit is available here.