Building Trustworthy AI: Aligning Standards with Public Interest and Integrity

Panel Discussion: Towards a Global Response to AI and Human Rights – Side Event for the 58th Session of the UN Human Rights Council (HRC) – March 21, 2025

In a world increasingly shaped by algorithmic systems, the question of trust in technology has become central. Over the past months in Geneva, I participated in several gatherings of policymakers, experts, and diplomats focused on aligning artificial intelligence with human rights.

As part of the UN’s standardization community, I see daily how complex it is to shape global standards for emerging technologies. Yet we also have a responsibility to ensure that these standards do not harm anyone.

Embedding Human Rights in Technical Standards

The ITU has long been at the heart of global standardization. For 160 years, it has built common languages for interoperability, safety, and efficiency. Traditionally, this work has focused on technical accuracy and commercial viability. But today, we are adding a third, essential dimension: human rights.

This shift is not just aspirational; it is rooted in international law. States have a duty to protect human rights, including in the digital space. Over the last decade, momentum has grown within the international system to better integrate rights protections into the fabric of digital governance. Recently, global policy commitments have emphasized the need for technical standards that promote safety, inclusiveness, and accountability in AI systems.

And this is starting to materialize: from standards for inclusive telehealth to sustainability indicators for smart cities, technical communities are beginning to treat human rights not as an add-on, but as a foundational requirement.

Building Trustworthy AI Requires More Than Promises

As digital systems take on more decision-making roles in society, the trust we place in them becomes central. But what does trust actually mean in this context?

Trust is often understood as interpersonal. It grows from shared experience or emotional connection with other human beings. But in a world of AI systems, we are dealing with a different kind of trust: social trust, which is built through transparency, reliability, and accountability. We don’t need to be friends with the technology; we need to be able to rely on it.

For this to happen, the systems must have integrity. Integrity here means more than untampered data. It means end-to-end coherence: from training inputs to model behavior, from intended purpose to actual impact. Without clear guarantees of integrity, even the most powerful AI system cannot be considered trustworthy.

From Standardization to Impact: What’s Next?

Within the ITU, over a hundred AI-related standards have already been published, with many more under development. Efforts are underway to create watermarking standards to identify AI-generated content.

However, the real challenge lies ahead: how do we assess the human rights impact of technical standards before they are deployed? This is where the work must deepen.

There is a growing case for establishing a collaborative and open group focused on integrating human rights directly into the technical standardization process. Such a group could develop tools such as taxonomies, impact assessments, due diligence methodologies, and other metrics to measure and evaluate trust in emerging technologies and AI systems.

This would not only strengthen technical outcomes; it would anchor them in legitimacy and public confidence.

The Need for Collaboration

We cannot build trustworthy AI in silos. The systems created today will shape the freedoms and limitations of tomorrow. That’s why technical experts, academics, and human rights specialists should work hand in hand to co-design future technical standards.

The road to integrity, accountability, and rights-respecting technology will be long. But it is both possible and necessary.
