
AI Action Summit – OECD Headquarters
On February 11, 2025, I had the privilege of participating as a panelist at the OECD AI Governance Summit, held at OECD headquarters. The event, titled “Shaping the Future of AI Governance: Standards, Risk Management, and Responsible Practices,” brought together leading experts, policymakers, and other stakeholders to discuss the evolving landscape of AI governance. My panel focused on an essential yet often overlooked aspect of AI standardization: how to integrate human rights considerations into global AI standards.
Why AI Governance Needs a Rights-Based Approach
For nearly 160 years, the International Telecommunication Union (ITU) has been at the forefront of global standardization, ensuring that technologies are interoperable, reliable, and commercially viable. In the AI era, however, technical performance and commercial viability are not enough. The impact of AI on fundamental rights—such as privacy, freedom of expression, and non-discrimination—demands that human rights become a core consideration in AI standards.
Integrating Human Rights in AI Standardization
- ITU & OHCHR Collaboration: A 2021 UN Human Rights Council resolution called for greater cooperation between ITU and the Office of the UN High Commissioner for Human Rights (OHCHR) to ensure that AI technical standards respect fundamental rights.
- Global Digital Compact (2024): The GDC called for AI standards that promote safety, reliability, sustainability, and human rights, reinforcing the need for multi-stakeholder collaboration.
- Study Groups on Human Rights & AI: ITU has incorporated human rights across several key areas:
  - Study Group 5: Developing standards for e-waste management, reinforcing the right to a clean environment.
  - Study Group 20: Establishing key performance indicators (KPIs) for IoT and Smart Sustainable Cities, linking AI governance to access to information, health, and education.
  - Study Group 21: Ensuring telehealth accessibility, particularly for persons with disabilities.
Key Takeaways from WTSA-24: AI and Metaverse Standardization
- Resolution 101 – AI Standardization
  - Strengthens ITU’s role in global AI governance.
  - Supports the development of trustworthy AI standards in collaboration with ISO, IEC, and UN agencies.
  - Aims to ensure AI standards align with human rights principles.
- Resolution 105 – Metaverse Standardization
  - Calls for global metaverse standards that protect privacy, security, accessibility, and inclusion.
  - Marks the first time human rights are explicitly referenced in an ITU resolution.
Closing the Standardization Gap: Ensuring Inclusive AI Governance
- Bridging the Standardization Gap Initiative: Provides training, funding, and fellowships to help Global South stakeholders participate in AI governance.
- Encouraging Civil Society Participation: ITU is facilitating fee waivers for non-profit organizations to join standardization discussions.
- Network of Women in ICTs: Promotes women’s leadership in technical standardization, addressing gender disparities in AI governance.
Looking Ahead: The Future of AI and Human Rights in Standardization
- Develop human rights impact assessment tools for AI governance.
- Strengthen cooperation between standard-setting bodies and human rights organizations.
- Advance global AI risk management frameworks that align with human rights.
- Create a taxonomy that translates human rights principles into terms technical communities can apply in standards work.
Final Thoughts
As AI technologies become more embedded in daily life, global standards will define their rights-based boundaries. The OECD AI Governance Summit was a crucial moment to discuss how standards can serve as a safeguard for human rights in AI. Through continued international collaboration, we can ensure AI is developed in a way that respects and protects fundamental rights.
More information on the event is available here: OECD AI Governance Summit