On November 28, 2024, I had the privilege of speaking at the expert consultation on the application of the UN Guiding Principles on Business and Human Rights (UNGPs) to technology companies, particularly regarding artificial intelligence (AI). Held at the Palais des Nations in Geneva, this event brought together stakeholders to discuss the pressing challenges and opportunities in aligning technological innovation with human rights principles.
The State Duty to Protect in the AI Age
My contribution centered on the State Duty to Protect human rights, a pillar of the UNGPs that underscores governments’ responsibility to safeguard individuals from human rights abuses, including those linked to corporate activities in the AI sector. In today’s rapidly evolving technological landscape, this duty is more critical than ever.
ITU’s Role in Bridging Technology and Human Rights
As a program coordinator at the International Telecommunication Union (ITU), I highlighted ITU’s commitment to fostering sustainable development through telecommunications and ICTs while embedding human rights at the core of our initiatives. Here are some key efforts that I shared:
- AI for Good Initiative: A UN platform fostering collaboration among over 40 agencies to leverage AI for tackling global challenges, aligning with the Sustainable Development Goals (SDGs).
- Global AI Governance Dialogues: ITU has facilitated discussions on governance frameworks through events like the International AI Standards Summit, focusing on transparency and interoperability.
- Standards Development: With over 100 AI-related standards in progress, ITU works to ensure these frameworks respect human rights principles, addressing issues like privacy, freedom of expression, and non-discrimination.
- Capacity Building: By bridging gaps between the Global North and South, ITU supports inclusive participation in standard-setting, ensuring equitable technological benefits.
The “Smart Mix” of Measures for AI Governance
I also addressed the concept of a “smart mix” of measures—combining voluntary and mandatory approaches to AI governance. This includes:
- Voluntary Measures: Encouraging multistakeholder collaboration in designing standards that are not only technically robust but also aligned with human rights.
- Mandatory Requirements: Establishing regulatory frameworks that adapt to AI advancements, such as human rights due diligence in standards development.
- Policy Innovation: Translating human rights principles into actionable technical guidelines, empowering both policymakers and technical communities.
Toward Human-Centric AI Development
A recurring theme throughout the consultation was the need for AI policies that prioritize human rights, balancing innovation with safeguards against potential harms. ITU’s initiatives, such as the development of watermarking standards for deepfake detection, exemplify how technology can serve humanity when guided by ethical considerations.
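To make the watermarking idea concrete, here is a minimal sketch of the simplest embedding technique, least-significant-bit watermarking, applied to a toy grayscale image represented as a list of pixel values. The function names and data are purely illustrative and do not reflect any ITU standard; production watermarking schemes for deepfake detection are far more robust against compression and tampering.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least-significant bit of each pixel."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it to the bit
    return marked

def extract_watermark(pixels, length):
    """Read the hidden bits back out; editing the pixels destroys them."""
    return [p & 1 for p in pixels[:length]]

# Toy 8-pixel "image" and a 4-bit mark
image = [200, 135, 90, 47, 255, 12, 88, 63]
mark = [1, 0, 1, 1]
stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, len(mark)) == mark
```

The design point this illustrates is that the mark is imperceptible (each pixel changes by at most 1) yet verifiable, which is the property authenticity standards build on at much greater sophistication.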
Conclusion
As digital technologies continue to reshape our world, the collaboration between states, technology companies, and international organizations becomes paramount. The ITU remains committed to its role as a bridge-builder, ensuring that AI and other emerging technologies uphold human rights while driving sustainable development.
This consultation was a step forward in fostering a collective understanding of how states can effectively fulfill their duty to protect in the context of AI. It reaffirmed the importance of inclusivity, regulatory agility, and a shared vision for a rights-respecting digital future.