The Unfinished Fight for Data Protection: Power, Privacy, and Control in the Age of AI

(Originally published on the Harvard Kennedy School website)

On December 10, 1948, the United Nations General Assembly adopted the Universal Declaration of Human Rights. Article 12 is unequivocal: “No one shall be subjected to arbitrary interference with his privacy”. Nearly eighty years later, that promise feels increasingly hollow.

In the physical world, the boundaries of privacy are clear and widely accepted. We do not take our neighbor’s car without permission. We do not enter their home, sleep in their bed, or rummage through their drawers in their absence. If we do, the law intervenes. Privacy is understood as a basic condition of dignity and freedom.

So why have we failed to apply these same principles to the digital world?

Today, both private corporations and governments routinely extract, analyze, and monetize our personal data, often without meaningful consent and with limited accountability. Surveillance has become embedded in everyday digital life, justified variously in the name of security, convenience, innovation, or economic growth. Meanwhile, the individuals whose data fuels this system are left exposed, their rights diluted by complexity, opacity, and profound power asymmetries.

Data is not an abstract resource; it is an extension of who we are. Our movements, preferences, relationships, beliefs, and vulnerabilities are increasingly rendered into datasets that can be exploited, manipulated, or weaponized. When privacy collapses, it triggers a cascade of rights violations, undermining expression, association, equality, due process, dignity, and democratic participation.

Beyond surveillance and data extraction, digital technologies increasingly shape behavior itself. Many platforms are deliberately designed to foster dependency, using algorithmic recommender systems, persuasive design, and AI-driven feedback loops to capture attention and maximize engagement. In this respect, the digital economy mirrors earlier industries built on addiction, such as tobacco or ultra-processed food. By cultivating dependency, these models enable the continuous extraction of personal data and the generation of ever-greater economic value for large technology companies.

Digital dependency is not merely a lifestyle concern; it raises fundamental questions about autonomy, dignity, and social protection, and it disproportionately affects the most vulnerable. Its consequences extend well beyond privacy. When children and adults alike are drawn into compulsive patterns of use by endless notifications, rewards, and personalized content, their autonomy, well-being, and cognitive development are put at risk. Addressing this requires recognizing that technological design choices carry social and ethical responsibility.

Civil society organizations have long played a crucial role in defending these rights, particularly in international forums such as the Internet Governance Forum and the World Summit on the Information Society, where governance frameworks are negotiated. Often funded by public money, they act as a counterweight to the immense lobbying power of both states and Big Tech. Yet this ecosystem is fragile. When governments, especially powerful states, shift their political priorities, the consequences are immediate: funding disappears, advocacy weakens, and citizens lose their defenders at precisely the moment they need them most.

What are the possible strategies moving forward?

Should governments renew and strengthen their support for civil society? Should people explore crowdfunding models to sustain independent watchdogs? Should people pressure legislators, boycott dominant technology platforms, or invest in open, decentralized alternatives inspired by projects like Wikipedia or Solid?

The answer is likely not a single solution, but a combination of all of these.

While some governments actively defend the interests of powerful technology industries, others are largely positioned as consumers of digital services and technologies developed elsewhere. Moreover, many governments have walked away from designing, building, and maintaining digital services in-house for their citizens. In regions such as Europe, where authorities have attempted to address this situation through regulation, they have often found themselves constrained, or even penalized, for seeking to protect their citizens.

The response must be both political, by advocating for digital sovereignty, and technical, by building the capacity to develop and operate rights-respecting digital public infrastructure. Users, moreover, must once again become active participants in shaping the digital environment rather than passive sources of data extraction. In the early days of the internet, individuals could host servers in their bedrooms, develop digital services, and genuinely believe they might change the world. Today, cyberspace is increasingly locked down, centralized, and prohibitively expensive to enter. In the AI era, launching a new service often requires billions in capital, not creativity or public interest. Digital public infrastructures should restore space for users to understand, adapt, and meaningfully influence the technologies they rely on.

Reclaiming a rights-respecting, open digital ecosystem will not be easy. It requires political will, sustained civic engagement, and a rebalancing of power between individuals, states, and corporations. It also demands a fundamental re-engineering of how digital services are designed and governed.

On this point, standardization bodies have a critical role to play. Technical standards, which shape the architecture of digital technologies, should embed fundamental rights and democratic values at the core of their development. While this is no guarantee that digital services will fully respect human rights, it can help steer technological design in the right direction.

Digital architectural power cannot remain concentrated in the hands of a few private actors. Instead, we must invest in digital public infrastructures governed in the public interest and accountable to citizens, grounded in rights-based technical standards and human-centric design principles. Anchoring these infrastructures in democratic values is essential: it is the best guarantee that governments will use them to protect their citizens rather than to monitor or control them.

Humanity has faced similar challenges before. We have regulated the tobacco and ultra-processed food industries, and we have agreed on the non-proliferation of nuclear weapons. Regulating the digital industry, firmly and in the public interest, is no less necessary today.
