Artificial intelligence must be based on human rights
Statement by the Office of the United Nations High Commissioner for Human Rights
July 12, 2023
Delivered by Volker Türk, UN High Commissioner for Human Rights
“Where should the limits be?” – A human rights perspective on what lies ahead in artificial intelligence and new and emerging technologies
It is very welcome that we are discussing human rights and AI together.
We all know how our world, and human rights, are being tested right now. The triple planetary crisis threatens our existence. Old conflicts have been raging for years, with no end in sight. New conflicts continue to emerge, many with far-reaching consequences across the globe. We are still reeling from the fallout of the COVID-19 pandemic, which has exposed and deepened inequalities across the globe.
But the question we need to answer here today – where the boundaries of artificial intelligence and emerging technologies should be – is one of the most pressing issues for our society, for governments and for the private sector.
We have all seen and followed over the past few months the impressive advances in generative AI, with ChatGPT and other applications now directly accessible to the general public.
We know that AI has the potential to deliver enormous benefits for humanity. It could improve strategic forecasting, democratize access to knowledge, accelerate scientific progress, and expand our capacity to process vast amounts of information.
But in order to harness all this potential, we need to make sure that the benefits outweigh the risks, and we also need to set boundaries.
When we talk about boundaries, what we are really talking about is regulation.
In order to fulfill this task, to remain humane, and to place people at the center of the development of new technologies, any solution, any regulation, must be grounded in respect for human rights.
Two different schools of thought are shaping the current development of AI regulation.
The first is risk-based, focusing primarily on self-regulation and self-assessment by AI developers. Rather than prescribing detailed rules, risk-based regulation puts the emphasis on identifying and mitigating risks in order to achieve results.
This approach places a great deal of responsibility on the private sector. Some would say too much responsibility, and parts of the private sector say so themselves.
It also leads to glaring gaps in regulatory standards.
The other approach integrates human rights throughout the entire AI lifecycle. From start to finish, human rights principles are incorporated into the collection and selection of data, as well as the design, development, deployment and use of the resulting models, tools and services.
This is not a warning about some distant future – we can already see the harmful consequences of AI today, and not just of generative AI.
AI has the potential to entrench authoritarian governments.
It can power lethal autonomous weapons.
It can create the foundation for even more powerful tools for societal control, surveillance, and censorship.
Facial recognition systems, for example, can become vehicles for mass surveillance in our public spaces, obliterating any concept of privacy.
AI systems used in criminal justice to predict future criminal behavior have already been shown to entrench discrimination and undermine rights, including the presumption of innocence.
Victims and experts, including many of those in this room today, have been sounding the alarm for quite some time, yet policymakers and AI developers have not acted decisively enough, or quickly enough, on these troubling issues.
We need urgent action from governments and businesses. And at the international level, the United Nations can play a crucial role in convening key actors and advising on the next steps.
We cannot waste a single second.
The world has waited too long to act on climate change. We cannot afford to repeat the same mistake.
What might regulation look like?
First, the starting point should be the harm people are suffering now and the harm they are likely to suffer.
This requires listening to those who experience these harms, as well as to those who have already spent years identifying and responding to them. Women, minority groups, and marginalized people in particular are disproportionately affected by the biases AI can carry. We must make serious efforts to involve them in any debate on governance.
Attention should also be paid to the use of AI in public and private services where there is an increased risk of abuse of power or intrusions into a person's privacy: in justice, law enforcement, migration, social protection, or financial services.
Second, regulatory initiatives need to require an assessment of the human rights risks and implications of AI systems before, during and after their use. There need to be guarantees of transparency, independent oversight and access to effective remedies, especially when AI technologies are used by the State itself.
AI technologies that cannot be operated in compliance with international human rights standards should be banned or suspended until adequate protections are put in place.
Third, existing regulations and protections need to be applied – for example, data protection frameworks, competition law, and sectoral regulations in fields such as healthcare, technology and financial markets. A human rights perspective on the development and use of AI will have limited impact unless it is accompanied by adequate respect for human rights across the broader regulatory and institutional landscape.
And fourth, we must resist the temptation to let the AI industry convince us that self-regulation is sufficient, or that it should be the one to define the applicable legal framework. We have already learned that lesson from our experience with social media platforms. While industry input is important, the full democratic process – laws shaped by all stakeholders – must apply to an issue in which all people, everywhere in the world, will ultimately be affected.
At the same time, companies must live up to their responsibility to respect human rights in line with the Guiding Principles on Business and Human Rights. Companies are responsible for the products they bring to market. My Office is already working with a number of companies, civil society organisations and AI experts to develop guidance on how to approach generative AI. Nevertheless, much work remains to be done on all of these fronts.
Finally, while not an immediate solution, it may be valuable to explore the creation of an international advisory body for technologies that pose special risks – one that could offer perspectives on how regulatory standards can be aligned with the universal human rights framework and the rule of law. Such a body could publicly share the outcomes of its deliberations and offer recommendations on AI governance. This has also been suggested by the UN Secretary-General as part of the Global Digital Compact, to be taken up at the Summit of the Future next year.
The human rights framework provides an essential foundation that can safeguard efforts to harness the enormous potential of AI, while preventing and mitigating its enormous risks.
I look forward to discussing all of these issues with you.