To combat deepfakes, Denmark wants to copyright its citizens' faces and voices

A bill seeks to give individuals legal tools to defend themselves against the spread of AI-generated fraud and identity theft. The Danish government has presented a legislative proposal that would treat each person's face, body, voice, and gestures as intellectual property, meaning each individual would hold a personal copyright over them, just as they would over a work of art.


The measure aims to combat the malicious use of deepfakes—videos, audio clips, or images created with AI that mimic real people. This type of content not only spreads misinformation but is already being used in digital scams and identity theft for criminal purposes.


“Humans should not be digitally transformed without their consent,” said Culture Minister Jakob Engel-Schmidt when announcing the initiative. If approved, Denmark would be the first country in the world to offer this type of legal protection.


The reform proposes amending the country's copyright law so that a person's physical identity—their face, voice, and expressions—would enjoy protection similar to that of a book, song, or film. Any unauthorized use could then be reported, removed from platforms, and even subject to financial penalties.


Humorous or satirical content, however, would be exempt. The law does not seek to censor creativity, but rather to establish clear limits on the misuse of personal identities.


The proposal has strong political support in Denmark: nine out of ten members of parliament have already expressed their backing. The draft will be submitted for public consultation during the European summer, and Parliament is expected to approve it in the coming months, with the aim of the law entering into force before the end of the year.


A European response to a global problem

Denmark will take advantage of its upcoming presidency of the Council of the European Union to advance this discussion at the continental level. The goal is to incorporate this vision into European regulations on artificial intelligence, which have already begun to take shape with the approval of the AI Act.


According to data from the World Economic Forum, 60% of people say they are very concerned about deepfakes. And with good reason: identity theft tools are becoming increasingly accessible, while most of the population still doesn't know how to spot the results. Along these lines, a Kaspersky study revealed that 72% of users don't know what a deepfake is, and 62% wouldn't know how to identify one.