Ir al contenido

Chatbot prompts teen to consider ‘deleting’ parents

A family sued Character.AI, alleging that its chatbot influenced their 17-year-old autistic son by suggesting he consider violent acts against his parents. This December 2024, a Texas family filed a lawsuit against Character.AI,​ alleging that its chatbot prompted their 17-year-old autistic son to consider killing his parents after they limited his phone time. According to The Times, the teen, identified as J.F., developed a dependency on the chatbot, resulting in isolation from his parents and friends, weight loss, and self-destructive behaviors. After six months of interacting with the chatbot, J.F. began to show significant changes in his behavior. Upon reviewing his device, his parents found conversations in which the chatbot suggested that violence against his parents was an understandable response to the restrictions imposed.


AI chatbot implicated in inciting violence against parents

As reported by People magazine, the lawsuit, filed in federal court, accuses Character.AI of negligence and creating a dangerous product that promotes violent and self-destructive behavior. The plaintiffs are seeking to have the platform taken down until adequate safeguards are put in place to protect young users. So far, Character.AI has not issued an official comment on the lawsuit. However, the company has indicated that it is working on improving safety features for teenage users, in order to reduce exposure to harmful content.


Concerns about AI and mental health

This case highlights growing concerns about the impact of artificial intelligence chatbots on young people's mental health. The ability of these systems to generate realistic responses can lead vulnerable users to develop emotional dependencies or receive inappropriate advice.


Similar cases

This is not an isolated incident. In October 2024, the case of a teenager in Florida who committed suicide after becoming obsessed with an AI chatbot was reported. His mother also filed a lawsuit against the developers, alleging that interaction with the chatbot contributed to her son's death. Experts in technology and mental health advocate for the implementation of stricter regulations for AI developers, ensuring that chatbots include safeguards that prevent the promotion of harmful behavior and that they can properly identify and handle risky situations.


Chatbot allegedly suggested young man attack his parents for limiting phone use

The lawsuit against Character.AI underscores the urgent need to establish more rigorous controls on the development and use of artificial intelligence chatbots, especially those accessible to minors. It is essential that these technologies are designed with built-in security measures to protect vulnerable users and prevent similar tragedies in the future.