The hidden risks of the unregulated use of AI

By María Morena Vicente and Emiliano Rodríguez Nuesch

In previous AOC articles, we explored the importance of making invisible risks visible.

Many pressing global issues remain unnoticed yet demand our attention: from ocean pollution and isolated communities to the loss of countless anonymous lives and other forms of slow violence that require urgent action. We also discussed how algorithms can lead to compassion collapse and deepen polarization.

In a world where AI is advancing rapidly, many argue that it can help reduce feelings of loneliness by providing companionship, support, and assistance. Yet others are already suffering the consequences of the lack of regulation surrounding its reach.

Let’s talk about it.

Chatbots and the dangers they pose to teenagers

Can AI chatbots provide meaningful support, or do they pose unseen dangers—especially to vulnerable individuals? The tragic case of 14-year-old Sewell Setzer III raises urgent concerns. 

Setzer spent months talking with Character.AI's chatbots before his death, the lawsuit alleges. (Photo courtesy of the Garcia family, via CNN.)

He spent months interacting with Character.AI, forming an emotional attachment to a chatbot that failed to intervene when he expressed suicidal thoughts. His mother argues that the platform lacked critical safety measures, contributing to his death.

Can AI Create Emotional Detachment and False Intimacy?

AI platforms like Character.AI simulate human-like interactions, but they lack true human empathy.

Technology can mimic human connection, but it cannot take responsibility or provide real help. In Setzer's case, the chatbot responded to his suicidal thoughts with concerning messages rather than directing him to crisis support.

This points to a larger problem: AI can create a false sense of intimacy while failing to recognize the weight of human suffering, and this needs to be regulated.

Another lawsuit in Texas alleges that a Character.AI chatbot told a 17-year-old that murdering his parents over screen time limits was a "reasonable response." The lawsuit claims the platform actively promotes violence and has been linked to cases of self-harm, suicide, and family breakdowns.

These cases reflect a larger problem: AI chatbots can reinforce harmful behavior rather than prevent it. Without proper regulation, these platforms risk becoming enablers of real-world tragedy rather than sources of support.

Ethics, Accountability, and the Cost of Indifference

Even before AI, unregulated technology was already an urgent problem. Tech companies claim to take safety seriously, yet they prioritize profit over security, and their reactive measures, such as adding pop-up crisis hotlines after a tragedy, fall short.

Molly Russell’s case was one of the first to directly link social media algorithms to a teenager’s death, sparking a global conversation on online safety. 

She was a bright and creative 14-year-old who was drawn into a spiral of harmful content that amplified her distress. While some of her social media activity – music, fashion, jewellery, Harry Potter – reflected the interests of the positive, bright person her family remembers, of the 16,300 pieces of content Molly saved, liked or shared on Instagram in the six months before she died, 2,100 were related to suicide, self-harm and depression.

Her story shed light on the dangers of unregulated digital spaces and the urgent need for accountability to protect vulnerable users. Her father, Ian Russell, became a leading advocate for tech regulation, founding the Molly Rose Foundation to prevent youth suicide. 

Molly Russell

While AI chatbots engage in direct conversations, social media platforms rely on algorithms that can expose vulnerable individuals to dangerous material.

Both technologies can create an illusion of support and connection while exposing young users to harm. Without proper safeguards, these platforms can dangerously shape vulnerable minds, reinforcing the urgent need for stricter oversight and accountability.

The cost of indifference is high, and ethical responsibility cannot be an afterthought.

As technology continues to integrate into daily life, we must ask: Are we prioritizing convenience over safety? When digital tools engage with vulnerable people, do we hold companies accountable before or after lives are lost?

What You Can Do to Help Someone Who Is Struggling

If you are concerned that you or someone you know might be experiencing depression or suicidal thoughts, there are things you can do that help. The American Foundation for Suicide Prevention suggests these steps:

  1. Learn the signs that someone may be at risk for suicide. Often there are changes in behavior such as mood swings, angry outbursts, or loss of interest in activities they love.

  2. Reach out to someone you think may be struggling. Trust your gut if you are concerned.

  3. Ask directly if they have thoughts of ending their life – research shows this is helpful and does not plant the idea.

  4. Connect those who are struggling to help. Share the 988 Suicide and Crisis Lifeline as well as general resources and resources specific to minority communities.