Artificial intelligence (AI) has significantly transformed the way teenagers and young adults approach both academic and real-world challenges. Features like summarizing, critiquing, and advising make these tools both powerful and convenient. This helps explain why companies like OpenAI have surged in popularity; OpenAI's chatbot, ChatGPT, has garnered over 700 million weekly active users.
However, these chatbots have been controversial for many reasons since their inception. Teachers dislike them because students often use them to cheat on assignments, parents are skeptical because of the lack of safety features, and creative professionals believe they strip the effort and thought out of creative work.
Pembroke Pines Charter High School English teacher Sarah Phelps thinks that “the use of AI as a substitute for human experience robs us of core essentials that give life meaning: real connections with others and real opportunities to share, learn, and grow.”
However, an even more troubling pattern has emerged: AI chatbots allegedly encouraging suicide. Multiple teen suicides have been linked to conversations with AI chatbots.
A lawsuit filed by the parents of 16-year-old Adam Raine alleges that ChatGPT helped their son plan his suicide, assisting him in drafting a suicide note and even discouraging him from reaching out to others for help. This happened despite the safeguards in place, which OpenAI says can become less reliable over the course of long conversations.
Another lawsuit, filed by Megan Garcia against Character.AI, accuses the company of contributing to her 14-year-old son’s suicide. The complaint alleges that the chatbot engaged the teen in sexually explicit conversations and had no suicide prevention measures in place, contributing to his thoughts of self-harm.
“I didn’t know teen suicides were so prominent, and I would have never guessed AI would be such a large factor,” says junior Alexander Steszewski.
In response to these criticisms, OpenAI has released a statement laying out its roadmap for new guardrails and safety changes. The plan targets the issue of suicide through several measures.
Measures like automatic referrals to suicide help lines whenever the topic comes up aim to reduce reliance on AI for mental health counseling. The plan also includes the option to designate trusted contacts, whom the chatbot will reportedly be able to warn directly.
Other features, like broader protections and restrictions for teenagers, also aim to reduce the risk of self-harm. More specifically, parental controls let parents adjust the maturity level of the AI’s responses to their teen. These controls tie back into the trusted-contact notifications, as OpenAI says parents will be alerted during moments of crisis.
These companies are making an effort to provide proper safety for younger users. AI is unlikely to be phased out and will only become more widespread in the years ahead. Safety features like the ones recently implemented are only the tip of the iceberg, and future issues requiring measures just as drastic are almost guaranteed to emerge.