During the most fragile periods of life, it is natural to want to turn to someone outside one's own circumstances, someone who listens without judging, who is always there, who can offer support. Several real-life figures can fulfil this role, but unfortunately, current technology has led some people to seek that support from AI-based chatbots instead.
Mental health is an extremely sensitive topic, and although the virtual nature of chatbots makes them an always-available tool, that same nature makes them an unsuitable substitute for real support. Tragically, conversations with ChatGPT have been linked to cases in which young people took their own lives.
Following legal action by the family of a 16-year-old who died by suicide earlier this year, the CEO of OpenAI has decided to introduce age verification for users, restricting how ChatGPT responds to users suspected of being minors. The change is currently in the pipeline.