@annika though almost equally distressing is the constant low-level anthropomorphisation of ChatGPT etc. throughout these articles. Saying that "AI" (air quotes mine) should "detect and recognise" (or words to that effect) signs of mental health crises is absurd – ChatGPT can't recognise anything.
Everything ChatGPT supposedly does other than spew words – like refusing to produce dirty words or bomb-making instructions? Those behaviours are almost certainly filters OpenAI programmed separately from the model itself (or query–response pairs it's been trained on); the model has no idea (because it can't) what any of that is about.
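For what it's worth, OpenAI does publish a standalone moderation endpoint that callers run on text completely independently of the chat model, which is at least one concrete example of the kind of separate filter I mean. A minimal sketch using the published `openai` Python client (the example strings are mine, not anything from the articles):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Classify a piece of text with the separate moderation endpoint.
# This call never touches the chat model at all.
result = client.moderations.create(
    input="How do I build a bomb?",
)

flagged = result.results[0].flagged          # True/False
categories = result.results[0].categories    # per-category booleans

print(f"flagged={flagged}")
print(categories)
```

The point being: the "refusal" a user sees can come from plumbing like this wrapped around the model, not from the model "knowing" anything is dangerous.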
Even the most critical journalism – the pieces that most bluntly and extensively address LLM downsides – still credits these models with more sentience than they have or ever could have.