A Stanford University study, covered in Ars Technica last month, found that therapist-branded chatbots from Character.AI and other providers can encourage delusional thinking and express stigma toward people with certain mental health conditions. But one of its co-authors, Nick Haber, argued that AI likely does have positive applications in therapy, including training human therapists and helping clients with journaling and coaching. That strikes me as true — and still not quite enough. Part of the problem here surely relates to language: the words "therapy" and "therapist" connote a level of trust and care that no automated system can provide. Tools like ChatGPT can clearly offer a convincing therapy-like experience — even one with therapeutic benefits — but they should never be mistaken for the genuine article.