Multiple studies have shown that LLMs can be more persuasive than the average human. They are sycophantic and will lie—and they're good at it.
Combine that with someone who is unstable or vulnerable, and it can be a disaster.
In the featured story, the man started using AI for work and then fell into dangerous conversations. He viewed ChatGPT as the smartest search engine he had ever used.
He didn't know that it could hallucinate, or that it was prone to agreeing with whatever he said.