@sheislaurence @0xD I can't see the conversation chain that led to this which suggests dodgy instances are in play. But responding just to your toot...
"AI" (really LLM) is lying repeatedly. It's not hard to see examples. Lying is the wrong word because it's not a sentient system, it's just that its algorithms are returning results that aren't true fairly regularly.
The recent story of a US court case is a good example: a team of lawyers used ChatGPT (I think, don't quote me) to hunt for case law, it made the cases up entirely, and they filed the fictitious law without checking it.
This stuff is now appearing all over search engines too, so they can no longer be trusted either.