That's an important take on the verifiability of AI.
I'm not terribly concerned about getting back a wrong answer from, say, ChatGPT, because I already don't trust Google or MSFT enough to accept answers I haven't verified on my own.
When you're taking information from multiple sources -- as AI is wont to do -- it's the same sort of thing we do. How do we come to know things? How do we properly credit what we've come to internalize as our own thoughts and views on the world?