@jherazob The crucial difference is that while humans occasionally make mistakes or make things up, we still have actual knowledge, memories, and understandings of concepts and principles and how they relate, which we can consult to at least attempt to say accurate things. We are also usually *trying* to be correct. And the reward function our brains have been trained under since birth is a connection to a real, objective world that rewards actually understanding how things work and punishes not understanding: if you're wrong about something, it simply won't fly when you try it in the real world. So we are systemically capable of, and oriented toward, truth.

AIs, by contrast, have no understanding of concepts or principles, and no actual knowledge or memories: it's all thrown into a statistical blender, there's no memory-storage portion of their neural network, and their reward function only rewards them for the plausibility (in the sense of looking superficially, statistically right) of an assemblage of words given a context. In other words, they aren't a system designed to be consistently capable of accuracy, or even to "care" whether they're accurate. That they're right sometimes is purely orthogonal, incidental to what the AI is actually optimizing for. It's an accident.

It's like saying pathological liars (who only care whether something "sounds right") and regular people are the same because regular people sometimes make mistakes or exaggerate. No: one is fundamentally not oriented toward producing accuracy, and the other at least is.
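To make the "plausibility" point concrete, here's a minimal sketch of a next-token cross-entropy objective, the loss shape language models are trained under. The toy bigram table and its probabilities are made up for illustration; a real LLM uses a neural network, but the objective has the same structure: it scores how *probable* the next word is given the context, and contains no term for whether the resulting sentence is true.

```python
import math

# Toy "language model": next-word probabilities learned purely from
# co-occurrence statistics. Hypothetical numbers for illustration.
model = {
    ("the", "moon"): {"landing": 0.6, "is": 0.3, "cheese": 0.1},
}

def training_loss(context, next_word):
    """Next-token cross-entropy: the only thing scored is how probable
    the model finds next_word given context. Nothing in this objective
    checks whether the completed sentence is factually accurate."""
    p = model[context].get(next_word, 1e-9)  # tiny floor for unseen words
    return -math.log(p)

# A false continuation is penalized only to the extent that it is
# statistically rarer, not because it is false.
print(training_loss(("the", "moon"), "landing"))  # low loss: common phrase
print(training_loss(("the", "moon"), "cheese"))   # higher loss: rarer, not "wrong"
```

Notice that truth never enters the computation: if false statements were common in the training data, they would get *low* loss. Accuracy is incidental to what the loss rewards, which is exactly the point above.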