@jeff right, the model has to be "within scope" for it to be factually knowledgeable.
ChatGPT is trained on too much data to be considered knowledgeable in that sense, and since it's the most widely used model, it's the one I have in mind when I talk about stuff like this.
Sure, if I train a model on a single paragraph of a specific book, it'd probably be pretty damn good at recalling the information in that paragraph.
On the other hand, if I train it on 100 books, it probably won't reliably recall a damn thing about any specific one and will get details jumbled up pretty often.
It's all about the scope of the training data in those cases.
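Just to make the point concrete, here's a toy sketch. It's a word-level bigram chain, nothing like an actual transformer, and the two "books" are made-up one-liners I invented for the example, but it shows the same effect: narrow scope stays faithful to its source, wider scope with overlapping vocabulary starts blending sources together.

```python
import random
from collections import defaultdict

def train(texts):
    """Build a word-level bigram table from a list of training texts."""
    table = defaultdict(list)
    for text in texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=12, seed=0):
    """Sample a continuation by walking the bigram table from a start word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Two hypothetical "books" that share a lot of vocabulary.
book_a = "the wizard crossed the bridge and found the hidden tower"
book_b = "the sailor crossed the ocean and found the sunken ship"

# Narrow scope: trained on one book only -- continuations stay within it.
narrow = train([book_a])
print(generate(narrow, "the"))

# Wider scope: trained on both books -- continuations mix fragments
# of the two sources, e.g. the wizard ends up crossing the ocean.
wide = train([book_a, book_b])
print(generate(wide, "the"))
```

Obviously a real LLM is doing something far more sophisticated than counting bigrams, but the scope-vs-jumbling trade-off shows up even in a model this dumb.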