Next, we asked specific questions about publicly available content regarding A'.
This should have been a slam-dunk copy-and-paste! Nope, not even close.
ChatGPT hallucinated answers about A', borrowing language from A to fake them, even though precise answers existed in its training set.
The answers seemed credible, but semantically they were significant misses, detectable only if you knew both the A and A' domains.
There was no cue whatsoever that ChatGPT was operating outside its zone of accuracy.