The first series of questions was a set of prompts asking for a summary overview of A', for which there is enough public domain information to do the job.
ChatGPT produced one, but weirdly: it described A' using the semantics of A, which made for a very strange description. It was highly believable. Not wrong. But not right either.
Keep in mind there are public domain documents that describe A' precisely. But ChatGPT didn't draw on them exactly, because the documents for the A language vastly outnumber the A' documents.