"AI" (large language models) doesn't have understanding and this is trivially verifiable by asking it about something niche. It will not have sufficient data to autocomplete what you're asking it and no amount of cajoling will get it closer to a real solution, instead it will spin forever spitting out wrong solutions that are in its training set (because humans have posted them sometimes)
For example: in QBasic you have SUBs and FUNCTIONs. SUBs do not return a value; FUNCTIONs do, and a FUNCTION can only appear inside an expression, never as a standalone statement. If you ask an LLM how to call a FUNCTION in QBasic while discarding the return value, it will never give you the right answer. It will cycle through answers for related BASIC dialects where you can do that, but it will never say "you can do that in other flavors, but not in QBasic specifically," no matter how many times you point out that it's wrong.
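To make the constraint concrete, here's a minimal QBasic sketch (the names Greet and Twice% are just illustrative). A SUB can be invoked as a statement, but a FUNCTION has to sit inside an expression, so the closest you can get to "discarding" its result is assigning it to a throwaway variable:

```basic
DECLARE SUB Greet (name$)
DECLARE FUNCTION Twice% (n%)

CALL Greet("world")    ' fine: a SUB is called as a statement
' Twice%(3)            ' syntax error: a FUNCTION cannot stand alone
unused% = Twice%(3)    ' the usual workaround: assign to a dummy variable

SUB Greet (name$)
    PRINT "Hello, "; name$
END SUB

FUNCTION Twice% (n%)
    Twice% = n% * 2    ' assigning to the function name sets the return value
END FUNCTION
```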
Why will it never do that? Because the statistically likely answer to "how do I X?" is "you do Y," not "you absolutely cannot do that."
LLMs have zero understanding.