The thing to keep in mind about Large Language Models (LLMs, what people currently refer to as AI) is that even though human knowledge, in the form of language, is fed into them during training, they store only statistical models of that language, not the knowledge itself. Their responses are constructed by statistical analysis of the context of the prior language.
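To make "statistical model of language" concrete, here's a deliberately tiny sketch: a bigram (Markov-chain) text generator. Real LLMs are neural networks conditioning on vastly longer contexts, not simple word-pair counts, but the underlying move is the same, picking the next token from statistics of prior text rather than from any store of facts. All names here (train_bigrams, generate, the toy corpus) are invented for illustration.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Emit words by sampling from the counted frequencies.
    No knowledge involved: just whichever continuation was
    statistically common in the training text."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the mat the dog ..."
```

The output can look fluent, even plausible, while the program manifestly "knows" nothing about cats, mats, or fish. Scale that principle up and you get the same gap between fluency and knowledge.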
Any appearance of knowledge is pure coincidence. Even on the most “advanced” models.
Language is how we convey knowledge, not the knowledge itself. This is why a language model can never actually know anything.
And this is why they’re so easy to manipulate into conveying objectively false information, in some cases maliciously so. The vendor behind ChatGPT and all the other big vendors do manipulate their models, and yes, in part with malice.
#LargeLanguageModels #LLM #AI #NotAI #ChatGPT #ChatGPTIsNotAI #MaliciousAI #NotIntelligent #ArtificialIntelligence