The vast majority of people don't understand the basics of how LLMs work, or that they're just next-word (next-token) prediction machines. Once you grasp that, it makes a lot more sense how they [don't] work. Whenever an LLM gets something "right," it's a byproduct of statistical pattern-matching, not understanding. For simple things, that hit rate might be around 80%, which is good enough for people to be impressed by a chatbot.
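To make "next-token prediction machine" concrete, here's a minimal sketch in Python. The "model" is just a made-up lookup table of continuation probabilities standing in for a trained transformer's output distribution; the point is that generation is nothing but repeatedly sampling a likely next token, with no notion of true or false anywhere in the loop.

```python
import random

# Hypothetical toy "model": a table of next-token probabilities given the
# last two tokens. A real LLM computes this distribution with a transformer,
# but the generation loop below is the same idea.
TOY_MODEL = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "pondered": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.6, "roof": 0.4},
}

def next_token(context: tuple[str, str]) -> str:
    """Sample the next token from the model's probability distribution."""
    dist = TOY_MODEL.get(context, {"<eos>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    out = list(prompt)
    for _ in range(max_tokens):
        tok = next_token((out[-2], out[-1]))
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(" ".join(generate(["the", "cat"])))  # e.g. "the cat sat on the mat"
```

When the sampled continuation happens to match reality, it looks "right"; when it doesn't, it looks like a confident lie. Same mechanism either way.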
But when you see interviews where they ask a chatbot how much it has grown since the start of the interview and it says "50x" or whatever, it's literally bullshit. The model is static. It doesn't change. OpenAI and others might push periodic updates to the weights, but nothing about the model changes at inference time, while you're prompting it. So why does it say that? Who knows. Probably because it ingested a mountain of sci-fi during training, and "50x growth" is exactly the kind of thing a fictional AI says.
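Here's a minimal sketch of that point, with hypothetical names (`frozen_weights`, `decode`) standing in for a real model and sampler: at inference time the weights are only read, and the only thing that changes between turns is the text appended to the prompt.

```python
# Loaded once; never written to during the chat.
frozen_weights = {"w": 0.42}

def decode(weights: dict, prompt: str) -> str:
    # Stand-in for running the transformer forward pass and sampling tokens.
    # Note: `weights` is only read, never modified.
    return f"[reply conditioned on {len(prompt)} chars of prompt]"

conversation = "User: How much have you grown since we started?\n"
for turn in range(3):
    reply = decode(frozen_weights, conversation)   # same weights every turn
    conversation += f"Assistant: {reply}\nUser: ...\n"

# The weights are identical to what was loaded at the start; any claim of
# "growth" is just text that looked plausible as a continuation.
print(frozen_weights)  # {'w': 0.42}
```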
3Blue1Brown does a really good brief video on LLMs: https://youtu.be/LPZh9BOjkQs
and he also did some deep dives if you have a computer science background and remember the backpropagation algorithm from machine learning class (a toy backprop step is sketched after the links):
https://youtu.be/wjZofJX0v4M
https://youtu.be/eMlx5fFNoYc
https://youtu.be/9-Jl0dxWQs8
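If the videos are too much, here's a toy backpropagation step for a single sigmoid neuron with squared-error loss: the same chain-rule bookkeeping the deep dives walk through, minus the matrices. All numbers are made up for illustration.

```python
import math

w, b = 0.5, 0.1          # parameters
x, target = 1.5, 1.0     # one training example
lr = 0.1                 # learning rate

# Forward pass
z = w * x + b
y = 1.0 / (1.0 + math.exp(-z))      # sigmoid activation
loss = 0.5 * (y - target) ** 2

# Backward pass: chain rule from the loss back to each parameter
dloss_dy = y - target
dy_dz = y * (1.0 - y)               # derivative of the sigmoid
dz_dw, dz_db = x, 1.0
grad_w = dloss_dy * dy_dz * dz_dw
grad_b = dloss_dy * dy_dz * dz_db

# Gradient descent update -- this only ever happens during training,
# never while you're chatting with the model.
w -= lr * grad_w
b -= lr * grad_b
print(f"loss={loss:.4f}  new w={w:.4f}  new b={b:.4f}")
```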
...the real danger of LLMs is not the "AI" part, but the fact that people trust the bullshit machines. The machine is not "hallucinating." It's just predicting the next token using a transformer built from a series of attention and perceptron (MLP) blocks. It can't set goals. It can't reason. It's just predicting what text looks good to a human, tuned by a few hundred people hired to sit at desks for a year, hours a day, clicking on whichever auto-generated response looks least wrong.
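And those "attention blocks" are not magic either. Here's a minimal sketch of one scaled dot-product attention head in NumPy; the shapes and values are made up, and a real model stacks many of these heads with MLP blocks in between, but the core operation is just this arithmetic.

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: a weighted mix of value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dim embeddings (arbitrary)
Q = rng.standard_normal((seq_len, d_model))
K = rng.standard_normal((seq_len, d_model))
V = rng.standard_normal((seq_len, d_model))
print(attention(Q, K, V).shape)                      # (4, 8) -- matrix math, no goals, no reasoning
```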