@crazyeddie @felipe @zak @futurebird @ronaldtootall @hannu_ikonen No it's not. This is a grossly inaccurate description of how LLMs are trained and used. The models users interact with are completely static: they are only changed when their overlords decide to change them, not by self-discovery that they were wrong. They don't even have any conception of what "wrong" could mean, because there is no world model, only a language model.
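
To make the "static" point concrete, here's a minimal sketch of what inference on a deployed LLM actually looks like, assuming the Hugging Face transformers library and a small stand-in model (gpt2 here, purely illustrative). Nothing the user types touches the weights; they are frozen, and any change to the model requires a separate training run performed by whoever operates it.

```python
# Minimal sketch, assuming the transformers library and a locally
# available pretrained model. "gpt2" is a stand-in, not the specific
# system under discussion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

model.eval()                      # inference mode: no learning happens here
for p in model.parameters():
    p.requires_grad = False       # weights are frozen during use

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():             # gradients are never even computed
    output_ids = model.generate(**inputs, max_new_tokens=5)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# The parameters are bit-for-bit identical before and after generation;
# "learning it was wrong" would require an operator-run training step.
```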