Rich Felker (dalias@hachyderm.io)'s status on Tuesday, 26-Nov-2024 20:46:22 JST:
@felipe @zak @futurebird @ronaldtootall @hannu_ikonen The world model you speak of corresponds to empirically testable things and is updated when it fails to do so. The language models don't, and aren't.
Wyatt H Knott (whknott@mastodon.social)'s status on Wednesday, 27-Nov-2024 08:50:34 JST:
@dalias @felipe @zak @futurebird @ronaldtootall @hannu_ikonen This. The evidence of your senses is correlated to the effectiveness of your behaviors. Since LLMs don't HAVE behaviors, they don't have the functionality to create the feedback loops necessary for understanding.
crazyeddie (crazyeddie@mastodon.social)'s status on Wednesday, 27-Nov-2024 08:54:26 JST:
@dalias @felipe @zak @futurebird @ronaldtootall @hannu_ikonen They do and are, though.
That's the training part. The model is trained and then used. It may or may not be training while it's used.
That training is fed a context, just as you do with experimentation. The model is tested against that context, as you do empirically. The model is then adjusted if it needs to be. This is exactly the empirical process.
Rich Felker (dalias@hachyderm.io)'s status on Wednesday, 27-Nov-2024 08:54:26 JST:
@crazyeddie @felipe @zak @futurebird @ronaldtootall @hannu_ikonen No, it's not. This is a grossly inaccurate description of how LLMs are trained and used. The models users interact with are completely static. They are only changed when their overlords decide to change them, not by self-discovery that they were wrong. They don't even have any conception of what "wrong" could mean, because there is no world model, only a language model.
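To make the disagreement concrete, here is a minimal sketch in PyTorch-style Python (a toy stand-in, not anything from the thread; every name, shape, and hyperparameter is illustrative): the loss signal changes the weights only inside the training step, while serving a prompt conditions the output on the frozen weights and writes nothing back.

```python
# Illustrative sketch only: a toy model standing in for an LLM's parameters,
# showing where weight updates can and cannot happen.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                                 # stand-in for the model's parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """Training phase: the loss actually changes the weights."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()          # gradients flow back into the parameters
    optimizer.step()         # ...and the parameters are updated
    return loss.item()

@torch.no_grad()
def serve(x):
    """Deployment phase: the input ('context') conditions the output,
    but nothing here writes back into model.parameters()."""
    model.eval()
    return model(x).argmax(dim=-1)

# Toy usage with random data:
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
train_step(x, y)
print(serve(x))   # every later call to serve() uses the same frozen weights
```

The dispute above is over whether runs of train_step, which only the model's operators can trigger and redeploy, amount to the self-correcting feedback loop the earlier posts describe; the sketch only shows that serving a prompt, by itself, updates nothing.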