Zach Weinersmith (zachweinersmith@mastodon.social), Saturday, 27-Jul-2024 23:37:53 JST:
Are there any LLMs yet that are able to kick questions over to a physics model? Like, it seems that at least for some questions, the way we get an answer isn't by thinking about what we've seen or learned or said before, but by literally imagining the world. For kids, this seems to include things like finger counting for addition.
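
A minimal sketch of the kind of routing being asked about, with every name invented for illustration: a crude classifier decides whether a question concerns the physical world and, if so, hands it to a simulator stub instead of the language model. Nothing here is a real LLM API.

```python
# Hypothetical sketch: route questions either to a language model or to a
# simple physics/world simulator. All names here are invented for the sketch.

SPATIAL_CUES = ("look", "see", "above", "below", "fall", "inside", "basement")

def is_physical_question(question: str) -> bool:
    """Crude stand-in for a learned classifier: does this question
    seem to require imagining a physical scene?"""
    q = question.lower()
    return any(cue in q for cue in SPATIAL_CUES)

def ask_language_model(question: str) -> str:
    # Placeholder for a call to an actual LLM.
    return f"[LLM answer to: {question!r}]"

def ask_world_model(question: str) -> str:
    # Placeholder for a query against a physics/scene simulator.
    return f"[simulated answer to: {question!r}]"

def answer(question: str) -> str:
    # Dispatch: physical-scene questions go to the simulator,
    # everything else goes to the language model.
    if is_physical_question(question):
        return ask_world_model(question)
    return ask_language_model(question)

print(answer("I'm in the basement and look at the sky. What do I see?"))
```

The hard part, of course, is the classifier: keyword matching is only a placeholder for deciding which questions need the simulator at all.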

Zach Weinersmith (zachweinersmith@mastodon.social), Saturday, 27-Jul-2024 23:57:58 JST:
Like, GPT-3 failed questions of the form "I'm in the basement and look at the sky. What do I see?" GPT-4 fixed this by having humans correct its mistakes. I imagine that if I were a kid getting this question for the first time, especially somewhere basements aren't typical, what I'd do is probably imagine being in a basement.

Zach Weinersmith (zachweinersmith@mastodon.social), Saturday, 27-Jul-2024 23:57:58 JST:
And the model I use could be fairly stupid: just a sort of underground box. No need for deep physics, or even an understanding of what the point of a basement is.
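
That "fairly stupid" model is small enough to write down. A purely illustrative sketch, with all names and rules invented here: a basement is just a box flagged as underground, and looking up from inside it hits the ceiling rather than the sky.

```python
# Toy world model: a basement is nothing more than a box below ground level.
# Looking up from inside it hits the ceiling, not the sky.

from dataclasses import dataclass

@dataclass
class Room:
    name: str
    underground: bool  # is the room below ground level?

def look_up(room: Room) -> str:
    """What an observer inside the room sees when looking up."""
    if room.underground:
        return "the ceiling (the ground is above you, so no sky is visible)"
    return "the ceiling, or the sky if you are outdoors or near a window"

basement = Room(name="basement", underground=True)
print(look_up(basement))  # -> the ceiling (the ground is above you, ...)
```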

Jake Hildreth (acorn) :blacker_heart_outline: (horse@infosec.exchange), Saturday, 27-Jul-2024 23:57:58 JST:
@ZachWeinersmith From what I understand, LLMs maintain no model of the physical world of any sort, so they wouldn't even be able to identify a question that needed to be referred to a physics model.

Jake Hildreth (acorn) :blacker_heart_outline: (horse@infosec.exchange), Saturday, 27-Jul-2024 23:59:20 JST:
@ZachWeinersmith I'm mostly relying on this episode of "Better Offline" for my comment: https://overcast.fm/+ABGz6_D9Lk8