Todd Horowitz (toddhorowitz@fediscience.org), Tuesday, 05-Dec-2023 10:31:10 JST:
@charliejane This is absolutely correct. It's not just that we don't understand human consciousness well enough to replicate it in silico (though that is true). The bigger point is about LLMs in particular: they have fooled a lot of people into thinking we've made a big leap toward machine intelligence/consciousness, but that's an illusion. The major accomplishment of LLMs, in my view, is to invalidate the Turing test.
Smörhuvud (he/surprise me) (guncelawits@mastodon.social), Tuesday, 05-Dec-2023 19:29:00 JST:
@toddhorowitz @charliejane @clacke Yes, and that foregrounds that we don’t know what we mean when we say “intelligence.”
Smörhuvud (he/surprise me) (guncelawits@mastodon.social), Tuesday, 05-Dec-2023 19:29:03 JST:
@toddhorowitz @charliejane @clacke The Guncelawits Test is a test of a machine's ability to earn a dog’s friendship.
Todd Horowitz (toddhorowitz@fediscience.org), Tuesday, 05-Dec-2023 23:52:14 JST:
@guncelawits @charliejane @clacke Well, obviously you start with a treat-dispensing machine...