Super useful and thought-provoking --- this is a great way of thinking about ChatGPT
Conversation
The Trandalorian 🏳️⚧️ (charliejane@wandering.shop)'s status on Saturday, 27-May-2023 15:19:50 JST
Greg Bell (ferrix@mastodon.online)'s status on Saturday, 27-May-2023 15:19:49 JST
@charliejane To me it's a lot of words to only slightly modify the Chinese Room thought experiment, but definitely provocative.
clacke (clacke@libranet.de)'s status on Saturday, 27-May-2023 15:19:49 JST
@ferrix @charliejane The Chinese Room thought experiment asks whether intelligence can be implemented as a program, and in my opinion the supposed "obvious" answer "no" is wrong.
If the instructions in my books are allowed to be arbitrarily complex, I don't see why they couldn't simulate a brain, its experiences and its state. It's just that the pen-and-paper implementation would be excruciatingly slow, and it wouldn't mean that I, as the executing processor, would understand anything.
The National Library of Thailand experiment asks whether, with access only to the text of a language and no connection to anything else, you can develop an understanding of that language. In this case my opinion is that the answer is obviously "no".
clacke (clacke@libranet.de)'s status on Saturday, 27-May-2023 19:07:06 JST
"A Chinese Room computer would still lack understanding of the concept behind a word, symbol or term."
@jaddy Why? If meat can model it, why can't silicon? What's the precious ingredient missing?
Jaddy (jaddy@tech.lgbt)'s status on Saturday, 27-May-2023 19:07:10 JST
A Chinese Room computer would still lack understanding of the concept behind a word, symbol or term.
By concept I mean the blurry networks of associations our brain activates each time we hear a word. (Sorry, not a native speaker; in German, it's the distinction between Symbol, Begriff (concept) and Ding (the real or abstract thing).)
LLMs could reproduce analogies if they had them in their training set, but since they don't have the blurry abstractions (concepts) we have, every analogy or pun they'd produce would have the same probability. They have no real-world reference against which to weigh their sense.
While a (very sparse) kind of concept might be present in their probability matrix, I'm also sceptical that LLMs could ever introspect these layers, either when asked or as an inner "thought process".
clacke (clacke@libranet.de)'s status on Saturday, 27-May-2023 19:08:16 JST
@jaddy An LLM obviously cannot understand and doesn't have concepts, as it simply isn't programmed for it.
clacke (clacke@libranet.de)'s status on Saturday, 27-May-2023 19:18:13 JST
@jaddy The huge difference between the Chinese Room and the National Library is that the latter is explicitly about an LLM, whereas the former is about an arbitrary program.
Jaddy (jaddy@tech.lgbt)'s status on Saturday, 27-May-2023 19:18:14 JST
I didn't say it can't be reproduced in silicon. I said that LLMs can't model it, because they simply don't (yet) have the circuit / program for a model of interconnected abstract semantic concepts that our brains have.
Once this can be realized, there could be at least an emulation of intelligence, while I'd leave the discussion of whether an emulation in silicon is completely equivalent to the biological model (say, personhood) to the philosophers.