simsa03 (simsa03@gnusocial.jp)'s status on Friday, 24-Nov-2023 23:31:14 JST
And here is where I think Prof. Bender's thoughts go astray.
Being "designed to produce plausible sounding synthetic text" that requires an interpreting, embodying counterpart applies to human interaction and text production as well.
It is in fact this misunderstood feature of the Turing Test that, in the end, makes for bad, i.e., simplistic humanism: given a certain complexity of "behaviour", humans cannot but project soulfulness, self, and rationality into the entity displaying that "behaviour". It becomes an instance of the philosophical "Other Minds" problem, which can just as easily be turned on the interpreting entity itself: how can we be sure that we ourselves have minds (and ideas, and selves, and rationality, etc. etc.)?
Instead, Prof. Bender falls back into a simplistic dualism of man vs. machine, of Humanities vs. MINT (i.e., STEM), a variant of old-fashioned 19th-century Empiricism.
It is exactly because we humans treat everything displaying a certain complexity of "behaviour" as animated and soulful that the distinction between "plausible sounding synthetic text" and "writing" to "refine our ideas and share them with others" can, beyond a certain level of complexity, no longer be made. "Good writing" becomes a verdict applicable to any source producing outcomes that provoke us to utter this value statement. (People will see the Quine tradition in this comment of mine.)