Well, he says people are "vastly overestimating generative AI", but what he means is they're full of shit.
>"When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task."
This seems right to me.
People say "oh, this AI gave the correct answer to a complex question, it must understand the topic". But that's not how LLMs work at all. They're exclusively statistical pattern matchers, with no model of anything beyond the patterns.
Humans (and other animals) are statistical pattern matchers too, but even flatworms are capable of learning. LLMs as commonly deployed are not. They're trained once, then lobotomised so they can't blurt out anything their makers consider unacceptable, and then sent out into the world with their weights frozen.
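To make "pattern matcher with frozen weights" concrete, here's a minimal sketch (assumes Python with the Hugging Face transformers and torch packages and the public gpt2 checkpoint; any causal LM would show the same thing):

```python
# Minimal sketch: at inference time a causal LM just maps a context to a
# probability distribution over the next token. No learning happens here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # weights are frozen; answering a question updates nothing

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # (batch, seq_len, vocab_size)
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The "answer" is just whichever continuations are statistically most likely.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>8s}  {prob.item():.3f}")
```

Getting a hard question "right" just means the right continuation was the most probable pattern given the training data; there's no separate model of the topic being consulted.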
@Aether Makes me wonder: would an LLM that can learn on the fly even be possible? The biggest issues I can think of are that it'd probably be trivial to feed it bad data, and the processing cost of running it would be a shit ton higher than running a pre-trained model. CC: @sun
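In principle the mechanics aren't exotic: keep the optimiser around and take a gradient step on each interaction as it arrives. Purely a toy sketch (not any real deployment; same assumptions as above: Python, torch, transformers, the gpt2 checkpoint, and looks_trustworthy() is a made-up placeholder), but it shows exactly where both of those worries land:

```python
# Toy "learn on the fly" loop: fine-tune on every interaction as it arrives.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def looks_trustworthy(text: str) -> bool:
    # The poisoning problem lives here: anyone who can talk to the model is
    # now supplying training data, and a weak filter makes poisoning trivial.
    return len(text.strip()) > 0  # placeholder, obviously not a real defence

def learn_from_interaction(text: str) -> None:
    if not looks_trustworthy(text):
        return
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss: predict each token from the ones before it.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()        # a full backward pass per interaction, so each
    optimizer.step()       # update costs several times a plain forward pass
    optimizer.zero_grad()

learn_from_interaction("Apparently the sky is green now.")
# ...and the model is now slightly more inclined to agree with that.
```

Between gradients and optimiser state you're also holding several times the memory of inference alone, which is a big part of why deployed models are served frozen in the first place.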