@thomasfuchs I agree, but a lot of this depends on what you consider artificial intelligence. To some people a text model is AI; you could argue an algorithm is already a form of AI. But I guess what most people consider “real” AI is a computer that can think like a human. That’s already ironic if you ask me.
@thomasfuchs @mbrailer I think at some point you could achieve the complexity of a human brain, which is itself not made of magic but of complex physical processes that could be emulated and modeled, but only at _enormous_ expense with silicon. All of the computer neural networks in the world put together can do, what, a tiny fraction of what _anyone_ can do with a kilo of grey matter and less than 100 watts; that approach will _never_ achieve the power efficiency that biology has.
@thomasfuchs isn’t there also evidence that Moore’s Law may be so predictable simply because of the expectations *it* puts on the industry and consumers, rather than because of any “natural” feature of transistors or innovation? Not sure how that artifice would be transferable to anything with real limitations (even wrt transistors, as we’re now seeing)
@raven667 @thomasfuchs @mbrailer it’s not just the complexity that matters, but how the system is structured, that gives rise to real intelligence. My biggest frustration with this current wave of AI hype is that it is so divorced from any study of actual human cognition. We take “neural networks” (statistical algorithms for finding trends in large, messy data sets), throw tons of data at them, and pretend they’re “on the verge of becoming AGI.”
@thomasfuchs oh, no doubt. I think, in the years since he coined it, though, many have taken it to be some kind of force of nature that makes technological progress happen, as you put it, linearly or geometrically, when that wouldn’t necessarily have happened (for a time, at least) for transistors without the self-fulfilling prophecy. I don’t think any of that is what Moore intended, though