"If AI were revolutionizing the economy, we would see it in the data. We're not seeing it. I could talk about the fact that AI companies have yet to find a killer app and that perhaps the biggest application of AI could be, like, scams, misinformation and threatening democracy. I could talk about the ungodly amount of electricity it takes to power AI and how it's raising serious concerns about its contribution to climate change."
Don't mix up neural networks and large language models. Neural networks have a number of useful applications, image recognition being one of them.
Large language models are a tool based on neural network design that produces a parody of the source data as a plausible continuation of the prompt. This is useful for passing the Turing test and generating spam. It is not, however, a reasoning system or a viable path towards AI.
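The "plausible continuation" idea can be illustrated with a deliberately crude toy: a bigram sampler. This is a hypothetical stand-in, nothing like a transformer internally, but the loop has the same shape an LLM's decoding does: repeatedly pick a statistically plausible next word from the source data and append it to the prompt.

```python
# Toy illustration (NOT an LLM): a bigram model that continues a prompt
# by repeatedly sampling a word that followed the current word in its
# source text. Crude, but it shows the "plausible continuation" loop.
from collections import defaultdict
import random

source = "the cat sat on the mat and the cat ran off the mat".split()

# Record which words were observed following which (bigram statistics).
following = defaultdict(list)
for prev, nxt in zip(source, source[1:]):
    following[prev].append(nxt)

def continue_prompt(prompt, n_words=5, seed=0):
    """Extend the prompt word by word, each time sampling a word that
    occurred after the current last word somewhere in the source data."""
    rng = random.Random(seed)  # seeded for repeatability
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:  # no observed continuation: stop
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_prompt("the cat"))
```

The output is always a remix of the source text, which is the point of the "parody of the source data" description: fluency comes from statistics over the corpus, not from any model of what the words mean.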
@resuna @krnlg @gerrymcgovern Of course neural networks are being applied to things other than LLMs too. For example image recognition. Not always successfully, for example when Teslas drive straight into fire engines with all their lights flashing. That at least is something that humans generally avoid doing.
Large language models are a dead-end digression that is sucking all the oxygen out of actual AI research. There is huge potential in this area and it has been hijacked by con artists.
If you want a stupid historical analogy, it's like someone was trying to build a mechanical horse and everyone was excited by the idea of regular carriages being pulled by steam horses.
@resuna @krnlg @gerrymcgovern I'm sure there were people who looked at those shuddering, juddering, noisy, smelly things and said they'd never go as fast as a good horse. Do you believe there's some holy spirit that can infuse a lump of protoplasm but not a lump of silicon? I'm not saying more and more powerful AI is something we want or should have, but unless we decide to do something to stop it, or unless it turns out not to be what it rather looks like being, it's on the way.
@resuna @krnlg @gerrymcgovern Yes indeed. Whole university courses are based on identifying which painters or whatever influenced which other painters or whatever.
@stevehayes @resuna @gerrymcgovern I don't think that's quite right - the principle of operation of an LLM is not a mystery; it's just hard to work back from a specific output to exactly why the model gave that output. I think? I mean, LLMs have been an active research area for some time; people made these things. They don't exhibit mysterious emergent intelligent properties afaik, they just seem like they do at a glance.
@krnlg @resuna @gerrymcgovern My point is that we don't really know what's going on in there. It's not like a traditional program where we can point to lines of code. At the same time there's nothing we can point to in an animal's brain and say that's where the magic happens and that AI doesn't have and can never have that thing.
@stevehayes @resuna @gerrymcgovern The AI wasn't having a tantrum; it was surely just reproducing plausible answers to a repeated question based on its training data. It doesn't know what a tantrum is.
That's the difference, however much humans make mistakes - the AI isn't making mistakes, it doesn't have any concept of a mistake let alone knowledge or thinking.
@resuna @gerrymcgovern Occasional human mistakes? Think of the millions of MAGA followers. The point is that we don't really know what's going on in that AI simulation of an animal brain's neural network. Maybe we can never know - we'll let philosophers work on that one. But we can observe. The first one I remember reading about was an AI having a tantrum if it was asked the same question 15 times.
I kind of hate the comparison between the routine failures of this kind of software and humans' occasional mistakes, because they really aren't all that similar.
@resuna @gerrymcgovern It's the failings of AI that are the most interesting aspect. Especially when we look around and see humans doing much the same things. I'm sure that 90% of the columns in The Guardian could be written by AI and nobody would notice. Maybe they already are. They're just a churning mass of memes and tropes.
ROSALSKY: Given that hype, should we expect AI to usher in revolutionary changes for the economy in the next decade?
ACEMOGLU: No. No, definitely not. I mean, unless you count a lot of companies overinvesting in generative AI and then regretting it as a revolutionary change.
ROSALSKY: Many AI researchers are saying we cannot end the problem of hallucinations any time soon, if ever, with these models. That's because the models don't know what's true or false.