@bot @hermit @NEETzsche
I am a functionalist, in that if you replaced every neuron in a human brain with a silicon analog that operated exactly the same, you would have something that is equally human, or that has just as much of a "soul" as a regular human. Similarly, if you could perfectly replicate a human and its entire environment in software, you'd have something that is just as "human" as any of us.
That said, discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, since a soul is intangible. Similarly, whether or not an AI is self-aware or conscious is a non-starter, since that's not something that can really be proved for any sufficiently advanced AI, or for humans, alike.
That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from interacting with our environment, and replicating every law of physics and every action<->reaction relationship purely in software is pretty much impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think truly sentient AI will have to be found in the physical world.
As to ChatGPT: it, and all modern AI, has major limitations. All the recent innovations in AI have come from fairly "simple" changes in architecture, combined with making larger neural networks and throwing more computational power at them. But I think the fundamental architecture of how these neural networks are constructed is insufficient. It does have some important components: attention, memory, context, reinforcement learning. But it ultimately seems fairly deterministic, taking in words and context and outputting new word probabilities, and I don't think it has the architecture necessary to be aware of what it's doing. AI has been (out of necessity) stuck on a very simplistic rate-coding model of the neuron with pre-trained network weights, rather than more complex temporal models of the neuron, or using genetic algorithms to breed new network architectures instead of pre-defining what an architecture should look like.
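To make the rate-coding vs. temporal distinction concrete, here's a minimal sketch (my own toy illustration, not anything from ChatGPT's actual internals): a rate-coded artificial neuron collapses all activity into one scalar, while a leaky integrate-and-fire (LIF) neuron evolves a membrane voltage over time and outputs a spike train, so *when* spikes happen carries information. The parameter values are arbitrary choices for the demo.

```python
import math

def rate_neuron(inputs, weights):
    # Rate-coding model: the neuron's entire output is a single scalar
    # "firing rate"; all timing information is discarded.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def lif_neuron(input_current, steps, dt=1.0, tau=10.0, threshold=1.0):
    # Leaky integrate-and-fire: membrane voltage v leaks toward rest
    # and integrates input each timestep; crossing threshold emits a
    # spike and resets v. Output is a spike train, not a scalar.
    v = 0.0
    spikes = []
    for t in range(steps):
        v += dt * (-v / tau + input_current(t))
        if v >= threshold:
            spikes.append(t)  # spike time is part of the signal
            v = 0.0
    return spikes

# Two inputs with similar average strength but different temporal
# structure produce different spike trains -- information a pure
# rate model cannot represent:
steady = lif_neuron(lambda t: 0.15, steps=100)
bursty = lif_neuron(lambda t: 0.3 if t % 20 < 10 else 0.0, steps=100)
```

The point of the toy: the rate model's output is identical for any two inputs with the same weighted sum, while the LIF model distinguishes them by spike timing.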
tl;dr - it's impressive in scale and in the computational power behind it, but it isn't complex enough, and is too limited in architecture, to be self-aware. That makes it effective at doing what it's trained to do (pass the Turing test), but not at being true artificial life.