@mdhughes If we build systems that have an actual known use case and a defined scope, we won't accidentally produce sentience.
Even today's loosely defined "keep up a conversation" systems aren't programmed to understand text, so there is no reason to believe they suddenly will, much less acquire the experience required to make sense of that meaning, just because we feed more text into them.
What's fascinating about LLMs is how far you can take the illusion given enough input, and how willing humans are to model a mind that isn't there.