@falcon@mastodon.falconk.rocks This. 👆
I've casually hypothesized that intelligence/"consciousness" is a mix of information processing, the ability to experiment with the environment, and *consequences* - that there is a pain/survival distinction between one course of action and another.
Not only do LLMs lack the ability to experiment with the environment and reason about the results; they also lack consequences. No matter how awfully an instance of them behaves, the model doesn't get deleted.