@kellogh @futurebird @dahukanna @PavelASamsonov fundamentally the issue to me is that these are not cognitive systems, but they are being treated as if they are. They're linguistic pattern-matching systems. That's not what minds are. The methods an LLM uses to arrive at its output have no parallels in modern cognitive science. So why would thought-like states emerge? It's like throwing soup ingredients in a blender and expecting a working car to pop out if you just keep adding carrots.