@eniko Not only can’t they reason (and never will be able to), LLMs are limited even within the domain of spicy autocomplete.
It’s essentially a pyramid scheme: to get linearly better, they require exponentially more input data and computational resources.