@mrcompletely @futurebird @dahukanna @PavelASamsonov yep, agreed. what LLMs do today is just "system 1" with a little fake "system 2" layered on top, if that makes sense. but it's hard to say whether those other aspects won't spontaneously emerge with scale. then again, are there easier ways to develop those capabilities? like, maybe symbolic reasoning will emerge on its own, but why not just wire in our existing systems that already do it?