@ct_bergstrom Someone wrote that it's a system which does well most of the time but makes dramatic errors when it's wrong (as with autonomous driving, that is catastrophic). But because most people simply believe whatever appears as text on a screen, hardly anyone bothers to check the answers. No one would actually count the Fibonacci numbers to see which place 987 lands in. ChatGPT seemingly works for many of us because so many of us are already stupid in front of screens.
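
For what it's worth, that particular check takes only a few lines. Here's a minimal Python sketch (assuming the common convention F(1) = F(2) = 1, which is not stated in the original claim) that counts where 987 lands in the sequence:

```python
# Minimal sketch: count Fibonacci numbers until we reach 987,
# so the claimed position doesn't have to be taken on faith.
# Indexing convention assumed: F(1) = F(2) = 1.

def fibonacci_index(target):
    """Return the 1-based position of `target` in the Fibonacci sequence,
    or None if `target` is not a Fibonacci number."""
    a, b = 1, 1          # F(1), F(2)
    index = 1            # a currently holds F(index)
    while a < target:
        a, b = b, a + b  # advance to the next Fibonacci number
        index += 1
    return index if a == target else None

print(fibonacci_index(987))  # prints 16 under this indexing convention
```

Running it takes a second, which is rather the point: the check is trivial, yet almost nobody does it.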