AI should be a decision-support system, not an autonomous one. We've seen time and again that it doesn't "think"; it just optimises for whatever end goal it's given. Anthropomorphising it is dangerous because it lulls us into believing it can imitate human decision-making. https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test