The idea that "AI safety" means stopping AI from becoming fucking Skynet and taking over the world is very much something pushed by AI companies. On the one hand, it feeds the misconception that AI is all-powerful and all-useful, even on the verge of sentience; on the other, it distracts from the ways AI is already being used to harm people, from hiring algorithms to the erasure of the commons to safety-critical machinery.