Contrast this with the Effective Accelerationists, who also believe that #LLMs may someday become superintelligences with the potential to annihilate or enslave humanity - yet they advocate for *faster* AI development, with *fewer* "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
4/