I mention this for two reasons:
(1) Getting us to think of Skynet, with all the reasoning errors that comparison invites, is very explicitly a marketing strategy for AI vendors.
(2) As we think about whether and how to deploy generative AI, the first step of a proper risk assessment is to recognize that its failure modes won’t be human-shaped (or code-shaped, for that matter). They will be bizarre, inexplicable, and hard to anticipate.
/end