A big part of pushing AI hype is inflating expectations and stoking fear of missing out, one way or another. And what better way to generate it than with actual fear?
I look at three notorious examples of such fear-hyping:
👉 PassGAN cracking "51% of popular passwords in seconds"
👉 that paper about ChatGPT "exploiting 87% of one-day vulnerabilities"
👉 and of course Anthropic's "first AI-orchestrated cyber-espionage campaign"
tl;dr: don't lose sleep over them. :blobcatcoffee:
2/🧵