"#AI" as Artificial Intelligence is not real. But the stuff being called "AI" most certainly is. Simply telling people "AI is not real" is not helpful in the slightest. You're telling them: "Your perception of reality is wrong", but the machine does things that are 'indistinguishable from magic' (Hi Clarke!) to them. We won't convince people by telling them "Your god isn't real! Follow me!" (I'm exaggerating), but by empowering them to emerge from their self-imposed immaturity.
@lavaeolus I’m thinking more informally—like “talking to family over Xmas dinner” where I can’t assign them academic reading 😉
What I often get when pointing out “the limitations, the weirdness, the brokenness” is that it’s early days and these problems will go away as the tech matures (a perspective, of course, predicated on a failure to understand the tech).
@tkinias @lavaeolus It drives me nuts arguing that it isn't early days. LLMs are just machine learning (scaled ludicrously up), & we've had that since the 1990s. Or the '60s!
@lavaeolus so... how *do* we explain to laypeople that “AI” (which for most people now means ChatGPT and other LLMs) doesn’t do what the marketing says it does? (different, if subtly so, from “it’s not real”)
@alcinnz 100% yes! I love using this graph from Zhao, Wayne Xin, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, et al. 2024. ‘A Survey of Large Language Models’. arXiv. https://doi.org/10.48550/arXiv.2303.18223. @tkinias
@lavaeolus I had neural nets specifically in mind. If I recall correctly, we were playing with perceptrons in the '60s, got disappointed, & resuscitated the ideas at a larger scale in the '90s.
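To make that lineage concrete: here's a minimal sketch of a 1960s-style perceptron using Rosenblatt's learning rule. The function names and toy data are my own for illustration. It happily learns AND (linearly separable), but no single perceptron can learn XOR — the limitation famously highlighted by Minsky & Papert that fed the disappointment mentioned above.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron update: nudge weights toward misclassified targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # 0 when correct; ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND is linearly separable, so the rule converges to a correct classifier:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]

# XOR is not linearly separable, so a single perceptron can never get it right:
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(XOR)
print([predict(w, b, x1, x2) for (x1, x2), _ in XOR])  # never [0, 1, 1, 0]
```

Stacking these units into multi-layer networks (the '90s revival) is what gets past the XOR wall — and LLMs are that same idea, scaled up enormously.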