And then someone had the great idea of creating LLMs, which some gullible fools have taken to calling “AI”: vast and expensive engines designed to produce plausible-sounding and convincing bullshit at inhumanly fast rates. The result is that our already crowded information space is being flooded with enormous quantities of junk, faster than we can possibly keep up.
Lawyers have started inserting LLM-generated citations to fictitious cases in their filings. A couple of them have been caught. How many more have gone undetected? And what about other fields that depend on trustworthy information?
I once needed a fairly uncommon vaccine in response to a rare infection. It was rare enough that the doctors at the one nearby hospital that stocked the vaccine had to look up the proper dosage and procedure in their references. For now, the references they consulted were authoritative and reliable. But how much longer will that last before trustworthy information is buried in an avalanche of plausible bullshit?
4/10