The more complexity and the more information we have, the more resources we must expend to locate, access, and process the information we actually need. We have to spend additional resources—time, attention, money—to confirm that the product we’re ordering online really is the thing we want and not fake junk that could unexpectedly hurt us. This is nothing new, of course; people have been dealing with misinformation and information overload for as long as there have been people. But we’ve reached the point where we can automate the mass production of bullshit that can easily fool almost everyone.
Will LLMs bring us to the point where life-saving medical information is buried in masses of bullshit, forcing us to spend additional resources just to dig it out?
I was led down this line of thought by the op-ed below, on the threat LLMs pose to the study of history. LLMs can generate plausible bullshit versions of old photographs and historical documents. Will we start losing access to the past as a result?
I am, by academic training, an historian. I also rely heavily on historical and archeological information for my understanding of hidden mechanisms of coercion, the space of possible human social forms, and methods of resistance and liberation. If the historical record is flooded with plausible bullshit, we’ll lose so much more than just some sense of the past.
https://www.nytimes.com/2024/01/28/opinion/ai-history-deepfake-watermark.html
5/10