Eaton (eaton@phire.place), Tuesday, 11-Apr-2023 17:38:33 JST:
A large language model may tell you things that are not true, but it will never, ever lie to you, because a language model has no conception of truth or falsehood, only probabilities of particular words appearing together. At scale that produces incredible things, but there is no inner model of true-ness or false-ness with which it can evaluate its own outputs, only the rate at which we accept particular outputs.
Scott, Drowning in Information (vortex_egg@ioc.exchange), Tuesday, 11-Apr-2023 17:38:34 JST:
@eaton We really need to reframe the discourse around LLMs such that we stop saying that LLMs hallucinate and start saying that LLMs cause the people who use them to hallucinate.
Eaton (eaton@phire.place), Tuesday, 11-Apr-2023 17:38:35 JST:
In a sense, if an LLM deceives us we have only ourselves to blame: humanity is shouting into a well and listening to the echoes, and learning that differently-shaped wells make different kinds of echoes. That isn't bad or good, but it is not "asking the well for advice."
On the other hand, these tools are being put in the hands of people who don't actually understand the distinction, and are just being told that "Smart Wells Can Answer Your Questions."
I am Jack's Lost 404 (float13@hackers.town), Tuesday, 11-Apr-2023 17:38:37 JST:
ChatGPT is just 1000 monkeys with typewriters in a trenchcoat
Eaton (eaton@phire.place), Tuesday, 11-Apr-2023 17:51:36 JST:
@float13 @vortex_egg I mean, I am really impressed by the monkeys!