Conversation
feld (feld@bikeshed.party), Saturday, 01-Jul-2023 05:15:05 JST:
It's not an automatic lie, the user has to press a button to request it. Please correct this erroneous bug report.
feld (feld@bikeshed.party), Saturday, 01-Jul-2023 07:26:03 JST:
An LLM can absolutely satisfy this goal. You will be proven wrong.
j.r / Julian (jr@social.anoxinon.de), Saturday, 01-Jul-2023 07:26:04 JST:
@feld @eevee WTF? If the user clicks the "explain" button, they would/should expect a correct explanation of what they see, but an LLM-based "AI" will never satisfy this goal.
feld (feld@bikeshed.party), Saturday, 01-Jul-2023 07:44:46 JST:
And it will be augmented to do so much more.
j.r / Julian (jr@social.anoxinon.de), Saturday, 01-Jul-2023 07:44:47 JST:
@feld @eevee An LLM is not made to give you factually correct answers; it is only designed to generate answers that seem plausible because they are linguistically correct...
feld (feld@bikeshed.party), Saturday, 01-Jul-2023 07:45:17 JST:
It won't be an LLM alone.
Wouter Verhelst (wouter@pleroma.debian.social), Saturday, 01-Jul-2023 07:45:25 JST:
@feld @eevee @jr
Only in the 'million monkeys with typewriters' sense. Yes, eventually there will be something that, by pure chance, ends up being correct.
But an LLM is just advanced statistics. It makes plausible-sounding lies, by definition.
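
The "advanced statistics" point can be illustrated with a minimal sketch. The bigram_probs table and generate() function below are hypothetical and purely illustrative, not any real model or library: the sampling step only picks whatever continuation is statistically plausible, and nothing in the procedure checks whether the resulting sentence is factually true.

    import random

    # Toy, hand-written "language model": probabilities of one token
    # following another. A real LLM learns these statistics at scale,
    # but the generation principle sketched here is the same.
    bigram_probs = {
        "the": {"bug": 0.6, "feature": 0.4},
        "bug": {"is": 1.0},
        "is": {"fixed": 0.7, "expected": 0.3},  # "fixed" may simply be false
    }

    def generate(start, steps=3):
        tokens = [start]
        for _ in range(steps):
            choices = bigram_probs.get(tokens[-1])
            if not choices:
                break
            # Sample the next token in proportion to its probability;
            # plausibility is the only criterion used.
            next_token = random.choices(list(choices), weights=list(choices.values()))[0]
            tokens.append(next_token)
        return " ".join(tokens)

    print(generate("the"))  # e.g. "the bug is fixed" -- plausible, never verified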