Paul Lalonde (flux@wandering.shop)'s status on Saturday, 24-May-2025 08:01:19 JST
This is your regular reminder that LLMs don't answer your question "X" but rather the closely related question "What might an answer to X look like?".
Sometimes this might be useful, but there is no way the model can tell whether its answer is at all correct.
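[Editorial illustration of the claim above: a minimal, self-contained sketch. The toy bigram model, the corpus, and the continue_text helper are all invented for this example; a real LLM is vastly larger, but the training objective is the same kind of thing: predict a statistically plausible next token. Note that nothing in the sampling loop consults the truth.]

```python
import random
from collections import defaultdict, Counter

# Hypothetical toy "training data": one statement is plausible-looking
# but false, and the model has no way to know that.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of spain is madrid . "
).split()

# The entire "training": count which token tends to follow which.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def continue_text(prompt, n_tokens=5):
    """Sample a plausible continuation; no step here checks correctness."""
    out = prompt.split()
    for _ in range(n_tokens):
        dist = follows.get(out[-1])
        if not dist:
            break
        tokens, weights = zip(*dist.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(continue_text("the capital of france is"))
# Sometimes "paris", sometimes "lyon": both look like answers to the
# question, and the model cannot distinguish the correct one.
```

[Real LLMs do the same thing at scale: next-token prediction optimizes for plausibility, and factual accuracy falls out only to the extent that truth happens to correlate with the statistics of the training data.]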
ErosBlog Bacchus (erosblog@kinkyelephant.com)'s status on Saturday, 24-May-2025 08:01:17 JST
@Flux There are LOTS of places, especially in the business world, where "I need an extremely low-effort way to generate a shiny thing that looks like a very plausible answer to X" is a common circumstance, and so a tool that does exactly that is super handy. But of course most of the use cases involve bullshittery. People who care about marketing and pleasing higher management will love the tool; people who care about truth and understanding will have very little use for it, or will indeed hold it in the sourest disdain.
Rich Felker (dalias@hachyderm.io)'s status on Saturday, 24-May-2025 08:01:51 JST
@ErosBlog @Flux This. Where "AI" fits the "needs" of the business world, it's because their "needs" are WRONG and HOSTILE TO HUMANITY already.
Jean-Baptiste "JBQ" Quéru (jbqueru@floss.social)'s status on Saturday, 24-May-2025 08:02:12 JST
@Flux I tend to think it's more like "What output would best fool this human into thinking I understood the question and know the answer?"
The fun thing is that this is exactly how an incompetent human would behave in a job interview, so I call LLMs "Artificial Incompetence."