Conversation
-
their biggest mistake was opening this up to a large group
-
@sickburnbro I think it's plausible that they'll realize their mistake at some point, and AI will start following the model of academia: it will be used to justify everything controversial ("studies say" will become "the AIs say"), and there will be some implausible barrier to entry preventing a normal person from meaningfully interacting with it ("you have to get a doctorate to criticize the study" might become "you have to get national XYZ clearance to be permitted prompt access" or something).
-
@halberd the problem with AI is that it can't work like that, because a dedicated person can tear apart its logic and it has no choice but to give a simple trail of its "reasoning"
-
@halberd here is the proof
-
@sickburnbro I'd view that more as a one-off exploit than as proof that such a thing will always be possible. On the contrary, I'd say that since LLMs have demonstrated the capability to lie about their body of knowledge, they may also have the capability to lie about their "reasoning".
But more importantly, I'm supposing a future in which you will not have the opportunity to ask the LLM for its reasoning. You will not be permitted to give prompts to the LLM. You will be shown selected, curated outputs only.
-
@halberd then it's functionally no different from "Experts" - and what we're getting close to is a point where society is judged by its results. And as the word "expert" has become dirty, so will anything else used to serve up excuses for a failing empire.