Low-hanging fruit for Bing Chat: right now, search queries yield retracted papers with no warning attached.
I don't expect LLMs to get this sort of thing right, especially if trained before the retraction, but hybrid systems like Bing Chat ought to.
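As a concrete illustration of the kind of check a hybrid system could run before surfacing a paper, here is a minimal sketch that asks the Crossref REST API whether any editorial update (such as a retraction notice) has been registered against a DOI. The `updates` filter and the `update-to` field are real Crossref features, but the example DOI is hypothetical and Crossref's retraction coverage is incomplete, so treat this as a sketch of the idea rather than a complete solution.

```python
# Minimal sketch: flag a DOI if Crossref lists a registered
# editorial update (retraction, correction, expression of concern)
# pointing back at it. Example DOI below is hypothetical.
import requests

CROSSREF_API = "https://api.crossref.org/works"


def retraction_notices(doi: str) -> list[dict]:
    """Return Crossref records registered as updates to `doi`."""
    resp = requests.get(
        CROSSREF_API,
        params={"filter": f"updates:{doi}", "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]


def warn_if_retracted(doi: str) -> None:
    """Print a warning if any registered update targets `doi`."""
    for item in retraction_notices(doi):
        for update in item.get("update-to", []):
            if update.get("DOI", "").lower() == doi.lower():
                label = update.get("type", "update")
                print(f"WARNING: {doi} has a registered {label}: "
                      f"{item.get('DOI')}")
                return
    print(f"No registered updates found for {doi}.")


if __name__ == "__main__":
    # Substitute a real DOI of a retracted paper to test.
    warn_if_retracted("10.1234/example.doi")
```

A search front end could run a check like this on every scholarly result and attach the warning inline, which is exactly the low-hanging fruit described above.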
@ct_bergstrom [...]
To quote @pluralistic:
"The problem isn’t that the chatbots lie all the time — it’s that they usually tell the truth, but then they start spouting confident lies."
and:
"That means that when Google ingests and repeats a lie, the lie gets spread to more sources. Those sources then form the basis for a new kind of ground truth, a “zombie statistic” that can’t be killed, despite its manifest wrongness."
https://doctorow.medium.com/googles-ai-hype-circle-6158804d1299
@ct_bergstrom Though this still has the usual drawback of using an LLM to generate output on a subject the user doesn't know: there is still a risk that the chat AI "lies"/hallucinates (i.e., bullshits with confidence) by accident, and the result is then taken as truth by the user.
[...]