Finally, as is usual, and *completely unacceptable*: the public has no information about the training data used to build this thing, only the fact that Microsoft made it.
Prof. Emily M. Bender (she/her) (emilymbender@dair-community.social), Saturday, 30-Mar-2024 05:05:08 JST:
Prof. Emily M. Bender (she/her), Saturday, 30-Mar-2024 05:05:09 JST:
It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.
Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
>>
Prof. Emily M. Bender (she/her), Saturday, 30-Mar-2024 05:05:10 JST:
There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:
>>
https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/